GENERAL KEYWORDS AND TERMS USED IN DEEP LEARNING

Deep learning is a broad topic, and for newcomers and beginners it is very hard to understand the basic terms and keywords because they haven't heard them before, and the plain English meanings don't help much. In this post I will try to explain those terms and give you a basic idea of what they mean.

DATASET

A dataset is a set of data in which both the input data and the labels are combined together.
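For example, a tiny dataset can be pictured as a list of (input, label) pairs. This is only a toy sketch, and the file names below are made up purely for illustration:

```python
# A toy "dataset": each item pairs an input (here an image path) with its label.
# The file names are invented for illustration only.
dataset = [
    ("images/dog_001.jpg", "dog"),
    ("images/dog_002.jpg", "dog"),
    ("images/cat_001.jpg", "cat"),
    ("images/cat_002.jpg", "cat"),
]

for image_path, label in dataset:
    print(image_path, "->", label)
```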

TRAIN DATASET

This is the dataset in which input data and labels are bunched together and from which you train your model, so that your model learns its weights depending on the inputs and labels.

VALIDATION DATASET

This dataset is sometimes created from the training dataset and sometimes provided separately by the user. It is used to verify and check our model, i.e. how much accuracy our model achieves when predicting unseen data.
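As a rough sketch of carving a validation set out of the training data, one common way is a random split. The example below uses scikit-learn's train_test_split on fake data (assuming scikit-learn and NumPy are installed), holding out 20% for validation:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Fake data purely for illustration: 1000 samples with 10 features each.
X = np.random.rand(1000, 10)
y = np.random.randint(0, 2, size=1000)

# Hold out 20% of the training data as a validation set.
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)

print(X_train.shape, X_val.shape)  # (800, 10) (200, 10)
```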

TESTING DATASET

This dataset is used to test our model's efficiency and accuracy before deploying it to production. The test dataset verifies our model's learning and gives us confidence in the model. It should contain all the kinds of examples the model can face in the future.

LOSS FUNCTION

This function is used to calculate the loss, i.e. how far our model's predictions are from the actual values. Let's say we have a regression model which outputs a number. If the actual label is 12 and we get 11 from the model, the loss function tells us the difference: 12 - 11 = 1. This is a very simple example; in real life it doesn't happen exactly like this, and we have other loss functions to calculate the loss, but this is the general idea.
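To make the idea concrete, here is a minimal sketch of two loss functions written with plain NumPy: the simple absolute difference from the example above, and mean squared error, which is used more often in practice for regression:

```python
import numpy as np

def absolute_error(y_true, y_pred):
    # The simple "difference" idea from the example above: |12 - 11| = 1.
    return np.abs(y_true - y_pred)

def mean_squared_error(y_true, y_pred):
    # A loss used more often in practice for regression.
    return np.mean((y_true - y_pred) ** 2)

print(absolute_error(12, 11))                       # 1
print(mean_squared_error(np.array([12.0, 3.0]),
                         np.array([11.0, 5.0])))    # (1^2 + 2^2) / 2 = 2.5
```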

OPTIMIZER

This function is used to learn the weights of the model. The weights are learned with the help of the loss function: we differentiate the loss function and calculate the gradient, and through that gradient the optimizer increases or decreases the weights for us. We use many different optimizers such as Adam, Stochastic Gradient Descent (SGD), Adagrad, etc.
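In Keras these optimizers are ready-made classes; a minimal sketch of picking one (assuming TensorFlow 2.x is installed, and using typical default learning rates) looks like this:

```python
import tensorflow as tf

# Each optimizer implements a different rule for updating the weights
# from the gradients; the learning_rate values are just common defaults.
adam    = tf.keras.optimizers.Adam(learning_rate=0.001)
sgd     = tf.keras.optimizers.SGD(learning_rate=0.01)
adagrad = tf.keras.optimizers.Adagrad(learning_rate=0.01)
```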

LEARNING RATE

The learning rate is a number used to control how the weights of the model are learned. We subtract from the existing weights the learning rate multiplied by the derivative (gradient), which is calculated by differentiating the loss function.
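In other words, the basic update rule is new_weight = old_weight - learning_rate * gradient. A toy sketch of one such update step, with made-up numbers:

```python
# One gradient-descent step on a single weight (toy numbers for illustration).
weight = 0.8          # current weight
gradient = 2.5        # d(loss)/d(weight), from differentiating the loss function
learning_rate = 0.01  # how big a step we take

weight = weight - learning_rate * gradient
print(weight)  # 0.8 - 0.01 * 2.5 = 0.775
```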

ARCHITECTURE

An architecture is a combination of layers arranged to ease the learning process of a deep learning model. The sequence in which the layers are put together also affects the model's efficiency and learning power. Popular architectures which we use in our deep learning models are RESNET34, INCEPTION, DENSENET, UNET and many more.
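As a small illustration of "a combination of layers", here is a toy Keras Sequential model (not one of the famous architectures above, just a minimal stack, assuming TensorFlow 2.x):

```python
import tensorflow as tf
from tensorflow.keras import layers

# A toy architecture: the order of the layers is part of the design.
model = tf.keras.Sequential([
    layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(2, activation="softmax"),  # e.g. dog vs cat
])

model.summary()
```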

DATA PIPELINE

Data pipeline is a term usually used to describe the process of supplying data to the model. No data is so clean that we can feed it directly to the model. We first need to clean and format the data into a certain shape so that it can be easily consumed by our model (architecture), and sometimes we also need to modify the data a little before putting it into our architecture. This process of cleaning, formatting and modifying data is called the data pipeline.
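A minimal sketch of one step in such a pipeline for images might look like the function below; the scaling choice and the random data are just illustrative assumptions:

```python
import numpy as np

def prepare_image(image):
    """Clean / format one raw image so the model can consume it."""
    # Format: make sure we have a float array.
    image = np.asarray(image, dtype=np.float32)
    # Modify: scale pixel values from 0..255 down to 0..1.
    image = image / 255.0
    # (A real pipeline would also resize images, handle corrupt files,
    #  augment the data, batch it, and so on.)
    return image

batch = [prepare_image(np.random.randint(0, 256, size=(64, 64, 3))) for _ in range(8)]
print(np.stack(batch).shape)  # (8, 64, 64, 3)
```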

DEPENDENT VARIABLE - Y - LABEL

All of these terms are used interchangeably. They refer to the known answers which we feed into our model at training time so that the model can learn. Let's say we want a dog and cat classifier and we have 1000 dog images and 1000 cat images. The 1000 dog images are stored in a folder named dog, and the 1000 cat images are stored in a cat folder. The deep learning model will treat the folder as the label, which gives it the understanding that all of the pictures in the dog folder are dogs and have to be learned as dogs. Dog is the label here.
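A tiny sketch of how folder names become labels; the paths below are made up purely for illustration:

```python
from pathlib import Path

# Imagine images stored as data/dog/xxx.jpg and data/cat/yyy.jpg;
# these paths are invented just for this example.
paths = [Path("data/dog/001.jpg"), Path("data/dog/002.jpg"), Path("data/cat/001.jpg")]

# The parent folder name is used as the label (the dependent variable y).
labels = [p.parent.name for p in paths]
print(labels)  # ['dog', 'dog', 'cat']
```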

INDEPENDENT VARIABLE - X - INPUT

All of these terms are used to describe the input. Input is always divided into features. Whether the input is an image, audio, video, text or tabular data, each input has features. In images each pixel is a feature which is independent of the other pixels; likewise in tabular data each column is a feature which is independent of the other columns. That is why it is called the independent variable, x or input. The dependent variable depends on these independent variables or features.

COMPILE

This is the function used in Keras to merge all the hyperparameters like the loss function, optimizer and metrics and build the computation graph on the backend, which can be any framework (TensorFlow, Theano, CNTK, MXNet). This function in Keras takes the pain of creating this graph away from us and compiles everything properly.
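A minimal sketch of compile() with tf.keras (the tiny model is defined here only so the example stands on its own):

```python
import tensorflow as tf
from tensorflow.keras import layers

# A tiny model just so the example is self-contained.
model = tf.keras.Sequential([
    layers.Dense(32, activation="relu", input_shape=(10,)),
    layers.Dense(2, activation="softmax"),
])

# compile() merges the optimizer, loss function and metrics so that
# Keras can build the training graph on its backend.
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
```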

FIT

This function is used in Keras and fastai to train our architecture using the epochs, batch size, iterations and the X, Y input data. It times our model and shows us the losses and the metric values which we have given to it.
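Continuing the model compiled in the COMPILE example above, fit() is the call that actually runs the training loop. The data here is random, just to show the parameters (epochs, batch size, validation data, verbose):

```python
import numpy as np

# Random data purely for illustration: 10 features, 2 classes.
X_train = np.random.rand(1000, 10)
y_train = np.random.randint(0, 2, size=1000)
X_val   = np.random.rand(200, 10)
y_val   = np.random.randint(0, 2, size=200)

# Train for 5 epochs with batches of 32 inputs; verbose=1 prints a progress bar.
# (Uses the `model` compiled in the COMPILE example above.)
history = model.fit(
    X_train, y_train,
    validation_data=(X_val, y_val),
    epochs=5,
    batch_size=32,
    verbose=1,
)
```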

EPOCHS

The data is divided into batches of a given batch size, meaning that after seeing one batch of inputs we do back propagation (calculate the derivatives). When all batches have passed through the model once, that is called an EPOCH. In other words, when the full dataset has been traversed one time, that is one epoch, so if we specify 5 epochs then the full dataset is traversed 5 times and the model will see the data five times.

BATCH SIZE

It is the number of inputs we group together so that, after receiving the full group, the derivative (back propagation) step can be done. It bunches that number of inputs into one batch.

ITERATION

This is the number of steps (batches) needed to complete one full epoch.
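A quick worked example of how dataset size, batch size, epochs and iterations relate to each other, using made-up numbers:

```python
import math

dataset_size = 1000   # number of training examples
batch_size = 32
epochs = 5

iterations_per_epoch = math.ceil(dataset_size / batch_size)  # 1000 / 32 -> 32 steps
total_iterations = iterations_per_epoch * epochs             # 32 * 5 = 160 steps

print(iterations_per_epoch, total_iterations)  # 32 160
```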

VERBOSE

This is a parameter of the fit function in Keras used to control what type of output (progress information) is shown during training.

LEARNER

This is a class used in FASTAI to gather all the hyperparameters into one object, which is then able to find further hyperparameters for us, such as the best learning rate.
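A rough sketch of a fastai Learner is shown below. The exact constructor name has changed across fastai versions (cnn_learner in earlier fastai 2.x, vision_learner in later releases), and the data path is made up, so treat this as an assumption about the API rather than copy-paste code:

```python
from fastai.vision.all import *

# Assume images are laid out as data/dog/*.jpg and data/cat/*.jpg.
path = Path("data")
dls = ImageDataLoaders.from_folder(path, valid_pct=0.2, item_tfms=Resize(224))

# The Learner bundles the data, architecture, loss and metrics in one object...
learn = vision_learner(dls, resnet34, metrics=accuracy)

# ...and can then suggest further hyperparameters, e.g. a good learning rate.
learn.lr_find()
learn.fine_tune(3)
```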

CONCLUSION

In this post we have seen the different keywords and terms used in deep learning. I hope this post gives you some insight into deep learning.

 



Taher Ali Badnawarwala

Taher Ali is driven to create something special. He loves swimming, family and AI from the depth of his heart, and he loves to write and make videos about AI and its usage.

