Deep learning is a broad topic, and for newcomers and beginners it is very hard to understand the basic terms and keywords, because they have never heard them before and the plain English meanings of the words are not much help. In this post I will try to explain those terms and give you a basic idea of what they mean.
A dataset is a set of data in which both the input data and the labels are combined together.
The training dataset is the dataset in which the input data and labels are bunched together as one; you train your model from this dataset, so that your model learns its weights from the inputs and labels.
The validation dataset is sometimes created from the training dataset and sometimes provided separately by the user. This dataset is used to verify and check our model, that is, how much accuracy our model achieves when predicting unseen data.
The test dataset is used to test our model's efficiency and accuracy before deploying it to production. The test dataset verifies our model's learning and gives us confidence in our model. It should contain examples of all the kinds of input the model can face in the future.
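The three splits described above can be sketched in a few lines of plain Python. The 70/15/15 ratio and the fake data here are assumptions for illustration only, not a fixed rule.

```python
# A minimal sketch of splitting a labelled dataset into training,
# validation and test sets.
import random

def split_dataset(samples, train_frac=0.70, val_frac=0.15, seed=42):
    """Shuffle and split a list of (input, label) pairs."""
    samples = samples[:]                   # copy so we don't mutate the caller's list
    random.Random(seed).shuffle(samples)   # deterministic shuffle for reproducibility
    n = len(samples)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    train = samples[:n_train]
    val = samples[n_train:n_train + n_val]
    test = samples[n_train + n_val:]       # the remainder becomes the test set
    return train, val, test

# Usage: 100 fake (input, label) pairs
data = [(i, i % 2) for i in range(100)]
train, val, test = split_dataset(data)
print(len(train), len(val), len(test))  # 70 15 15
```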
The loss function is used to calculate the loss, meaning how far our model's prediction is from the actual value. Say we have a regression model which outputs a number: if the actual label is 12 and the model outputs 11, the loss function tells us the difference, 12 - 11 = 1. This is a very simple example; in real life it doesn't happen exactly like this, and we use other loss functions to calculate the loss, but this is the general idea.
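One of those "other loss functions" for regression is mean squared error (MSE). Here is a minimal sketch, using the post's example numbers (12 actual, 11 predicted) plus one made-up pair:

```python
# Mean squared error: the average of the squared differences
# between actual and predicted values.
def mse(y_true, y_pred):
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

print(mse([12.0], [11.0]))            # 1.0  (the example from the text)
print(mse([12.0, 3.0], [11.0, 5.0]))  # (1 + 4) / 2 = 2.5
```

Squaring the differences penalizes large mistakes more heavily than small ones, which is why MSE is preferred over a plain difference in practice.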
The optimizer is used to learn the weights of the model. These weights are learned through the loss function: we differentiate the loss function to calculate a gradient, and through that gradient the optimizer increases or decreases the weights for us. We use many different optimizers, such as Adam, Stochastic Gradient Descent (SGD), Adagrad, etc.
The learning rate is a number used while learning the weights of the model. We subtract from the current weights the learning rate multiplied by the derivative (gradient), which is calculated by differentiating the loss function: new weight = old weight - learning rate * gradient.
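The update rule above can be run end to end on a single weight. The loss here, (w - 3)^2, is an assumption chosen only so the best weight (3.0) is known in advance:

```python
# One-variable gradient descent: repeatedly apply
# w = w - learning_rate * gradient until the loss is minimised.
def gradient(w):
    """Derivative of the loss (w - 3)^2 with respect to w."""
    return 2 * (w - 3)

w = 0.0               # initial weight
learning_rate = 0.1
for step in range(100):
    w = w - learning_rate * gradient(w)   # the optimizer's update rule

print(round(w, 4))    # converges close to 3.0
```

Try setting `learning_rate` to 1.5: the weight overshoots and diverges, which shows why picking a sensible learning rate matters.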
The architecture is the combination of layers used to ease the learning process of a deep learning model. The sequence in which the layers are put together also affects the model's efficiency and learning power. Popular architectures which we use in our deep learning models are ResNet-34, Inception, DenseNet, U-Net and many more.
Data pipelining is a term usually used to describe the process of supplying data to the model. No data is so clean that we can apply it to the model directly: we first need to clean and format the data into a certain shape so that it can be easily consumed by our model (architecture), and sometimes we also need to modify the data a little before putting it into the architecture. This process of cleaning, formatting and modifying data is called data pipelining.
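A pipeline is just a chain of such steps run in order. The two steps below (dropping missing values, normalizing to the range [0, 1]) are illustrative assumptions, not a fixed recipe:

```python
# A minimal data pipeline: cleaning, then formatting, then the
# result is what the model actually consumes.
def drop_missing(values):
    """Cleaning step: remove missing entries."""
    return [v for v in values if v is not None]

def normalize(values):
    """Formatting step: scale values into the range [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def pipeline(raw):
    """Run each step in order."""
    return normalize(drop_missing(raw))

raw = [10, None, 20, 30, None, 40]
print(pipeline(raw))  # [0.0, 0.333..., 0.666..., 1.0]
```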
Labels, y, dependent variable: all of these terms are used interchangeably. They always refer to the known answers which we input into our model at training time so that the model can learn. Say we want a dog-and-cat classifier and we have 1000 dog images and 1000 cat images: the 1000 dog images are stored in a folder named dog, and the 1000 cat images are stored in a cat folder. The deep learning model will treat these folder names as labels, which gives it the understanding that all of the pictures in the dog folder are dogs and have to be learned as dogs. "Dog" is the label here.
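The folder-name-as-label idea can be shown with plain Python. The tiny dog/cat folder layout below is created in a temporary directory purely for illustration:

```python
# Derive each image's label from the name of its parent folder.
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as root:
    # Build a tiny "dataset": images live in a folder named after their class.
    for cls in ("dog", "cat"):
        d = Path(root) / cls
        d.mkdir()
        for i in range(3):
            (d / f"{cls}_{i}.jpg").touch()   # empty placeholder files

    # The label of each file is simply its parent folder's name.
    labelled = sorted((p.name, p.parent.name) for p in Path(root).glob("*/*.jpg"))
    print(labelled)
    # [('cat_0.jpg', 'cat'), ('cat_1.jpg', 'cat'), ('cat_2.jpg', 'cat'),
    #  ('dog_0.jpg', 'dog'), ('dog_1.jpg', 'dog'), ('dog_2.jpg', 'dog')]
```

This is essentially what image-loading utilities in deep learning libraries do when you point them at a folder of class subfolders.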
Features, x, independent variable: all of these terms are used to describe the input. The input is always divided into features. Every input, whether image, audio, video, text or tabular data, has features. In an image, each pixel is a feature; likewise, in tabular data each column is a feature, independent of the other columns. That is why the input is called the independent variable, or x. The dependent variable depends on these independent variables, or features.
compile is the function used in Keras to merge hyperparameters like the loss function, optimizer and metrics and build the computation graph on the backend, which can be any supported framework (TensorFlow, Theano, CNTK, MXNet). This function in Keras takes the pain of creating the graph away from us and compiles everything properly.
fit is the function used in Keras and fastai to train our architecture, given the epochs, batch size and the X, y input data. It times our model and shows us the losses and the metric values which we have passed to it.
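In Keras the two functions are typically used back to back. This is a minimal sketch, assuming TensorFlow/Keras is installed; the tiny one-layer model and the random data are made up purely for illustration:

```python
# compile() merges loss, optimizer and metrics; fit() then trains the model.
import numpy as np
from tensorflow import keras

# Fake tabular data: 100 samples with 4 features each, binary labels.
X = np.random.rand(100, 4)
y = np.random.randint(0, 2, size=(100,))

model = keras.Sequential([
    keras.Input(shape=(4,)),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])

# compile: bring the hyperparameters together and build the graph.
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# fit: train for 2 epochs in batches of 10, recording loss and metrics.
history = model.fit(X, y, epochs=2, batch_size=10, verbose=0)
print(sorted(history.history.keys()))
```

The returned `history` object holds the loss and metric values per epoch, which is what Keras prints during training.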
The data is divided into batches; after seeing one batch of inputs we do backpropagation (compute derivatives). When all batches have passed through the model once, that is, when the full dataset has been traversed one time, it is called an epoch. So if we train for 5 epochs, the full dataset is traversed 5 times: the model will see the data five times.
Batch size is the number of inputs we group together, so that after receiving this full group the derivative computation can be done. It bunches that number of inputs into one group.
Iterations is the number of steps taken to complete one full epoch.
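Epochs, batch size and iterations tie together in one formula: iterations per epoch = ceil(number of samples / batch size). The sample counts below are made up for illustration:

```python
# Number of batches (steps) needed to traverse the full dataset once.
import math

def iterations_per_epoch(n_samples, batch_size):
    return math.ceil(n_samples / batch_size)

n_samples, batch_size, epochs = 1000, 32, 5
steps = iterations_per_epoch(n_samples, batch_size)
print(steps)           # 32 batches per epoch (the last batch is smaller)
print(steps * epochs)  # 160 total steps over 5 epochs
```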
Metrics are defined in Keras and reported by the fit function so we can see different types of output during training.
The Learner is used in fastai to gather all the hyperparameters into one class, and through this class it can find further hyperparameters for us, such as the best learning rate.
In this post we have seen different types of keywords and terms used in deep learning. I hope this post gives you some insight into deep learning.