
PyTorch Tutorial 06 – Training Pipeline: Model, Loss, and Optimizer

Python Engineer



Transcript:

Hi everybody, and welcome back to a new PyTorch tutorial. In the last tutorial we implemented linear regression from scratch and then learned how we can use PyTorch to calculate the gradients for us with backpropagation. Now we continue where we left off: we replace the manually computed loss and parameter updates with the loss and optimizer classes in PyTorch, and then we also replace the manually computed model prediction with a PyTorch model. After that, PyTorch can handle the complete pipeline for us. This video covers steps 3 and 4, so please watch the previous tutorial first to see steps 1 and 2.

First, let's talk about the general training pipeline in PyTorch. Typically there are three steps. The first step is to design the model: we decide the number of inputs and outputs (the input size and output size) and design the forward pass with all the different operations or layers. The second step is to construct the loss and the optimizer. The last step is the training loop: we do the forward pass to compute the prediction, then the backward pass to get the gradients (PyTorch computes these for us; we only have to design the model), and then we update the weights. We iterate this a couple of times until we are done, and that's the whole pipeline.

Now let's replace the loss and the optimization. For this we import the neural network module, import torch.nn as nn, so we can use some functions from it. We don't want to define the loss manually anymore, so we can simply delete that code, and before the training we define our loss as loss = nn.MSELoss(), which is exactly what we implemented before: the mean squared error. This is a callable function. We also want an optimizer from PyTorch, so we say optimizer = torch.optim.SGD(...) from the optimization module, where SGD stands for stochastic gradient descent. It needs the parameters it should optimize, passed as a list, so we put our w here, and it also needs lr, the learning rate, which is our previously defined learning rate. In the training loop, the loss computation stays the same because the loss is a callable that gets the actual y and the predicted y. We don't need to update our weights manually anymore: we simply say optimizer.step(), which performs one optimization step, and then we still have to empty our gradients after the optimization step with optimizer.zero_grad(). Now we are done with step 3, so let's run this to see if it works. Yes, it's still working; our prediction is good after the training.
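For reference, here is a minimal sketch of step 3 as described above, assuming the same toy data as the previous tutorial (inputs 1, 2, 3, 4 with targets 2, 4, 6, 8) and the manual forward function w * x; names like n_iters are my own choices, not from the video:

```python
import torch
import torch.nn as nn

# Toy data from the previous tutorial: y = 2 * x
X = torch.tensor([1, 2, 3, 4], dtype=torch.float32)
Y = torch.tensor([2, 4, 6, 8], dtype=torch.float32)

w = torch.tensor(0.0, dtype=torch.float32, requires_grad=True)

def forward(x):
    # the model prediction is still computed manually in step 3
    return w * x

learning_rate = 0.01
n_iters = 100

loss = nn.MSELoss()                                  # replaces the manual loss
optimizer = torch.optim.SGD([w], lr=learning_rate)   # replaces the manual update

for epoch in range(n_iters):
    y_pred = forward(X)    # forward pass: compute the prediction
    l = loss(Y, y_pred)    # callable loss: actual y, predicted y
    l.backward()           # backward pass: compute the gradient dl/dw
    optimizer.step()       # update the weights
    optimizer.zero_grad()  # empty the gradients for the next iteration

print(f'Prediction after training: f(5) = {forward(5):.3f}')
```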
Now let's continue with step 4 and replace our manually implemented forward method with a PyTorch model. For this we don't need our weight tensor anymore, because the PyTorch model knows its own parameters; here we say model = nn.Linear. Usually we have to design the model ourselves, but since linear regression is very trivial (it's only one layer), it is already provided in PyTorch as nn.Linear, and it needs an input size and an output size for our features. For this we need some modifications: X and y now need a different shape, namely a 2D array where the number of rows is the number of samples and each row holds the features of one sample. So X gets a new shape, and the same for y: 2, 4, 6, and 8, each as its own row. Now let's get the shape. We have to be careful here: we say n_samples, n_features = X.shape and print both numbers. The shape is now 4 by 1, so we have four samples and one feature per sample. Now we define our model, which needs an input and an output size: input_size equals the number of features, and output_size is also the number of features, so it's 1 as the input size and 1 as the output size, and we pass both to the model. When we want a prediction, we simply call the model, but it can no longer take a plain float value; it must be a tensor. So let's create a test tensor, X_test = torch.tensor(...), with only one sample, the value 5, and a data type of torch.float32. We pass in this test sample, and since it has only one value we can call .item() to get the actual float value. Now let's copy and paste this below the training, and we also have to modify our optimizer: we don't have our weights anymore, so instead of the list with the parameters we simply say model.parameters() and call this function. For the prediction we also simply call the model, so now we are using the PyTorch model everywhere. If we want to print the weight during training, we have to unpack the parameters, [w, b] = model.parameters(), where b is an optional bias. The weight is a list of lists, so we take the first entry to get the actual first weight, and we can call .item() because we don't want to see the tensor. Now I think we are done, so let's run this to see if it works. The final output is not perfect; this might be because the initialization is now random and this optimizer technique might be slightly different, so you might want to play around with the learning rate and the number of iterations, but basically it works and gets better and better with every step. And that's how we construct the whole training pipeline.
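Putting step 4 together, a minimal sketch might look like the following (the names X_test, n_samples, and n_iters are my own choices, and the printout format is an assumption, not the exact output from the video):

```python
import torch
import torch.nn as nn

# 2D data now: rows are samples, columns are features
X = torch.tensor([[1], [2], [3], [4]], dtype=torch.float32)
Y = torch.tensor([[2], [4], [6], [8]], dtype=torch.float32)

n_samples, n_features = X.shape
print(n_samples, n_features)  # 4 1

input_size = n_features   # 1
output_size = n_features  # 1
model = nn.Linear(input_size, output_size)  # replaces the manual forward

X_test = torch.tensor([5], dtype=torch.float32)
print(f'Prediction before training: f(5) = {model(X_test).item():.3f}')

learning_rate = 0.01
n_iters = 100

loss = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)

for epoch in range(n_iters):
    y_pred = model(X)      # forward pass now just calls the model
    l = loss(Y, y_pred)
    l.backward()
    optimizer.step()
    optimizer.zero_grad()

    if epoch % 10 == 0:
        [w, b] = model.parameters()  # unpack weight and optional bias
        print(f'epoch {epoch + 1}: w = {w[0][0].item():.3f}, loss = {l.item():.8f}')

print(f'Prediction after training: f(5) = {model(X_test).item():.3f}')
```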
One more thing. In this case we didn't have to come up with the model ourselves, because we only had one layer and this was already provided in PyTorch. But let's say we need a custom model, for example a custom linear regression model. Then we have to derive it from nn.Module, and it gets an __init__ method that takes self, the input dimension, and the output dimension. In it we call the super constructor, super(LinearRegression, self).__init__(), and we define our layers: in this case we say self.linear = nn.Linear(input_dim, output_dim), so we store the layer there. Then we also have to implement the forward pass in our model class, forward(self, x), where we can simply return self.linear(x). That's the whole thing, and now we can say model = LinearRegression(input_size, output_size), and this will do the same thing. It is just a dummy example, because the class is a simple wrapper that does exactly the same as nn.Linear, but basically this is how we design a PyTorch model (a compact sketch of the class follows below). So let's comment the old line out and use this class to see if it works, and yes, it's working. That's all for now. PyTorch can do most of the work for us; of course, we still have to design our model and know which loss and optimizer we want to use, but we don't have to worry about the underlying algorithms anymore. You can find all the code on GitHub, and if you like this, please subscribe to the channel. See you next time, bye!
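For reference, here is the compact sketch of the custom model class described in the transcript (argument names like input_dim follow the spoken description; the trailing usage lines are an assumption about the surrounding script):

```python
import torch.nn as nn

class LinearRegression(nn.Module):
    def __init__(self, input_dim, output_dim):
        # call the super constructor
        super(LinearRegression, self).__init__()
        # define the layers
        self.linear = nn.Linear(input_dim, output_dim)

    def forward(self, x):
        return self.linear(x)

input_size, output_size = 1, 1                      # one feature in, one value out
model = LinearRegression(input_size, output_size)   # drop-in for nn.Linear(1, 1)
```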
