Hello, everybody! Welcome to another video. In this one we're going to be talking about the super fun deep learning framework called PyTorch. What we're going to do is define a simple model, say what the forward pass looks like, then give it a loss function and an optimizer, and then run a training loop. So let's just get started. First, let's import some libraries. We're going to need torch, and we're going to need torch.nn, which we'll use under the nn alias. Then we'll probably need pandas to import some data, and let's also get numpy as well, just in case we need it. I'll probably plot something too; maybe we'll do more of that in the next video. Okay, let me show you what we're working with here. I've got this file: just a basic CSV file with a years-of-experience column and a salary column, showing the relationship between how much experience you have and what your salary should be, according to how long you've worked in a certain industry. So we're going to train a model to find this relationship and then predict what your salary should be based on how much experience you have. It's a very simple model. I'm not going to have any hidden layers or anything, and many of you are going to say, "Oh, what a waste of time, you don't even need a neural network for this." I guess it kind of defeats the purpose of a deep neural network if you don't have any nonlinearities or many layers, but that's okay; this is just so we can learn how PyTorch works. So let's go ahead and keep going. The first thing we'll do is import the data, so we'll say dataset equals pd.read_csv, and I have the file in the same folder.
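The setup so far (the imports plus the pd.read_csv call) might look like this sketch. The filename and column names are assumptions, and the fallback builds an equivalent two-column frame with made-up numbers so the sketch runs even without the file:

```python
import torch
import torch.nn as nn   # the nn alias mentioned in the video
import pandas as pd
import numpy as np

# The video loads a local CSV; "salaries.csv" and the column names are
# assumptions. The fallback frame is illustrative stand-in data.
try:
    dataset = pd.read_csv("salaries.csv")
except FileNotFoundError:
    dataset = pd.DataFrame({
        "YearsExperience": [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0],
        "Salary": [45000.0, 52000.0, 61000.0, 69000.0, 78000.0, 91000.0, 98000.0],
    })

print(dataset.head())
```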
It's called salaries.csv. Then what we can do is extract our columns. I'll call the first one x_temp, because I'm going to be doing stuff to it before I have the real X that I want. So for now, x_temp equals dataset.iloc: we want all the rows, and every column up to (but not including) the last one, and then the .values of that. This is our independent variable, which in this case is years of experience. Then y_temp, which is what we're going to be predicting later, is all the rows of the last column. We only have two columns, so this is just the salary column: the salaries we want to predict. Okay, so we've got our Xs and Ys. I'm moving a little fast; I apologize. Maybe you want to watch this at a slower speed. The next thing we can do is turn these into torch tensors, which are the matrix-like data structures we use in PyTorch, just like in TensorFlow, actually. So we'll say torch.FloatTensor, and actually I think you could probably get away with just torch.tensor, which defaults to a float tensor, but we're going to say FloatTensor and pass in x_temp, so we're converting it. Then we'll do the same for our Y variable: y_train equals torch.FloatTensor. And keep in mind, I am NOT going to be splitting this into a training set and a test set, just because it's a simple example. Yes, this is probably bad practice; you want a test set so that later you can see if it works. But we only have a couple of data points, and I'll be able to see whether it's correctly fitting the line. We don't need a test set right now. So let's keep going. I could run this now and show you; let's do it.
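The column extraction and tensor conversion described above might look like this sketch (the DataFrame here is illustrative stand-in data, and the column names are assumptions):

```python
import pandas as pd
import torch

# Stand-in for the salaries data; names and values are assumptions.
dataset = pd.DataFrame({
    "YearsExperience": [1.0, 2.0, 3.0, 4.0],
    "Salary": [45000.0, 52000.0, 61000.0, 69000.0],
})

# All rows, every column except the last -> independent variable.
x_temp = dataset.iloc[:, :-1].values
# All rows, the last column (kept 2-D so it matches the model output).
y_temp = dataset.iloc[:, -1:].values

# Convert the NumPy arrays into float32 tensors for PyTorch.
x_train = torch.FloatTensor(x_temp)
y_train = torch.FloatTensor(y_temp)
print(x_train.shape, y_train.shape)  # torch.Size([4, 1]) for both
```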
We'll run the script with Python and actually print something out, so let's do print(x_train). Mmm, warming up; sorry, come on, computer. Let me pause here. Okay, there we go. Now you can see there's our tensor of our X column, the years of experience; this is the independent variable. The same would happen if we printed Y: you would get the salary data. So now what we do is define the model architecture. I'm hoping you have a little bit of knowledge of neural networks; basically, you have all these layers, and that's what we're going to define here. So we'll say class Model, and we're going to inherit from the nn.Module class from PyTorch so that we can use all its cool functions. Then we'll create an __init__, and we're going to have to do one of those super().__init__() calls so that everything is okay in the method resolution order and there are no conflicts. Then we'll say self.linear, and this is just the name of our layer; you can call it whatever you want. We're just going to have a simple linear layer, so I'm going to say nn.Linear, and that's it for the structure of our model. I'm probably going to get complaints, and that's okay, from people saying, "Hey, this totally defeats the purpose of a neural network." You know, we're just trying to see the PyTorch flow here. You could easily add more layers, and after that add some nonlinearities, but let's just do this for now. Okay, so after this, we say what our forward pass looks like. I'll say def forward(self, x), and we'll say that our prediction comes from calling the linear layer on the input, and then we just return that y_pred. Okay, cool. I guess I kind of glossed over this, so I should tell you: here we're basically saying we're going to take one feature in and output one feature.
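Putting the model definition just described into code, a minimal sketch might be:

```python
import torch
import torch.nn as nn

class Model(nn.Module):
    def __init__(self):
        super().__init__()             # keeps the method resolution order happy
        self.linear = nn.Linear(1, 1)  # one feature in, one feature out

    def forward(self, x):
        y_pred = self.linear(x)        # the entire forward pass: one linear layer
        return y_pred

model = Model()
out = model(torch.FloatTensor([[6.0]]))
print(out.shape)  # torch.Size([1, 1])
```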
You're putting one feature in and getting one feature out. Okay, so we've got our forward pass defined; very simple. Like I said, you could import torch.nn.functional as F, I believe, and add a ReLU or a sigmoid or something, depending on the problem you're solving. So let's continue. We'll say our model is going to be an instance of Model, and then what we can do is define our loss function and the optimizer that we'll use: basically, the flavor of gradient descent. So we'll say loss_func equals nn.MSELoss(); we'll use mean squared error loss, and that's it. Then we'll set up the optimizer by calling torch.optim. By the way, I think it's PyTorch convention to call the loss something like "criterion"; I forget, so read the docs and check whether this should be loss function or criterion. Maybe it's criterion. Anyway, I'll just call it loss function for now. So we'll say optimizer equals torch.optim.SGD, and we're going to pass in model.parameters(), which is where all of our weights are held. Basically, when you create the optimizer, you pass in all of your model's learnable parameters, and we're handing them to the optimizer, which in this case is stochastic gradient descent. You could actually look at all those parameters if you want, but if your model is big enough, there are going to be thousands, hundreds of thousands, even millions of them. You get .parameters() from the nn.Module that you subclassed here. We're also going to need to pass in a learning rate, so we'll just say lr equals 0.001, just to make sure it works and we don't get infinite values or NaNs or something. After that, we can define our training loop, which is a really explicit way
of setting up your whole training process; I like that about PyTorch. So we'll say: for epoch in range, and let's just do 200 for now; obviously, the longer you run this, the better it will approximate. We'll say y_pred equals model(x_train), and our loss will be the loss function we defined, passing in y_pred, which we just calculated, and y_train, which we're comparing it to. So we did our forward pass and we got our loss, basically measuring what we got against what it really should be. And then, this is the cool part: you can use the backward() function to compute the gradients using the chain rule. This is a really cool function in PyTorch which makes things so easy, because it's basically doing all the calculations you need, all the way down the line, with just this one call. Also, it's recommended that you call optimizer.zero_grad() so that the gradients don't, what's the word, accumulate. So you do that, then you run backward(), and after that you can update the weights. And I called this opt, not optimizer; let's just call it optimizer, why am I being fancy here? Then we'll say optimizer.step() to do the update. Okay, and that's it. Now let's run this and see if I get any errors. Going into my handy-dandy console here, let's run this and see what happens. Okay: nothing, no errors, awesome! Now let's actually see if it worked. Let me open up Excel here and see what we've got. Hmm, okay: years of experience and salary. All right, so let's say we want to estimate, for six years of experience, what your salary should be, which in this case is around ninety-something thousand.
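Putting the loss function, the optimizer, and the training loop together, a minimal runnable sketch might look like this. The data here is tiny stand-in data (rescaled so SGD with a small learning rate converges quickly), not the video's salary file:

```python
import torch
import torch.nn as nn

# Illustrative stand-in data: y is roughly 1.0 * x + 3.0.
x_train = torch.FloatTensor([[1.0], [2.0], [3.0], [4.0], [5.0]])
y_train = torch.FloatTensor([[4.0], [5.0], [6.0], [7.0], [8.0]])

model = nn.Linear(1, 1)            # stands in for the Model class from earlier
loss_func = nn.MSELoss()           # mean squared error; often named `criterion`
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

losses = []
for epoch in range(200):
    y_pred = model(x_train)             # forward pass
    loss = loss_func(y_pred, y_train)   # compare prediction to target
    losses.append(loss.item())
    optimizer.zero_grad()               # clear old gradients so they don't accumulate
    loss.backward()                     # backprop: chain rule all the way down
    optimizer.step()                    # update the weights

print(losses[0], "->", losses[-1])  # the loss should shrink over training
```

Note the order inside the loop: zero the gradients, then call backward(), then step; skipping zero_grad() would make gradients from earlier iterations pile up.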
Okay, so what we're going to do is create some dummy data. We're going to say test_exp equals torch.FloatTensor, and what did we say, six? So let's do 6.0. Then we can just print this out, making it really explicit, so we're saying: if you have six years experience, salary is... Now we'll call model, passing in test_exp, and to get the value we want, we have to index into the result. I'll say [0][0], using just simple Python indexing notation, and then .item() to get the actual data in there. So let's see what it says. You know what, let's print the loss so that you can actually see what's happening at each iteration. What I'm going to do, right after the loss here, is print loss.data, and we'll see what's in there; hopefully, if we did something right, it should be getting smaller. Hmm, so here we see, let's count: nine digits, so the loss is huge. And is it getting smaller? I think it is. Okay, we didn't get a good result, so let's do more iterations and see what happens, say 500. And what were we trying to get again? If you were doing this for real, you would definitely want a test set to actually test it, but here we're just testing manually: six years should be ninety-something thousand, in the 90s. So let's do 500 and see. How cool is it that you can just change the number of iterations and train some more? Okay, so not beautiful, but what I kind of wanted to show is just how the PyTorch flow works. You could have just fit a line with scikit-learn or something; you don't need PyTorch for this. But anyway, it's fun. Let's just see one more thing right now,
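The prediction step just described might be sketched like this. Since there is no trained model in this snippet, the weights are set by hand to an illustrative fit (salary = 10000 * years + 30000 is an assumption, not the video's actual learned values):

```python
import torch
import torch.nn as nn

# Hand-set weights stand in for the trained model.
model = nn.Linear(1, 1)
with torch.no_grad():
    model.weight.fill_(10000.0)
    model.bias.fill_(30000.0)

test_exp = torch.FloatTensor([[6.0]])   # six years of experience as dummy data
salary = model(test_exp)[0][0].item()   # [0][0] indexes into the (1, 1) output
print(f"if you have 6 years experience, salary is {salary:.0f}")
```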
just for the heck of it. I have matplotlib.pyplot imported, and what I want to see is what this loss looks like. So I'm going to make a scatter plot and pass in the epoch and the loss; let's also make it red (these are parameters that pyplot takes) and give it a marker. I think this should work; I just want to plot the loss as it decreases. Then we'll just do plt.show(). Yeah, let's try that. Okay, there we go: as you can see, as more epochs happen, the loss decreases. Let's make this more explicit so it makes sense: plt.xlabel for the epochs, and we'll label the y axis too. Let me see if this changes anything. Okay, all right, so there we go: as epochs happen, as more training happens, you can see the loss decreases, until at some point it starts slowing down and it doesn't make sense to do more epochs. That's why, if I did a thousand here, I'd probably still get a pretty similar result. So anyway, guys, that's it: a totally unnecessary simple regression in PyTorch. I hope you learned something. If I missed something totally fundamental and screwed something up, which is a hundred percent possible, let me know and please correct me; I'm still learning PyTorch. I really like it, though. Give it a look, and check out the docs; they're pretty good. All right, thank you for watching, catch you next time. Bye!
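The loss plot described above might be sketched like this, assuming matplotlib is installed. The training data is again an illustrative stand-in, and savefig replaces the video's plt.show() so the sketch can run headless:

```python
import torch
import torch.nn as nn
import matplotlib
matplotlib.use("Agg")   # non-interactive backend; the video uses plt.show()
import matplotlib.pyplot as plt

# Re-run a small training loop on stand-in data while recording the loss.
x_train = torch.FloatTensor([[1.0], [2.0], [3.0], [4.0]])
y_train = torch.FloatTensor([[2.0], [4.0], [6.0], [8.0]])

model = nn.Linear(1, 1)
loss_func = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

losses = []
for epoch in range(200):
    loss = loss_func(model(x_train), y_train)
    losses.append(loss.item())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Scatter the per-epoch loss, red with a dot marker, and label the axes.
plt.scatter(range(len(losses)), losses, c="red", marker=".")
plt.xlabel("epochs")
plt.ylabel("loss")
plt.savefig("loss_curve.png")   # plt.show() in an interactive session

print(losses[0], "->", losses[-1])
```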