We're online, welcome back. Today I'm going to try to finish this CIFAR-10 exercise. It's part of the PyTorch Udacity scholarship I got about two or three weeks ago, so we're dealing with convolutional neural networks. If any of you have been following me, I've been learning about deep learning for the past six months or so. Yeah, it gives me a headache. Right now I'm trying to understand, and to share with you, what I'm learning as we go, so bear with me. The deep learning network is called a CNN. A CNN is a network in deep learning that's used to classify images, especially rich real-world images. CIFAR-10 is a famous database, and we're trying to understand how these things work, okay? So, in this dataset there are small color images, I think 32 by 32 pixels, and it has ten classes. These are the classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck. That's ten: one, two, three, four, five, six, seven, eight, nine, ten. Okay, cool. So this is a multi-class classification problem. Oh yeah, by the way, this uses PyTorch, a free deep learning framework from Facebook. It's kind of a competitor of TensorFlow, which is by Google, and it's gaining ground over TensorFlow from what I've heard from people who've been using it; they say it's a more flexible framework than TensorFlow. But TensorFlow is going to release TensorFlow 2.0, I think; I'm not sure if it's already released, but they've already announced it, so it's coming soon, and it's supposed to be just as flexible, or even better. Anyway, we're using PyTorch; it's called PyTorch because it uses a Python-like syntax.
OK, so next: CUDA. CUDA is a parallel computing platform developed by NVIDIA. When you train deep learning models on big datasets, it's very data-intensive, so parallel computing is kind of necessary; otherwise we're going to wait for, I don't know, days or weeks. This is where the GPU shines. We have 32 by 32 pixel images, times 3, because they're color images, and color images have 3 channels: red, green, and blue. Okay, so as usual, here we import torch, which is PyTorch, and import numpy as np, and we want to check if CUDA is available with torch.cuda.is_available(). Hold on, I think I can't run this yet; it's going to throw an error. I have to pip install torch. This is the problem with Google Colab: it's free, but PyTorch is not installed by default. Sometimes I prefer to use a Kaggle kernel, because in a Kaggle kernel you don't have to install PyTorch and the other libraries; everything is already pre-installed. But okay, let's see how far this goes; otherwise we might have to switch to a Kaggle kernel. I can fire that up in the background anyway, since we're waiting. Sorry, guys. Okay, so over here we're just asking whether CUDA is available, and it returns a boolean, true or false, and then we'll train on the GPU if CUDA is available; if not, we won't. We'll run this, and it says CUDA is available. So if you print this, for example, print train_on_gpu...
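As a minimal sketch, the CUDA check described here looks like this (the `device` variable is my own addition for convenience; the notebook itself only keeps the boolean):

```python
import torch

# check whether a CUDA-capable GPU is available
train_on_gpu = torch.cuda.is_available()

# a convenient device handle for moving tensors/models later
device = torch.device("cuda" if train_on_gpu else "cpu")

print("Train on GPU:", train_on_gpu)
```

On Colab with a GPU runtime this prints `Train on GPU: True`; on a CPU-only machine it prints `False` and everything simply runs on the CPU.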
This should give true. There we go, true. Okay, so now we're loading the data. Sorry for the noises; I'm in a library and somebody's, like, screaming. Okay, so downloading will take a minute. I think I have to install torchvision too, because if I run this, it won't work; it says the import failed due to a missing package, so you need to install it. I'll do it one more time over here. So over here we already imported torch, which is the core PyTorch package, but since we're using torchvision, which is a deep learning library especially for vision, we have to do pip install torchvision. Yeah, this is the problem with Google Colab: you just have to keep doing this because it's not installed by default. Okay, anyway, it's quite fast. Now you can run this, and it should work, so now it's downloading the CIFAR-10 data from the cs.toronto.edu site. So let's go through this. This is just importing a bunch of packages: datasets and transforms from torchvision, and then SubsetRandomSampler. We'll see where this is going. Okay, so it's downloaded. So now we define some parameters. num_workers is 0; it says that's the number of subprocesses to use for data loading; I'm not sure what that means. batch_size is how many samples per batch to load; the batch size is 20, as I understand it. The validation size is 0.2, which is 20%, so the training set will be 80%. And then we define a transform here. The transform is normalization, a pre-processing stage for your data. You get this input image, which is 32 by 32 pixels with three channels, red, green, and blue, because it's color. You can't just feed this raw image into the model, into the network.
The computer doesn't understand it, and even if it did, if you feed it in as-is, it will take a long time to train, because the input data isn't efficient. So you need to make it more efficient; we call that normalization. So, to convert the data to normalized torch float tensors, what we do is call transforms.Compose, and inside Compose we pass these steps: first transforms.ToTensor(), which converts the image into a tensor, and then transforms.Normalize, with parameters for whatever data we're going to feed in, the 32 by 32 pixel images. The first (0.5, 0.5, 0.5) refers to the mean, and the second refers to the standard deviation. Why three values? Because there are three channels, one each for red, green, and blue, and here they just make them all the same. Why 0.5? I think it's kind of a standard procedure: you want to normalize in such a way that, whatever input x you put in, you compute (x minus 0.5) divided by 0.5; that's (x minus mean) divided by standard deviation. We can actually take a look at this transform; let's print(transform) and see. Okay, so if you print the transform, you can see all of these things: it's an object, or rather a callable, that basically, inside, converts whatever x you pass to a tensor and then normalizes it using these parameters, the mean and the standard deviation. Okay, so there's that, and then we put the training data into this variable by calling datasets.
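To see what Normalize does per pixel, here's a toy check of the same arithmetic, assuming (as ToTensor guarantees) that pixel values have already been scaled to [0, 1]:

```python
def normalize(x, mean=0.5, std=0.5):
    # same per-channel arithmetic as transforms.Normalize: (x - mean) / std
    return (x - mean) / std

# ToTensor scales pixels to [0, 1]; Normalize then maps them to [-1, 1]
print(normalize(0.0))  # -1.0 (black pixel)
print(normalize(1.0))  #  1.0 (white pixel)
print(normalize(0.5))  #  0.0 (mid gray)
```

So after the full transform, every channel value ends up in the range [-1, 1], which matches the values we'll see when we print the tensors later.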
datasets is one of the packages we just imported from torchvision, and we call datasets.CIFAR10 because we're using CIFAR-10. The parameters are: 'data', which is the directory where the data goes; train=True; download=True; and transform=transform. So the transform argument here equals the transform we defined above; we're applying all of those steps to the CIFAR-10 data. And then we do the same thing for the test data. Right, so: obtain the training indices that will be used for validation. So you want to compute how big the training data is; we can actually print this one, so let's take a look at len(train_data). We have 50,000 training samples, okay? And then num_train = len(train_data) and indices = list(range(num_train)). What is that? Okay, wow, so this is the index of all of those 50,000. And then you want to shuffle that with np.random.shuffle(indices). And then split: split is int(np.floor(valid_size * num_train)), where num_train is 50,000 and valid_size is 0.2. So you have 10,000, which is 20% of 50,000, right? I didn't know what np.floor does, so let's see. np.floor: "Return the floor of the input, element-wise. The floor of the scalar x is the largest integer i, such that i <= x." So floor(-2.5) is -3, floor(-2) is -2, floor(1.7) is 1: okay, so you kind of round down to the nearest integer.
Okay, now I know what it's trying to do: basically it's taking out all the decimals and giving you a whole number. "Return the floor of the input, element-wise; the floor of the scalar x is the largest integer i such that i <= x." Okay. And int() just converts a number or string to an integer. I don't know why you'd have to do that; I think the floor is effectively an integer anyway. Anyway, anyway: this gives us the train index and the validation index. train_idx is indices[split:], from position 10,000 onwards, which is basically the 40,000, and valid_idx is indices[:split], from 0 to 10,000. Yeah, okay. And this list, I think, has already been randomized, already shuffled; the indices variable was shuffled before the slicing. Okay, let me print this again, I'm still a bit confused. Okay, so this is basically printing the indices, which have already been shuffled, so they're already kind of randomized. train_idx is drawn from positions 10,000 to the rest, which is basically up to 50,000, which is basically 40,000 entries. So if you look at train_idx, it's going to be randomized values from those first-40,000 positions of the shuffled list; that's what it is. Yeah, there you go, it's just randomized. And the same thing for valid_idx: it's the remaining slice, positions 0 to 10,000, which is 10,000 entries.
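The whole shuffle-and-split step we just walked through can be sketched like this (standalone, using the same numbers as the notebook):

```python
import numpy as np

num_train = 50000          # size of the CIFAR-10 training set
valid_size = 0.2           # hold out 20% for validation

indices = list(range(num_train))
np.random.shuffle(indices)                      # randomize once, up front

split = int(np.floor(valid_size * num_train))   # 10000
train_idx, valid_idx = indices[split:], indices[:split]

print(split, len(train_idx), len(valid_idx))    # 10000 40000 10000
```

Because indices is shuffled before slicing, both slices are already random samples of the 50,000 positions, which is why the later re-randomizing step seems redundant at first glance.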
So if you index past the end, like valid_idx[10000], it won't work, right? The last valid index should be 9,999 or so; it gives you an IndexError. Yeah, it doesn't work because there are only 10,000 members, so those are the last members. Even valid_idx[10001] doesn't work; negative indexing just gives you the last members of the list. OK, I got that one. So: SubsetRandomSampler. What is this? Let me see what the difference is. OK, so this is what I encourage you to do: if you don't understand what a function does, just print things out and see the result, look at before and after, and then you can see what that function does. So I'm trying to see what SubsetRandomSampler does to train_idx, to really understand what this function does to those 40,000 training indices. Let's say, can I do this? No, there's no .shape, because it's an object, I think. But I can look at its documentation; there are two things you can do: you can just inspect the input and output, or you can type the name followed by a question mark and execute it with Shift+Enter. Let's see. SubsetRandomSampler? Is that correct? Doesn't work... oh, the problem is over here. There you go, okay. So, SubsetRandomSampler: it "samples elements randomly from a given list of indices, without replacement." Okay. Actually, you could argue you don't have to do this, because it's already random anyway; we already shuffled. If I look at the output of this train_sampler, I don't know why the author does this; it's like double shuffling.
You know, you shuffle two times with this. I mean, I don't know if it's necessary, right? Let me just print out, like, elements zero to two. Oh, is it wrapping it into a kind of function? Okay, wrapping it into train_sampler. You can't even access it directly: if you say print(train_sampler), it just tells you it's a SubsetRandomSampler object. So it's an object; you can't index it directly like train_idx, the plain list of indices, which you can slice and look inside. Can you check the shape? No. Can you check the type? Okay. And the other one is train_sampler; I need to understand this, sorry guys. Oh, okay, so this thing is a way of wrapping the indices into an object. When you iterate over it, you get back the contents, and that content is the same as train_idx; it's the same thing, basically. I'm going to put them side by side, separate the two... yeah, see, it's exactly the same, huh? It's just wrapping it into an object; it doesn't even seem to shuffle it any further. It's the same thing: if I take, say, indices 10 to 13 from one, and 10 to 13 from the other, it's the same thing. Yeah, exactly the same. Hmm, okay, I don't know why you'd want to do that. I'm still thinking; I'm still confused about this pre-processing. Sometimes it's kind of simple but kind of complicated at the same time. I mean, I don't understand the logic of doing this. Okay, anyway.
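Here's a small standalone experiment with SubsetRandomSampler. One caveat to the observation above: in the PyTorch versions I know, iterating the sampler draws a fresh random permutation of the given indices each time (without replacement), so the *elements* always match the list you passed in, but the *order* will generally differ between iterations; comparing sorted contents is the reliable check:

```python
from torch.utils.data import SubsetRandomSampler

idx = [10, 11, 12, 13]
sampler = SubsetRandomSampler(idx)

# iterating yields the same elements, in a (possibly new) random order
drawn = list(sampler)
print(sorted(drawn))  # [10, 11, 12, 13]
```

So the sampler isn't really "double shuffling" the stored list; it's an iterable that re-permutes the indices every epoch, which is exactly what a DataLoader wants.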
We've been here too long, so let's just move on. Because, okay, if you look at the documentation for SubsetRandomSampler, it says it's supposed to randomize: "samples elements randomly." What are you talking about? You didn't sample it randomly; you're just taking it as-is. So this seems kind of wrong; maybe the documentation is misleading, I don't know. So I learned one thing: maybe you can't fully trust documentation. Or am I wrong? I don't know, you tell me. It says "randomly," but when I print it, the indices look the same, so it doesn't look random to me. Okay, so: prepare the data loaders, combining the dataset and sampler. So you call torch.utils.data, I think it's from here, yeah, .DataLoader, and then you pass in train_data. train_data is all the training data, basically: the data itself, the pixels, 32 by 32, already normalized. Let's see; I can just print it out for you and for myself. So train_data is the actual image data that has been normalized. For example, if I call .shape... I think there should be a shape. No, no shape, okay, whatever. Can I just index [0]? Yes; see, the CIFAR-10 dataset object doesn't have .shape. Oh my gosh, that's annoying. So you can see this is basically the pixel data of the first image. I don't know whether the first image is an airplane or not, but whatever image it is, this is what it looks like in number form: between, I think, minus 1 and 1, so it's already normalized, because normally pixel values are between 0 and 255; this tensor, this first image, is already normalized.
Remember, we already transformed it over here, using this transform we defined, and we specified the transform as: turn it into a tensor (a tensor is just a kind of array, a special format that the model can accept), and then subtract 0.5 and divide by 0.5. That is, (x minus mean) divided by standard deviation. And because it's a color image there are three channels, so each channel gets its own numbers. So if you have an image, for example of an airplane, like this one, and the first pixel of that image is really white, say a white cloud, then that pixel is... wait, no, zero is black; the higher the value, the whiter. So if the value is zero, it's a black pixel. Then you subtract the mean, 0.5, and divide by the standard deviation, 0.5. Okay, anyway. Where am I? I'm losing my train of thought. Okay, we're looking at the train loader; we're trying to understand the train loader. So you're putting all of the normalized images into this DataLoader function right there, and you're also specifying the batch size of, I think, 20. Yeah, so you want to train on every 20 images: you want to do backpropagation in batches, because you don't want to do backpropagation only once per epoch, which would be over, what, forty thousand training images; that would take a long time. So you want to do it more frequently; you specify that every 20 images you compute the gradients and then update the weights and all that stuff. Good. And then the sampler.
The sampler is train_sampler, which is that object basically wrapping the training indices we just randomized, okay? And num_workers is, you know, zero. The same thing we do for the validation loader: you pass the same train_data. Wait, what's this other dataset? Oh, this is the test data: train=False. num_workers is 0, batch_size is batch_size, and the sampler is valid_sampler, which wraps the validation indices, the 20%. You still get the validation data from the training dataset, because train_data is the whole thing: if you compute len(train_data), it's the whole 50,000. But you're only taking a subset, so you need the indices as a pointer to which samples to pick out of those 50,000, which means you're only getting a specific number, which is 10,000, and those indices have already been randomized. The test loader is the same thing, except with test_data and no sampler; see, the test data here is loaded with train=False. So, okay, let's see how big the test data is: oh, it's ten thousand. Okay. So the whole training set is 50,000, but you divide it into training and validation, which is kind of confusing: 40,000 to train the model, 10,000 to validate, and another 10,000 to test. Okay, sounds good. And then you specify the image classes. Okay, let's find out more about this DataLoader. Can I just do this? No, it's wrong, okay, I'll just do this. So what is this DataLoader? Its signature has dataset, batch_size, shuffle, sampler=None... let me see what this is.
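The loader setup we just described can be sketched end-to-end with a toy dataset (random tensors standing in for the normalized CIFAR-10 images, so this runs without any download; the numbers 20 and 80 mirror the notebook's batch size and an 80% training subset):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, SubsetRandomSampler

# toy stand-in for the normalized CIFAR-10 tensors: 100 fake 3x32x32 images
images = torch.randn(100, 3, 32, 32)
labels = torch.randint(0, 10, (100,))
dataset = TensorDataset(images, labels)

# first 80 indices act as the "training" subset, drawn in random order,
# batched 20 at a time just like the notebook's batch_size
train_loader = DataLoader(dataset, batch_size=20,
                          sampler=SubsetRandomSampler(list(range(80))))

batch_images, batch_labels = next(iter(train_loader))
print(batch_images.shape)  # torch.Size([20, 3, 32, 32])
```

Each iteration of the loader hands you one batch of 20 image tensors plus their labels, and the sampler guarantees only the chosen subset of indices is ever visited.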
So: dataset, batch_size, shuffle, sampler, num_workers. batch_size is 1 by default; shuffle is False by default, so you don't have to shuffle, or actually you can shuffle from here. Actually, if you shuffle through here with shuffle=True, then you wouldn't have to do all that index shuffling yourself. Anyway, what is num_workers? The docs say DataLoader "combines a dataset and a sampler, and provides single- or multi-process iterators over the dataset." So it's an iterator over the dataset plus a sampler; the sampler is an index into the data, kind of like a pointer to which data to pick, and this is like an iterator. batch_size: how many samples per batch. shuffle: reshuffle the data at every epoch. num_workers: how many subprocesses to use for data loading; 0 means the data will be loaded in the main process. What do they mean by subprocess? I'm not sure, but num_workers=0 is the default. What exactly a subprocess is here, there's probably further documentation you'd have to read; I'm not going to go into that, sorry. Okay, all right, so this is good. Next we import the matplotlib library; this is a helper to analyze and display images. Then: obtain one batch of training images from train_loader, get the images and labels, convert the images to numpy for display, and then display the first 20 images. Okay, okay, fine, so these are the first 20 images. And then: view an image in more detail. There's np.squeeze, I don't know what that is, and imshow with annotations. So we look at the normalized red, green, and blue color channels as three separate images. Okay, let's see what this is. Come on, come on... so it's making figures that are 36 by 36? Whoa, this is so cool. Wow! But wait, isn't the image 32 by 32? Oh, it's 32 by 32.
Why does it become 36 by 36? Where is this 36 by 36? Oh, this one right here, the figsize, is just 36. Okay, okay, so this is the red channel, with all the pixel values already normalized. Hmm, and the image itself is 32, actually; I think the 36 is just the base canvas. Okay, so let me see this one: this is the green channel. If you look at the values closely — I don't know if you can see it from there — they go from about negative one point something up to positive one. Look at the white pixels: they're greater, right? If you look at the value of a black pixel, it's negative, and a white one is like 0.5, 0.7. Oh... what did I do? Oh my gosh, did I break something? No? Okay, okay. Where are we right now? Um, okay, so we're looking at the different channels. You can see they're slightly different — I'm just going to zoom in a bit — it's not exactly the same. Like, if you look here, this is 0.53 and here it's 0.55; this is 0.12 and this is 0.11; this is 0.46 and this is 0.54. So the actual pixel numbers are different, but if you look from far away, it's kind of the same. All right? It looks kind of the same, and that's information you don't want to throw away. The blue channel, though, is very different. The red and green are kind of the same, although up close they differ, especially if you look at this area here compared to this area; the numbers are totally different, like 0.24 over here versus 0.33 over here. But if you compare the blue channel to the green channel, you can see it's obvious: it's a different picture, right? So, okay.
The blue channel is very different, but the red and green are kind of difficult to tell apart, unless you really eyeball it. Okay, all right. So now we want to define the network architecture. This time you'll define a CNN architecture instead of an MLP (a multi-layer perceptron, which uses linear, fully connected layers). You'll use the following — I'm going to need some more tea here — convolutional layers, which can be thought of as a stack of filtered images; and max pooling layers, which reduce the x-y size of an input. So you can reduce the x-y size using max pooling. Max pooling is basically a filter that takes the maximum value: if you slide that filter over an image, it keeps just the maximum in each window. For example, imagine a 2-by-2 max pooling filter, and you slide it over an image; say one window contains the values 1, 4, 2, 6. The maximum of that particular window, when you apply the 2-by-2 max pooling, is 6, so the output for that window is just 6. So it reduces the resolution; yeah, it just keeps the maximum. Anyway. "Keeping only the most active pixels," the notebook says; that's another way of putting it, because the most active pixel is kind of the brightest one, the one with the highest value — the higher the number, the brighter the area, on the usual 0-to-255 scale. And then the usual linear and dropout layers to avoid overfitting. So, why else would you want to reduce the size with a max pooling layer? The other reason is that you want to speed up the computation when you're actually training the model, because the x-y size gets reduced quite significantly, depending, I think, on the stride...
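The 2-by-2 max pooling example from the paragraph above, with the values 1, 4, 2, 6, can be checked directly in PyTorch:

```python
import torch
import torch.nn as nn

pool = nn.MaxPool2d(kernel_size=2, stride=2)

# one image, one channel, a single 2x2 window holding 1, 4, 2, 6
x = torch.tensor([[[[1., 4.],
                    [2., 6.]]]])

print(pool(x))  # tensor([[[[6.]]]]) -- only the maximum survives
```

On a real feature map, the same layer turns a 32x32 input into 16x16: each non-overlapping 2x2 window collapses to its single largest value.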
...of your max pooling layer. And then, because you want to avoid overfitting, you want to generalize your model, so you use fully connected linear layers plus dropout. So this is what it looks like: a network with two convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one max pooling layer. Okay, so: this is the input image, and it goes through the first convolutional layer, then a pooling layer (max pooling), then another, second convolutional layer, and then another, second max pooling layer. Notice that every time it goes through a max pooling layer, the x-y size becomes smaller, but the depth is supposed to get bigger and bigger as you go through the model from input to output, because the depth represents extracted patterns; the depth is the feature representation of that image. For example, say this image is a dog, and the dog has a nose, a tongue, a mouth. As you go through all of those layers, the depth holds all of the filtered images, and each of those filtered images represents a different pattern of that dog. So, for example, one of the filtered images might respond to the nose, a second to the eye, a third to, I don't know, the saliva probably, a fourth to something else, and so on. Each one basically represents a different kind of pattern for that particular object. That's why you have many extracted patterns, so your layers become deeper and deeper; there are just many of them.
...while the x-y size of the matrices becomes smaller and smaller. And at the end, you connect that to a fully connected layer with dropout, to avoid overfitting, and produce a 10-dimensional output, because you want to predict 10 classes. Okay, so the task: define a model with multiple convolutional layers, and define the feed-forward network behavior. The more convolutional layers you include, the more complex patterns in color and shape the model can detect. It's suggested that your final model includes two or three convolutional layers, as well as linear layers and dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models; you may find it useful to look at the PyTorch classifier example. Oh, I got disconnected. Seriously? Oh no... okay, let's see, it's working again. This is what I don't like about using Google Colab: sometimes it just gets disconnected randomly; it's not stable. But I think it's getting better and better. And that warning, I don't know why it appears, but it's still working, actually. Okay, so where were we? Right: output volume for a convolutional layer. To compute the output size of a given convolutional layer, we can perform the following calculation. We can compute the spatial size of the output volume as a function of the input volume size W, the kernel size F, the stride S with which they are applied, and the amount of zero padding P. The formula for how many neurons define the output is W_out = (W − F + 2P)/S + 1. For example, for a 7x7 input and a 3x3 filter with stride 1 and pad 0, we would get a 5x5 output; with stride 2, we would get a 3x3 output. Okay, let's check: if you put 7 here, 7 minus 3 is 4, plus 2P where P is 0, so still 4, divided by the stride... wait, wait, wait.
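The output-size formula above is easy to sanity-check in code; here's a one-liner helper with the same worked examples (integer division stands in for the exact division, since these cases divide evenly):

```python
def conv_output_size(W, F, P, S):
    # spatial output size of a conv layer: (W - F + 2P) / S + 1
    return (W - F + 2 * P) // S + 1

print(conv_output_size(7, 3, 0, 1))   # 5  (7x7 input, 3x3 filter, stride 1, pad 0)
print(conv_output_size(7, 3, 0, 2))   # 3  (same, but stride 2)
print(conv_output_size(32, 3, 1, 1))  # 32 (padding 1 preserves the 32x32 size)
```

The last case is exactly why the starter code uses padding=1 with a 3x3 kernel: the x-y dimensions pass through the convolution unchanged.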
7 minus 3 is 4, plus 2P with P at 0, still 4; divided by the stride of 1 is 4, plus 1 is 5. At first I was dividing by (S + 1) and got 2, and wondered why the answer is 5 by 5; the plus 1 goes outside the division. And with stride 2: 4 divided by 2 is 2, plus 1 is 3. Okay, anyway. So this is where we define the network: you import torch.nn, the neural network package, and you also import the functional module as F, and then you define the architecture. You create a class called Net, and in its __init__ you define the first convolutional layer with nn.Conv2d(3, 16, 3, padding=1). The 3 is because the input has three color channels, RGB; and you want the output to be 16 — this is kind of arbitrary, you could put 32 or 64; in this case you want the output to be 16 filtered images. The next 3 is the filter size, 3 by 3, I think. And you want the padding to be 1 because you want to convolve over the entire image: if you have a filter size of 3 by 3, then in order to fully convolve that 3-by-3 filter you need to pad the image by 1, which means you're putting an additional one on the left, one on the right, one on the top, and one on the bottom. I don't know if I can show you an example here... no, there's no example here, so let me draw it. So this is the 3x3 filter: first it goes here, then second it goes here, say the stride is 1, third it goes here, then fourth, and then it would stop here, right? But actually you still have, like, two more rows. So say this is the original image: one, two, three, four, five — a 5-by-5 image.
And then you have a 3 by 3 filter, right, so you're convolving from here to here: first you convolve it over here, then over here, and then over here, and then you stop there if you don't pad at all. But actually, you still need to cover another two rows, right, this one and this one. So in order to do that you have to set the padding equal to 1. Padding equal to 1 means you put an additional column on the left and the right, and an additional row on the top and the bottom, so you can convolve until the end, so you can keep going instead of stopping here: the filter can still convolve here, and over here too, so you cover everything. Yeah, so you preserve, kind of preserve, the dimensions. Otherwise, if you don't pad, say you convolve from here: you get one, two, three positions across, and one, two, three down, so you get only a 3 by 3 output. But if you add the padding, then instead of 3 by 3 you still get 5 by 5. Oh wait, in that earlier example the stride was 2, that's why we got a 3 by 3 output there. Okay, got it. Okay, so yeah, so that's the reason why you want to pad: because you still want to preserve the X and Y dimensions over here. So as you go through a convolutional layer, if the input is 32 by 32, you don't want to reduce it, or if you have to reduce it, probably just by a little bit. In normal practice the X and Y don't get reduced by the convolution itself, especially in the early convolutional layers, so you want to maintain them.
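The padding behavior described above can be checked directly in PyTorch (a minimal sketch, assuming torch is installed; the 3-in/16-out layer shape matches the one discussed):

```python
import torch
import torch.nn as nn

# A 3x3 kernel with padding=1 keeps the spatial size:
# one extra row/column is added on each side, so the filter
# can be centered on every original pixel.
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)

x = torch.randn(1, 3, 5, 5)        # batch of one 5x5 RGB image
print(conv(x).shape)               # torch.Size([1, 16, 5, 5])

# Without padding, the same kernel shrinks 5x5 down to 3x3.
conv_no_pad = nn.Conv2d(3, 16, 3, padding=0)
print(conv_no_pad(x).shape)        # torch.Size([1, 16, 3, 3])
```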
So that's why you need to pad, right. So over here the padding is 1 when the filter size is 3, which means the matrix stays the same size: if the input image is 32 by 32, then the output is still 32 by 32, and you can verify that by looking at the shape of the training data. Okay, so that's why you need to preserve it, so you put this padding, right. And then you do max pooling. Max pooling is basically to reduce the X and Y: after the max pooling, the output of the convolutional layer is reduced significantly, according to the size of the pooling window, in terms of the X and Y. Right, so over here it's a filter size of 2 and the stride is 2, so this basically defines the output: if it's 2, you just divide the input by this number. So if the input from this layer is still 32, because you preserved it with this padding, then you divide by 2, so the output of this is going to be 16, so 16 by 16. And then you define the forward function over here, so you apply ReLU on the first convolution, right. There's a ReLU after the output of the convolution, before you input it to the pooling layer, and then you put it through the pooling layer, and then you return that x, okay. And then this is to create the model. Okay, let's import this.
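The pooling step can be sketched like this (a minimal sketch, assuming torch is installed):

```python
import torch
import torch.nn.functional as F

# Max pooling with a 2x2 window and stride 2 halves X and Y:
# each 2x2 patch is replaced by its maximum value.
x = torch.randn(1, 16, 32, 32)   # e.g. 16 filtered 32x32 images
pooled = F.max_pool2d(x, kernel_size=2, stride=2)
print(pooled.shape)              # torch.Size([1, 16, 16, 16])
```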
So now if you can see, the model basically consists of, first, a convolutional layer, and then the second one is a pooling layer, although the printed model doesn't actually show everything there, but it's being defined, okay. And then you specify the loss function here, and then you train the network, okay. I think we have to continue this another time, this is taking too long already, okay.
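Putting the pieces together, here is a minimal sketch of what such a model, loss function, and one training step might look like (assumptions: only one convolutional layer is shown, while the exercise suggests two or three; the filter counts, learning rate, and optimizer are illustrative, not the notebook's exact values):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        # 3 input channels (RGB) -> 16 filtered images, 3x3 kernel, pad 1
        self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
        self.pool = nn.MaxPool2d(2, 2)         # halves X and Y
        # after conv1 + pool: 16 channels of 16x16
        self.fc1 = nn.Linear(16 * 16 * 16, 10)
        self.dropout = nn.Dropout(0.25)        # guard against overfitting

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))   # ReLU, then max pooling
        x = x.view(x.size(0), -1)              # flatten for the linear layer
        x = self.dropout(x)
        return self.fc1(x)                     # 10 class scores

model = Net()
criterion = nn.CrossEntropyLoss()              # loss for multi-class labels
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# One illustrative training step on a fake batch of 32x32 color images.
images = torch.randn(4, 3, 32, 32)
labels = torch.tensor([0, 1, 2, 3])
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(model(images).shape)                     # torch.Size([4, 10])
```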