Pytorch Linear | Callable Neural Networks – Linear Layers In Depth







Callable Neural Networks - Linear Layers In Depth


Welcome to this series on neural network programming with PyTorch. In this post, we'll learn about PyTorch callable neural network modules, what this means, and how it informs us about the way our network and layer forward methods are called. Without further ado, let's get started! In the last post of this series, we learned about how linear layers use matrix multiplication to transform their in features to out features. When the input features are received by a linear layer, they are passed in the form of a flattened one-dimensional tensor and are then multiplied by the weight matrix. This matrix multiplication produces the output features. Let's see an example of this in code. Here, we have created a one-dimensional tensor called in_features. We've also created a weight matrix, which, of course, is a two-dimensional tensor. Then, we've used the matmul() function to perform the matrix multiplication operation that produces a one-dimensional tensor. In general, the weight matrix defines a linear function that maps a one-dimensional tensor with four elements to a one-dimensional tensor that has three elements. We can think of this function as a mapping from four-dimensional Euclidean space to three-dimensional Euclidean space. This is how linear layers work as well. They map an in_feature space to an out_feature space using a weight matrix. Let's see how to create a PyTorch linear layer that will do this same operation. Here we have it. We've defined a linear layer that accepts 4 in features and transforms these into 3 out features, so we go from 4-dimensional space to 3-dimensional space. Now, we know that a weight matrix is used to perform this operation. But where is the weight matrix? The weight matrix lives inside the PyTorch linear layer class, and it's created by PyTorch. The PyTorch linear layer class uses the numbers 4 and 3 that are passed into the constructor to create a 3 x 4 weight matrix.
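A minimal sketch of the example narrated here; the particular tensor values are an assumption chosen so the matrix multiplication produces the 30, 40, 50 output values mentioned later in the walkthrough:

```python
import torch
import torch.nn as nn

# Input features: a flattened rank-1 tensor with four elements
in_features = torch.tensor([1, 2, 3, 4], dtype=torch.float32)

# Weight matrix: a rank-2 tensor mapping 4-dimensional space to 3-dimensional space
weight_matrix = torch.tensor([
    [1, 2, 3, 4],
    [2, 3, 4, 5],
    [3, 4, 5, 6]
], dtype=torch.float32)

# Matrix multiplication produces the output features
output = weight_matrix.matmul(in_features)
print(output)  # tensor([30., 40., 50.])

# A linear layer that performs this same kind of transformation;
# PyTorch builds a 3 x 4 weight matrix from the 4 and 3 passed in
fc = nn.Linear(in_features=4, out_features=3)
print(fc.weight.shape)  # torch.Size([3, 4])
```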
Let's verify this by taking a look at the PyTorch source code. We have a small program here that's just initializing our linear layer, so we're going to step into the linear layer code by clicking. Now we're inside the PyTorch source code, and we can scroll down to the __init__() class constructor. Here, we can see that the in features and the out features are both being passed into the constructor. They're being saved, and they're being used to create a tensor that is then wrapped by this Parameter class, and ultimately, we have a weight tensor being created. So we have a tensor with a shape of 3 x 4. As we've seen, when we multiply a 3 x 4 matrix with a 4 x 1 matrix, the result is a 3 x 1 matrix. This is why PyTorch builds the weight matrix in this way. These are linear algebra rules for matrix multiplication. So let's see how we can call our layer now by passing the in_features tensor. We can call the object instance like this because PyTorch neural network modules are callable Python objects. We'll look at this important detail more closely in a minute, but first, check out this output. We did indeed get a one-dimensional tensor with three elements. However, different values were produced here. This is because PyTorch creates the weight matrix and initializes it with random values. This means that the linear functions from the two examples are indeed different, so we are using different functions to produce these outputs. Remember, the values inside the weight matrix define the linear function. This basically demonstrates how the network's mapping changes as the weights are updated during the training process. When we update the weights, we are changing the function. Let's explicitly set the weight matrix now for this linear layer. We'll make it be the same as the one we used in the other example. PyTorch module weights need to be parameters. That means they need to be instances of the Parameter class that lives inside the neural network module.
This is why we wrap the weight matrix tensor inside a Parameter class instance. Let's see now how this layer transforms the input using the new weight matrix. What we hope to see here is the same result as in our previous example. This time, we are much closer to the 30, 40, and 50 values. However, we aren't exactly on point. Why is this? Well, this is not exact because the linear layer is adding a bias tensor to the output. Watch what happens when we turn the bias off. We do this by passing a False flag to the constructor. There, now we have an exact match, and this is how linear layers work. Sometimes we'll see a linear layer operation referred to using mathematical notation: y = Ax + b. In this equation, the capital A represents the weight matrix, the x represents the input tensor, the b represents the bias tensor, and the y represents the output tensor. Note that this equation is a more general form of the equation for a line, y = mx + b. We pointed out before how it was kind of strange that we called the layer object instance as if it were a function. What makes this possible is that PyTorch module classes implement another special Python method called __call__(). If a class implements this special call method, it will be invoked any time the object instance is called. This is an important PyTorch concept because of the way the __call__() method interacts with the forward() method for our layers and networks. Instead of calling the forward() method directly, we call the object instance. After the object instance is called, the __call__() method is invoked under the hood by PyTorch, and the __call__() method, in turn, invokes the forward() method for us. This applies to all PyTorch neural network modules, namely networks and layers.
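The weight-setting and bias steps described above can be sketched like this; the tensor values are an assumption matching the 30, 40, 50 example used throughout:

```python
import torch
import torch.nn as nn

in_features = torch.tensor([1, 2, 3, 4], dtype=torch.float32)

weight_matrix = torch.tensor([
    [1, 2, 3, 4],
    [2, 3, 4, 5],
    [3, 4, 5, 6]
], dtype=torch.float32)

# Turn the bias off by passing bias=False to the constructor
fc = nn.Linear(in_features=4, out_features=3, bias=False)

# Module weights must be Parameter instances, so we wrap the tensor
fc.weight = nn.Parameter(weight_matrix)

# Calling the object instance invokes __call__(), which invokes forward()
output = fc(in_features)
print(output)  # an exact match: tensor([30., 40., 50.])
```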
So let's see this by debugging the PyTorch linear layer. We're back here in this little program where we created a linear layer, and now we're just going to call this linear layer. To do that, we'll add some more code. First, we'll create a tensor and call it t. Now we'll call our layer, pass in the tensor t, and save the output. Then we'll print the output. Alright, to debug this, we'll set a breakpoint, and now we're ready. The debugger has stopped the code execution on the line where we create our layer. Instead of stepping over this code, we're going to step into this code and have another look at what we saw earlier. Now we're in the __init__() class constructor. We save our in features and our out features as attributes, and now we're ready to create our weight tensor. We'll step over this, and then, on the left, we can inspect the weight tensor. We can see that it has a shape of 3 x 4, and that was determined by the passed-in in features and out features. The out features came first, that's the three; the in features came next, that's the four. That's why our tensor has a shape of 3 x 4. Let's continue on. Now we will create our bias, and we are ready to step out. Okay, so now we have our linear layer created. We'll create a tensor, and we're ready to pass our tensor to our linear layer. The important piece here is to watch what happens when we execute this code, so we're going to step into this call, and this is the key. We have stepped into the __call__() method that lives inside the PyTorch Module class. You can see there's quite a lot going on inside this __call__() method, but what we're interested in is the call to our forward() method. If we just step down a couple of lines, we'll see it here. This is where our actual forward() method is called, so let's step into this. Now we're in linear.py. This is the forward() method implementation.
We can see here that the forward() method implementation uses the linear() function that lives in the functional package. That's what the capital F is: torch.nn.functional. Note that we're passing the input tensor, the weight tensor, and the bias tensor. These are the three elements needed in order to produce the output. Think back to the mathematical notation we talked about before. Let's go ahead now and step into this call. Now we're sitting inside functional.py, where we have the implementation of the linear() function. I'm going to step over a few lines, and let's take a look at what we have here. We have our output being generated using the matmul() function. Because our input tensor is on the left side of this operation, our weight tensor needs to be transposed for the operation to work. This is based on the linear algebra rules we discussed before that make this operation valid. So we'll step over the operation, and we will return. Now we're back in the special __call__() method, and we're ready to return back to the main program. We have a result. Here we are, back in the main program with our output, which is a rank-1 tensor with three elements. Notice that we started out with a tensor that is a rank-1 tensor with four elements, and we have produced a tensor that is also a rank-1 tensor but has three elements. PyTorch did some other things under the hood, but ultimately, the matmul() function that we saw before was used to do this operation. The extra code that PyTorch runs inside the special __call__() method is why we never invoke the forward() method directly. If we did, the additional PyTorch code would not be executed. As a result, any time we want to invoke our forward() method, we call the object instance instead. This applies to both layers and networks because they are both PyTorch neural network modules. In the next post, we're ready to implement our forward() method.
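A quick sketch of the equivalence we just stepped through in the debugger: F.linear() multiplies the input by the transposed weight matrix, which is the same matmul() operation we performed by hand (tensor values are again the assumed 30, 40, 50 example):

```python
import torch
import torch.nn.functional as F

t = torch.tensor([1., 2., 3., 4.])
weight = torch.tensor([
    [1., 2., 3., 4.],
    [2., 3., 4., 5.],
    [3., 4., 5., 6.]
])

# forward() delegates to F.linear(input, weight, bias); with the input
# on the left, the weight must be transposed for the shapes to line up
out_functional = F.linear(t, weight)
out_matmul = t.matmul(weight.t())

print(torch.equal(out_functional, out_matmul))  # True
```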
If you haven't already, be sure to check out the deeplizard hivemind, where you can get exclusive perks and rewards. Hit the like button to support collective intelligence, and I'll see you in the next one.
