Welcome back to this series on neural network programming. Starting with this video, we'll begin using the knowledge we've learned about tensors up to this point and start covering the essential tensor operations for neural networks and deep learning. We'll kick things off with reshaping operations. Without further ado, let's get started.

Before we dive in with reshaping operations, let's get a high-level overview of the types of operations we do with tensors for neural networks in deep learning. The primary operations we use generally fall into four high-level categories: reshaping operations, element-wise operations, reduction operations, and access operations. There are a lot of individual operations out there, but grouping similar operations into categories based on their likeness can help make learning about tensor operations more manageable. The reason for showcasing these categories now is to give you the goal of understanding all four of them by the end of this section of the series. Keep this in mind as we progress and work towards building an understanding of each of these categories.

Reshaping operations are perhaps the most important type of tensor operation. This is because the shape of a tensor gives us something concrete that we can use to shape an intuition for our tensors. On the blog post for this video on deeplizard.com, I've written out a little analogy that compares neural network programmers to bakers. The purpose of this analogy is to motivate the importance of reshaping, so be sure to check that out after finishing this video.

I'm in a notebook now, and let's kick things off by supposing that we have the following tensor. The shape of this tensor is 3 x 4. This allows us to see that this tensor is a rank-2 tensor with two axes. The first axis has a length of 3 and the second axis has a length of 4. The elements of the first axis are arrays, and the elements of the second axis are numbers.
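The tensor itself isn't shown in this transcript, so here is a minimal sketch in PyTorch. The values are placeholders chosen only so that the shape matches the 3 x 4 description above:

```python
import torch

# A rank-2 tensor with two axes: the first axis has length 3
# (its elements are arrays), and the second axis has length 4
# (its elements are numbers). The values are placeholders.
t = torch.tensor([
    [1, 1, 1, 1],
    [2, 2, 2, 2],
    [3, 3, 3, 3]
], dtype=torch.float32)

print(t.shape)  # torch.Size([3, 4])
```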
In PyTorch, we have two ways to access the shape programmatically: we can use the size() method, or we can use the shape attribute. Also remember that we can obtain the tensor's rank by checking the length of its shape. In the video on rank, axes, and shape, we saw how a tensor's shape encodes all of the information about the tensor's axes, rank, and indices.

Another important feature that a tensor's shape gives us is the number of elements contained within the tensor. This can be deduced by taking the product of the component values in the shape. In the first piece of code, we convert the shape to a tensor and then ask for the product to see that the tensor contains 12 components. Sometimes these are referred to as the scalar components of the tensor. We can also use another function that is designed specifically for this purpose, called numel(), which is short for number of elements and yields the same result.

In terms of reshaping, there is a reason we care about the number of elements. Let me explain. Since our tensor has 12 elements, any reshaping must account for all 12 of these elements. Remember, when we quickly touched on reshaping in past videos, we saw how reshaping does not change the underlying data, only the shape of this data. Let's look now at all the ways in which this tensor t can be reshaped without changing the rank. The important feature to notice about all of these is that the component values in the shape are factors of 12. This means that their product is 12, and as a result, we have 12 spots for all of the original 12 values after the reshaping. In all of these examples, we used two factors, which kept the rank at 2. However, we can change the rank if we use, say, three factors, like so. The next way we can change the shape of our tensors is by squeezing and unsqueezing them.
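The exact notebook cells aren't included in the transcript, so here is a sketch of the code being described, assuming the 3 x 4 placeholder tensor from before:

```python
import torch

t = torch.tensor([
    [1, 1, 1, 1],
    [2, 2, 2, 2],
    [3, 3, 3, 3]
], dtype=torch.float32)

# Two equivalent ways to access the shape programmatically.
print(t.size())      # torch.Size([3, 4])
print(t.shape)       # torch.Size([3, 4])

# The rank is the length of the shape.
print(len(t.shape))  # 2

# Number of elements: product of the shape's components, or numel().
print(torch.tensor(t.shape).prod())  # tensor(12)
print(t.numel())                     # 12

# Rank-2 reshapes: both factors must multiply to 12.
print(t.reshape(1, 12).shape)  # torch.Size([1, 12])
print(t.reshape(2, 6).shape)   # torch.Size([2, 6])
print(t.reshape(4, 3).shape)   # torch.Size([4, 3])

# Three factors change the rank to 3.
print(t.reshape(2, 2, 3).shape)  # torch.Size([2, 2, 3])
```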
Squeezing a tensor removes all of the axes that have a length of 1, while unsqueezing a tensor adds a dimension with a length of 1. These functions allow us to expand or shrink the rank of our tensor. Let's look at a very common use case for squeezing a tensor by building a flatten function. This word flatten is very common when working with tensors: when you flatten a tensor, it just means that you're turning it into a lower-rank tensor than you started with. In this case, we started with a rank-2 tensor, a matrix for each image, and we turned each one into a rank-1 tensor, i.e. a vector. Flattening a tensor means to remove all of the axes except for one, which creates another tensor with a single axis that contains the elements of the tensor. So essentially, when we flatten a tensor, we create a 1d array that contains all of the scalar components of the tensor.

A flatten operation is an operation that must occur inside of a neural network when we transition from a convolutional layer to a fully connected layer. We take the output from the convolutional layer, which is given in the form of output channels, and we flatten these channels out into a single 1d array. Consider an image that starts out not at length 784 but at size 28 x 28. To flatten it, the second row is concatenated to the first row, the third row is concatenated to that, the fourth row is concatenated to that, and so on. In other words, the whole 28 x 28 image is flattened out into a single 1d array, so it's going to be of size 28 squared, which is 784. We'll see this in action when we build our CNN. Let's implement this from scratch now.
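A sketch of a flatten function built from reshape and squeeze, consistent with the walkthrough that follows (the 3 x 4 test tensor is assumed):

```python
import torch

def flatten(t):
    # Reshape to a rank-2 tensor of shape (1, numel); the -1 tells
    # reshape to infer the second axis length from the element count.
    t = t.reshape(1, -1)
    # Squeeze away the length-1 axis, leaving a single axis that
    # holds all of the tensor's scalar components.
    t = t.squeeze()
    return t

t = torch.ones(3, 4)
print(flatten(t).shape)  # torch.Size([12])

# squeeze and unsqueeze shrink and expand the rank directly.
r = t.reshape(1, 12)
print(r.squeeze().shape)                   # torch.Size([12])
print(r.squeeze().unsqueeze(dim=0).shape)  # torch.Size([1, 12])
```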
This flatten function takes in a tensor called t as an argument. Since the input tensor can be any shape, we pass a -1 for the second argument of the reshape function. With PyTorch, the -1 tells the reshape function to figure out what the value should be based on the other value and the number of elements contained within the tensor. Since our tensor t has 12 elements, the reshape function will figure out that a 12 is required for the length of the second axis to ensure that there's enough room for all of the elements in the tensor. Let's see this flatten function in action. After squeezing, the first axis is removed, and we obtain our desired result: a 1d array of length 12.

There's actually another way that this flatten operation can be achieved by only using the reshape method. I challenge you to answer this one in the comments. Do it before you look at any of the other answers, though. It's good practice to put yourself out there, so give it a try.

Another way to test our understanding of the concept of shape is via concatenation operations. We can combine two tensors using the cat function in PyTorch, and the way we concatenate them will affect the resulting shape of the output tensor. Check out the blog post for this video on deeplizard.com to see more on concatenation. There are a couple of examples there that you should check out.

For now, we should have a good understanding of what it means to reshape a tensor and what the requirements are. Anytime we change a tensor's shape, we are said to be reshaping the tensor. Remember to check out the analogy on the blog post. There you'll find that bakers work with dough and neural network programmers work with tensors. Even though the concept of shaping is the same, instead of creating baked goods, we are creating intelligence.
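The concatenation examples themselves live on the blog post, but as a minimal sketch (tensor values assumed), torch.cat shows how the axis we choose affects the resulting shape:

```python
import torch

t1 = torch.ones(3, 4)
t2 = torch.zeros(3, 4)

# Concatenating along axis 0 stacks row-wise: (3, 4) + (3, 4) -> (6, 4).
print(torch.cat((t1, t2), dim=0).shape)  # torch.Size([6, 4])

# Concatenating along axis 1 extends the rows: (3, 4) + (3, 4) -> (3, 8).
print(torch.cat((t1, t2), dim=1).shape)  # torch.Size([3, 8])
```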
Don't forget to check out the deeplizard hivemind as well, where you can get deeplizard perks and rewards. Thanks for contributing to collective intelligence by watching this video. I'll see you in the next one.

Over the course of the next 20 years, more will change around the way we do our work than has happened in the last 2,000. In fact, I think we're at the dawn of a new age in human history. Things are moving really fast. I mean, consider: in the space of a human lifetime, computers have gone from a child's game to what's recognized as the pinnacle of strategic thought. Would you cross this bridge? Most of you are saying, oh, hell no. And you arrived at that decision in a split second. You just sort of knew that that bridge was unsafe, and that's exactly the kind of intuition that our deep learning systems are starting to develop right now. Very soon, you'll literally be able to show something that you've made, you've designed, to a computer, and it'll look at it and say, mm, sorry, homie, that'll never work. You have to try again.