Torch Cat | Stack Vs Concat In Pytorch, Tensorflow & Numpy – Deep Learning Tensor Ops

Deeplizard


Transcript:

Welcome to this neural network programming series. In this episode, we will dissect the difference between concatenating and stacking tensors. We’ll look at three examples: one with PyTorch, one with TensorFlow, and one with NumPy. Hey, by the way, did you know that deeplizard has a vlog? If you want to connect with us in a totally different light, then come check out the vlog and say hi; link in the description. Alright, let’s get to it. The difference between stacking and concatenating tensors can be described in a single sentence, so here goes: concatenating joins a sequence of tensors along an existing axis, and stacking joins a sequence of tensors along a new axis. So let’s look at some examples to get a handle on what exactly this means. We’ll look at stacking and concatenating in three frameworks: PyTorch, TensorFlow, and NumPy. So let’s get started. For the most part, concatenating along an existing axis of a sequence of tensors is pretty straightforward. The confusion usually arises when we want to concatenate along a new axis; for this, we stack. Another way of saying that we stack is to say that we create a new axis inside all of our tensors, and then we concat along this new axis. For this reason, let’s be sure that we know how to create a new axis for any given tensor, and then we’ll start stacking and concatenating. To demonstrate this idea, we’ll start out by adding an axis to a PyTorch tensor. Here we’re importing PyTorch and creating a simple tensor that has a single axis of length 3. Now, to add an axis to a tensor in PyTorch, we use the unsqueeze function. We talked about this quite a lot in Section 1, where we looked at squeezing and unsqueezing tensors. Here we are adding an axis, a.k.a. dimension, at index zero of this tensor. This gives us a tensor with a shape of 1 x 3. Our original tensor had a shape of 3, and now our tensor has a shape of 1 x 3, so we went from a rank-1 tensor to a rank-2 tensor.
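The code shown on screen isn’t included in this transcript, so here is a minimal sketch of the unsqueeze step described above, assuming the same length-3 example tensor the episode uses:

```python
import torch

# A simple tensor with a single axis of length 3
t = torch.tensor([1, 1, 1])
print(t.shape)  # torch.Size([3])

# Adding an axis (dimension) at index 0 gives shape (1, 3):
# a rank-1 tensor becomes a rank-2 tensor.
print(t.unsqueeze(dim=0).shape)  # torch.Size([1, 3])

# Adding an axis at index 1 instead gives shape (3, 1).
print(t.unsqueeze(dim=1).shape)  # torch.Size([3, 1])
```

Either way, the data itself is unchanged; only the shape is different.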
We can also add an axis at the second index of this tensor. This also gives us a rank-2 tensor, but this time the shape is 3 x 1. Adding an axis like this changes the way the data is organized inside the tensor, but it does not change the data itself. Basically, all three of these operations are just reshaping a tensor; we can see that by checking the shapes of each of them. Now, thinking back to concatenating versus stacking: we said that when we concatenate, we are joining a sequence of tensors along an existing axis. This means that we are extending the length of one of the existing axes. When we stack, we are creating a brand new axis that didn’t exist before, and this happens across all the tensors in our sequence; then we concatenate along this new axis. Let’s see how this is done in PyTorch. With PyTorch, the two functions we use for these two operations are called stack and cat. So let’s create ourselves a sequence of tensors. Now let’s concatenate these tensors with one another. Notice that each of these tensors has a single axis. This means that the result of the cat function will also have a single axis. This is because when we concatenate, we are doing it along an existing axis, and in this example, the only existing axis is the single first axis. Alright, so we took three single-axis tensors, each having an axis length of 3, and now we have a single-axis tensor with an axis length of 9. Now, let’s stack these tensors.
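A sketch of the concatenation step just described, assuming the three length-3 tensors the episode works with:

```python
import torch

# A sequence of three single-axis tensors
t1 = torch.tensor([1, 1, 1])
t2 = torch.tensor([2, 2, 2])
t3 = torch.tensor([3, 3, 3])

# Concatenating along the only existing axis (dim 0)
# extends its length: 3 + 3 + 3 = 9.
joined = torch.cat((t1, t2, t3), dim=0)
print(joined)        # tensor([1, 1, 1, 2, 2, 2, 3, 3, 3])
print(joined.shape)  # torch.Size([9])
```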
To do this, we’ll need a new axis. We’ll insert an axis at the first index, and then we’ll stack these tensors along that axis. This, of course, happens behind the scenes of the stack function, and it gives us a new tensor that has a shape of 3 x 3. Notice how the three tensors are concatenated along the first axis of this tensor: the first axis has a length of three because we concatenated three tensors along it. To see that this statement is true, let’s add a new axis of length one to all of our tensors by unsqueezing them, and then cat along the first axis. In this case, we can see that we get the same result that we got by stacking. However, the call to the stack function was much cleaner, and this is because the new axis insertion was handled by the stack function. Let’s try this now along the second axis. Note, though, that it’s not possible to concatenate this sequence of tensors along the second axis, because there currently is no second axis. But we can stack them; in fact, stacking is our only option here. Alright, we stacked with respect to the second axis, and this is the result. To understand this, think back to what it looked like when we inserted a new axis at the end of our tensor. Now we just do that to all of our tensors, and then we can cat them like this. Here are three concrete examples that we can encounter in real life; let’s decide when we need to stack and when we need to concat. Suppose we have three individual images as tensors. Each image tensor has three dimensions: a channel axis, a height axis, and a width axis. Note that each of these tensors is separate from the others. Now assume that our task is to join these tensors together to form a single batch tensor of three images. Do we concat or do we stack? Well, notice that in this example there are only three dimensions in existence, and for a batch we need four dimensions. This means that the answer is to stack the tensors along a new axis. This new axis will be the batch axis.
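The stack-versus-unsqueeze-then-cat equivalence described above can be sketched like this, using the same three length-3 tensors:

```python
import torch

t1 = torch.tensor([1, 1, 1])
t2 = torch.tensor([2, 2, 2])
t3 = torch.tensor([3, 3, 3])

# Stacking inserts a new axis at dim 0, then joins along it: shape (3, 3).
stacked = torch.stack((t1, t2, t3), dim=0)

# The same result done manually: unsqueeze each tensor, then cat.
manual = torch.cat(
    (t1.unsqueeze(dim=0), t2.unsqueeze(dim=0), t3.unsqueeze(dim=0)),
    dim=0,
)
print(torch.equal(stacked, manual))  # True

# Along the second axis, stacking is the only option, since the
# rank-1 inputs have no second axis to concatenate on.
stacked_dim1 = torch.stack((t1, t2, t3), dim=1)
print(stacked_dim1.shape)  # torch.Size([3, 3])
```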
This will give us a single tensor with four dimensions by adding one for the batch. Note that if we joined these three along any of the existing dimensions, we would be messing up either the channels, the height, or the width. We don’t want to mess our data up like that. Let’s see a second example. Now suppose we have the same three images as before, but this time the images already have a dimension for the batch. This actually means we have three batches of size one. Assume that our task is to obtain a single batch of three images. Do we concat or stack? Well, notice how there is an existing dimension that we can concat on. This means that we concat these along the batch dimension; in this case, there is no need to stack. Let’s see a third example. This one is harder, or at least more advanced; you will see why. Suppose we have the same three separate image tensors, only this time we already have a batch tensor. Assume our task is to join these three separate images with the batch. Do we concat or do we stack? Well, notice how the batch axis already exists inside the batch tensor; however, for the images, there is no batch axis in existence. This means neither of these will work on its own: to join with stack or cat, we need the tensors to have matching shapes. So then, are we stuck? Is this impossible? It is indeed possible; it’s actually a very common task. The answer is to first stack and then to concat. We first stack the three image tensors with respect to the first dimension. This creates a new batch dimension of length 3. Then we can concat this new tensor with the batch tensor. I hope this helps, and you get it now. Check out deeplizard.com to see these examples implemented in code. Alright, let’s take a look at this in TensorFlow, and then we’ll look at it in NumPy. What you’re going to find is that it’s pretty similar, if not almost identical, to what we just saw in PyTorch. So the first thing to do is import tensorflow as tf.
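The three real-life cases above can be sketched as follows. The image sizes (3 x 28 x 28) and the size-5 existing batch are hypothetical choices for illustration; the episode doesn’t specify them:

```python
import torch

# Hypothetical 3-channel 28x28 images (random data for illustration)
img1 = torch.rand(3, 28, 28)
img2 = torch.rand(3, 28, 28)
img3 = torch.rand(3, 28, 28)

# Case 1: three separate images -> stack to create the new batch axis.
batch = torch.stack((img1, img2, img3), dim=0)
print(batch.shape)  # torch.Size([3, 3, 28, 28])

# Case 2: each image already has a batch axis of size 1 -> concat
# along the existing batch dimension.
b1, b2, b3 = img1.unsqueeze(0), img2.unsqueeze(0), img3.unsqueeze(0)
print(torch.cat((b1, b2, b3), dim=0).shape)  # torch.Size([3, 3, 28, 28])

# Case 3: join three separate images with an existing batch ->
# first stack the images, then concat with the batch tensor.
existing_batch = torch.rand(5, 3, 28, 28)
combined = torch.cat(
    (existing_batch, torch.stack((img1, img2, img3), dim=0)), dim=0
)
print(combined.shape)  # torch.Size([8, 3, 28, 28])
```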
So now TensorFlow is imported, and we’re ready to go ahead and create three tensors so we have a sequence of three tensors. We do this with tf.constant, and you’ll notice that we’re just creating the same tensors as before. This syntax is very similar to what we saw with PyTorch. Now, instead of torch.cat, in TensorFlow it’s tf.concat. So in PyTorch we have cat, and in TensorFlow we have concat. The only other difference about making this call is that we say axis instead of dimension or dim, so we have axis=0. We want to concat these three tensors along axis zero, and we know what we’re going to get: each one of these tensors has a single axis, so the result of this concat call is going to have a single axis, and it’s going to be longer than each of the individual tensors because we’ve concatenated them. So let’s run this code. We have a single-dimensional rank-1 tensor, where the data from each of the inputs was concatenated together along that single dimension. Alright, now the stack function is going to be exactly the same as what we saw in PyTorch, except that again we have an axis parameter instead of a dim parameter; they both mean the same thing, though. So let’s run this stack, and we can see that, indeed, we got the same thing that we got in PyTorch. Both of these functions work exactly the same. Now, let’s see if we can manually insert an axis first and then concatenate to get the same result that we got with the stack function. Just like we did in PyTorch, we’re going to do the same thing in TensorFlow. We’re going to call concat and pass our sequence of tensors in, but we’re going to expand the dimensions of each one of these tensors first, and we do that with tf.expand_dims. This is the same thing as unsqueezing a tensor in PyTorch. So in PyTorch we call it unsqueeze, and in TensorFlow we call it expand_dims.
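A sketch of the TensorFlow version just described, assuming the same three example tensors (TensorFlow 2.x eager execution):

```python
import tensorflow as tf

t1 = tf.constant([1, 1, 1])
t2 = tf.constant([2, 2, 2])
t3 = tf.constant([3, 3, 3])

# tf.concat joins along an existing axis; note TF says axis, not dim.
concat_result = tf.concat((t1, t2, t3), axis=0)
print(concat_result.shape)  # (9,)

# tf.stack inserts a new axis, then joins along it.
stacked = tf.stack((t1, t2, t3), axis=0)
print(stacked.shape)  # (3, 3)

# The same result manually: expand_dims (TF's unsqueeze), then concat.
manual = tf.concat(
    (tf.expand_dims(t1, 0), tf.expand_dims(t2, 0), tf.expand_dims(t3, 0)),
    axis=0,
)
```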
We’re expanding our dimensions with respect to axis zero, and then we’re also concatenating with respect to this new axis, axis zero. And indeed, we get the same thing that we get with tf.stack. Next, we’re going to stack with respect to the second axis, the axis at index one, and then we can do the same thing by expanding the dimensions of each one of these tensors first and then concatenating with respect to the second dimension. And we get the same thing, so the behavior is identical between torch.stack and torch.cat on the one hand and tf.stack and tf.concat on the other. So now let’s see how this is done in NumPy. With NumPy, we have np.stack and we have np.concatenate. So all three of PyTorch, TensorFlow, and NumPy have a stack function, but each of these libraries has named its concatenation function a little bit differently: in PyTorch we have cat, in TensorFlow we have concat, and in NumPy we get the whole shebang, concatenate. So we’ll go ahead and import numpy as np, and we’ll create three tensors, t1, t2, and t3. This is the same logic as what we did before, just with NumPy. Then we can call np.concatenate, passing our sequence of three tensors, and we do this with respect to the first axis. NumPy uses the parameter name axis, just like TensorFlow. And the behavior is exactly the same as the other two. Now we will stack with np.stack with respect to the first axis, and again the behavior is the same as the last two. If we want to achieve the stack functionality with the concatenate function, we need to manually expand the dimensions of each of our tensors first, and then we can concatenate with respect to this new dimension. Just like TensorFlow, NumPy has named its function expand_dims, so if we want to add an axis, or expand the dimensions of a tensor, we use expand_dims.
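A sketch of the NumPy version, again with the same three example tensors:

```python
import numpy as np

t1 = np.array([1, 1, 1])
t2 = np.array([2, 2, 2])
t3 = np.array([3, 3, 3])

# np.concatenate joins along an existing axis (axis parameter, like TF).
flat = np.concatenate((t1, t2, t3), axis=0)
print(flat)  # [1 1 1 2 2 2 3 3 3]

# np.stack inserts a new axis at the given position, then joins.
grid = np.stack((t1, t2, t3), axis=0)
print(grid.shape)  # (3, 3)

# The same result via expand_dims (NumPy's unsqueeze) plus concatenate.
manual = np.concatenate(
    (np.expand_dims(t1, 0), np.expand_dims(t2, 0), np.expand_dims(t3, 0)),
    axis=0,
)
print(np.array_equal(grid, manual))  # True
```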
Okay, so we’ll run this code, and we’ll see that, just like the other two libraries, we’re getting the same behavior. And this is cool: we can also see this in the NumPy source code. We do this in a Jupyter notebook by putting two question marks before the function name and running the cell. Here we can see that the stack function receives some arrays, then it checks that all the arrays have the same shape; this is a requirement. Next, the expanded version of the arrays is generated. Then these expanded arrays are passed to the concatenate function. So there, now you know, and it is cool. Alright, finally, we’ll just stack with respect to the second axis, then expand the dimensions at the second axis and concatenate, and we have the same result once again. So this is how stacking and concatenating work. We can think of the stack function as additional functionality added on top of the cat function, and we’ve seen this demonstrated in all three libraries. Whether we’re using PyTorch, TensorFlow, or NumPy, the concept is the same, and this is often what we find when working with different libraries: it’s all conceptually the same thing. We just might have slightly different syntax, different naming schemes, or different notations, but that shouldn’t get in the way of actually understanding the concept and being able to work seamlessly within any framework or library. If you haven’t already, be sure to check out deeplizard.com, where there are blog posts for each episode. There are even quizzes that you can use to test your understanding of the content, and don’t forget about the deeplizard hivemind, where you can get exclusive perks and rewards. Thanks for contributing to collective intelligence. I’ll see you in the next one. LOL, it is pretty funny, and ironic, how concatenation showed up in this video.
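The description of np.stack's internals above (check shapes, expand, then concatenate) can be mirrored in a hypothetical re-implementation; this is a simplified sketch, not NumPy's actual source:

```python
import numpy as np

def stack_sketch(arrays, axis=0):
    """Hypothetical sketch of what np.stack does internally:
    check shapes, expand each array, then concatenate."""
    # All input arrays must have the same shape.
    shapes = {a.shape for a in arrays}
    if len(shapes) != 1:
        raise ValueError("all input arrays must have the same shape")
    # Generate the expanded version of each array, then concatenate
    # along the newly inserted axis.
    expanded = [np.expand_dims(a, axis) for a in arrays]
    return np.concatenate(expanded, axis=axis)

a = np.array([1, 2, 3])
b = np.array([4, 5, 6])
# Matches np.stack along either axis
print(np.array_equal(stack_sketch([a, b], axis=1),
                     np.stack([a, b], axis=1)))  # True
```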
I mean, we were talking about concatenation and methods that implement it, and as we moved from PyTorch to TensorFlow and then to NumPy, the method name for concatenation was literally morphing in a way that can only be explained through the term itself. PyTorch uses cat. TensorFlow uses concat. NumPy uses concatenate. That’s deep. It’s so meta. [Music]
