Transcript:

Hi everybody, and welcome to a new PyTorch tutorial. In this video we are going to learn how to work with tensors: how we can create tensors and some basic operations that we need. We will also learn how to convert from NumPy arrays to PyTorch tensors and vice versa.

So let's start. In PyTorch, everything is based on tensor operations. From NumPy you probably know arrays and vectors, and now in PyTorch everything is a tensor. A tensor can have different dimensions, so it can be 1D, 2D, 3D, or have even more dimensions. Let's create an empty tensor. First of all we import torch, of course, and then we say x = torch.empty() and give it a size. For example, if we just say 1, then this is like a scalar value. Let's print our tensor: this will print an empty tensor, so the values are not initialized yet. Now we can change the size. For example, if we say 3 here, then this is like a 1D vector with three elements, so if we run this, we see three items in our tensor. We can also make it 2D: for example, let's say the size is 2 by 3, so this is like a 2D matrix, and I'll run this. And of course we can put even more dimensions in it, so it would be 3D, or 4D, but then I don't print it anymore because it's hard to see the four dimensions. So this is how we can create an empty tensor.

We can also, for example, create a tensor with random values by saying torch.rand() and giving it a size, so let's say 2 by 2, and print our tensor again. The same as in NumPy, we can say torch.zeros(), which puts zeros in all the items, or torch.ones(), which puts ones in all the items. Then we can also give it a specific data type. First of all, we can have a look at the data type by saying x.dtype. If we run this, then we see that by default
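The creation calls described above can be sketched roughly like this (a minimal sketch; the variable names are my own):

```python
import torch

# Uninitialized tensor: the values are whatever happened to be in memory
x = torch.empty(2, 3)

# Random values in [0, 1)
r = torch.rand(2, 2)

# All zeros / all ones
z = torch.zeros(2, 2)
o = torch.ones(2, 2)

# Default dtype is float32; pass dtype= to change it
i = torch.ones(2, 2, dtype=torch.int)
d = torch.ones(2, 2, dtype=torch.double)
print(o.dtype)  # torch.float32
print(i.dtype)  # torch.int32
```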
It's a float32, but we can also pass the dtype parameter, and here we can say, for example, torch.int, so now it's all integers, or torch.double, so now it's doubles, or we can also say, for example, torch.float16. Now if we want to have a look at the size, we can do this by saying x.size(), and this is a function, so we have to use parentheses; this will print the size of it.

We can also construct a tensor from data, for example from a Python list. So here we can say x = torch.tensor() and put a list with some elements inside, let's say 2.5 and 0.1, and then print our tensor. So this is also how we can create a tensor.

Now let's talk about some basic operations that we can do. Let's create two tensors with random values of size 2 by 2, so x = torch.rand(2, 2) and y = torch.rand(2, 2), and let's print x and print y. Now we can do a simple addition, for example, by saying z = x + y, and print our z. This does element-wise addition, so it adds up each of the entries. We could also use z = torch.add(x, y), which does the same thing. We could also do an in-place addition: if we say y.add_(x) and then print y, this will modify y and add all of the elements of x to y. And by the way, in PyTorch every function that has a trailing underscore does an in-place operation, so it modifies the variable it is applied on.

Next to addition, of course, we can also use subtraction, so we can say z = x - y, which is the same as z = torch.sub(x, y). If we print z, then we can see the element-wise subtraction. Then we can also do a multiplication of each element, which is torch.mul(), and again we can do everything in place by saying y.mul_(x).
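The construction from a list and the element-wise operations might look like this (a sketch; the variable names are assumptions of mine):

```python
import torch

x = torch.tensor([2.5, 0.1])  # tensor from a Python list
print(x.size())               # torch.Size([2])

a = torch.rand(2, 2)
b = torch.rand(2, 2)

add1 = a + b              # element-wise addition
add2 = torch.add(a, b)    # same result

sub1 = a - b              # element-wise subtraction
sub2 = torch.sub(a, b)

mul1 = a * b              # element-wise multiplication
mul2 = torch.mul(a, b)

# trailing underscore = in-place: modifies the tensor it is called on
b_before = b.clone()
b.add_(a)
```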
This multiplies our y by x in place. And we can also do element-wise division, which would be torch.div(). So these are some basic operations that we can do with tensors.

Then we can also do slicing operations, like you are used to from NumPy arrays. Let's say we have a tensor of size, let's say, 5 by 3, and let's print this first. Now, for example, we can get all rows but only one column, so let's use slicing: we use a colon for all the rows, but only column 0. Let's print the whole tensor and then only this. Here we see we have only the first column, but all the rows. Or we can say, for example, row number 1 but all columns, which prints the second row and all the columns. Then we can also get just one element, so the element at position 1, 1. And by the way, right now it prints a tensor, and if we have a tensor with only one element, we can also call the .item() method, which gets the actual value. But be careful: you can only use this if you have exactly one element in your tensor.

Now let's talk about reshaping tensors. Let's say we have a tensor of size 4 by 4 and print our tensor. If we want to reshape it, we can do this by calling the view() method, so we say y = x.view() and give it a size. Let's say we only want one dimension now, and print y: now it's only a 1D vector. And of course the number of elements must still be the same; here we have 4 by 4, so in total it's also 16 values. And if we don't want to specify one of the dimensions, we can simply say -1 and then specify the other dimension, and PyTorch will automatically determine the right size for it.
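A rough sketch of the slicing, .item(), and view() calls just described (the shapes and names are my own choices):

```python
import torch

x = torch.rand(5, 3)
col0 = x[:, 0]         # all rows, column 0 -> shape (5,)
row1 = x[1, :]         # row 1, all columns -> shape (3,)
elem = x[1, 1]         # single element, still a 0-d tensor
num = x[1, 1].item()   # .item() returns the plain Python number
                       # (only allowed on one-element tensors)

y = torch.rand(4, 4)
flat = y.view(16)            # reshape to 1-D; element count must stay 16
two_by_eight = y.view(-1, 8) # -1 lets PyTorch infer the other dimension
print(two_by_eight.size())   # torch.Size([2, 8])
```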
So now it must be a 2 by 8 tensor, and we can print the size again to have a look at it. This is size 2 by 8, so it correctly determined the size when we put a -1 here. So this is how we can reshape tensors.

Now let's talk about converting from NumPy to a torch tensor and vice versa. This is very easy. First of all, let's import numpy as np; I know it's already installed here. Let's create a tensor, a = torch.ones(5), and print our tensor. Now if we want to have a NumPy array, we can simply say b = a.numpy() and then print b, so now we have a NumPy array. If we print the type of b, then it will print that we have a numpy.ndarray. So this is how we can go from a tensor to a NumPy array.

But now we have to be careful, because if the tensor is on the CPU and not the GPU, then both objects will share the same memory location. This means that if we change one, we will also change the other. For example, if we modify a in place by saying a.add_(1) — remember, all the underscore functions modify our variable in place — so we add 1 to each element, and now let's first have a look at our tensor a, and then also at our NumPy array b. Then we see that it also added 1 to each of the elements here, because they both point to the same memory location. So be careful here.

If you want to do it the other way around, so if you have a NumPy array in the beginning, let's say a = np.ones(5), and print a, and now you want to have a torch tensor from the NumPy array, then you can say b = torch.from_numpy(a) and put in the NumPy array. So now we have a tensor, and by default this will have the data type float64.
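The two conversion directions and the shared-memory behavior can be sketched as (a minimal sketch; names are my own):

```python
import torch
import numpy as np

a = torch.ones(5)
b = a.numpy()        # NumPy view of the same memory (CPU tensors only)
a.add_(1)            # in-place change on the tensor...
print(b)             # ...shows up in the array too: [2. 2. 2. 2. 2.]

c = np.ones(5)
d = torch.from_numpy(c)  # dtype follows the array (float64 here)
c += 1                   # modifying the array also changes the tensor
print(d)                 # tensor([2., 2., 2., 2., 2.], dtype=torch.float64)
```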
Of course, you could also specify the data type here if you want a different data type. And now again, we have to be careful if we modify one: if we modify, for example, the NumPy array by incrementing each element and then print our NumPy array, we see that it incremented each value, and if we print b, then we see that our tensor got modified too. So again, be careful here. But this happens only if your tensor is on the CPU.

And this is one thing that we haven't talked about yet, because you can also do the operations on the GPU, but only if it is available, so if you have also installed the CUDA toolkit. You can check that by saying torch.cuda.is_available(); in my case, on the Mac, this will return False, but for example, if you are on Windows and you have CUDA available, then you can specify your CUDA device by saying device = torch.device("cuda"). And then if you want to create a tensor on the GPU, you can do this by saying x = torch.ones(), then, for example, give it the size and then say device=device, so this will create a tensor and put it on the GPU. Or you can first create it simply by saying y = torch.ones(5) and then move it to your device, to your GPU, by saying y = y.to(device); this will move it to the device. And now if you do an operation, for example z = x + y, then it will be performed on the GPU and might be much faster. But now you have to be careful, because if you would call z.numpy(), this would return an error, because NumPy can only handle CPU tensors, so you cannot convert a GPU tensor back to NumPy. So then again, we would have to move it back to the CPU, which we can do by saying z = z.to("cpu"), with "cpu" as a string, so now it is on the CPU again. So these are all the basic operations that I wanted to show you. One more thing:
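A hedged sketch of the device handling just described; it falls back to the CPU when CUDA is not available, so what actually runs on the GPU depends on your machine:

```python
import torch

# Pick the GPU if the CUDA toolkit is installed and a device is present
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x = torch.ones(5, device=device)  # create the tensor directly on the device
y = torch.ones(5).to(device)      # or create it on the CPU and move it over

z = x + y  # runs on whichever device the tensors live on

# numpy() only works for CPU tensors, so move back before converting
z_np = z.to("cpu").numpy()
print(z_np)  # [2. 2. 2. 2. 2.]
```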
A lot of times, when a tensor is created, for example with torch.ones(5), you see the argument requires_grad=True. By default this is False. Now if we print this, then we will also see in our tensor that it prints requires_grad=True. A lot of times in code you will see this, and it tells PyTorch that it will need to calculate the gradients for this tensor later in your optimization steps. So this means that whenever you have a variable in your model that you want to optimize, you need the gradients, so you need to specify requires_grad=True. But we will talk about this more in the next tutorial. I hope you enjoyed this tutorial, and if you liked it, please subscribe to the channel. See you next time, bye!
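A minimal sketch of the requires_grad argument just mentioned:

```python
import torch

# requires_grad=True asks autograd to track operations on this tensor,
# so gradients can be computed for it later; the default is False
x = torch.ones(5, requires_grad=True)
print(x)                 # tensor([1., 1., 1., 1., 1.], requires_grad=True)
print(x.requires_grad)   # True

y = torch.ones(5)
print(y.requires_grad)   # False
```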