Tensor To Array | Tensors Explained – Data Structures Of Deep Learning







Tensors Explained - Data Structures Of Deep Learning


Welcome back to this series on neural network programming with PyTorch. In this video, we kick off Section 2 of the series, which is all about tensors. We’ll cover tensor terminology and look at tensor indices. This will give us the knowledge we need to understand some fundamental tensor attributes used in deep learning. Without further ado, let’s get started. [MUSIC]

Tensors are the primary data structures used by neural networks. The inputs, outputs, and transformations within neural networks are all represented using tensors, and as a result, neural network programming utilizes them heavily.

The concept of a tensor is a mathematical generalization of other, more specific concepts. Each of the following examples is a specific instance of the more general concept of a tensor. Let’s organize this list of example tensors into two groups. The first group of three terms, number, array, and 2d array, are terms typically used in computer science, while the second group, scalar, vector, and matrix, are terms typically used in mathematics. We often see this kind of thing, where different areas of study use different words for the same concept.

The terms in each group correspond to one another as we move from left to right. To show this correspondence, we can reshape our list of terms to get three groups of two terms: number and scalar, array and vector, 2d array and matrix. The relationship within each of these pairs has to do with the number of indices required to access a specific element. Zero indices are required for a number and a scalar, because you just refer to the actual number or scalar value directly; you don’t need an index. When we move to an array or a vector, we need one index to refer to a specific element, and when we move to a 2d array or a matrix, we need two indices to refer to a specific element.

Let’s suppose we have an array called a with four elements, and suppose we want to access the number three in this data structure. We can do it using a single index, like so.
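The single-index access described above can be sketched like this (the contents of the array a, other than the value 3, are assumptions for illustration; the same indexing works on a torch.Tensor or a NumPy array):

```python
# A 1-d array (a rank-1 tensor) with four elements.
# A plain Python list is used here purely for illustration.
a = [1, 2, 3, 4]

# One index is enough to reach a specific element.
print(a[2])  # accesses the number 3
```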
As another example, suppose we have this 2d array called dd. Notice that we need two indices to refer to the number three in this 2d array.

When more than two indices are required to access a specific element, we stop giving specific names to the data structures and begin using more general language. In mathematics, we stop using words like scalar, vector, and matrix, and we start using the word tensor, or nd-tensor. The n tells us the number of indices required to access a specific element within the structure. In computer science, we stop using words like number, array, and 2d array, and we start using the term multi-dimensional array, or nd-array.

I very rarely use words like vector and matrix, because they’re just specific examples of something more general: they’re all n-dimensional tensors. So let’s make this clear. For practical purposes in deep learning and neural network programming, tensors are multi-dimensional arrays. Physicists get uneasy when you say that, because to a physicist a tensor has quite a specific meaning, but in machine learning we generally use the term this way. So tensors are multi-dimensional arrays, or nd-arrays for short.

The reason we say a tensor is a generalization is because we use the word tensor for all values of n. For example, a scalar is a zero-dimensional tensor, a vector is a one-dimensional tensor, a matrix is a two-dimensional tensor, and an nd-array is an n-dimensional tensor. Tensors allow us to drop these specific terms and just use an n to identify the number of dimensions we are working with.

One thing to know about the dimension of a tensor is that it differs from what we mean when we refer to the dimension of, say, a vector in a vector space. The dimension of a tensor does not tell us how many components exist within the tensor.
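The correspondence between the number of required indices and the tensor’s dimension can be sketched with the ndim attribute (NumPy is used here as a runnable stand-in for torch.Tensor, which exposes the same idea via dim(); the concrete values in dd are assumptions consistent with a 3x3 example):

```python
import numpy as np

scalar = np.array(3)                 # zero indices needed
vector = np.array([1, 2, 3, 4])      # one index needed
dd = np.array([[1, 2, 3],
               [4, 5, 6],
               [7, 8, 9]])           # two indices needed

print(dd[0][2])                      # two indices reach the number 3
print(scalar.ndim, vector.ndim, dd.ndim)  # 0 1 2
```

The ndim value is exactly the n in nd-tensor: the number of indices required to access a single element.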
If we have a three-dimensional vector from three-dimensional Euclidean space, we have an ordered triple with three components. However, a three-dimensional tensor can have many more than three components. Our two-dimensional tensor dd, for example, had nine components.

In the next post, we will cover the concepts of rank, axes, and shape. These are the fundamental attributes of tensors that we use in deep learning. If reading’s your thing, I highly recommend you check out the blog post for this video on deeplizard.com. Also check out the deeplizard hivemind for exclusive perks and rewards. Thanks again for contributing to collective intelligence. I’ll see you in the next one.
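The distinction between a tensor’s dimension and its number of components can be sketched as follows (NumPy stands in for torch.Tensor; the concrete shapes are assumptions, with dd matching the nine-component 2d example from the text):

```python
import numpy as np

# A 3-dimensional vector from Euclidean space: exactly 3 components,
# but as a tensor it is only 1-dimensional (one index per element).
v = np.array([1.0, 2.0, 3.0])
print(v.ndim, v.size)    # 1 3

# A 3-dimensional tensor of shape (3, 3, 3): 27 components.
t = np.zeros((3, 3, 3))
print(t.ndim, t.size)    # 3 27

# The 2-dimensional tensor dd has 3 * 3 = 9 components.
dd = np.arange(1, 10).reshape(3, 3)
print(dd.ndim, dd.size)  # 2 9
```

So a tensor’s dimension counts indices, while the number of components depends on the length of each axis.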
