Torchvision Datasets | PyTorch Datasets and DataLoaders – Training Set Exploration for Deep Learning and AI









Welcome back to this series on neural network programming with PyTorch. In this video, we will see how to work with the Dataset and DataLoader PyTorch classes. Our goal is to get comfortable working with these classes and to get a feel for the data inside our training set. Without further ado, let's get started.

From a high-level perspective, we are still in the preparing-data stage of our deep learning project. In this video, we're going to work with the dataset and data loader objects that we created in a previous video. Remember, we have two PyTorch objects: a dataset and a data loader. We called the dataset variable train_set, and we called the data loader variable train_loader. Now that we're all set, let's kick things off in a notebook with our imports: torch, torchvision, and transforms from within the torchvision package.

Our train_set is an instance of the FashionMNIST class, which also lives inside the torchvision package. In the constructor, we specify the directory where the data is located on disk, that we want the training data, and that the data should be downloaded if it doesn't already exist on disk; finally, we define a transform that should be performed on our data elements. The Compose class allows us to create a composition of transformations; in this case, we are just turning our data into a tensor, which is a single transformation.

For the train_loader, we use the DataLoader constructor and pass the train_set along with a batch size of 10. Note that we didn't specify a batch size in the last video, but we're doing it here so we can see more images when we begin working with actual batches. The default batch size is 1 if we don't specify an alternative. All right, let's look at some operations we can perform to better understand our data. First, we have to import a couple more packages.
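The dataset and data loader construction just described can be sketched as follows. The FashionMNIST constructor lines mirror the transcript but are commented out so the sketch runs offline, and the root path in them is a hypothetical placeholder; a synthetic TensorDataset with the same shapes stands in below.

```python
import torch

# The transcript's constructor (commented out to avoid the download;
# the root path is a hypothetical placeholder, and download=True would
# fetch FashionMNIST if it isn't already on disk):
#
# train_set = torchvision.datasets.FashionMNIST(
#     root='./data/FashionMNIST',
#     train=True,
#     download=True,
#     transform=transforms.Compose([transforms.ToTensor()])
# )

# A synthetic stand-in with the same element shapes as FashionMNIST:
images = torch.rand(60, 1, 28, 28)    # grayscale 28x28 "images"
labels = torch.randint(0, 10, (60,))  # integer class labels
train_set = torch.utils.data.TensorDataset(images, labels)

# batch_size=10 means each batch holds ten images; omitting it defaults to 1.
train_loader = torch.utils.data.DataLoader(train_set, batch_size=10)
```

Swapping the stand-in dataset for the real FashionMNIST instance leaves the DataLoader line unchanged, which is the point: the loader only cares that its dataset yields (image, label) pairs.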
We've imported numpy, and we've also imported pyplot from within matplotlib. The first line of code here that's not an import just sets the line width for PyTorch output printed to the console. We use Python's built-in len function to see how many images are in our training set, and as we would expect, this sixty-thousand number makes sense based on what we learned in the post about the Fashion-MNIST dataset. If you haven't seen that video, where we cover the Fashion-MNIST dataset and the importance of data in deep learning in general, I highly recommend you check it out.

The next piece of code gives us the label tensor for the dataset. The first image is a nine, and the next two are zeros. Remember from past posts that these values encode the actual class name or label: the nine, for example, represents an ankle boot, while the zero represents a t-shirt.

The next call, bincount, is pretty interesting. Essentially, we can create bins and then count the frequency of occurrences within each bin. We can call bincount on a label tensor, and it will give us the frequency distribution of the values inside the tensor. This shows us that the Fashion-MNIST dataset is uniform with respect to the number of samples in each class; as a result, this dataset is said to be balanced. If the classes had a varying number of samples, we would call the set an unbalanced dataset.

Yeah, so in general, your validation set and test set need to have the same mix or frequency of observations that you're going to see in production, in the real world, and then your training set should have an equal number in each class; if it doesn't, just replicate the less common class until it is equal. I think we've mentioned this paper before.
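The bincount check can be sketched on a small synthetic label tensor; on the real Fashion-MNIST training labels, the same call would show 6000 samples in each of the ten bins.

```python
import torch

# A synthetic label tensor standing in for the training set's labels:
# three classes with four samples each, i.e. a balanced set.
labels = torch.tensor([0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2])

counts = torch.bincount(labels)  # frequency of each value in the tensor
print(counts)                    # tensor([4, 4, 4])

# Balanced means every bin holds the same count.
is_balanced = bool((counts == counts[0]).all())
print(is_balanced)               # True
```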
A very recent paper tried lots of different approaches to training with unbalanced datasets and found consistently that oversampling the less common class, until it is the same size as the more common class, is always the right thing to do. So you could literally copy: say I've only got ten examples of people with cancer and a hundred without, so I could just copy those ten examples over and over until the classes are equal. That's a little memory-inefficient, so a lot of libraries, including, I think, scikit-learn's random forests, have a class-weights parameter that does the same thing. In deep learning, make sure your mini-batch is not a purely random sample but a stratified sample, so the less common class is picked more often.

The paper Jeremy briefly mentioned is this one: A Systematic Study of the Class Imbalance Problem in Convolutional Neural Networks. We can see the reference to oversampling here in the abstract, and if we jump into the paper and search for "best method", we'll find that, regarding the performance of different methods for addressing imbalance, in almost all situations oversampling emerged as the best method. The authors elaborate further in the paper's conclusion section; check it out if you're interested in learning more. The link is in the blog post for this video on deeplizard.com.

Class imbalance is a common problem, but in our case, we have just seen that the Fashion-MNIST dataset is indeed balanced, so we don't have to worry about it in our project. Let's now see how we can access one of our data samples in code, starting with an individual element from the train_set object.
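Before turning to individual samples, the stratified-sampling fix described above can be sketched in PyTorch. One common way (a sketch, not the paper's exact method) is torch's WeightedRandomSampler, which draws each sample with a probability inverse to its class frequency, so mini-batches come out roughly balanced:

```python
import torch
from torch.utils.data import TensorDataset, DataLoader, WeightedRandomSampler

torch.manual_seed(0)  # make the random draws repeatable

# An imbalanced toy set: 90 samples of class 0, 10 of class 1.
labels = torch.cat([torch.zeros(90, dtype=torch.long),
                    torch.ones(10, dtype=torch.long)])
data = torch.randn(100, 4)
dataset = TensorDataset(data, labels)

# Weight each sample by the inverse frequency of its class, so the
# minority class is drawn about as often as the majority class.
class_counts = torch.bincount(labels)                 # tensor([90, 10])
sample_weights = 1.0 / class_counts[labels].float()

sampler = WeightedRandomSampler(sample_weights, num_samples=100,
                                replacement=True)
loader = DataLoader(dataset, batch_size=10, sampler=sampler)

# Count which classes were actually drawn over one pass:
drawn = torch.cat([y for _, y in loader])
print(torch.bincount(drawn, minlength=2))  # roughly 50/50 rather than 90/10
```

Sampling with replacement is what makes this oversampling rather than mere reshuffling: minority examples are repeated across batches without physically copying the data.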
We first pass the train_set object to Python's built-in iter function, which returns an object representing a stream of data that we can then iterate over. With the stream of data, we can use Python's built-in next function to get the next data element in the stream. From this call, we are expecting a single sample, so we name the result accordingly. After checking the length of the sample, we can see that it contains two items. Hmm, this is kind of strange. What's going on here? Well, this is because the dataset contains image-label pairs: each sample we retrieve from the training set contains the image data as a tensor and the corresponding label as a tensor.

Since the sample is a Python sequence type, we can use a concept known as sequence unpacking to assign the image and the label. This is a shortcut for accessing each item in the sequence using its index: instead of writing these two lines of code, we just write this one line. This shortcut is called sequence unpacking or list unpacking; you may also hear it called deconstructing the object.

Let's look at some more code. We'll check the shapes of both the label and the image, and we'll also plot the image. But first, before we do this, I want you to think about the shapes of these tensors; try to figure out what you think the shape of each of these, the image and the label, will be. As opposed to RGB images, which have three color channels, grayscale images have a single color channel. Since we are dealing with grayscale images, this is why we have one channel, along with a height and width of 28. The label is a scalar value, and so we have a scalar-valued tensor with no shape, which is what the empty brackets here represent. To plot the image, we use the imshow function: we pass the image tensor with the color-channel axis squeezed off, and for the color-map parameter we pass 'gray' for grayscale. Also, just below, we print the label, which in this case is a nine.
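The iter/next access, the sequence unpacking, and the shapes just discussed can be sketched with a one-element stand-in dataset shaped like a FashionMNIST sample (the matplotlib call is left as a comment since nothing is displayed here):

```python
import torch

# A one-element stand-in for train_set: an (image, label) pair with
# FashionMNIST's shapes; label 9 corresponds to "ankle boot".
train_set_stub = [(torch.rand(1, 28, 28), torch.tensor(9))]

sample = next(iter(train_set_stub))  # iter gives a stream, next pulls one item
print(len(sample))                   # 2 -- an image tensor and a label tensor

# Index access...
image = sample[0]
label = sample[1]
# ...or the equivalent one-line sequence unpacking:
image, label = sample

print(image.shape)            # torch.Size([1, 28, 28]) -- one channel, 28x28
print(label.shape)            # torch.Size([]) -- a scalar tensor, no shape
print(image.squeeze().shape)  # torch.Size([28, 28]) -- channel axis removed
# plt.imshow(image.squeeze(), cmap='gray') would then plot the image
```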
This is what we expect, given that we see an ankle boot in the plot. All right, let's see how to work with batches and the data loader now. All of the code here for the batch is nearly identical to what we saw for the single sample: we pass the train_loader instance to the iter function, and we use the next function to get the next batch. Then we unpack the batch into two variables. Since we are dealing with a batch this time, we use the plural forms for our variable names, images and labels, to indicate that we expect multiple of each. Before we check the shapes of these two tensors, pause the video and see if you can figure them out for yourself.

The tensor containing the images is a rank-4 tensor with a shape of 10 x 1 x 28 x 28. This tells us that we have 10 images, each with a single color channel and a height and width of 28. For the tensor containing the labels, we have a rank-1 tensor whose single axis has a length of 10; running along this axis, we have one label for each of the 10 images in our batch. Let's see now how we can plot the whole batch of images at once using the torchvision make_grid utility function.

Just like that, we're looking at a new wardrobe. At the top, we can see that we've created a grid using the torchvision make_grid utility function. We pass the images tensor as the first argument, and for the nrow parameter we pass 10 so that all of our images will be displayed along a single row. The nrow parameter specifies the number of images in each row; since our batch size is 10, this gives us a single row of images. After we have the grid, we specify some pyplot configurations, and we use numpy to transpose the grid so that the axes meet the specifications the imshow function requires. We also print the labels here so we can verify for ourselves that the data is as it should be. Remember, we have the following table that shows the label mapped to each of the class names.
We should now have a good understanding of how to explore and interact with datasets and data loaders. Both of these will prove to be important as we begin building our convolutional neural network and our training loop. In fact, the data loader will be used directly inside our training loop as we iterate over our data during the training process. So here's a fun fact: if you want to see more data at once, try increasing the batch size on the data loader. Be sure to check out deeplizard.com, where you can see the blog post for this video, and check out the deeplizard hivemind, where you can get deeplizard perks and rewards. Thanks for contributing to collective intelligence. I'll see you in the next one.

Imagine this: you wake up in the morning, take a breath of fresh air, and smell bacon, eggs, toast, coffee. Your favorite breakfast, already made for you, and you didn't even have to get out of bed. So you get up, go into your kitchen, and indulge yourself, but as you're eating, a screen pops up with your day already planned out for you, and you didn't lift a finger. Then, as per your schedule, you get up and head to your car, and as you open the car door, you realize something: there aren't actually any seats. In fact, there's not even a steering wheel, just a couch for you to lie down on. So you do. As you lie down, the car takes off, already knowing where your work is, without you having to say a thing. As it's speeding along the road, you're talking to someone, although you're not just talking to anyone; you're talking to a friend, the same friend that made your breakfast this morning, planned your schedule, and is driving the car right now. And as you're talking, you're talking through a computer, although "through" isn't really the right word, because your friend is the actual computer.
