Machine Learning Supply Chain | How I Automated a Supply Chain with Machine Learning, AWS, and Python

Derrick Sherrill








[MUSIC] Hello everyone, and welcome back to the channel. Today I want to go over how I automated a supply chain using Amazon Web Services, machine learning, and a Python script. I want this to be a brief overview of how I did it, not necessarily a tutorial on how to do it, so if you have questions about what I did specifically, please let me know in the comments below. If this video gets enough traction, I'll be sure to make a full tutorial on how I did everything, so if you're interested in that, please hit the like button below. There are four main steps I followed to implement a supply chain machine learning algorithm. The first is the data: everything you do in machine learning comes from the data, and if you don't have a data collection system, there's no way you'll ever be able to use that data to predict. So the first requirement is that you have a way to collect data in your working environment. The second step is data cleaning, or feature engineering as machine learning engineers call it. This means you have to separate the good data from the bad data, which means understanding your system well enough to use only the good data when training your model. The third step is picking the actual machine learning method you want to use. There are a ton of different algorithms out there, and you have to find one that fits your problem. The fourth step is visualization of your results. You need to be able to look through the results of your machine learning algorithm, find the ones that are important to you, and act on them very quickly. I'll try to cover all these steps very briefly in this video.
If you want a full tutorial, please let me know down below. The first thing we need to look at is our data collection. For me, the data I had working in the supply chain was the consumption of a raw material, given by the part numbers up here, for each month, denoted by the month in the index column. So for example, our part number 136 in the month of July 2011 consumed 41,360. Essentially, I have a time series for each of these part numbers, and I have this data for multiple years and a lot of part numbers. Zooming out, we can see just how big this data set is. The way we personally got this data was SAP: we just exported the raw material consumption for each part number monthly and put it into this spreadsheet. Step number two, the data cleaning and feature engineering: for us, since we're working in a supply chain, if there's any demand, we need to be able to cover it. We don't want to fall below any amount of our predicted demand in the future. That means we want complete customer availability, ready to make whatever product our customer wants, so even though some of these columns are very sparsely used, we still need to keep them in our set. Although a machine learning algorithm may not be too accurate at predicting such a sparse time series, we still need to give it a chance. For the third step, picking the algorithm we want to use, I picked the DeepAR forecasting algorithm through AWS. I'll try to very briefly explain why I picked this one using a graphic on the screen, but please know that there's a lot more that goes into this selection compared to other machine learning algorithms.
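To make that layout concrete, here is a small pandas sketch (the part numbers and values are made up for illustration) of how a monthly consumption table like the one described can be reshaped into the one-series-per-part record format that DeepAR expects:

```python
import pandas as pd

# Made-up miniature of the spreadsheet: one column per part number,
# one row per month of raw material consumption.
df = pd.DataFrame(
    {"136": [41360, 38000, 27500], "240": [0, 500, 750]},
    index=pd.period_range("2011-07", periods=3, freq="M"),
)

def to_deepar_series(frame):
    """Turn each part-number column into DeepAR's JSON-lines record shape."""
    return [
        {"start": str(frame.index[0]), "target": frame[col].tolist()}
        for col in frame.columns
    ]

series = to_deepar_series(df)
print(series[0])  # {'start': '2011-07', 'target': [41360, 38000, 27500]}
```

Each record carries its own start date and target list, which is why every part number can become one time series in a single training set.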
The DeepAR algorithm from AWS actually takes into account every time series in the set. So instead of using an algorithm like ARIMA, which just takes in one time series and predicts off it, DeepAR takes every time series in the set and makes one big model to predict the outcomes of each time series; you're using one singular model. For the visualization of the data I used a Jupyter notebook. In a Jupyter notebook you're able to execute Python code and collaborate with others, which is exactly what I needed for this project, so that's why I picked it. There are other options you can pick for this as well; Google Colab is one you might consider too. So now let's get into the specifics: how can we do it? The first thing we need to do is load our Excel data into an Amazon S3 bucket. Amazon S3 is just a simple storage solution, and what that means is that you just put files here, and then you can access those files from other Amazon services. So I created a bucket and placed the Excel sheet into the bucket. Next, we need to use Amazon's machine learning platform, which is called SageMaker. In SageMaker I just created a notebook that will hold my Python code. These notebooks are just Jupyter notebooks, so that means we can collaborate with other users in them. The way our data will flow is that we will take the data from the S3 bucket we just placed it in, feed it into SageMaker, and then take the output data from SageMaker and feed that back into an Excel sheet, which we place into an S3 bucket. Once you place that Excel sheet into an S3 bucket, we're able to download it, and then we can use it in our own Python script to manipulate the values and draw graphs.
Opening up the notebook, we'll look at the code a little bit, but if you want a full tutorial on this, be sure to like the video. Essentially, we pull in all the packages we want to use, and then we import our data: we say that our bucket is here, we specify where the data is within the bucket, and then we load it in. We also designate an output path, so we say this is where we want the output of our data to go. Next, we have to set some hyperparameters. Here I've set the frequency as a month using the character "M", and I'll talk about this at the end of the video, but there are some improvements that can be made if you're trying to do this for yourself. I have a prediction length of five, so this is saying I want to predict five months into the future, and then a context length of twelve. The context length is just how many frequencies you want to look back, so I'm saying I want to look back twelve months into the past. However, these algorithms automatically take seasonality into account, so twelve months is more than enough for this application. Once we load the data in, we just need a simple check to make sure it's input correctly; here I've done that by putting in a graph. Next, we can create the time series. There are a few more complex operations happening in here, but we won't go through them in depth; all we're doing is formatting our Excel data into a form SageMaker can read. Once we have the data formatted, we need to specify our estimator. An estimator is just the type of machine learning algorithm you want to use; here we're specifying DeepAR and the hyperparameters I've set for this project. Obviously, if you train for more epochs, then your model is most likely going to be better, and that's one change I would make to this. The data I was working with was very large, and we found that everything
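For reference, the hyperparameters discussed above can be collected in a plain dict; `time_freq`, `prediction_length`, `context_length`, and `epochs` are DeepAR's documented parameter names, and the values are just the ones mentioned in this video:

```python
# Sketch of the DeepAR hyperparameters discussed above; in SageMaker these
# would be passed via estimator.set_hyperparameters(**hyperparameters).
hyperparameters = {
    "time_freq": "M",          # monthly data
    "prediction_length": "5",  # forecast 5 months ahead
    "context_length": "12",    # look back 12 months
    "epochs": "80",            # training passes; more can help, up to a point
}

for name, value in hyperparameters.items():
    print(f"{name} = {value}")
```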
past 80 epochs was not necessarily beneficial. Since we were looking at thousands and thousands of pounds of these raw materials, we just needed to be in the ballpark. Additionally, a lot more work went into these hyperparameters, so be sure to look at them closely whenever you're trying to apply this yourself. When using these Amazon algorithms, they create an endpoint; an endpoint just stays up so that you can make live predictions from your model. Here we're defining the predict function we're going to use. This is just copied and pasted from Amazon, and our prediction times are modified a little for our specific case. Once we execute our predict function, we get back graphs that look like this: the blue is the target, which is the actual values we put in, and the prediction median is what the model predicted to happen in that month. Lastly, since I didn't do a risk assessment for these items, I tried to have different levels of the raw material that I would need at different confidence levels. Although I didn't denote this specifically in the Python script, if there was a critical raw material that I knew we often ran out of, I would use the 90% confidence level instead of the 75% and purchase at that level. All you need to do here is compare two Excel sheets using Python; I have examples of comparing two Excel sheets in other videos, and I'll link those in the description below. If your predicted consumption is higher than your inventory level, that should trigger you to make a purchase order. Since I was only in this role for a few months before moving on to my next opportunity, there are a few things I never got the chance to implement, but if you're trying to implement this yourself, I would encourage you to think about them.
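The comparison step described above (predicted consumption versus inventory, with a higher confidence level for critical parts) can be sketched like this; all part IDs and quantities are invented for illustration:

```python
import pandas as pd

# Made-up forecast quantiles (per part) and current inventory levels.
forecast = pd.DataFrame(
    {"p75": [1200, 300], "p90": [1500, 450]},
    index=["136", "240"],
)
inventory = pd.Series({"136": 1300, "240": 500})
critical = {"136"}  # parts we often run out of

def needs_purchase_order(part):
    # Buy at the 90% level for critical parts, 75% otherwise.
    quantile = "p90" if part in critical else "p75"
    return forecast.loc[part, quantile] > inventory[part]

to_order = [p for p in forecast.index if needs_purchase_order(p)]
print(to_order)  # ['136']
```

Part 136 triggers an order only because its 90% quantile (1500) exceeds inventory (1300); at the 75% level it would not have.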
In this example, we started from the raw material consumption, but the better way to use a machine learning algorithm to predict what levels you should purchase at is to look at the customer orders. When you look at these orders, you should try to predict the level at which they will come in. Once you predict the orders you will have, you should relate them to the bill of materials and then purchase your raw materials from that. Another thing we didn't cover in the script was the lead time of the raw materials: if your lead time is much longer, then we should have a way to trigger a PO much more in advance. Unfortunately, this algorithm didn't take that into account. Thirdly, I mentioned it in the video, but a risk assessment was never done, so we couldn't truly automate this system, because the algorithm doesn't know which components we tend to run out of. Lastly, the granularity of these calculations wasn't the best. In reality, if you're running a supply chain, you should have the finest granularity you can in your calculation. That just means that where I used monthly consumption, because that's all I had, if you have daily consumption, then your algorithm will be much more exact. And that's all for this video. I'll link all the services I used in the description below. Remember, if you want a full tutorial on how I did this step by step, leave a like on this video. If you have any questions or comments in the meantime, be sure to post them in the comments. Until next time! [Music]
