
Markov Chain Machine Learning | Python & Machine Learning | Introduction To Markov Chains | Part 1 | Eduonix

Eduonix Learning Solutions




Transcript:

[MUSIC] Hello, everyone! Welcome to the next chapter. We are going to be talking all about these things called Markov models, and then we're going to build a specific kind of Markov model called a hidden Markov model. I said "model" a lot of times there, didn't I? Don't worry about that for now. In this class I'm going to introduce you to what Markov processes are, explain exactly what we're talking about when we talk about hidden Markov models, and then give you some scenarios and ideas about how these kinds of models work, when they're appropriate, and why you'd want to use them. Okay, so Markov models come from a very long history in probability theory. What we're going to be doing here is considered an unsupervised method, but you can consider Markov models to be very different from a lot of the other types of learning we have covered previously. They are most similar to the processes that occur in random forests: if you look back at all the different learning models we have gone through, random forests seemed to have the most different approach to making decisions and learning from data, and the hidden Markov model is going to be closer to that in terms of its conceptualization and what we're actually trying to extract from the information. So first off, why don't we start with a list of definitions. Firstly, a Markov process: basically, we have a sequence of events, or states. The example we can use is whether or not to go outside to play sports, given the weather, or certain values of the weather. If we are in the state of playing sports, we can have some information, like whether it's foggy or sunny, and we can use all that kind of information to determine the likelihood that we move from one state to another: either continuing to play sports or moving on to not playing sports.
So the probability of moving from state to state, given some information or time, is what we build these models around, and we'll show you a visualization of what that looks like. The probabilities built around the likelihood of staying in the golfing state versus transitioning to another state, like not golfing, or work, all sum to 1, or 100%. The transition probabilities for a Markov model are essentially a large array of the likelihoods that the model will move from each state to each other state. Now, remember that in a Markov process each event is kind of independent: you use what you know about the current state to inform the next event, and how a state moves is influenced only by the factors involved in making that decision, or at least by the data that we have. Okay, the next slide, which is filed under "cost function," is again not really the best label for this information, but this is what we're talking about when we talk about a simple Markov process. What you're seeing here are three circles, one, two, and three, with arrows going between them. These would be our states, so we could say that state one is playing sports, state two is not playing sports, and state three is work.
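The "large array of likelihoods" described above is usually written as a square transition matrix, where entry (i, j) is the probability of moving from state i to state j and each row sums to 1. A minimal sketch with NumPy (the specific numbers here are hypothetical, not taken from the video):

```python
import numpy as np

# Hypothetical 3-state transition matrix: rows are "from" states,
# columns are "to" states (e.g. sports, no sports, work).
P = np.array([
    [0.60, 0.34, 0.06],  # from state 1
    [0.50, 0.00, 0.50],  # from state 2 (never stays in state 2)
    [0.25, 0.25, 0.50],  # from state 3
])

# Every row (outgoing probabilities) must sum to 1;
# the columns (incoming probabilities) are not constrained.
assert np.allclose(P.sum(axis=1), 1.0)
print(P.sum(axis=0))  # column sums need not be 1
```

The row constraint is the key invariant: from any state, the chain has to go *somewhere* (possibly back to itself), so the outgoing probabilities exhaust all possibilities.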
If we look at state one and where its arrows point, we can see that the probability of moving from state one to state two is 34%, the probability of moving from state one to state three in the next iteration is about six percent, and there's some chance of staying in state one. You'll notice that state two doesn't have any probability of returning to itself: if you're in state two, there's a 0% chance of staying in state two in this Markov chain. This would be considered a simple Markov chain, and the Markov process in this example shows how you can think about moving from state to state over time. You will notice that the arrows pointing out from a state sum to one, because there can't be a 110 percent chance of moving from one state to another; but the probabilities of arrows coming in to a state do not have to sum to one. So it's really the transition probabilities out of each state that sum to one; it's always measured from a specific starting point. Okay, that's basically how the Markov process is working in this illustration. So why don't we now go into our syntax section and start talking about another example. Okay, everyone, in this example I'm going to craft our own Markov chain. We had that original image, and I was describing what each of the numbers means, but let's actually create our own example and see what we can do from there. So first off, I'm going to create an example that has four states: 1, 2, 3, 4. We could do this with weather, but let's not, because weather would be the same example every time.
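The idea of "moving from state to state over time" can be made concrete by simulating the chain: at each step, sample the next state from the current state's row of the transition matrix. A sketch (the matrix here is assumed for illustration, not the exact figures from the slide):

```python
import numpy as np

rng = np.random.default_rng(0)

states = ["sports", "no sports", "work"]
P = np.array([
    [0.60, 0.34, 0.06],  # from "sports"
    [0.50, 0.00, 0.50],  # from "no sports" (no self-loop)
    [0.25, 0.25, 0.50],  # from "work"
])

def walk(start, n_steps):
    """Sample a trajectory of n_steps transitions starting from `start`."""
    i = states.index(start)
    path = [start]
    for _ in range(n_steps):
        # Next state is drawn using the current state's outgoing probabilities.
        i = rng.choice(len(states), p=P[i])
        path.append(states[i])
    return path

print(walk("sports", 5))
```

Because row two has a zero on its diagonal, no sampled trajectory will ever contain "no sports" twice in a row, mirroring the slide's state two.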
Let's do this with something like mental states: happiness, sadness, you name it. So we have state 1, state 2, state 3, and state 4, and we'll say that 1 is happy, 2 is sad, 3 is angry, and 4 is afraid. Okay, in this example we'll say that the likelihood that if you're happy you stay happy, for a generally happy person, is about 20%; actually, let's make that a little larger, say 0.4, or 40%. The likelihood that you go from happy to fearful we can set fairly high, say 0.2, because "I'm so happy that this is scaring me," or something. This happy person's emotional Markov chain is not going to have any probability of going from happy to angry: if they're happy, they're just happy. And sometimes this individual goes from happy to sad very quickly, so in this case that transition is going to be 0.4. Basically, if this person is happy, the highest likelihoods are that they either stay happy or move into sadness when their mood changes. Next, let's talk about anger. We'll say that this person never stays angry for long, so there's no self-loop on the angry state; but they can go from angry to sad very often, because they feel regretful or something, or they can go from angry to happy at something around 0.2, because, you know, they got their anger out in a very healthy way and it made them happy that time; and occasionally they become afraid, because, I don't know, maybe their anger got them into trouble or something like that. We're going to build this out for all of these states.
All right, so from fearful to sad we can set the probability high, staying fearful we can set to, you know, 0.3, and then going from four to three at 0.1 and going from four to one at 0.1. Now remember, all of these numbers, the 0.4s and the 0.2s, are the transition probabilities. We either need these transition probabilities up front, or we need the model to approximate them, given some state data that we're going to receive. So with our regular Markov model or our hidden Markov model, there are two ways to go about this. We can give our model the transition probabilities ahead of time, or some hypothetical ones, and then feed it some states, some data, and it can give us the likelihood that that sequence of events occurred. Let's say we are passing in data about a person using a mood-tracking app or something: what we pass in as data is their mood at every 24-hour period, a daily mood counter. We could then calculate the probability that this sequence occurred under our Markov model and produce some percentage likelihood. I don't think it would actually be 73 percent in this case; it would be much lower. Or, alternatively, we can give the model a bunch of these state sequences and have it derive the transition probabilities. The examples we're going to be talking about in our coding section will usually include some of this information. This is one of the trickier things about using hidden Markov models. As it turns out, scikit-learn started to have a hidden Markov package, but they realized it was beyond the scope of what they were trying to do and the kinds of models they were generally providing, so a whole separate library called hmmlearn was built.
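Scoring a mood sequence under a known Markov chain, as described above, is just the product of the transition probabilities along the sequence. A sketch using a made-up mood matrix (the numbers loosely follow the transcript's example but are assumptions, chosen so every row sums to 1):

```python
import numpy as np

moods = ["happy", "sad", "angry", "afraid"]
# Hypothetical transition matrix; each row sums to 1.
P = np.array([
    [0.40, 0.40, 0.00, 0.20],  # from happy (never happy -> angry)
    [0.30, 0.40, 0.20, 0.10],  # from sad
    [0.20, 0.50, 0.00, 0.30],  # from angry (never stays angry)
    [0.10, 0.50, 0.10, 0.30],  # from afraid
])

def sequence_probability(seq):
    """Probability of the sequence given its start state:
    the product of the step-by-step transition probabilities."""
    idx = [moods.index(m) for m in seq]
    prob = 1.0
    for a, b in zip(idx[:-1], idx[1:]):
        prob *= P[a, b]
    return prob

# One observation per 24-hour period, like the mood-tracking app example.
daily_moods = ["happy", "happy", "sad", "angry", "sad"]
print(sequence_probability(daily_moods))  # 0.4 * 0.4 * 0.2 * 0.5 = 0.016
```

Note how quickly the product shrinks even for a short, plausible sequence, which is why the transcript says the real likelihood would be "much lower" than 73 percent; in practice, log-probabilities are usually summed instead to avoid underflow.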
So what I'm trying to tell you with all of this, guys, is that how we build a hidden Markov model is going to be a little different from how we build other models. We either have to feed it the states we suspect and the transition probabilities we expect, then feed it a state sequence and get back a likelihood; or we could build a model that takes a bunch of state sequences and adjusts its parameters according to the transition probability likelihoods; or we have to give it a bunch of these state sequences as data and have it learn the transition probabilities from them. Okay, one last thing. These discrete values, the states 1, 2, 3, 4 here, are the most common and simplest way to describe Markov models; however, the states could also be continuous values. They don't have to be discrete labels, though we'll only work with discrete labels in this chapter. But the point is, I want to show you the conceptualization of Markov chains and Markov models, why they might be useful for what you're doing, and what kinds of problems they address. So in the next class, guys, we're going to talk about hidden Markov models, where along with a state you have a bunch of latent variables, some other pieces of data that influence the likelihood of moving from one state to another. We call these hidden variables. So you'll have these states, you'll have some transition probabilities, but there's kind of an extra, hidden layer that is influencing the outcome of whether or not the chain moves from one state to another. Okay, so with that, let's just do a brief review. In this class, guys, we introduced you to the concept of Markov processes and Markov chains.
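Going the other direction, learning the transition probabilities from observed state sequences, can be sketched without any special library: count the observed transitions and normalize each row. (hmmlearn, mentioned above, handles the harder hidden-state case; this plain-counting sketch only covers fully observed chains, and the toy data below is invented for illustration.)

```python
import numpy as np

def estimate_transitions(sequences, n_states):
    """Maximum-likelihood transition matrix from fully observed sequences."""
    counts = np.zeros((n_states, n_states))
    for seq in sequences:
        # Tally every consecutive pair (from_state, to_state).
        for a, b in zip(seq[:-1], seq[1:]):
            counts[a, b] += 1
    # Normalize each row; rows with no observations fall back to uniform.
    totals = counts.sum(axis=1, keepdims=True)
    return np.where(totals > 0, counts / np.maximum(totals, 1), 1.0 / n_states)

# Toy observed mood sequences (0=happy, 1=sad, 2=angry, 3=afraid).
data = [[0, 0, 1, 2, 1], [0, 1, 1, 0], [2, 1, 0, 3]]
P_hat = estimate_transitions(data, 4)
print(P_hat)
```

With enough data, the counted frequencies converge to the true transition probabilities; the hidden Markov model case requires an iterative algorithm instead, because the states generating the observations are not directly visible.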
We described how basic Markov processes work with three- and four-state systems, the very simple versions, and I introduced you a little bit to the concept of hidden Markov models. We'll work on those in our next class. With that, guys, thank you very much, and see you then! [MUSIC]
