Keras Custom Metrics | Optimizers Losses And Metrics – Keras

Data Talks






Optimizers Losses And Metrics - Keras


Hello, and welcome back to A Bit of Deep Learning and Keras, where we learn just a teeny, tiny bit of deep learning and a whole lot of Keras. Today we’re going to be talking about the nitty-gritty parts of models: the optimizers, the losses and the metrics. If you guys remember the compilation of models, this is all the goodies that are involved with that. This is pretty important, so I hope you guys stay tuned. Let’s get into it: usage of optimizers. There’s not too much to display here. We make a model — we can make it with all sorts of crazy and complex layers — and then we want to compile our model and have it trained in a very specific way, and there are lots of ways to train a model. The important thing to note is that, to figure out how to train these models, you check out the optimizers: keras.optimizers. We find the optimizer that we want, say SGD — whatever this magic mumbo jumbo is, stochastic gradient descent — we specify lots of hyperparameters for it, and then we pass it in: optimizer=sgd. Or you can always just do the normal thing, which is generally what I’ll do unless I’m fine-tuning: you pass in the optimizer by its name, optimizer='sgd'. This does the exact same thing, just with default parameters. All optimizers share a couple of very common parameters. I’m not going to be going into all these optimizer tricks, except in the history-of-deep-learning videos, where I’ll probably go over a couple of these: SGD (stochastic gradient descent) very much so, Adam maybe, momentum very much so, RMSprop maybe, we’ll see. These are incredibly important and generally good-to-know stuff, regardless of what type of learning you’re trying to do.
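To make the SGD idea concrete, here is a minimal plain-Python sketch of the stochastic gradient descent update rule that keras.optimizers.SGD implements (the toy quadratic objective and the step count are illustrative assumptions, not from the video):

```python
# Minimal sketch of the SGD update rule: each step moves the parameter
# against the gradient, scaled by the learning rate.

def sgd_step(w, grad, lr=0.01):
    """One SGD update: w <- w - lr * grad."""
    return w - lr * grad

# Minimize f(w) = (w - 3)^2, whose gradient is 2 * (w - 3).
w = 0.0
for _ in range(200):
    w = sgd_step(w, 2 * (w - 3), lr=0.1)

print(round(w, 4))  # converges toward the minimum at w = 3
```

In Keras itself the equivalent of this loop is handled for you once you pass the optimizer (by instance or by name string) to `model.compile(...)`.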
Okay, so some things that all optimizers have. Every optimizer has a learning rate, which is basically how fast it trains. It’s important to tune this, because going too fast can mean you overshoot your destination, and going too slow means it’s going to take forever to get there. The other one is clipnorm. What this specifically means is: let’s say you’ve got a gradient, and you don’t want any one step to be too big. Sometimes you’re on a very steep part of the surface, and the gradient there basically says, “man, you should go really far.” So you want to clip the norm of that gradient: you don’t want it to go too far, so you’ll still learn from that gradient, but you slow down if it would make you go too fast. Those are two parameters common to all sorts of optimizers. You can even use TensorFlow optimizers as well: Keras provides TFOptimizer, which takes a TensorFlow optimizer and wraps it, and you can give it, say, TensorFlow’s proximal gradient descent optimizer, something like that. So that’s optimizers. Again, I’m sorry, we’re only learning a tiny bit of deep learning here, so I’m not going to go over exactly what each of these means, but just as a high-level thing: you can give your model lots of different optimizers, and you can read about them and figure out which one you like best or which will work best for your application. Optimizers are the way that you learn; they’re the way that you train. All optimizers have a learning rate and a clipnorm: the learning rate is the speed at which you learn, and clipnorm is the max step size you can take in a particular update. So, okay, that’s optimizers. There are two more things in the compilation function.
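The clipping behavior described above can be sketched in a few lines of plain Python. This is a simplified stand-in for what the `clipnorm` argument to a Keras optimizer does (the function name and example vectors are my own, for illustration): if the gradient’s L2 norm exceeds the cap, the whole vector is rescaled so its norm equals the cap, preserving its direction.

```python
import math

def clip_by_norm(grad, clipnorm):
    """Rescale `grad` so its L2 norm is at most `clipnorm`."""
    norm = math.sqrt(sum(g * g for g in grad))
    if norm <= clipnorm:
        return list(grad)          # small gradient: leave it alone
    scale = clipnorm / norm
    return [g * scale for g in grad]  # steep spot: same direction, shorter step

big = clip_by_norm([3.0, 4.0], clipnorm=1.0)    # norm 5.0, gets rescaled
small = clip_by_norm([0.3, 0.4], clipnorm=1.0)  # norm 0.5, unchanged
print(big, small)
```

Note the clipped vector still points the same way; only the step length changes, which is exactly the “learn from this gradient, but slow down” behavior the video describes.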
We’ve got loss functions and we’ve got metrics. Loss functions need to be minimized — remember, your training is there to minimize this loss function. You can use all sorts of them; mean squared error is one. You can look through keras.losses and pick out the particular function that you want, and you can even include anything that takes a y_true and a y_pred and spits back a loss. So tf.nn.log_poisson_loss is one example of something you can drop in here. The loss is what you want to optimize, so generally it is some sort of distance metric between what your model outputs and the target, what your model should output. Okay, and then metrics were the last thing. Any loss function can be a metric, very simple: anything that takes in a y_true and a y_pred and spits back a value is good. Metrics can even be things that get more positive as things get better. You can, of course, make your own. For example, you can have mean prediction, which is somewhat weird: you don’t even look at the true value, you just spit out the mean of what your model predicts. So that’s it. It’s pretty short, and the reason why it’s short is because we’re only learning a teeny tiny bit of deep learning. I guess what’s important to remember here is: optimizers — there are a lot of them, you can pick any one that you want, and they all have their learning rate and their clipnorm. Loss functions — a sort of distance metric between the true value and the predicted value, and you’re trying to minimize that; you’re always trying to minimize it. And metrics are just anything you want to report. Okay, I hope this was useful. You guys now know the nitty-gritty on both layers and models.
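The contract described above — anything that takes y_true and y_pred and returns a value — is easy to sketch in plain Python. Below, `mean_squared_error` plays the role of a standard loss, and `mean_pred` is the “mean prediction” metric from the video, which ignores y_true entirely (these are simplified list-based stand-ins; the real Keras versions operate on tensors):

```python
def mean_squared_error(y_true, y_pred):
    """A loss: average squared distance between targets and predictions."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def mean_pred(y_true, y_pred):
    """A metric need not be a distance: this one just reports the
    average model output and never looks at y_true."""
    return sum(y_pred) / len(y_pred)

y_true = [1.0, 2.0, 3.0]
y_pred = [1.0, 2.0, 5.0]
print(mean_squared_error(y_true, y_pred), mean_pred(y_true, y_pred))
```

The key asymmetry: the loss must be something training can sensibly minimize, while a metric only has to produce a number worth watching.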
We’re going to be getting into a lot more fun stuff now that we have this done. I think we’re probably going to be moving into preprocessing next — we’ll have to wait and see, okay? I hope you guys enjoyed, and if you haven’t already, if you’ve enjoyed these videos to any extent, please go ahead and subscribe; there’s only going to be more fun to come. We’re covering the technical nitty-gritty stuff here. I sort of treat this as a catechism that we can go back and refer to when we go over the more intuitive stuff later on. But I think I’ll probably also do another video that’s just a high-level, directional one. Okay, thank you, and as always, it’s always a pleasure.
