What's going on everybody? Welcome to part 4 of the Deep Learning with Python, TensorFlow, and Keras tutorial series. In this video and the next, we're going to talk about how to analyze and optimize our models using TensorBoard. TensorBoard is a way for us to visualize the training of our model over time. Mostly you're going to use it to watch things like accuracy versus validation accuracy and loss versus validation loss, but there are some trickier things we can do with it that maybe we'll cover in time. The idea of TensorBoard is just to help shed some light on what's going on in your model. So with that, let's get started.

The first thing you might notice is that I'm adding two lines up here. They're not important for this tutorial, but if you're curious: any model, even a small one, is going to try to hog a hundred percent of your VRAM. If you want to run multiple models at the exact same time, you can specify the fraction of your GPU that each model takes up; in this case, a third. Mostly I'm doing this because when I rerun a model, it sometimes takes a minute to clear out the VRAM, especially if I'm using something like Sublime, and when I'm recording, having no space left means things crash. But if you're trying to run multiple models at once, like in the Python Plays GTA series, where we ran an object detection model and the self-driving model on the same GPU at the same time, this is super useful. Just a little tip if you find yourself needing it.

Anyway, let's get started. One more bit of housekeeping first: we need an activation function after this dense layer.
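For the curious, those two VRAM lines look something like the following. I'm writing them with the compat.v1 names so the same snippet also runs under TensorFlow 2.x; in plain TF 1.x you'd use tf.GPUOptions and tf.Session directly, and 0.333 is just the one-third fraction I mentioned.

```python
import tensorflow as tf

# Cap this process at roughly a third of the GPU's memory so that
# several models can share one card. On a machine with no GPU this
# still runs; the option simply has nothing to limit.
gpu_options = tf.compat.v1.GPUOptions(per_process_gpu_memory_fraction=0.333)
sess = tf.compat.v1.Session(
    config=tf.compat.v1.ConfigProto(gpu_options=gpu_options))
```

In the TF 1.x workflow you would then hand this session to Keras with keras.backend.set_session(sess).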
It was kind of silly of me to forget that; someone pointed it out, and yeah, it's definitely problematic. Without an activation function there, the dense layer just has a linear activation, which is pretty much useless in this case; we're definitely not trying to do any sort of regression here. So let's add the activation right after that dense layer, save, and with just that change we should find that accuracy is quite a bit better. I'm going to let that run.

While it trains, let's pop over to the TensorFlow documentation for the various Keras callbacks. The way we interface with TensorBoard is with a Keras callback, specifically the TensorBoard one, but take note of the other callbacks that exist here; there are quite a few. You can early-stop based on parameters you choose, there's a learning rate scheduler, and you can reduce the learning rate when things plateau. There's also ModelCheckpoint, which is super useful: it lets you checkpoint a model as it's training. Say you've got a model and you don't know how many epochs to do, and you don't want to overtrain it. You might set 50 or 100 epochs, knowing it's probably going to max out somewhere between ten and twenty. With ModelCheckpoint you can save based on various criteria, such as the best loss it's ever seen or the best validation accuracy, and so on. Super useful. But right now we're looking at the TensorBoard callback, and the only parameter we're going to touch is log_dir. The others are pretty cool too, and maybe at the end of the next video I'll briefly touch on some of them.

Anyway, let's move this over here and get started. The first thing we want to do is bring TensorBoard in: from tensorflow.keras.callbacks import TensorBoard. Okay, we've got that, and I just want to point out the result from before: validation accuracy of 72%, with in-sample accuracy about the same. So much better than before, and that was only three epochs.

Next, we're going to train a new model, and any time you're using TensorBoard, it's a good idea to give your model a useful name, something like "2x64 convnet", that answers "what model was that again?", because you're going to train a bunch of different models. So I'm going to come up to the very top and say NAME equals, and we'll call this Cats-vs-dogs-CNN-64x2. Good enough. Then I'm going to add one more little piece to it: the current timestamp, time.time() (we haven't imported time yet, so add that import), and in fact let's convert it to an int. The reason I want this is that sometimes you'll forget to change the name, or you'll just train the same model again for whatever reason, and the timestamp makes the name totally unique, so you don't end up overwriting a model. Worse, with the exact same name you're not really overwriting; you're actually appending, so TensorBoard will draw this funky zigzag in the graph, and you just don't want that. So we've got a name. I'm going to zoom in a little bit.
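Put together, the line at the top of the script looks something like this; the exact string is whatever describes your architecture:

```python
import time

# A descriptive run name plus a timestamp: the timestamp keeps a rerun
# from writing into an old run's logs (TensorBoard would append to the
# old data and draw a zigzag graph).
NAME = "Cats-vs-dogs-CNN-64x2-{}".format(int(time.time()))
```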
Just to make things easier, and let's make some more space too. Okay, once you've got the name, we're ready to specify the actual callback object itself. So tensorboard is going to equal a TensorBoard object, and we tell it where we want that log directory to be: "logs/{}".format(NAME). One of these days I really need to get into f-strings, which are the latest way to format strings; this is a little outdated, but forgive me. Once we've got that callback defined (and do keep in mind you can customize these callbacks or create your own, maybe with one of these as the parent class that you then modify, so you can do all kinds of funky stuff with this), all you need to do is pass it into your fit call. The way we do that is by passing in callbacks=, and this is a list, so it's a list of all your callbacks; in our case we just have the one, tensorboard. I'll paste that in, fix my styling there, save, and let's run it. In fact, I think we'll run for 10 epochs. Hopefully we don't crash the video as this goes... and there's a TypeError in the init. Is it log_dir, with an underscore? Probably, yeah, log_dir is the parameter there. Let's try one more time. Alright, looks good, we are training.

Okay, so as we train, you should now have a new directory right here called logs, and if we click on that, we can see our model's run in there right now. To view it as it's going, you want to open up a terminal, or rather a command prompt on Windows, and at least on Windows, the way you do this is you can just type
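As a self-contained sketch, the callback setup looks like this (NAME is repeated here just so the snippet stands on its own; the batch size, epoch count, and validation split in the commented fit call are example values):

```python
import time

from tensorflow.keras.callbacks import TensorBoard

NAME = "Cats-vs-dogs-CNN-64x2-{}".format(int(time.time()))

# One subdirectory of logs/ per run; note the parameter is log_dir,
# with the underscore.
tensorboard = TensorBoard(log_dir="logs/{}".format(NAME))

# Then hand it to fit as part of the callbacks list:
# model.fit(X, y, batch_size=32, epochs=10,
#           validation_split=0.3, callbacks=[tensorboard])
```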
cmd. From there, just make sure you're in the main directory, the one that houses logs. Now, to get TensorBoard running, just type tensorboard --logdir=logs. What if that has an underscore in it? Yeah, okay, that must be why; I didn't do the underscore. So then it says it's running at this address, and you can just copy and paste that address into a browser if you want, or manually type it if it's as simple as mine was.

Hmm, for whatever reason, mine is not showing up. Let's see: logs... the deep learning basics logs directory is the one I'm looking at. I wonder if I'm running another TensorBoard right now. Let me kill this one and run tensorboard --logdir=logs again, this time without the quotes. I don't think that'll change it; sometimes it can be picky, but I don't think that's going to be it. No... maybe that was it after all. On Windows it can get kind of finicky on you. That's weird. Anyway, so: no quotes on Windows. Sorry about that. For a while you even had to specify a little variable too, which was very annoying.

Okay, so there's our model, all nice and done. First of all, this box up here, I think, is just a simple regex filter. Yeah, so what we can do is select everything and visualize all the graphs together. So here you've got your in-sample accuracy and your in-sample loss, and then over here you've got your out-of-sample accuracy and your out-of-sample loss, and what you're hoping for is that in both cases accuracy goes up and loss goes down. Now, when we see something happening like this, we can clearly see that after the fifth epoch, validation loss starts creeping back up. Interestingly enough, validation accuracy still improved about two points, but it's questionable whether or not this model is actually better. It probably is, because that's out-of-sample.
It'd be nice if we had a larger out-of-sample set. But eventually, as this validation loss creeps up, the validation accuracy will fall too, say if we did maybe 20 epochs. Now, the question you might ask is: why is this happening? Well, the model is only getting the in-sample training data, so at some point it goes from generalizing, really seeing patterns, to memorizing all of the input samples, or many of them. So what you're going to want to do is always watch validation loss. At least for me, when I'm evaluating a model, I'm looking almost purely at validation loss and really nothing else.

So now you could begin to kind of change things up. For example, we could do a model without this dense layer at all. Let's go ahead and just remove the dense layer, and this time let's do twenty epochs. I don't know if I'm going to let it finish; this might take a while, and I don't want to just run up a bunch of... oh. We ran out of memory. Fabulous. Okay, so it crashed. Hopefully that didn't kill my recording; I will find out, and I'm going to be very angry if it did. So where were we? I've got to find the file we were working on... it didn't even close, after all that. Alright, twenty epochs. Okay, let me try to run it one more time while recording at the same time. Also, maybe in the future I'll run via the console, because for some reason Sublime doesn't want to clear the memory away even after you've cancelled the run and you're done; it just doesn't want to clear it, which is kind of annoying.

Okay, so as that's running: TensorBoard is still up... actually, I have no idea what I did with it. Maybe it crashed; I don't know where I put it. I'll just open another one, I guess. Okay, so here is the next model in training. I wonder, did we save the old run? Anyway, that's looking really similar.
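Incidentally, while that trains: the EarlyStopping callback from the docs page we looked at automates exactly this watch-validation-loss habit. A quick sketch, with example values for the parameters:

```python
from tensorflow.keras.callbacks import EarlyStopping

# Stop training once val_loss has failed to improve for 3 straight
# epochs, then roll the model back to the best weights seen so far.
early_stop = EarlyStopping(monitor="val_loss", patience=3,
                           restore_best_weights=True)

# Used alongside the TensorBoard callback:
# model.fit(X, y, epochs=20, validation_split=0.3,
#           callbacks=[tensorboard, early_stop])
```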
Yeah, okay: no dense layer. Interesting, we're actually doing worse. By the way, you probably would have wanted to update the name to something like "no-dense"; just keep that in mind, you always want to be thinking about the name. Also, why is our regular expression filter suddenly not working? How about any word character... or any letter? Okay, there we go.

So here we are at the tenth epoch. In-sample, we're still worse; out-of-sample is actually slightly better in most cases, which is curious, and the validation loss seems to be doing much, much better. We made it basically to ten epochs without too much issue. Let's refresh again; TensorBoard normally does this on its own every 60 or 90 seconds or something like that. Okay, we're done. So we can see here that the validation accuracy kind of starts to plateau, which makes sense, because after the eighth epoch in this case the validation loss starts creeping up. We can also see that in general the in-sample loss was kind of worse, but the out-of-sample loss was better. So again, personally, I would say the version without the dense layer is clearly victorious, because I'm more interested in what's going on with the out-of-sample than with the in-sample; it's just too easy to cheat on the in-sample data. It's almost like the better the in-sample looks relative to the out-of-sample, the worse the model is, because that means overfitment is occurring. So that's what I tend to look for.

Just a few other quick notes. If you wanted to look at just one run, you can check or uncheck runs here, or, if you had like 50 of them, you can click just one quickly. You can toggle all runs down here. And this is the smoothing: as you can see, there are these subtle raw lines behind the smoothed ones... no clue what just happened there. I think something's wrong, actually. Oh yeah, the graph definitely broke for a minute.
Anyway, you can change the smoothing: this would be no smoothing, so it's very raw and you can see how jagged everything is, or you can add a ton of smoothing to really smooth things out. I'm trying to think if there's anything else... oh, the filter. If you had a bunch of runs and, for example, you only wanted to find your 64x2 models, you could type that into the regex box. In this case we only have the one kind, but say you wanted the run at 344: it should find just that one specific run. So you can do stuff like that to filter through a bunch of models.

Okay, I think that's enough for now. In the next tutorial, we're going to talk about optimizing a model, because we could go back and forth manually changing things, rerunning, and changing things again, but it's going to be much simpler to just automate that entire process. So that's what we're going to be talking about in the next tutorial. If you've got questions, comments, concerns, whatever, feel free to leave them below. Also, we have a Discord: discord.gg/sentdex. Come in there, hang out with us, chat with us, ask questions, whatever. Otherwise, I will see you in the next video.