Tensorflow Object Detection Evaluation | Object Detection Part 5 – Evaluation And Tensorboard [tensorflow]

Daniel Persson


Transcript:

Hello, explorers, and welcome to another video. Today we are going to talk about evaluation, so we are going to evaluate what we have done so far. How good are we? How close to the truth are we? What is the actual status of our trained model?

To evaluate the model we actually need to install some more things. When you just run the model, you only need TensorFlow, Python and a few pip packages, but the evaluation script also needs a graphical library, as far as I understand it. On Windows that means installing ActiveTcl so that your Python has Tcl/Tk support. You can find it on the ActiveState download site; they ask for a quote, but you can download the free version. When you have installed it, you need to run the Python installer again, choose to change some features, and make sure the "tcl/tk and IDLE" checkbox is checked, so that tkinter is installed. The evaluation script uses tkinter, so this is important if you don't want failures while the script runs.

Running the evaluation itself, which I teased the other day, is quite easy. Once your system is set up, you run the object detection evaluation script from the models/research directory, with logging to standard error, the pipeline configuration you have been using all along, a checkpoint directory, and an evaluation directory where the results are put. It doesn't give you much output: it just says that it is starting to process and then it ends, without any pop-ups or extra information. It simply creates a directory with the evaluation result.

To look at that result you need TensorBoard. You run TensorBoard with a log directory pointing at either the training directory or the evaluation directory. On Windows, tensorboard.exe can be found in your Python Scripts directory. In my setup I have one instance on port 6006, the standard port, pointing at the evaluation directory, and another on port 6005 pointing at the training directory, so we can look at both. The sketches below show roughly what those commands look like.
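If you want to confirm that the Tcl/Tk support actually got picked up before starting a long evaluation run, a quick check like this should work (just a sketch; the exact Tk version printed will vary with your install):

    # Verify that tkinter (Tcl/Tk support) is available to Python
    python -c "import tkinter; print(tkinter.TkVersion)"

If this prints a version number instead of an ImportError, the evaluation script's tkinter dependency is satisfied.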
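The evaluation run itself looks roughly like this on the command line. This is only a sketch: the paths are placeholders for your own files, and depending on your version of the TensorFlow Object Detection API the script may live at object_detection/eval.py or object_detection/legacy/eval.py:

    # Run from the models/research directory
    python object_detection/eval.py \
        --logtostderr \
        --pipeline_config_path=path/to/pipeline.config \
        --checkpoint_dir=path/to/training_dir \
        --eval_dir=path/to/eval_dir

When it finishes, it has written event files into the evaluation directory rather than printing results, which is why TensorBoard is needed to inspect them.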
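And the two TensorBoard instances from the video could be started roughly like this (the directory names are placeholders; on Windows you may need to call tensorboard.exe from your Python Scripts folder):

    # Evaluation results on the standard port
    tensorboard --logdir=path/to/eval_dir --port=6006

    # Training results on a second port
    tensorboard --logdir=path/to/training_dir --port=6005

Then open http://localhost:6006 and http://localhost:6005 in the browser to see the two overviews described below.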
So let's go to the browser and take a look at what you can see in these overviews. This is the training directory, and here we get some scalars that I can make sense of. You can see how the learning rate behaved during the run: we used the same learning rate over the whole set, so the curve is pretty flat for me, but if your learning rate decays during training you will see that decline in this curve. Then we have the classification loss, which shows how good our classification actually got. It starts really high, then stays fairly stable around a value of 0.7, dipping down to 0.1 in some instances, but it keeps going up and down. Then there is the localization loss, which measures how close the predicted boxes are to where the boxes should actually be located in the image, and that is all over the place. The same goes for the RPN losses, which cover the region proposals, the suggested boxes and their localizations; these are related. Then we have the objectness loss, which reflects what kind of objects were found and how far from the correct result they are, and finally the total loss and the clone losses. I can understand these values a little bit, and I can also see that our training wasn't that successful, so there is room for improvement in the model, in the setup and in other things.

Then you have the graph tab, where you can look at the actual graph of the model we ran, and it is quite a complicated model. If you want to be really professional, or go deeper into the field, you can go in, change this model and figure out how a model should be built to perform better. One thing you can do without diving that deep is to colour the graph by device: if you turn that on, the bluish boxes are run on the CPU and the green ones are run on the GPU, so you can see how much of the workload goes to each and figure out how to set up your environment for the best performance. There is also an interesting check for TPU compatibility, and here you can see that this model is fully compatible with the TPU, so it could run on a tensor processing unit. So if you have built your own model and open it up in TensorBoard, you can look at the graph and see whether you could run it more efficiently on Google Cloud. It is good to know that you can check both which parts of the graph run on a GPU or a CPU and whether the model can run in the cloud; those are the kinds of questions you can answer in the TensorBoard utility.

Then we come to distributions, which is also a bit more low-level. If you want to debug your specific model, it goes into a lot of the different stages of the model, and they are quite advanced; I have to say I'm not really sure what all these values mean, so I will not talk about them. The same goes for histograms: here you can also debug your model and see how well it trained in the different stages, but as you see, it is seventy pages of values to go through, so you need some know-how about the actual model to debug it. On the projector tab I was expecting to see something that would tell me what kind of distribution I have, how well the model trained, and how far off the classifications were, but it only shows me values for the box encoding predictor weights and momentum, and the same for the class predictor weights and momentum. I'm not really sure these values tell me anything useful. If you are more advanced, you can perhaps use this to see how the weights are distributed and whether that correlates with the result you expect, but I'm not that well-versed in this specific model, so I can't really tell whether these weights are reasonable or not; this is a pre-built model that I haven't looked into in detail.
Then we look at the evaluation directory in TensorBoard, which shows a different kind of data. We have some scalars here, but they tell me pretty much nothing, because each one is just a single value. This is run against a frozen checkpoint, so they don't show you much: you can see that the classification loss ended up at about 0.3669, but without seeing the progression up to that point, these single points in time are not that interesting. You also have a graph here, and you can see that it is a lot simpler, because this is a frozen state of the model. If you freeze your model and run it, you can see that the graph is much smaller, and in this case a lot of the operations are not compatible with the TPU, so this specific evaluation model could not run on a TPU. That is good to know, but it is not the model you actually trained on. It could be interesting to check, when you freeze a model later on, whether that model can run on the TPU, but then again, you would normally use the TPU for training rather than for running inference on a frozen model, so I think it is okay that it is not TPU-compatible.

The thing that is really interesting here is that you can look at the images and see the specific predictions, and it is very interesting to see how well it actually went. The first image it doesn't recognize at all; it says TV monitor, or perhaps oven, for something showing the sun. Then it thinks this railroad car, this train, is a bus, so not really a great one. Then we have a little harbour crane that it calls a person, which is not that good either, and in the next image it doesn't find anything at all. Here we have a TV monitor, and it is fifty-five percent sure that it is a TV monitor, so that one is at least right, and here we have an aeroplane at sixty-five percent, well done, that is a correct one. Oh, a person, right on, it found a good one there too. Then we have a person and a car; I don't think there is actually a person in the car, so that was not a good match for person, but a pretty good match for the car, although I think it should have taken the full image for the car, so not a great one either. Here it is very confused: it could be a car, or a bus, or both. Well, it is a vehicle, so it is at least partly right, but it should be a bus. And last but not least, here is a lot of persons, and it is actually right; it found too many of them, but some of the bounding boxes are pretty good at finding persons. As you see, the training on this data set was not good, and that is the interesting part: you can go in, look at a specific image set and see how well your model actually trained. It is a good way to evaluate whether I did a good training run or whether I need to train this model again.
So that was pretty much what I wanted to talk about today. I hope you found it interesting to check out this TensorBoard tool and use it, and that you now have a little more insight into what you can do with TensorBoard and how you evaluate your models. I hope you learned something today. If you liked this video, please give it a like and share it with your friends and colleagues, and if you haven't subscribed yet and want to follow along in this little series, then please do that. I really hope to see you in the next video.
