Hey guys, in the last two lectures we discussed what SiamMask is and how it works, from our literature review of the SiamMask paper. Today we're going to learn how to set up the dependencies and run SiamMask on your own PC. For all the newcomers: SiamMask can be used to track and segment an object, with state-of-the-art performance in terms of robustness, accuracy, and real-time frame rate. Dr. Humera will be showing you what you'll need, how to set up your environment on your PC, and how to get the SiamMask demo up and running. Humera will be your instructor not only for this tutorial series but also for the comprehensive SiamMask course, which you can find at the link down below. That course deals with the implementation, dataset preparation, as well as training and testing for your own applications. Check it out. So here is Humera with the SiamMask implementation. Also, please like and comment if you would like to see more tutorials on SiamMask.

Hello everyone, and welcome back to the SiamMask tutorial, a series in which we are covering object tracking and video segmentation using SiamMask. In the previous session we saw what SiamMask is, how it works, what the underlying maths behind it is, and how the performance gain over the previous benchmarks was achieved. In this session we are going to cover how to set it up on our own machine and play a little bit with it, so let's get started. What you need is a GitHub account, because this is where the code resides; in case you don't have one, feel free to create a free account right now. As far as system requirements are concerned, the authors used an Ubuntu machine with Python 3.6 and ran their code on a GPU. For the purposes of this tutorial we use macOS, also with Python 3.6, and an Intel Core i7, so in case you don't have access to high-end GPUs, it's not a problem: your code will execute a little more slowly, but it will still run.
It's a fairly simple process: we need to get the code, so we'll clone the repository; we need a Python environment set up, for which we'll use Anaconda; and then we'll be good to go, so we'll run the demo. First, we get the code. You can get it from Augmented Startups' SiamMask repository. Once we have the repository URL, all we have to do is go to the terminal and clone the repo. But before we do that, let's have a quick glance at the repository, and from the look of it, it is a very well documented repo: there are clear-cut instructions on how to set up the environment, how to run the demo, how to train the system, and so on. So let's try it out. We come to our terminal and clone the repo; for that we use the git clone command and paste the link we just copied. Now we have the code. We move into the SiamMask directory and export the right path, specifying the SiamMask variable to be equal to the present working directory. All right, that's done. The next step is to get Anaconda and use it for the installation. If you don't have it already, you can visit the Anaconda website; from Products, under Individual Edition, scroll down a little to the download section, where you can download the installer of your choice. For this tutorial I am using the graphical installer; of course, based on your preferences and your system, you can select the installer that suits you best. Once the installer is downloaded, we go through the readme, accept the license, agree, and select the destination; the installation we typically do is the standard installation. During the installation it will ask whether you want to download an editor, which you can choose to install or not, and once you are done it will provide a summary of what was done. Now we have Anaconda set up on our machine.
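The cloning and path-export steps just described can be sketched as the following commands. This is a minimal sketch: the URL shown is the original authors' SiamMask repository, so substitute the URL of the fork you actually copied (for example, the Augmented Startups one mentioned above).

```shell
# Clone the SiamMask repository (substitute your fork's URL if different)
git clone https://github.com/foolwood/SiamMask.git

# Move into the project directory
cd SiamMask

# Export the project root so later steps can reference it
export SiamMask=$PWD
```

After this, `$SiamMask` points at the repository root, which the later demo commands rely on.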
Now we can use it to create a virtual environment for ourselves and proceed with the installation, so let's check out the documentation. We can close this. As per the documentation, we use conda to create the environment, so let's do just that: conda create, then you specify the name of the environment, and you specify which Python version you want to use for this setup. If you already have an environment with this name, feel free to overwrite it. As a new developer, it's important to understand that keeping your setups in virtual environments, or in containers, is always a good idea, because this way your multiple installations do not conflict with each other. Imagine you have multiple projects running in parallel, each with its own requirements; having them all under one global environment would be a mess to resolve, so it's always better to keep separate environments for different projects. Now our environment is ready. We simply activate it, and you can see that we have moved from the base environment to the SiamMask environment, and here we can go ahead with the installation. As we can see, there are a number of requirements for this particular project, including but not limited to Cython, PyTorch, matplotlib, OpenCV, and torchvision, and there are many more packages that need to be installed. Imagine if you had to install each one of them yourself, and take care of the different versions as well; it would become really cumbersome to handle. So it's always a good idea to have a package manager that takes this load off you and gets your environment ready. Now we are done with the environment installation, so the final step in this process is to execute bash make.sh, and once that's done, we export the Python path to the present working directory.
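The environment setup walked through above can be sketched as the commands below, following the pattern in the SiamMask repository's README. The environment name `siammask` is just a convention I'm assuming here; pick whatever name you prefer.

```shell
# Create and activate a Python 3.6 environment (the version the authors used)
conda create -n siammask python=3.6
conda activate siammask

# Install the project's dependencies from its pinned requirements file
pip install -r requirements.txt

# Build the project's Cython extensions
bash make.sh

# Make the project importable from the current directory
export PYTHONPATH=$PWD:$PYTHONPATH
```

Note that the `export PYTHONPATH` line only affects the current shell session, so you'll need to re-run it (or add it to your shell profile) in any new terminal.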
We are done setting up our system, so let's see if everything has worked out correctly by running a little demo. The authors have provided some demo programs, and they have some data available in the experiments section, so we first move to the right directory and execute some commands there. As mentioned, their experiments live under the experiments folder, so we move to that folder, and under it to the SiamMask-sharp folder. Next we need to download the models they have pre-trained. For downloading they are using wget; you can use your own favorite downloader. If you still want to use wget and don't have it, you can install it with brew, and in case you don't even have brew, you can download and install that too. So our first model is there, and we download one more. The Python path we have exported already, so now we are good to go: we execute the demo.py script they have provided, and as we can see, it uses the model we just downloaded along with a configuration file they have prepared. Okay, so this opens up a video, and we can draw an initial bounding box around the object to track and segment. We can see the object tracking via the green bounding box and the segmentation via the red mask. It seems to be doing quite well: there are large motions and large deformations in the shape of the person, and it is still tracking them correctly. Let's consider another example. This time we pick something in the background, so it won't itself be a moving object, but since the camera is moving, we should still see some motion. We want to see the effect of occlusion on the tracking. As long as the occlusion is happening, there is some mismatch, but as soon as the occlusion is over, SiamMask is able to resume tracking the right object with the right segmentation. Let's try out some more examples and see if we can actually break the system somewhere.
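The demo steps above can be sketched as the following commands, based on the instructions in the SiamMask README. Treat the exact model URLs as assumptions and check the repository for the current ones before downloading.

```shell
# Move to the demo experiment folder
cd "$SiamMask/experiments/siammask_sharp"

# Download the pre-trained models (use curl -O or a browser if you prefer)
wget http://www.robots.ox.ac.uk/~qwang/SiamMask_VOT.pth
wget http://www.robots.ox.ac.uk/~qwang/SiamMask_DAVIS.pth

# Run the demo with a pre-trained model and its matching config file
python ../../tools/demo.py --resume SiamMask_DAVIS.pth --config config_davis.json
```

When the window opens, drag a bounding box around the object you want to track, then press Enter to start tracking and segmentation.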
Let's see what happens if we try to track the racket. It starts quite well, but as soon as the racket moves away from the body, there are some mismatches. Actually, I give the system some credit here, because when we initially selected the bounding box, the racket, the hand, and the torso of the person were overlapping, and of course it was then difficult to tell that we meant the racket and not the body. So I still excuse this mishap. Let's consider one more example, and the final example I want to try with you is tracking one of the shoes of the person, the right foot in this case. It seems to have started correctly, and soon we'll see an occlusion. It seems to have handled the occlusion quite well, which is actually amazing, and as you can notice, even a 180-degree rotation was handled. But now we see that after another overlap, the tracker moved from the right foot to the left one, which is incorrect; and after yet another overlap, it switched back to the right foot. So apparently, when two objects are very similar and they overlap with each other, the system can mistake which object it was tracking. But this is actually good, because it shows us where there is room for improvement. All right, with this we come to the end of today's session. We learned how to set up SiamMask on our own machine, and we played around a little bit with the demo the authors provided. Next time we'll see how we can train the model ourselves and try it out on our own applications. Meanwhile, you can set up your system, and if you have any questions or issues, feel free to get in touch with us. Thank you, and all the best.

Awesome! Hope you guys enjoyed the tutorial from Dr. Humera.
So if you're interested in learning more about SiamMask, we have a comprehensive SiamMask pro course, which you can find at the link down below. The course deals with the implementation, the dataset preparation, as well as the training and deployment of your own applications. Thank you for watching, and we'll see you in the next lecture. [MUSIC]