Transcript:
Hey guys, this is Sreeni. In this video, let's cover a topic a few of you actually requested: how to compare two images in a quantifiable way. In the past I made videos on keypoints and descriptors, but the question is, how do we quantify similarity? What value can we assign so we can consistently compare a couple of images? For that I'm going to use ORB (in place of SIFT) and the SSIM metric. If you already know what these are, go ahead and watch my other videos, but if you'd like to learn how to implement this quickly with a few lines of code, watch this one. I promise not to make it very long, so it's worth your time. Let's jump into the code.

So here we have these few lines. As usual, I'm going to share this file with you; look in the description, you'll find the GitHub link, and you can download this and all the other code I've generated as part of these video tutorials, so please do subscribe.

Okay, let's define the problem. I have a couple of images, the same ones you saw on the opening screen: a warped monkey and a rotated monkey. I want to compare these two and get a quantitative value. Later on I'll also open a few other images, scientific ones: a very nice scanning electron microscope back-scattered image, and essentially the same image with artificially added noise. What kind of values do we expect using exactly the same approach? Can we say what percentage similar these images are? I also have another version that's resized and blurred, and we'll see what values to expect there. And finally I have one image that's completely different from everything else. So that's the plan.

There are many metrics that try to compare features or images, but the two I rely on are these. One is structural similarity (SSIM), which is part of scikit-image's metrics module; go ahead and look at the documentation, because this is not a tutorial on the math behind structural similarity. The other is the ORB keypoint detector and descriptor in OpenCV. It used to be SIFT for me, and I used SIFT quite a bit, but it was pulled from OpenCV, so you don't have it readily available anymore; I guess the original authors didn't want to give it away for free, and I'm not blaming them, but it's not available anymore.

So what we're going to do is use ORB, which is a keypoint locator and descriptor. It tells us which points are similar in the two images, gives us their exact locations, and also describes them. Please watch my video on this specific topic, where I talk about homography and related things, but let's get back here. ORB is going to give us the keypoints and also describe them, so I create a function right here so I can apply it to multiple image pairs later on.
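Before the function itself, here is a minimal setup sketch of the two imports, assuming OpenCV and scikit-image are installed; in recent scikit-image versions SSIM lives under skimage.metrics:

    import cv2                                          # OpenCV: ORB detector and brute-force matching
    from skimage.metrics import structural_similarity  # SSIM metric from scikit-image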
Inside the function, we create an ORB object from OpenCV and run detectAndCompute on image number one and image number two, which I'm going to supply pretty soon. That gives us keypoints and descriptors corresponding to each image; it's that simple. Once I have the keypoints and descriptors, I perform brute-force matching. Again, if none of this makes sense, you really have to go back and read a bit of documentation about brute-force matching and keypoint descriptors, but using it is very simple. Remember, we defined an ORB object and applied it to our images; now we define a brute-force matcher object and apply it to descriptor A and descriptor B. I'm not just looking at the keypoints here. I don't care about the keypoints, because all I want to know is whether the images are similar, so I only look at the descriptors corresponding to those keypoints and match them.

What the brute-force matcher gives you is a set of matches, each with a distance that is essentially a dissimilarity number: a distance of 0 is a perfect match, and the higher the distance, the less similar the matched descriptors, so very good matches have low values. What I do next is extract the similar regions by setting a threshold of what I call acceptable similarity. In this case I just say: show me all the matches where the distance is less than 50. I chose 50 because it sits roughly in the middle of the useful range; it's up to you what you want to choose. To quantify that into a similarity value, all I do is count how many matches meet the distance-less-than-50 criterion and divide by the total number of matches. If only 20 of them are below the threshold out of 100 total matches, then I have a 20% match between the two images. That's pretty much it for the ORB-based similarity.

Then structural similarity is going to give you a number as well; again, look at the documentation for the details. The only thing with this one is that both images need to have the same dimensions, so you have to resize if you're comparing images of different sizes. Otherwise it's very simple: you pass in your reference image and your second image, and it returns a similarity index (and, if you ask for it, a difference image).

Okay, now that we've established what we're trying to do, let's define two images: the distorted monkey and the rotated monkey. All I'm doing is reading them in. Sorry, I should run this entire block of code first, all the way up to this point, so that our functions are defined. Then we just need to apply the functions to our images, so let's start with the two monkey images; they're both the same size.
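A minimal sketch of the two helpers we just walked through, continuing from the imports above; the function names orb_sim and structural_sim are just illustrative, and the 50 cutoff is the halfway choice discussed in the video:

    def orb_sim(img1, img2):
        # ORB keypoint detector and descriptor
        orb = cv2.ORB_create()
        # keypoints and descriptors for each image
        kp_a, desc_a = orb.detectAndCompute(img1, None)
        kp_b, desc_b = orb.detectAndCompute(img2, None)
        # brute-force matcher; the Hamming norm suits ORB's binary descriptors
        bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = bf.match(desc_a, desc_b)
        # keep only matches whose distance is below the "acceptable similarity" cutoff
        similar_regions = [m for m in matches if m.distance < 50]
        if len(matches) == 0:
            return 0.0
        return len(similar_regions) / len(matches)

    def structural_sim(img1, img2):
        # both images must have identical dimensions; 1.0 means identical images
        sim, diff = structural_similarity(img1, img2, full=True)
        return sim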
To apply the ORB similarity function, I just need to provide these two images as inputs, so let's print out what it tells us. It says the ORB similarity is 0.36; you can think of this as 36 percent. By the way, if I change that 50 cutoff to, say, 80 and run everything up to this point again, the value obviously goes up: now my similarity is 0.99 between these two images. So it comes down to you and your application to decide what the right cutoff value is; all I'm showing you is how to set it, and it's up to you to define your specific threshold. Let's go back to 50, which is a comfortable point for me, and run the ORB similarity one more time: 36 percent.

Now let's look at structural similarity. We don't need to resize here because these images are the same size, and we already imported the library. It gives me 0.187. Of course it's not going to be the same as the ORB number; this is a completely different process. But now you know at least a couple of ways to check similarity.

Let's go ahead and load all of the other images, the ones I just showed you, including the noisy one. First, image one versus image one, the same image: obviously they should match, and the ORB similarity is 1, meaning one hundred percent. That gives you an idea of what the reference is. Sorry, I had to change the image number, but the structural similarity of image one versus image one is also 1, because we're comparing exactly the same image. Next, the same image versus the noisy image: running these lines, ORB gives me 0.8 (I keep calling a value of 0.8 "eighty percent"), and structural similarity gives me 0.24. That kind of gives you an idea of the range in general: structural similarity dropped all the way from 1 to 0.24, and the only difference between these two images is noise with a sigma of about five.

Now let's compare against the smooth, blurred one, which I believe is image number three: image one versus image three. Let's do a couple more of these and then I think you'll have enough information. Running these lines again, it says the input images must have the same dimensions, because my blurred image is much smaller than the other images, as you can see, and the edges are crappy. So let's go ahead and resize that image and run this one more time. I thought I was resizing it; sorry, I need to resize image three, and I was resizing image four. Third time's a charm; I have to focus here. Oh my God, too many things to change. That's what happens when you don't write your code efficiently and end up changing the same thing in multiple places. Okay, there you go: it says the ORB similarity is 0.33, but the structural similarity is 0.65.
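Putting those steps together, the comparison looks something like the sketch below, continuing from the helpers above; the file names are placeholders, not the actual names used in the video:

    # read the images as grayscale (flag 0); file names here are illustrative
    img1 = cv2.imread('monkey.jpg', 0)
    img2 = cv2.imread('monkey_rotated.jpg', 0)
    img3 = cv2.imread('monkey_blurred_small.jpg', 0)   # smaller, blurred version

    print("ORB similarity:", orb_sim(img1, img2))      # ~0.36 with the 50 cutoff
    print("SSIM:", structural_sim(img1, img2))         # ~0.19

    # SSIM needs equal dimensions, so resize the smaller image first;
    # cv2.resize takes the target size as (width, height)
    img3 = cv2.resize(img3, (img1.shape[1], img1.shape[0]), interpolation=cv2.INTER_AREA)
    print("ORB similarity:", orb_sim(img1, img3))          # ~0.33 in the video
    print("SSIM after resize:", structural_sim(img1, img3))  # ~0.65 in the video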
So once we resize, looking purely at structural similarity between the two, it apparently considers them fairly similar. If you're trying to follow this method, I definitely recommend looking at both of these metrics. Maybe you want to take an average, or maybe you want to write another function that says: if the ORB value is above some cutoff and the SSIM value is above some cutoff, then call these two images similar. I'll leave it to you how to use the two together, but you can see how they're kind of different: they look at different aspects of the images and give you different numbers.

I hope this answers the questions you guys left in the comments. If you have any such questions, please do leave them here; unless it's a humongous project, if it's relatively easy for me to explain, I'll record a video for you, and otherwise I'll just reply with a comment. Please do engage, please do practice, and subscribe to my channel. Thank you very much for watching these videos, and let's meet again in the next video on a different topic. Thank you.