Tutorial on the Fast Gradient Sign Method for Adversarial Samples

Federico Barbero

Hello everyone, and welcome. This is Federico. Today we'll be going over the fast gradient sign method again, in a bit more detail, because we'll actually be looking at some code to see how to do this in practice. If you remember, in my first video I went over the paper by Goodfellow et al., "Explaining and Harnessing Adversarial Examples" — definitely watch that video if you haven't already. Today we'll be looking at the same section, but in code. First, let me go over this diagram, because I'm not sure I actually covered it in detail last time. If we have this picture of a panda, and it's classified correctly as a panda, then we want to add some scalar multiple of a carefully chosen vector such that the image is misclassified — in this case as a gibbon. If we look at it from a geometric perspective: this is your point x, these points are pandas, and these are gibbons. What we want to do is first find the sign of the gradient of the loss function — that's this formula here, sign(∇_x J(θ, x, y)) — and then ride along that direction by some factor epsilon, so the update is x_adv = x + ε · sign(∇_x J(θ, x, y)). The idea is that with a large epsilon we move deep into the gibbon region of the space, while with a small epsilon we may not go far enough to cross the decision boundary. So let's look at the code — talk is cheap, show me the code. Pretty much what they're doing here is using a pre-trained model.
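The update rule described above can be sketched in a couple of lines of NumPy. The arrays `x` and `grad` here are made-up numbers standing in for an image and the gradient of the loss with respect to that image:

```python
import numpy as np

# FGSM update: x_adv = x + eps * sign(grad), where grad is the
# gradient of the loss with respect to the input x.
x = np.array([0.2, 0.5, 0.9])       # stand-in for the input image
grad = np.array([0.03, -1.2, 0.0])  # stand-in for dLoss/dx
eps = 0.1

x_adv = x + eps * np.sign(grad)     # -> approximately [0.3, 0.4, 0.9]
```

Note that only the sign of each gradient component matters, not its magnitude — the second component moves by the same 0.1 as the first even though its gradient is forty times larger.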
It's the MobileNetV2 model, pre-trained on ImageNet. ImageNet is, of course, the very famous image-classification dataset, with 1,000 classes. In this case they take a picture of a yellow Labrador, and if you put it through this pre-trained MobileNet, it will correctly classify it as a Labrador Retriever with 42 percent confidence. The exact confidence doesn't really matter — this method will always work. It's just that, I'd guess, the higher the confidence, the larger the perturbation you'll need to fool the network. Then here, what they do is pretty much what we were saying before: they take the gradient of the loss function with respect to this input image of the Labrador, and then they apply the sign. The sign is just a function — I think I drew it here — where if you have a vector whose elements are positive, positive, negative, zero, it maps them to 1, 1, -1, 0, and so on: an element-wise sign of each element. That returns the signed gradient. The signed gradient is the vector we were talking about. It's not really a unit vector, but you can think of it as a small vector in the direction you want to move in. If you just plot it, it looks like gibberish, but it encodes information about the gradient of the loss function with respect to the image — this is the direction you want to go in. Here I actually edited in some extra values of epsilon, just to mess around with it. Pretty much what we're doing is taking this vector addition and looking at what the perturbed image classifies as.
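As a stand-in for the notebook's gradient step (which differentiates MobileNetV2's loss with respect to the image using TensorFlow), here is the same signed gradient computed by hand for a toy logistic "model" p = sigmoid(w·x) with true label 1. All the numbers are invented for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy model: p(true label | x) = sigmoid(w . x).
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, -0.4, 1.0])

p = sigmoid(w @ x)           # model's confidence in the true label
grad = (p - 1.0) * w         # analytic d(cross-entropy)/dx for label 1
signed_grad = np.sign(grad)  # element-wise sign: each entry is -1, 0, or +1
# signed_grad -> [-1.,  1., -1.]
```

The real notebook gets `grad` from automatic differentiation rather than a closed-form derivative, but the `sign` step afterwards is exactly this.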
So if we go here, we have the Labrador image without any perturbation, again classified with 42 percent confidence. Then with a small epsilon of 0.01, we get a Saluki, which is the wrong classification — I don't even know what a Saluki is, but there you go. Then you get a Weimaraner, and so on. As you can see, what's happening here is that as we make bigger and bigger perturbations, the image runs through multiple decision boundaries: this section here would be the Labrador Retriever, this would be the Saluki, this would be the Weimaraner, then maybe another one. So let's try adding larger perturbations, just to mess with it, and look again. With larger perturbations, what you expect is that the image gets more and more distorted. Here the distortion is pretty much invisible, here you can kind of see it, and here you can definitely see it. After the Weimaraner it goes into prayer rug, then stole, and so on — it's going deeper and deeper in this direction, and obviously the last image makes very little sense; it's barely recognizable. You can kind of see the shape of the dog, but you wouldn't even be able to tell — okay, here you can kind of tell, but still. Now let's say our perturbation is too small, so let's go for a really small one. Obviously for very small perturbations you start running into things like numerical errors, but I think this is fine. With a really small epsilon — the display only shows three decimal places, so you can't even see it — the perturbation is extremely small, and the image is actually still classified as a Labrador Retriever.
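The epsilon sweep in the notebook can be mimicked on the same toy logistic model (all numbers invented for illustration): as epsilon grows, the model's confidence in the true label falls, and at a large enough epsilon it crosses the 0.5 decision boundary and the toy example is misclassified:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy model: p(true label | x) = sigmoid(w . x), true label = 1.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, -0.4, 1.0])

p = sigmoid(w @ x)
direction = np.sign((p - 1.0) * w)  # FGSM direction: sign of dLoss/dx

confidences = []
for eps in [0.0, 0.01, 0.1, 0.5]:
    x_adv = x + eps * direction
    confidences.append(sigmoid(w @ x_adv))
    print(f"eps={eps}: confidence {confidences[-1]:.3f}")
```

On these made-up numbers, the confidence drops monotonically with epsilon, and at eps = 0.5 it falls below 0.5 — the toy equivalent of the Labrador turning into a Saluki.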
So as you can see, our geometric intuition was quite on point: as we expected, a perturbation that is too small won't change the classification. It does lower the confidence a bit — we're ascending the loss function, so it will always lower the confidence somewhat — but in this case it wasn't enough to actually cross the boundary, so we ended up somewhere like here. That's all I wanted to say in this video. It's a very nice notebook — a TensorFlow notebook. I hope you enjoyed the video, and if you did, stick around. See you next time, bye.
