A very common operation that we’ll come across in deep learning is convolution. We’re going to explore what this means using the Gaussian kernel that we’ve just created. For now, just think of it as a way of filtering information: we’re going to effectively filter an image using this Gaussian function, as if the Gaussian function is the lens through which we’ll see our image data. At every location we tell it to filter the image, it will average the image values around it based on what the kernel’s values are. The Gaussian kernel is basically saying: take a lot of the center, and then less and less as you go farther away from the center. The effect of convolving the image with this type of kernel is that the entire image will be blurred. If you would like an interactive exploration of convolution, this website is great.

Let’s first load an image. We’re going to need a grayscale image to begin with, and scikit-image has some images that we can play with. If you don’t have the scikit-image module, you can load your own image, or get scikit-image by pip installing scikit-image. So I’ll just grab the cameraman image, and I’m going to cast that to float32. Let’s have a look at this image, great, and what’s the shape? Okay, so notice our image is two-dimensional. When we perform convolution in TensorFlow, we’ll need our images to be four-dimensional. Remember that when we load many images and combine them into a single numpy array, the resulting array has a shape of number of images, then height, then width, then channels. In order to perform 2-D convolution with TensorFlow, we’ll need the same dimensions for our image. With just one grayscale image, this means that the shape will be 1 by H by W by 1. We could use the numpy reshape function to reshape our numpy array, but since we’ll be using TensorFlow, we can actually use the TensorFlow reshape function, like so. Instead of getting a numpy array back, we get a TensorFlow tensor.
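The steps above (build a normalized 2-D Gaussian kernel, then reshape a single grayscale image to the 4-D shape convolution expects) can be sketched in plain NumPy. This is an illustrative stand-in for the TensorFlow ops in the lecture: `gauss_kernel`, the kernel size of 21, `sigma`, and the random stand-in image are assumed values, not the course’s exact code.

```python
import numpy as np

def gauss_kernel(ksize=21, sigma=1.0):
    """Build a 2-D Gaussian kernel as the outer product of a 1-D Gaussian."""
    x = np.linspace(-3.0, 3.0, ksize)          # +/- 3 standard deviations
    g = np.exp(-x**2 / (2.0 * sigma**2))
    g /= g.sum()                               # normalize so the kernel sums to 1
    return np.outer(g, g)                      # shape: ksize x ksize

kernel = gauss_kernel()

# A single grayscale H x W image becomes 1 x H x W x 1 (N, H, W, C) for 2-D
# convolution. A random array stands in here for the cameraman image.
img = np.random.rand(512, 512).astype(np.float32)
img4d = img.reshape(1, *img.shape, 1)          # shape: (1, 512, 512, 1)
```

In TensorFlow the same reshape would be done with its own reshape function, which returns a tensor rather than a numpy array.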
This means that we can access the shape like we did with the numpy array, but instead we use get_shape, and get_shape as_list. The height-by-width image is now part of a four-dimensional array where the other dimensions, N and C, are 1: there’s only one image and only one channel. We’ll also have to reshape our Gaussian kernel to be four-dimensional as well, and the dimensions for kernels are slightly different. Remember that images are shaped N by height by width by channels; that’s for images. A convolution kernel will need the height of the kernel by the width of the kernel by the number of image channels by the number of convolution filters. For now, we’ll stick with just one kernel as output, but we’ll see how this comes into play in later sessions. Our kernel already has a height and width of ksize, so we’ll stick with that for now. The number of input channels should match the number of channels of the image we want to convolve, so for now we’ll just keep the same number of output channels as input channels, but later we’ll see how this comes into play. We can now use our previous Gaussian kernel to convolve an image. Let’s create an operation which will do this.

There are two new parameters here: strides and padding. Strides say how to move our kernel across the image, and basically we’ll only ever use one of two sets of parameters. 1 by 1 by 1 by 1 basically says how to move across the number of images, the height of the image, the width of the image, and the number of channels, while 1 by 2 by 2 by 1 says that we’re going to skip every other pixel in our image array and convolve every other pixel. This has the effect of downsampling the image. Padding says what to do at the borders. If we say same, that means we want the same dimensions going in as going out; in order to do this, zeros are padded around the image. If we say valid, that means no padding is used, and the image dimensions will actually change.
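A minimal single-channel sketch of what the strides and padding parameters do, written in plain NumPy rather than TensorFlow for illustration (note that, like TensorFlow’s conv2d, this computes a cross-correlation, which is identical to convolution for a symmetric kernel like the Gaussian; the function name and signature are assumptions, not the course’s code):

```python
import numpy as np

def conv2d(img, kernel, stride=1, padding='SAME'):
    """Minimal 2-D convolution over one channel with stride and padding."""
    kh, kw = kernel.shape
    if padding == 'SAME':
        # zero-pad the borders so a stride of 1 keeps the input dimensions
        ph, pw = kh // 2, kw // 2
        img = np.pad(img, ((ph, ph), (pw, pw)))
    H, W = img.shape
    oh = (H - kh) // stride + 1
    ow = (W - kw) // stride + 1
    out = np.empty((oh, ow), dtype=img.dtype)
    for i in range(oh):
        for j in range(ow):
            patch = img[i*stride:i*stride+kh, j*stride:j*stride+kw]
            out[i, j] = (patch * kernel).sum()
    return out

img = np.ones((8, 8), dtype=np.float32)
k = np.full((3, 3), 1/9, dtype=np.float32)     # simple averaging kernel
print(conv2d(img, k, padding='SAME').shape)            # (8, 8): same size in as out
print(conv2d(img, k, padding='VALID').shape)           # (6, 6): no padding, dims shrink
print(conv2d(img, k, stride=2, padding='SAME').shape)  # (4, 4): every other pixel, downsampled
```

The three shapes printed at the end correspond exactly to the three cases described above: same padding, valid padding, and a stride of 2.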
Let’s have a look at the result of the convolution. Matplotlib can’t handle plotting 4-D images, so we’ll have to convert them back to the original shape, height by width, and there are a few ways that we could do this. We could plot by squeezing the singleton dimensions, or we could specify the exact dimensions that we want to visualize. We’ve now seen how to use TensorFlow to create a set of operations which create a two-dimensional Gaussian kernel, and how to use that kernel to filter, or convolve, another image.

Let’s create another interesting convolution kernel called the Gabor kernel. This is a lot like the Gaussian kernel, except we use a sine wave to modulate it. So we’re going to draw a 1-D Gaussian. We’ve seen what the Gaussian looks like, and we’re just going to modulate that by a second function, a sine wave. The sine wave is going to look like that, and the resulting wave is going to look like a modulated Gaussian, where part of it comes up and part of it comes down. We first use linspace to get a set of values over the same range as our Gaussian, which should be from minus three standard deviations to plus three standard deviations. We’ll then calculate the sine of these values, which should give us a nice wave, and for multiplication, we’ll need to convert this one-dimensional vector to a matrix, ksize by 1. We can then repeat this wave across the matrix by a multiplication with ones. We can directly multiply our old Gaussian kernel by this wave and get the Gabor kernel.

So we’ve already gone through the work of convolving an image, and the only thing that’s changed is the kernel that we want to convolve with. We could have made life a lot easier by specifying in our graph which elements we want to be specified later. TensorFlow calls these placeholders, meaning we’re not sure what these are yet, but we know they’ll fit into the graph like so. Generally, these are the input and output of the network.
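The Gabor construction just described can be sketched in plain NumPy. The kernel size, sigma, and the sine frequency of 4.0 here are illustrative choices, not the course’s exact values:

```python
import numpy as np

ksize, sigma = 32, 1.0
x = np.linspace(-3.0, 3.0, ksize)        # same range as the Gaussian: +/- 3 std devs
g = np.exp(-x**2 / (2.0 * sigma**2))
gauss2d = np.outer(g, g)                 # the 2-D Gaussian kernel from before

wave = np.sin(x * 4.0)                   # 1-D sine wave over the same range
# convert the 1-D vector to a ksize x 1 matrix, then repeat it across
# the columns by multiplying with a 1 x ksize matrix of ones
wave2d = np.reshape(wave, (ksize, 1)) @ np.ones((1, ksize))
gabor = gauss2d * wave2d                 # modulate the Gaussian by the sine wave
```

Plotting `gabor` shows the modulated Gaussian described above, with bands that come up and bands that come down.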
Let’s rewrite our convolution operation using a placeholder for the image and the kernel, and then we’ll see how the same operation could have been done. We’re going to set the image dimensions to None by None. This is something special for placeholders, which tells TensorFlow: let this dimension be any possible value. 1 or 500,000, it doesn’t matter. This is a placeholder which will become part of the TensorFlow graph, but which we’ll have to explicitly define later, whenever we run or evaluate the graph. Pretty much everything you do in TensorFlow can have a name. If we didn’t specify the name, TensorFlow would have given it a default one, like Placeholder_0, so we use a more useful name to help us understand what’s happening. We’ll reshape the 2-D image to a 3-D tensor just like before, except now we’ll make use of another TensorFlow function, expand_dims, which adds a singleton dimension at the axis we specify. We use it to reshape our height-by-width image to include a channel dimension of 1, so our new dimensions will end up being height by width by 1. And again, to get 1 by height by width by 1, we can expand_dims on the zeroth axis. Let’s create another set of placeholders for our Gabor’s parameters, and then finally we’ll redo the entire set of operations we’ve done to convolve our image, except now with our placeholders, and finally we’ll convolve the two. What we’ve done is create an entire graph from our placeholders which is capable of convolving an image with the Gabor kernel. In order to compute it, we’ll have to specify all of the placeholders required for its computation. If we try to evaluate it without specifying placeholders beforehand, we’ll get an error: InvalidArgumentError, you must feed a value for placeholder tensor image with dtype float and shape 512 by 512. It’s saying that we didn’t specify a placeholder for the image. In order to feed a value, we’ll use the feed_dict parameter, like so.
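The placeholder-and-feed_dict idea can be illustrated without TensorFlow at all: a placeholder is just a named slot in the graph that must be filled in at evaluation time, and evaluating without feeding it raises exactly the kind of error described above. This is a toy simulation of that mechanism (all names here are hypothetical; it is not TensorFlow’s API):

```python
class Placeholder:
    """Stand-in for a graph placeholder: a named slot with no value until run time."""
    def __init__(self, name):
        self.name = name

def run(fn, placeholders, feed_dict):
    """Stand-in for evaluating a graph: every placeholder must be fed a value."""
    for p in placeholders:
        if p not in feed_dict:
            raise ValueError(f"You must feed a value for placeholder '{p.name}'")
    return fn(*(feed_dict[p] for p in placeholders))

img = Placeholder('image')
mean = Placeholder('mean')
graph = lambda i, m: [v - m for v in i]   # toy "graph": subtract the mean

# run(graph, [img, mean], {img: [1.0, 2.0]})  would raise: must feed 'mean'
print(run(graph, [img, mean], {img: [1.0, 2.0, 3.0], mean: 2.0}))  # [-1.0, 0.0, 1.0]
```

The point is that the computation is defined once, up front, and different values can then be fed through the same graph on every run.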
Well, that’s not the only placeholder in our graph, so it’s still complaining that we didn’t specify a value for mean with dtype float. When we specify all of the placeholders, we get our result, and we can show the image. So now, instead of specifying every value beforehand, we can just specify the different placeholders, and the graph will compute everything as if that value had changed.