Let’s talk about some image processing algorithms that use the concepts introduced in the previous two lectures. The loops we’ve used so far have all had a linear structure, which just means they visit each element in a sequence, or count through a sequence of numbers, using a single loop control variable. Some image processing algorithms, on the other hand, use a nested loop structure so they can traverse the 2D grid of pixels. This image shows such a grid: its height is three rows, numbered 0-2, and its width is five columns, numbered 0-4, so we call it a 3 by 5 grid. Each data value in the grid, each pixel, is accessed with a pair of coordinates in the form (column, row). In this case the black pixel here is at (2,1): column 2, row 1. The upper left corner is (0,0); that’s the origin of our screen coordinate system.

We can use nested loops to traverse a grid like this. Whatever nested loop structure we use has to have two loops, an outer one and an inner one, and each loop has its own control variable: the outer loop iterates over one coordinate and the inner loop iterates over the other. Take a look at this code segment, which prints the pairs of coordinates visited in an imaginary 3 by 5 grid. The outer loop traverses the y coordinates and the inner loop traverses the x coordinates, which means j is traversing y and i is traversing x. The printed output is (0,0) (1,0) (2,0) and so on, all the way to (4,2). The loop marches across a row in the grid, printing the coordinates at every column in that row, and then moves on to the next row. We’ll call this a row major traversal. We can use this kind of traversal to visit every pixel in the image.
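As a minimal, self-contained sketch of the row-major traversal just described (the class and method names here are my own, not from the lecture’s code):

```java
public class GridTraversal {
    // Row-major traversal of a height-by-width grid: the outer loop walks
    // the y coordinates (rows), the inner loop walks the x coordinates
    // (columns), so we finish one row before moving to the next.
    public static String rowMajor(int width, int height) {
        StringBuilder sb = new StringBuilder();
        for (int y = 0; y < height; y++) {        // outer loop: rows
            for (int x = 0; x < width; x++) {     // inner loop: columns
                sb.append("(").append(x).append(",").append(y).append(") ");
            }
        }
        return sb.toString().trim();
    }

    public static void main(String[] args) {
        // For the lecture's 3-by-5 grid: prints (0,0) (1,0) ... (4,2)
        System.out.println(rowMajor(5, 3));
    }
}
```

Swapping the roles of the two loops (outer over x, inner over y) would give a column-major traversal instead; both visit every pixel exactly once.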
So you can see right here we’re going from y = 0 up to the height and x from 0 up to the width, and we do something at every pixel in the image. Now that we have those nested loops under our belt, let’s talk about a couple of different things we could do with them.

First, consider an algorithm that builds a new image from an old one. To do this, you could create a new blank image with the same height and width as the original, but it’s often easier to start not with a blank image but with a copy of the original. The most obvious way you might try this is the line of code right here: APImage newImage = oldImage. That, however, is a mistake: instead of ending up with two separate images, we now have two names, oldImage and newImage, for a single image object.

Fortunately, the APImage class includes a clone method for creating copies, so we can avoid the problem of merely adding a second name rather than creating an entirely separate image object. The clone method builds and returns a new image with the same attributes as the old one, but with an empty string as the file name. That means the two images are completely independent of each other: changes to the pixels in one image have no impact on the pixels at the same positions in the other. This code segment, which displays an image and its copy, demonstrates how we would use the clone method.
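The same aliasing pitfall can be demonstrated with any Java reference type; here plain int arrays stand in for image objects (this is my illustration, not the lecture’s code):

```java
public class AliasDemo {
    public static void main(String[] args) {
        int[] oldImage = {255, 0, 0};     // stand-in for an image object
        int[] newImage = oldImage;        // MISTAKE: two names, one object
        newImage[0] = 0;
        System.out.println(oldImage[0]);  // prints 0 -- the "original" changed too

        int[] copy = oldImage.clone();    // a genuinely separate object
        copy[1] = 255;
        System.out.println(oldImage[1]);  // prints 0 -- the original is untouched
    }
}
```

Assignment copies the reference, while clone copies the underlying data; that is exactly why APImage supplies its own clone method.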
So we make our image, APImage original = new APImage(“winbaby.png”), and we draw it, which just shows the image. Then we make a copy using the clone method: we clone the original image, use a variable called theClone to point at the result, and then draw the clone. Now we have both the original and its copy as two entirely separate objects, and we avoid the problem we saw on the last slide.

The question you might have is: how does the clone method actually do this? We can write a little Java code segment that shows how we could do this ourselves if we didn’t have the clone method. Essentially, the algorithm creates a new blank image with the same dimensions as the original, then uses a nested loop structure like the one we looked at before to copy the RGB values from each pixel in the original image to the corresponding pixel in the new image.

Here’s some code for this algorithm. We get the width and the height, and then we create a new image called theClone with the same width and height as the original. Then we iterate through each pixel using nested loops, for the height and for the width. First we get the pixel from the original: pixelInOriginal = original.getPixel(x, y), at the current (x, y) location. Then we get the corresponding pixel in theClone, which right now is blank, since it just has the default color values. Then we set the red value of the clone’s current pixel to the red value of the original pixel: pixelInClone.setRed(pixelInOriginal.getRed()). We do the same for green and blue as well. To be clear: this algorithm transfers the integer red, green, and blue values from the original image to the new image.
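The copying loop just described can be sketched with plain 2D arrays of packed RGB values standing in for APImage (the names deepCopy and theClone here are illustrative, not part of the library):

```java
public class PixelCopy {
    // Deep-copies a [height][width] grid of packed RGB pixel values,
    // mirroring what a clone method does internally: the same nested loop
    // structure as before, copying a value at every (x, y) position.
    public static int[][] deepCopy(int[][] original) {
        int height = original.length;
        int width = original[0].length;
        int[][] theClone = new int[height][width];
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                theClone[y][x] = original[y][x]; // copy the value, not a reference
            }
        }
        return theClone;
    }
}
```

Because only the integer color values cross over, mutating one grid afterward leaves the other unchanged.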
We could take a simpler route and transfer pixels instead, but then the two images would share the same pixel objects. That is, we could get the pixel from the original and assign it to the corresponding position in the new image. Then we’d have the same situation we saw two slides ago, where one object, in this case a single pixel object, has two names. If I changed the colors in one of those pixels, the same change would show up in the other image that uses that pixel, which is probably not what you want.

Here’s another neat one. When artists paint pictures, they often sketch an outline of the subject in pencil or charcoal, and then fill in and color over those outlines to finish the painting. Edge detection does the opposite: it takes away the full colors to uncover just the outlines of the objects represented in the image. This is a really useful operation; if you’ve ever used Photoshop, you’ve seen that it’s something we often want to do in digital image processing.

One really simple edge detection algorithm examines the neighbors below and to the left of each pixel in an image. We look at the luminance, which here is defined as the average of the red, green, and blue values: (red + green + blue) / 3. The algorithm checks whether a pixel’s luminance differs from the luminance of its neighbors below and to the left by a significant amount. If it does, if it differs a lot, then we’ve detected an edge in the image, and we set that pixel’s color to black. Otherwise, if it’s not an edge, we set the pixel’s color to white.
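The edge detection algorithm just described can be sketched with a plain 2D array of luminance values standing in for APImage pixels (all names here are mine, and BLACK/WHITE are stored as 0 and 255 for simplicity):

```java
public class EdgeDetect {
    static final int BLACK = 0, WHITE = 255;

    // Luminance as defined in the lecture: the average of the channels.
    static int luminance(int red, int green, int blue) {
        return (red + green + blue) / 3;
    }

    // Marks a pixel BLACK when its luminance differs from its left or
    // bottom neighbor by more than threshold, WHITE otherwise, writing
    // the result into a new grid so the original stays intact.
    public static int[][] detectEdges(int[][] lum, int threshold) {
        int height = lum.length, width = lum[0].length;
        int[][] sketch = new int[height][width];
        for (int[] row : sketch) java.util.Arrays.fill(row, WHITE);
        // y stops before height - 1 (the bottom neighbor is at y + 1) and
        // x starts at 1 (the left neighbor is at x - 1), so we never step
        // outside the bounds of the grid.
        for (int y = 0; y < height - 1; y++) {
            for (int x = 1; x < width; x++) {
                int here = lum[y][x];
                int left = lum[y][x - 1];
                int below = lum[y + 1][x];
                if (Math.abs(here - left) > threshold
                        || Math.abs(here - below) > threshold) {
                    sketch[y][x] = BLACK;
                }
            }
        }
        return sketch;
    }
}
```

In the lecture’s APImage version the blank output image is black by default, so only the non-edge pixels need to be set; this sketch initializes to white and marks edges black, which produces the same result.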
Now, one important point: we make these changes to a new image object, not to the original, because we don’t want to overwrite the information in the original image; we need it intact for later passes of the algorithm through the loop. This example is actually the opposite of what I just described in terms of white and black: it sets the edges to white and leaves everything else black. But the algorithm we’ll look at does the reverse, setting the edges to black and leaving everything else white.

The test program we’ll look at inputs an image file name and an integer from the user. That integer is a threshold, which lets us experiment with different luminance thresholds, in other words, with how sensitive we want the program to be. Then it displays the original image and the new image. You can see we read the luminance threshold right here: int threshold = intReader.nextInt(), where intReader is our Scanner object. Key things to see: here we get all of our inputs; here we set up our images, creating a blank image that will receive the edges; here we start to visit the pixels one by one, using our nested loop structure; and here we do the analysis, getting the original pixel, the left pixel, and the bottom pixel, then computing their luminances so we can do the comparison. Finally, we draw the resulting sketch image.

Let’s see how this runs real quick. I’ll click run and enter an integer threshold; let’s use a threshold of 15. That’s the difference in luminance we’re looking for; it has to be at least that large to signify an edge. For our file name, we’ll use winbaby.png. Perfect. Both images showed up offscreen, and we can look at them right here.
It looks like these are the edges that a threshold of 15 picked up. If I run it again, this time with a threshold of 20, and again with winbaby.png, we end up with even fewer edges, because the threshold a pixel had to meet in order to be identified as part of an edge was higher.

A couple of things to note here. First, the outer loop terminates at y position height - 1 because bottom neighbors are examined at position y + 1, and the inner loop starts at x position 1 because left neighbors are examined at position x - 1. We don’t want to try to access any pixels outside the bounds of the image, so we have to constrain where our loops start and stop. Second, we don’t have to set the color of the new pixel to black when the threshold is exceeded, because the new image is already black: it’s black by default, since we made a blank image and the RGB values default to zero. So the new pixel is only changed to white when the threshold is not exceeded.

Okay, we’ll now talk briefly about two algorithms that I’ll leave to you to implement if you’d like. The first is for reducing an image’s size. We can think of size as the width in pixels by the height in pixels, and reducing that size can help with a lot of things. Just imagine loading an image on a website: with fewer pixels to load, the image loads significantly faster, and if you can reduce them in a way that doesn’t lead to a highly noticeable loss in quality, you’re in a good situation. So here’s a very crude algorithm for reducing an image’s size by a factor of n. Given a source image whose width and height are w and h, you create a new blank image whose width and height are n times smaller.
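A minimal sketch of this crude factor-of-n reduction, again using a 2D array of packed RGB values in place of APImage (the method and variable names are illustrative):

```java
public class ShrinkImage {
    // Crude reduction by a factor of n: keep one sample pixel from each
    // n-by-n block of the source and discard the rest of the block.
    public static int[][] shrink(int[][] pixels, int n) {
        int height = pixels.length, width = pixels[0].length;
        int[][] smaller = new int[height / n][width / n];
        for (int y = 0; y < height / n; y++) {
            for (int x = 0; x < width / n; x++) {
                // Copy every n-th row and every n-th column.
                smaller[y][x] = pixels[y * n][x * n];
            }
        }
        return smaller;
    }
}
```

A 10 by 10 input with n = 5 yields a 2 by 2 result, as in the example that follows; a less crude reducer would average each n-by-n block rather than sampling one pixel from it.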
So, as an example, to take a 10 by 10 pixel image and reduce it by a factor of 5, you create a new 2 by 2 image, five times smaller than the original in both width and height. Then you copy the color values of just some of the original pixels into the new image. For instance, to make an image smaller by a factor of 2, you’d make a new image half as big, then copy the color values from every other row and every other column of the original image into it. Now, that’s a crude algorithm, not a great one, but it gets the point across: at some point you have to discard or condense information in order to reduce the image’s size. Fortunately, as you make an image smaller and smaller, the human eye becomes less and less able to detect the changes you’ve made, though at some point you will notice a difference.

Things are somewhat the opposite when enlarging an image. To increase an image’s size, you have to add pixels, color information that wasn’t there to begin with. What you really want to do is approximate the color values those pixels would receive if you took another sample of the subject at a higher resolution, as if you had another starting image of better quality, with more pixels per unit of area. This is a fairly complex process, because you have to blend the new pixels into the old pixels that are already there. I’ll leave it to you all to look up and think about some image-enlarging algorithms.

Now, all the sample programs we’ve talked about in the past couple of lectures have allowed us to view an image in a window. Although it’s often handy to be able to draw images in a window and look at them, you can also develop programs that simply load an image from a file, transform it somehow, and then save it back to a file.
In fact, sometimes we’ll want to do exactly that, because it’s quicker or easier, or because we’re processing a whole batch of images. Normally, the programs we’ve been using terminate in a special way: they terminate when we close the image windows that pop up. In this case, because we’re running a program that doesn’t create a window for us to close, we have to use the System.exit(0) line of code right here. System.exit(0) ends the program for us without our having to manually close a window, as we’ve had to do in the other programs.

You can see here that we have a Scanner object, we prompt for a file name, we make a new image from that file, we convert it to greyscale, and then we save the image. We’re saving it to the same location, so it will actually modify the file itself. Finally, we print a message, “hey, this image was converted and saved,” and we exit. Notice that we’re not rendering the image; we never draw it for the user.

Let’s run this real quick and confirm that it did in fact convert our image to greyscale. I’m starting with an image called winbaby2.png; I changed the name of the file because I don’t want to overwrite winbaby.png. So winbaby2.png is my source file. Let’s go ahead and run this. It prompts for the name: winbaby2.png. It tells us the program has run; now let’s open up the file to confirm that it worked. winbaby2 is now greyscale. Perfect.

Okay, before you close up shop, you want to be sure you can handle these ideas. First, traversing a 2D grid using nested loops. Second, explaining why you need to use clone for objects rather than just writing APImage newImage = oldImage; what’s the difference there? If you can, write a code segment that draws a blue border around the edges of an image.
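The load-transform-save pattern can be sketched with the standard library’s BufferedImage in place of APImage; to keep the sketch self-contained it takes the file name as a command-line argument rather than a Scanner prompt, and the file names are only examples:

```java
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class GreyscaleSaver {
    // Replace each pixel with the average of its red, green, and blue
    // channels, preserving any alpha bits.
    public static void toGrey(BufferedImage img) {
        for (int y = 0; y < img.getHeight(); y++) {
            for (int x = 0; x < img.getWidth(); x++) {
                int argb = img.getRGB(x, y);
                int r = (argb >> 16) & 0xFF;
                int g = (argb >> 8) & 0xFF;
                int b = argb & 0xFF;
                int avg = (r + g + b) / 3;
                img.setRGB(x, y, (argb & 0xFF000000) | (avg << 16) | (avg << 8) | avg);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        if (args.length == 0) {            // e.g. java GreyscaleSaver winbaby2.png
            System.out.println("usage: java GreyscaleSaver <image-file>");
            return;
        }
        File file = new File(args[0]);
        BufferedImage img = ImageIO.read(file);
        toGrey(img);
        ImageIO.write(img, "png", file);   // overwrites the source file
        System.out.println(args[0] + " was converted and saved.");
        System.exit(0);                    // no window to close, so exit explicitly
    }
}
```

Nothing is ever drawn on screen, which is why the explicit System.exit(0) replaces the close-the-window termination used by the earlier programs.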
Now, I don’t want you to overwrite the edges of the image; I want you to create a new image with the entire contents of the original enclosed in the border, like what you see right here. You can make that border however many pixels thick you would like, but don’t overwrite the image at all: take the whole image and encase it in the blue border. Also think about how you would write code that creates a greyscale copy of an image and a black-and-white copy of an image, both leaving the original image unchanged; we don’t want to edit the original at all. And finally, be able to explain, to talk me through, a simple edge detection algorithm like the one we looked at today. Fantastic.