BRIAN YU: The next filter that you'll implement is called edges. In edges, what you're going to do is take an image and apply the Sobel edge detection algorithm to this image in order to highlight all of the boundaries between objects. Notice that we've highlighted the boundaries of the bridge in this case and some of the plants that are surrounding the water here. And this is the algorithm that you're going to try to create. But how do we take an image and figure out where the boundaries between objects actually are? Well, let's first look at a simplified image to try to get a sense for how we can detect boundaries between objects. Here we have two objects, a blue rectangle on the left and a red rectangle on the right. And as you might guess, this middle purple line you can think of as a boundary between these two objects. How do we know it's a boundary? Well, it's a boundary because the colors on the left side of the line are different from the colors on the right side of the line. And the same thing could be generally true when we start to think about edge detection in more complex images. If we have a line where, on the left side of the line, colors are one value, and on the right side of the line, the colors are a very different value, that indicates a major change in color across pixels, and therefore, might indicate a boundary between objects in that image. So you recall that every pixel is composed of color values of red, green, and blue, but let's first just look at red values, and look at the red value for each of these pixels, and take a look at how those values change as we move from left to right inside of this image. So if we just look at the red values, you'll notice that all the pixels on the left, the blue pixels, have a red channel value of zero. All the red pixels on the right-hand side have a channel value of 255, and in the middle, they're somewhere in between with a red channel value of 128, for example. 
How are we going to determine whether, at any particular point, we're noticing a change in color from one red value to another red value? Well, to do that, we're going to use what's called a convolutional matrix. Here, for example, is the convolutional matrix called gx, which is going to determine whether or not there is some sort of change in color or boundary between objects as we move in the x direction in the image, moving from left to right, for example. And the idea of this is going to be very similar to the algorithm that we applied in blur where we took a pixel and looked at the nine pixels that formed a square around that pixel to compute a new value for that same pixel. We're going to do the same thing here, except rather than just averaging all of the values around the middle pixel, we're instead going to multiply each of the surrounding pixels by a different value. Let's take a look at an example. So here, we're going to apply this gx convolutional matrix to try to compute what the value of gx should be for this red pixel right here. Now, notice that this red pixel does not have a whole lot of change happening around it. To the left and to the right of it, it still has a lot of red surrounding it, so we should expect that this gx value that we compute will be very low, indicating that there is not much change in the red color as we move from left to right along this pixel. So what does this actually look like? Well, we're first, much as with blur, going to form a 3-by-3 grid of pixels that surround the center pixel, and then we're going to do some calculations. We'll take our gx convolutional matrix and first look at the value in the upper left. It's negative 1, and so we're going to multiply negative 1 by the red value in the upper left of this 3-by-3 grid of pixels, in this case 255. Then we'll move on and say, all right, let's take the top pixel in this 3-by-3 grid. The matrix says that we should multiply its value by 0, so we have 0 times 255. 
And then we have 1 times 255. And we're going to repeat this process of adding up negative 2 times the red value of the pixels on the left, which in this case is also 255, plus 0 times 255, the pixel in the middle. Then the right is going to be 2 times 255. Then the bottom left is minus 1 times 255, plus 0 times 255, plus 1 times 255. And if we do the math, performing all these multiplications and adding all these numbers together, what you'll notice is that everything cancels out. All of the plus 1 times 255s cancel out with a minus 1 times 255, and the plus 2 times 255 cancels out with a minus 2 times 255. So the result of all this after we do all the math is that we get the number 0, implying that there's no change in the red values as we move from left to right. And that's to be expected here, because as we look at this grid, we see that all of the pixels are red. What might happen, though, if we instead try to apply this matrix to a different pixel-- say, for example, this pixel here in the middle, where you'll notice if we take a look at the 3-by-3 grid of pixels that surround this middle pixel, we see that the color values do change? There's an increase in red from 0 on the left-hand side to 255 on the right-hand side. That's a big change in the red channel value, so we would expect that after we do this calculation of the value of gx that we would get a large value. So how do we do the math here? Well, again, we'll do negative 1 times the value of the pixel in the upper left, 0, plus 0 times the value of the pixel on top, 128, plus 1 times the value of the pixel in the upper right, 255, and then repeating this process for each of the other pixels-- minus 2 times 0, plus 0 times 128, the pixel in the middle, plus 2 times 255 for the pixels on the right, and then repeating the same process for the bottom row. 
After we do all of that math and calculate it, the answer that we get is 1,020, a big number that's going to represent the fact that as we move from left to right along this pixel, we see a large change in the value of red. And that's pretty good evidence that there is some sort of boundary that's happening here. So how do we take this and apply it not just to one pixel, but to all of the pixels in the image? Well, there are a couple of things that we need to keep in mind. First of all, we've so far only done this calculation for red values. And as you might imagine, you'll probably also want to do the same calculation for the values in the green channel as well as for the values in the blue channel. But also, thus far, we're only looking at changes in the x direction as we move from left to right, taking a look at whether there's a big increase or a big decrease in the value of red, green, or blue, for example. But you could also imagine that there might be boundaries as we look in the y direction, from the top of the image to the bottom of the image. We might find places where there are horizontal lines where we go from low values to high values or vice versa. So in addition to a gx convolutional matrix, which is going to look for boundaries in the x direction, we're also going to want to introduce a gy matrix, which is going to look for changes in the y direction. And the gy matrix is going to look very similar, just rotated. Here we're going to say, take all the values in the three pixels below this middle pixel, and multiply them by 1, 2, and 1 respectively, and take all the values in the row of three pixels above this middle pixel and multiply those by negative 1, negative 2, and negative 1 respectively, the result of which is that we'll compute a value that will approximately represent how the value of red or green or blue is changing as we move from the pixels above a pixel to the pixels below a pixel.
And so we'll use these two matrices, gx and gy, to compute how much colors are changing. How do you use these results to actually determine what the new value of each pixel should be? Well, for each pixel, you're going to want to compute gx and gy for each channel of red, green, and blue. So ultimately, you're going to be computing six different values by applying the gx matrix to all of the red values, then to the green values and the blue values, and doing the same for gy. Of course, as in the case with blur, there will be situations where we're looking at a pixel at the border, in the corner or at the edge of the image where there isn't a perfect 3-by-3 grid of pixels that surrounds it, for example. In that case, for any pixels that are at the border, you should treat any pixel values that are beyond the border as having all zero values or being pure black, for example. So for the pixel in the corner, in the upper left corner, for example, the pixel immediately above it, which doesn't exist, you should just treat as having all zero values, as being a solid black pixel. Once you've computed these gx and gy values for each of red, green, and blue, you need to use those values to somehow compute the total amount of red, green, and blue that should be in the new pixel in the resulting image. So to do that, we're going to use this formula. You're going to compute each new channel value as the square root of gx squared plus gy squared. So to compute the new value of red for a particular pixel, you're going to compute gx for that red pixel based on the original image, and you're going to compute gy for that red pixel based on the original image, square them both, add them together, and take the square root. 
We're squaring each of them because, regardless of whether there's a big increase in red or a big decrease in red, we still want to represent that as a potential border, from a lot of red to a little bit of red or vice versa, and squaring makes sure the magnitude we end up with is a positive value. And we're taking the square root to normalize. Of course, as with the other filters, it may be the case that the value you get exceeds the maximum channel value of 255. And in that case, you should be careful to make sure to cap the value you get at the number 255. And you'll repeat the same process for red and green and blue for each of the original pixels, computing gx and gy based on the original image in order to calculate a new channel value for each pixel in the resulting image. After you've done all of that, you should be able to test your program by running ./filter with -e for edges, and then providing the name of an input bitmap file as well as an output bitmap file that you're hoping to generate. And if all goes well, you should see that you take the original image and highlight where all of the edges in that image are. My name is Brian, and this was edges.