Johanna Pingel, MathWorks
Edge detection is a common image processing technique and can be used for a variety of applications, such as image segmentation, object detection, and Hough line detection. Learn to use edge detection effectively with the `edge` function in MATLAB®, and explore the different parameters it offers.
Edge detection is an image processing technique for finding the boundaries of objects within images. It works by detecting changes in brightness within an image. Besides creating an interesting-looking image, edge detection can be a great pre-processing step for image segmentation.
If you have the boundary of an object created with edges, you can fill it in to detect the object's location. If you have two objects that are touching each other, you can find the edges and use that information to separate the objects. You can also use edges to find objects based on texture, in situations where segmenting based on color may not work very well.
So let's take a look at a detailed example of how to use edges as an image pre-processing technique in MATLAB. The goal is to detect all of the windows on the garage door using edges. Let's start by searching the documentation.
I quickly learned there is a function in the Image Processing Toolbox called `edge`, which performs edge detection on my image. I can simply call `edge`, or, if I want more control, I can choose the method of edge detection. So let's try a few of these methods on our image and see how they perform.
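A minimal call looks like the sketch below. The filename `garage.jpg` is a placeholder, not from the original video; `edge` expects a 2-D grayscale image, so a color image is converted first.

```matlab
% Read the image and convert to grayscale ("garage.jpg" is a placeholder).
I  = imread('garage.jpg');
Ig = rgb2gray(I);

% Default call: Sobel method with an automatically chosen threshold
BW = edge(Ig);
imshow(BW)
```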
I'll start by trying the Prewitt method, then Roberts, then Sobel, and I want to visualize these differences side by side. If I zoom in and look at the differences in these results, I can see subtle variations between these methods, particularly in the corners, that may affect filling in these squares and finding the windows.
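The side-by-side comparison might be set up like this, assuming the grayscale image `Ig` from above; `montage` accepts a cell array of images.

```matlab
% Run the three edge detection methods on the same grayscale image
BW1 = edge(Ig,'prewitt');
BW2 = edge(Ig,'roberts');
BW3 = edge(Ig,'sobel');

% View the results side by side for comparison
montage({BW1,BW2,BW3},'Size',[1 3])
title('Prewitt | Roberts | Sobel')
```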
Now, I want to fill all of the holes in these images and compare those results. Some of the windows are not filled because of the differences in the edge detection algorithms. But I see that the last algorithm does fill all of the holes, so this will be the method that I choose to solve this particular problem.
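Filling the enclosed regions is typically done with `imfill` and the `'holes'` option; this sketch assumes the three edge images from the previous step.

```matlab
% Fill every region that is fully enclosed by edges
F1 = imfill(BW1,'holes');
F2 = imfill(BW2,'holes');
F3 = imfill(BW3,'holes');

% Compare which method produced closed boundaries around every window
montage({F1,F2,F3},'Size',[1 3])
```

A window whose boundary has even a one-pixel gap will not fill, which is why the subtle corner differences between methods matter here.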
Just to finish the algorithm quickly, I want to take my image and remove everything except for the windows. This task is very easy with one of our image processing apps. I used the Image Region Analyzer app to filter out objects based on certain properties, in this case size and solidity.
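The Image Region Analyzer app can export code equivalent to this kind of filtering; programmatically, `bwpropfilt` keeps only the objects whose property falls in a given range. The ranges below are placeholder values, not taken from the video.

```matlab
% Keep mid-sized blobs (Area range is a placeholder; tune for your image)
windows = bwpropfilt(F3,'Area',[500 5000]);

% Keep only nearly convex, filled shapes (high Solidity)
windows = bwpropfilt(windows,'Solidity',[0.9 1]);
```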
And I highly recommend checking out all of our image processing apps within the Image Processing Toolbox. Finally, I can show the results of the edge detection: first the original image, and then the windows grayed out, proving that we have successfully detected all of the windows in the image.
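One way to display the final result, assuming the binary mask `windows` from the previous step, is to overlay the mask on the original image with `labeloverlay`:

```matlab
% Show the original, then the original with the detected windows highlighted
figure, imshow(I)
figure, imshow(labeloverlay(I,windows,'Transparency',0.4))
```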
One final tip: if you're experimenting with edge detection and you're not getting the results you expect, there are other parameters that you can change, a popular one being the sensitivity. Using the default sensitivity, we are still missing a lot of the right side of the owl. But I can quickly increase and decrease the sensitivity and visualize those results.
A lower sensitivity threshold gives me all of the edges that I need to move forward. To learn even more about edge detection, click the links to more examples and documentation in MATLAB.
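In `edge`, this sensitivity is controlled by the optional threshold argument: edges weaker than the threshold are discarded, so a lower value keeps more edges. The value 0.05 below is illustrative, not from the video.

```matlab
% Automatic threshold versus an explicit, lower (more sensitive) one
BWdefault   = edge(Ig,'sobel');        % threshold chosen automatically
BWsensitive = edge(Ig,'sobel',0.05);   % lower threshold keeps weaker edges

% Compare the two results side by side
imshowpair(BWdefault,BWsensitive,'montage')
```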