Edge Detection and Image Overlay

This example shows how to detect and highlight object edges in a video stream. The behavior of the pixel-stream Sobel Edge Detector block, the video stream alignment, and the overlay is verified by comparing the results with the same algorithm implemented using full-frame blocks from Computer Vision Toolbox™.

This example model provides a hardware-compatible algorithm. You can implement this algorithm on a board using an AMD® Zynq® reference design. See Developing Vision Algorithms for Zynq-Based Hardware (SoC Blockset).

Structure of the Example

The EdgeDetectionAndOverlayHDL.slx system is shown below.

The difference in the color of the lines feeding the Full-Frame Behavioral Model and Pixel-Stream HDL Model subsystems indicates the change in sample rate on the streaming branch of the model. This rate transition occurs because the pixel stream is sent out in the same amount of time as the full video frames and is therefore transmitted at a higher rate.

Full-Frame Behavioral Model

The following diagram shows the structure of the Full-Frame Behavioral Model subsystem, which employs the frame-based Edge Detection block.

Because the frame-based Edge Detection block does not introduce latency, the image overlay is performed in a straightforward manner, by weighting the source image and the Edge Detection output image and adding them together.
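
The full-frame computation can be sketched at the MATLAB command line. The snippet below is illustrative only: it uses the edge function as a stand-in for the Edge Detection block (the two use differently scaled thresholds), and the variable src is an assumed grayscale frame.

   % Minimal full-frame sketch, assuming 'src' is a grayscale image.
   alpha = 0.8;                        % weight of the source image
   edges = edge(src, 'sobel');         % binary Sobel edge map
   overlaid = alpha*im2double(src) + (1-alpha)*double(edges);
   imshow(overlaid)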

One frame of the source video, the edge detection result, and the overlaid image are shown from left to right in the diagram below.

It is good practice to develop a behavioral system using blocks that process full image frames (the Full-Frame Behavioral Model subsystem in this example) before moving on to an FPGA-targeted design. Such a behavioral model helps verify the video processing design, and later serves as a reference for verifying the implementation of the algorithm targeted to an FPGA. Specifically, the PSNR (peak signal-to-noise ratio) block at the top level of the model compares the results from full-frame processing with those from pixel-stream processing.

Frame To Pixels: Generating a Pixel Stream

The task of the Frame To Pixels block is to convert a full-frame image to a pixel stream. To simulate the effect of the horizontal and vertical blanking periods found in real hardware video systems, the active image is augmented with non-image data. For more information on the streaming pixel protocol, see Streaming Pixel Interface (Vision HDL Toolbox). The Frame To Pixels block is configured as shown:

The Number of components field is set to 1 for grayscale image input, and the Video format field is 240p to match that of the video source.

In this example, the Active Video region corresponds to the 240x320 matrix of the dark image from the upstream Corruption block. Six other parameters, namely Total pixels per line, Total video lines, Starting active line, Ending active line, Front porch, and Back porch, specify how much non-image data is added on the four sides of the Active Video region. For more information, see the Frame To Pixels (Vision HDL Toolbox) block reference page.

Note that the sample time of the Video Source is determined by the product of Total pixels per line and Total video lines.
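
The same serialization is available in MATLAB through the visionhdl.FrameToPixels System object. The following sketch shows a configuration matching this model and queries the resulting frame dimensions; it is an illustration, not part of the shipped model:

   % Configure the serializer to match the video source.
   frm2pix = visionhdl.FrameToPixels( ...
       'NumComponents', 1, ...          % grayscale input
       'VideoFormat', '240p');          % 320x240 active region

   % Query active and total dimensions, including the blanking intervals.
   [actPixPerLine, actLines, totPixPerLine, totLines] = ...
       getparamfromfrm2pix(frm2pix);

   % One frame occupies totPixPerLine*totLines pixel cycles, which is what
   % determines the sample time of the video source.
   numPixelsPerFrame = totPixPerLine * totLines;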

Pixel-Stream Edge Detection and Image Overlay

The Pixel-Stream HDL Model subsystem is shown in the diagram below. You can generate HDL code from this subsystem.

Unlike the Edge Detection block in the Full-Frame Behavioral Model, the Edge Detector block from Vision HDL Toolbox™ introduces latency, due to the nature of pixel-stream processing. This latency prevents directly weighting and adding the two images to obtain the overlaid image. To address this issue, the Pixel Stream Aligner block synchronizes the two pixel streams before the sum.

To properly use this block, connect refPixel and refCtrl to the pixel and control bus associated with the delayed pixel stream. In this example, due to the latency introduced by the Edge Detector, the pixel stream coming out of the Edge Detection subsystem is delayed with respect to the stream feeding into it. Therefore, the upstream sources of refPixel and refCtrl are the pixelOut and ctrlOut signals of the Edge Detection subsystem.
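
The pixel-stream edge detection itself can be sketched in MATLAB with the visionhdl.EdgeDetector System object, which processes one pixel per call. This is a sketch only; frame is an assumed grayscale 240p image:

   frm2pix  = visionhdl.FrameToPixels('NumComponents',1,'VideoFormat','240p');
   detector = visionhdl.EdgeDetector('Threshold',7);  % Sobel is the default method

   [pixel, ctrl] = frm2pix(frame);          % serialize one frame
   numPix  = numel(pixel);
   edgeOut = false(numPix, 1);
   ctrlOut = repmat(pixelcontrolstruct, numPix, 1);

   for p = 1:numPix
       % The output control signals lag the input control signals; in the
       % model, the Pixel Stream Aligner delays the source stream by the
       % same amount before the weighted sum.
       [edgeOut(p), ctrlOut(p)] = detector(pixel(p), ctrl(p));
   end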

Pixels To Frame: Converting Pixel Stream Back to Full Frame

As a companion to the Frame To Pixels block, which converts a full image frame to a pixel stream, the Pixels To Frame block converts the pixel stream back to a full frame by making use of the synchronization signals. Since the output of the Pixels To Frame block is the 2-D matrix of a full image, there is no need to carry the bus of five synchronization signals any further.

The Number of components field and the Video format field of both Frame To Pixels and Pixels To Frame are set to 1 and 240p, respectively, to match the format of the video source.
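
Continuing the sketch above, the visionhdl.PixelsToFrame System object performs the corresponding deserialization. Its validOut flag indicates when a complete frame with well-formed control signals has been received:

   pix2frm = visionhdl.PixelsToFrame('NumComponents',1,'VideoFormat','240p');

   % Rebuild the full edge frame from the processed pixel stream.
   [edgeFrame, validOut] = pix2frm(edgeOut, ctrlOut);
   if validOut
       imshow(edgeFrame)
   end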

Verifying the Pixel Stream Processing Design

While building the streaming portion of the design, the PSNR block continuously verifies results against the original full-frame design. The Delay block at the top level of the model time-aligns the two 2-D matrices for a fair comparison. During simulation, the PSNR block should output inf, indicating that the output image from the Full-Frame Behavioral Model matches the image generated by the stream-processing Pixel-Stream HDL Model.
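
The same check can be made at the command line with the psnr function, which returns Inf for identical images. The frame names below are hypothetical:

   % Compare one pixel-stream frame against one behavioral frame.
   peakSNR = psnr(frameStream, frameBehavioral)   % Inf when identical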

Exploring the Example

The example allows you to experiment with different threshold and alpha values to examine their effect on the quality of the overlaid images. Specifically, two workspace variables, thresholdValue and alpha, with initial values 7 and 0.8, respectively, are created when the model opens. You can modify their values at the MATLAB® command line, as follows:

thresholdValue=8
alpha=0.5

The updated thresholdValue propagates to the Threshold field of the Edge Detection block inside the Full-Frame Behavioral Model and of the Edge Detector block inside Pixel-Stream HDL Model/Edge Detection. The alpha value propagates to the Gain1 blocks in the Full-Frame Behavioral Model and in Pixel-Stream HDL Model/Image Overlay, and the value of 1-alpha goes to the Gain2 blocks. Closing the model clears both variables from your workspace.

In this example, the valid range of thresholdValue is between 0 and 256, inclusive. Setting thresholdValue to 257 or greater triggers the message "Parameter overflow occurred for 'threshold'". The higher you set thresholdValue, the fewer edges the example finds in the video.

The valid range of alpha is between 0 and 1, inclusive. It determines the weights of the edge detection output image and the original source image before they are added. The overlay operation is a linear interpolation according to the following formula:

   overlaid image = alpha*source image + (1-alpha)*edge image.

Therefore, when alpha = 0, the overlaid image is the edge detection output, and when alpha = 1, it becomes the source image.

Generate HDL Code and Verify Its Behavior

To check and generate the HDL code referenced in this example, you must have an HDL Coder™ license.

To generate the HDL code, use the following command:

makehdl('EdgeDetectionAndOverlayHDL/Pixel-Stream HDL Model');

To generate a test bench, use the following command:

makehdltb('EdgeDetectionAndOverlayHDL/Pixel-Stream HDL Model');