Fog Rectification with Zynq-Based Hardware

This example shows how to target a fog rectification algorithm to the Zynq® hardware using the SoC Blockset™ Support Package for AMD FPGA and SoC Devices.

Setup Prerequisites

This example follows the algorithm development workflow that is detailed in the Developing Vision Algorithms for Zynq-Based Hardware example. If you have not already done so, please work through that example to gain a better understanding of the required workflow.

This algorithm is based on the Vision HDL Toolbox™ example, Fog Rectification (Vision HDL Toolbox). With the SoC Blockset Support Package for AMD FPGA and SoC Devices, you get a hardware reference design that allows for easy integration of your targeted algorithm in the context of a vision system.

If you have not yet done so, run through the guided setup wizard portion of the SoC Blockset Support Package for AMD FPGA and SoC Devices installation. You might have already completed this step when you installed this support package.

On the MATLAB Home tab, in the Environment section of the Toolstrip, click Add-Ons > Manage Add-Ons. Locate SoC Blockset Support Package for AMD FPGA and SoC Devices, and click Setup.

The guided setup wizard performs a number of initial setup steps, and confirms that the target can boot and that the host and target can communicate.

For more information, see Set Up Xilinx Devices.

Pixel-Stream Model

This model provides a pixel-stream implementation of the algorithm for targeting the FPGA. Instead of working on full images, the HDL-ready algorithm operates on a pixel-streaming interface.

The algorithm in this example removes fog from images captured under foggy conditions. Fog rectification is an important preprocessing step for applications in autonomous driving and object recognition. Images captured in foggy and hazy conditions have low visibility and poor contrast, which can degrade the performance of vision algorithms that operate on these images. Fog rectification improves the quality of the input images to such algorithms.
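For a frame-based intuition of what fog rectification accomplishes, the following sketch applies the imreducehaze function from Image Processing Toolbox™ to a single image. This is an illustration only, not the streaming HDL algorithm targeted in this example, and the file name foggyRoad.png is a placeholder.

foggyFrame = imread('foggyRoad.png');            % placeholder file name
defoggedFrame = imreducehaze(foggyFrame, 0.9);   % frame-based haze reduction
imshowpair(foggyFrame, defoggedFrame, 'montage') % compare input and output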

open_system('vzFogRectification_PixelStream')

Video Source

The source video for this example comes from either the From Multimedia File block, which reads video data from a multimedia file, or the Video Capture HDMI block, which captures live video frames from an HDMI source connected to the Zynq-based hardware. To configure the source, right-click the variant selection icon in the lower-left corner of the Image Source block, choose Override using, and select either File or HW.
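If you prefer to configure the source from the MATLAB command line, a sketch such as the following may work. The parameter name LabelModeActiveChoice and the choice labels are assumptions about how the Image Source variant subsystem is configured; the right-click procedure described above is the documented method.

% Assumption: the Image Source block is a variant subsystem whose active
% choice can be selected by label. Verify the parameter name and labels
% in your copy of the model before relying on this.
set_param('vzFogRectification_PixelStream/Image Source', ...
    'LabelModeActiveChoice', 'File');   % or 'HW' for live HDMI capture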

For this algorithm, the model is configured as follows:

  • A pixel format of RGB. This algorithm is written to work on an RGB pixel format, and both the From Multimedia File and Video Capture HDMI blocks are configured to deliver video frames in this format. Other supported pixel formats are YCbCr 4:2:2 and Y only.

The model features a Video Frame Buffer block that provides a simplified simulation model of a frame buffer implemented in external memory. The Video Frame Buffer block is configured for a pixel format of RGB, and a video resolution of 640x480p. In the targeted design, the frame buffer connections will interface with the external memory on the chosen Zynq platform.

The Video Frame Buffer block input interface features the video pixel ports {R, G, B}, a corresponding pixel control bus port {pixelCtrl}, and a frame buffer trigger port {pop}. The video stream to be stored in the frame buffer, along with the corresponding video timing signals, is provided on the pixel and control bus ports. The frame buffer pop port schedules the release of the stored video frame from the frame buffer. The release is controlled from within the FogRectification Algorithm subsystem, and the pop signal should be asserted high for a single clock cycle. Once triggered, the frame buffer releases the stored frame on the video pixel output ports {R, G, B}, with the corresponding video timing signals on the pixel control bus port {pixelCtrl}.
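As an illustration of the single-cycle trigger requirement, the following MATLAB Function sketch pulses pop for exactly one cycle at the end of each frame. It assumes the Vision HDL Toolbox pixelcontrol bus, whose valid and vEnd fields together mark the last valid pixel of a frame; the actual trigger logic inside the FogRectification Algorithm subsystem may differ.

function pop = schedulePop(ctrl)
% ctrl is a pixelcontrol structure (fields hStart, hEnd, vStart, vEnd, valid).
% Assert pop for a single cycle when the last valid pixel of the frame arrives.
pop = ctrl.valid && ctrl.vEnd;
end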

You can optionally run this simulation without hardware. To adjust simulation performance, change the Frame size value in the RGB Resize, Frame To Pixels for RGB, and Pixels To Frame for RGB blocks. The smaller the frame size, the faster the simulation runs. The minimum frame size is 480p.
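The Frame To Pixels and Pixels To Frame blocks have System object counterparts in Vision HDL Toolbox, which can be convenient for experimenting with frame sizes in MATLAB. The following sketch serializes a 480p RGB frame into a pixel stream and reassembles it; the 480p format and the random test frame are assumptions for illustration only.

frm2pix = visionhdl.FrameToPixels('NumComponents',3,'VideoFormat','480p');
pix2frm = visionhdl.PixelsToFrame('NumComponents',3,'VideoFormat','480p');
frame = uint8(randi(255,480,640,3));          % stand-in 480p RGB frame
[pixels, ctrl] = frm2pix(frame);              % serialize to pixel stream plus control
[outFrame, validOut] = pix2frm(pixels, ctrl); % reassemble the full frame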

Target the Algorithm

After you are satisfied with the pixel streaming algorithm simulation, you can target the pixel algorithm to the FPGA on the Zynq board.

Start the targeting workflow by right-clicking the FogRectification Algorithm subsystem and selecting HDL Code > HDL Workflow Advisor.

  • In Step 1.1, select IP Core Generation workflow and the appropriate platform from the list.

  • In Step 1.2, select RGB reference design to match the pixel format of the FogRectification Algorithm subsystem. Map the other ports of the hardware user logic to the available hardware interface.

  • Step 2 prepares the design for generation by running design checks.

  • Step 3 generates HDL code for the IP core.

  • Step 4 integrates the newly generated IP core into the larger Vision Zynq reference design.

Execute each step in sequence to experience the full workflow, or, if you are already familiar with the preparation and HDL code generation phases, right-click Step 4.1 in the table of contents on the left-hand side and select Run to selected task.

  • In Step 4.2, the workflow generates a targeted hardware interface model and, if the Embedded Coder® Support Package for AMD SoC Devices is installed, a Zynq software interface model. Click the Run this task button with the default settings.

Steps 4.3 and 4.4

The rest of the workflow generates a bitstream for the FPGA, downloads it to the target, and reboots the board.

Because this process can take 20-40 minutes, you can choose to bypass this step by using a pre-generated bitstream for this example that ships with the product and was placed on the SD card during setup.

Note: This bitstream was generated with the HDMI pixel clock constrained to 148.5 MHz for a maximum resolution of 1080p HDTV at 60 frames-per-second. To run this example on Zynq hardware with a higher resolution, select the Source Video Resolution value from the drop-down list in Step 1.2.

To use this pre-generated bitstream, execute the following:

>> vz = visionzynq();
>> changeFPGAImage(vz, 'visionzynq-zedboard-hdmicam-fogrectification.bit');

To use a bitstream for another platform, replace 'zedboard' with the platform name.
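For example, assuming you are targeting the ZC706 board and the corresponding pre-generated bitstream is present on the SD card, the call might look like this (the exact file name is an assumption):

>> changeFPGAImage(vz, 'visionzynq-zc706-hdmicam-fogrectification.bit');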

Alternatively, you can continue with Steps 4.3 and 4.4.

Using the Generated Models from the HDL Workflow Advisor

Step 4.2 generates either two or four models, depending on whether Embedded Coder® is installed: a 'targeted hardware interface' model with an associated library model, and a 'software interface' model with an associated library model. The 'targeted hardware interface' model can be used to control the reference design from the Simulink model without Embedded Coder. The 'software interface' model supports full software targeting to the Zynq when Embedded Coder and the Embedded Coder Support Package for AMD SoC Devices are installed, enabling External mode simulation, processor-in-the-loop simulation, and full deployment.

The library models are created so that any changes to the hardware generation model are propagated to any custom targeted hardware simulation or software interface models that exist.

Targeted Hardware Interface Model: In this model, you can adjust the configuration of the reference design and read or drive control ports of the hardware user logic. These configuration changes affect the design while it is running on the target. You can also display captured video from the target device.

Software Interface Model: In this model, you can run in External mode to control the configuration of the reference design, and read or drive any control ports of the hardware user logic that you connected to AXI-Lite registers. These configuration changes affect the design while it is running on the target. You can use this model to fully deploy a software design. (This model is generated only if Embedded Coder and the Embedded Coder Support Package for AMD SoC Devices are installed.)