Corner Detection with Zynq-Based Hardware and MIPI Sensor
This example shows how to target a corner detection algorithm to a ZCU106 device with a MIPI® add-on card by using the SoC Blockset™ Support Package for AMD® FPGA and SoC Devices.
Setup Prerequisites
This algorithm is based on the Vision HDL Toolbox™ example, Harris Corner Detection (Vision HDL Toolbox). With the SoC Blockset Support Package for AMD FPGA and SoC Devices, you get a hardware reference design that allows for easy integration of your targeted algorithm in the context of a vision system.
If you have not yet done so, run through the setup wizard portion of the support package installation. You might have already completed this step when you installed this support package.
Pixel-Stream Model
This model provides a pixel-stream implementation of the algorithm for targeting HDL. Instead of working on full images, the HDL-ready algorithm works on a pixel-streaming interface. This model uses an Image Resize block that allows the source video frame to be resized for better simulation performance. Alternatively, you can crop the video frame. The blocks in the shaded areas convert to and from pixel-stream signals in preparation for targeting.
Video Source
The source video for this example comes from either the From Multimedia File block, which reads video data from a multimedia file, or the Video Capture MIPI block, which captures live video frames from an IMX274 camera connected to the Zynq-based hardware. To configure the source, right-click the variant selection icon in the lower-left corner of the Image Source block, choose Label mode active choice, and select either File or HW. The model uses the RGB Vector to Packed RGB block to convert the input data from vector to packed ufix24 format before the algorithm. Similarly, the model uses the Packed RGB to RGB Vector block to convert the data back from ufix24 to the RGB vector format.
open_system('vzCornerDetectionMIPI')
You can optionally run this simulation without hardware. To modify the frame size for simulation performance, change the Frame size value in the Image Resize, Image Frame To Pixels, and Image Pixels To Frame blocks. The lower the frame size, the faster the simulation runs. The minimum frame size is 240p.
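If you prefer to change the frame size programmatically, you can use set_param on the three blocks. The mask parameter name ('FrameSize') and its value string below are assumptions for illustration; check the actual parameter names in each block's dialog before using this sketch:

```matlab
% Sketch: set the simulation frame size on all three blocks at once.
% 'FrameSize' and '240p (320x240)' are assumed names; verify in the block dialogs.
mdl = 'vzCornerDetectionMIPI';
open_system(mdl);
blks = {'Image Resize', 'Image Frame To Pixels', 'Image Pixels To Frame'};
for k = 1:numel(blks)
    set_param([mdl '/' blks{k}], 'FrameSize', '240p (320x240)');
end
```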
There are two things to note about the simulation outputs:
1. During the first frame of simulation output, the Video Display scopes display a black image. This condition indicates that no image data is available yet, because the output of the pixel-streaming algorithm must be buffered to form a full frame before being displayed.
2. During the second frame of simulation, the Video Display scope for the pixel-streaming output is incorrect with respect to the cSliceLevel and cOverlayTransp parameter values that you configured in the Simulink model. This discrepancy occurs because the algorithm uses initial cSliceLevel and cOverlayTransp values of 0. When you use Rate Transition blocks configured in Unit Delay mode, the input value is registered at the output only after a full sample period at the input rate. In this case, the input rate corresponds to one video frame. During this initial transition period, the block outputs its initial condition (a value of 0). These blocks are required to ensure a single rate for all blocks within the subsystem, which is required for HDL code generation. For more information, see Rate Transition (Simulink).
Target the Algorithm
When you are satisfied with the results of simulating the pixel-streaming algorithm, you can target the pixel algorithm to the FPGA on the Zynq board.
Start the targeting workflow by right-clicking the Corner Overlay Algorithm subsystem and selecting HDL Code > HDL Workflow Advisor.
In Step 1.1, select the IP Core Generation workflow and the ZCU106 IMX274MIPI-FMC platform. The MIPI add-on card is supported only with the ZCU106 board.
In Step 1.2, select the reference design as 'MIPI Receive Path'. Set the 'Source Video Resolution' corresponding to your design. When using the MIPI reference design, 'Color Space' must be RGB, 'Number of Pixels per Clock' must be 1, and 'Random Access of External Memory' must be off.
In Step 1.3, map the target platform interfaces to the input and output ports of your design.
For this algorithm, the model is configured as follows:
A pixel format of RGB. This algorithm is written to work on an RGB pixel format, and both the From Multimedia File and Video Capture MIPI blocks are configured to deliver video frames in this format.
Algorithm Configuration
In addition to processing the image for corner detection, the algorithm has several control ports:
pbCornerOnly connects to a push button on the board to display the corner detection results without showing the original image.
cSliceLevel is a configuration option to adjust the corner detection threshold.
cOverlayColor is a configuration option to adjust the color of the corner markers.
cOverlayTransp is a configuration option to adjust the blending of the original image and the corner detection image. A value of 255 means full original image and a value of 0 means full corner detection image.
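The cOverlayTransp blend described above amounts to a per-pixel weighted sum. The following is an illustrative sketch of that arithmetic on a single channel, not the generated HDL; the sample pixel values are arbitrary:

```matlab
% Illustrative alpha blend: 255 -> all original image, 0 -> all corner image.
cOverlayTransp = 128;                 % example mid-point blend setting
origPix   = uint32(200);              % sample original-image pixel value
cornerPix = uint32(50);               % sample corner-marker pixel value
blended = (uint32(cOverlayTransp) * origPix + ...
           uint32(255 - cOverlayTransp) * cornerPix) / 255;
```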
After targeting, the cSliceLevel, cOverlayColor, and cOverlayTransp parameters are controllable from the Simulink model through External Mode or Target Hardware execution. The pbCornerOnly control port is a pure hardware connection in the targeted design. This port can run at any desired rate, including the pixel clock rate.
However, the cSliceLevel, cOverlayTransp, and cOverlayColor values are controlled by the embedded processor (or the host in External Mode or Target Hardware mode). Because neither the host nor the embedded CPU can update these controls at the pixel clock rate, a rate on the order of the frame rate is desired. The sample times of the constant blocks attached to these controls are set to execute at the frameSampleTime.
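As a sketch of that configuration, the Constant blocks driving these control ports can have their Sample time parameter set to the frame-rate variable. The block path below is hypothetical and the frame rate is an assumed example; frameSampleTime itself is the variable named in the model:

```matlab
% Sketch: run a control constant at the frame rate rather than the pixel
% rate. The 60 fps figure and the block path are illustrative assumptions.
frameSampleTime = 1/60;   % one video frame period at an assumed 60 fps
set_param('vzCornerDetectionMIPI/cSliceLevel', ...
          'SampleTime', 'frameSampleTime');
```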
Step 2 prepares the design for generation by running design checks.
Step 3 generates HDL code for the IP core.
Step 4 integrates the newly generated IP core into the larger Vision Zynq reference design.
Execute each step in sequence to experience the full workflow, or, if you are already familiar with the preparation and HDL code generation phases, right-click Step 4.1 in the table of contents on the left-hand side and select Run to selected task.
In Step 4.2, the workflow generates a targeted hardware interface model and, if the Embedded Coder® Support Package for AMD SoC Devices is installed, a Zynq software interface model. Click the Run this task button with the default settings.
In Step 4.3, the workflow advisor generates a bitstream for the FPGA. You can choose to execute this step in an external shell by keeping the selection Run build process externally. This selection allows you to continue using MATLAB while the FPGA is being built. The step completes in a couple of minutes after some basic project checks pass, and is then marked with a green checkmark. However, you must wait until the external shell shows a successful bitstream build before moving on to the next step.
Once generation of the bitstream is complete, the bitstream file is located at:
PROJECT_FOLDER\vivado_ip_prj\vivado_prj.runs\impl_1\design_1_wrapper.bit
where PROJECT_FOLDER is the project folder that you specified in Step 1.1. By default, this is hdl_prj.
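Once the external build finishes, you can confirm from MATLAB that the bitstream file exists before moving on. This sketch assumes the default hdl_prj project folder:

```matlab
% Check that the external Vivado build produced the FPGA bitstream.
% 'hdl_prj' is the default project folder; adjust if you changed it in Step 1.1.
bitFile = fullfile('hdl_prj', 'vivado_ip_prj', 'vivado_prj.runs', ...
                   'impl_1', 'design_1_wrapper.bit');
if isfile(bitFile)
    fprintf('Bitstream ready: %s\n', bitFile);
else
    warning('Bitstream not found yet; wait for the external build to finish.');
end
```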
Steps 4.3 and 4.4
The rest of the workflow generates a bitstream for the FPGA, downloads it to the target, and reboots the board.
Using the Generated Models from the HDL Workflow Advisor
Step 4.2 generates two or four models, depending on whether Embedded Coder® is installed: a 'targeted hardware interface' model and associated library model, and a 'software interface' model and associated library model. You can use the 'targeted hardware interface' model to control the reference design from the Simulink model without Embedded Coder. The 'software interface' model supports full software targeting to the Zynq when Embedded Coder and the Embedded Coder Support Package for AMD SoC Devices are installed, enabling External mode simulation, processor-in-the-loop simulation, and full deployment.
The library models are created so that any changes to the hardware generation model are propagated to any custom targeted hardware simulation or software interface models that exist.
Using the Generated Targeted Hardware Interface Model
In this model, you can adjust the configuration of the reference design and read or drive control ports of the hardware user logic. These configuration changes affect the design while it is running on the target. You can also display captured video from the target device.
The generated model contains the blocks that enable the targeted algorithm to be configured and controlled from Simulink. Areas of the model are labelled to highlight where further video processing algorithms, and algorithms to control the targeted hardware user logic, should be placed.
An example of how you can use the targeted hardware simulation model is provided in a saved model.
open_system('vzCornerDetectionMIPI_tgthw_interface_saved')
Using the Generated Software Interface Model
In this model, you can run in External mode to control the configuration of the reference design, and read or drive any control ports of the hardware user logic that you connected to AXI-Lite registers. These configuration changes affect the design while it is running on the target. You can use this model to fully deploy a software design. (This model is generated only if Embedded Coder and the Embedded Coder Support Package for AMD SoC Devices are installed.)
The generated model contains the blocks that enable the targeted algorithm to be configured and controlled from software. An area of the model is labelled to highlight where the software algorithm to control the targeted hardware user logic should be placed.
An example of how to use the software interface model to generate software is provided in a saved model.
open_system('vzCornerDetectionMIPI_interface_saved')
Before running this model, you must perform additional setup steps to configure the AMD cross-compiling tools. For more information, see Setup for ARM Targeting with IP Core Generation Workflow.
You can fully deploy the design. In the Simulink toolbar, click Monitor & Tune.