Frame-Based Video Pipeline Using Zynq UltraScale+ and FMC-HDMI-CAM
This example shows how to design and deploy a frame-based video processing pipeline implemented in Simulink® on a Zynq® UltraScale+ device.
This example uses SoC Blockset™ Support Package for AMD® FPGA and SoC Devices to deploy the Frame-Based Video Pipeline in Simulink example to the Xilinx Zynq® UltraScale+ MPSoC ZCU102 Evaluation Kit. The workflow can also be applied to other frame-based Vision HDL Toolbox examples.
This example uses HDMI input from FMC-HDMI-CAM to capture the input video to the hardware. For a frame-based example that uses the USB Camera, see Frame-Based Video Pipeline Using Zynq UltraScale+ and USB Camera.
Set Up Environment
Before running this example, you must run the guided hardware setup included in the support package installation.
On the MATLAB Toolstrip, on the Home tab, in the Environment section, click Add-Ons > Manage Add-Ons.
Locate SoC Blockset Support Package for AMD FPGA and SoC Devices, and click Setup.
The setup tool configures the target board and host machine, confirms that the target starts correctly, and verifies host-target communication.
For more information, see Set Up Xilinx Devices (SoC Blockset).
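After the guided setup completes, you can optionally confirm host-target communication from the MATLAB command line by creating a hardware object for the board. This is a minimal sketch; the IP address and login credentials shown are placeholders and should match the values you configured during setup.
% Create a hardware object for the ZCU102 board (IP address, user name, and password are placeholders).
hw = socHardwareBoard('Xilinx Zynq UltraScale+ MPSoC ZCU102 Evaluation Kit','hostname','192.168.1.101','username','root','password','root');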
Model Frame-Based Algorithm
This model provides a frame-based implementation of a video processing pipeline that performs noise removal and edge detection, targeted for HDL code generation. You can run this simulation without hardware because the video source for this example is the From Multimedia File block, which reads video data from a multimedia file.
open_system('soc_video_frame_pipeline_ycbcr')
The example model simulates a frame-based design. When you generate HDL code for this design with HDL Coder™ tools and frame-to-sample optimization, the generated code implements a hardware-friendly streaming video interface. To enable this frame-to-sample conversion for this model, set these configuration properties:
hdlset_param('soc_video_frame_pipeline_ycbcr', 'FrameToSampleConversion', 'on');
hdlset_param('soc_video_frame_pipeline_ycbcr/VideoPipeline/YCbCrIn', 'ConvertToSamples', 'on');
Algorithms are often sensitive to the specific video input. You can also run the model with real-world data from a camera attached to the HDMI input by configuring the Image Source block accordingly. To do this, right-click the variant selection icon in the lower-left corner of the Image Source block, choose Label mode active choice, and select HW.
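Alternatively, you can make the same variant selection programmatically with set_param. This is a minimal sketch; the block path shown is an assumption, so adjust it to match the Image Source block in your model.
% Hypothetical block path; select the HW variant choice of the Image Source block.
set_param('soc_video_frame_pipeline_ycbcr/Image Source', 'LabelModeActiveChoice', 'HW');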
Target the Frame-Based Algorithm
In preparation for targeting, set up the tool chain by calling hdlsetuptoolpath. For example:
hdlsetuptoolpath('ToolName','Xilinx Vivado','ToolPath','C:\Xilinx\Vivado\2023.1\bin\vivado.bat');
For more information, see hdlsetuptoolpath (HDL Coder).
Once you are satisfied with the frame-based model simulation results, you can target the algorithm to the FPGA on the Zynq UltraScale+ board. In the Simulink Toolstrip, on the Apps tab, select HDL Coder. On the HDL Code tab, select Settings. In the Workflow settings, set Workflow to IP Core Generation and Target Platform to Xilinx Zynq UltraScale+ MPSoC ZCU102 Evaluation Kit. In the Reference design settings, set Reference Design to Video design (requires FMC-HDMI-CAM). This example uses the YCbCr color space and 1 pixel per clock, so set Color Space to YCbCr and Pixels per Clock to 1. Other supported values for Color Space are RGB and Y Only.
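If you prefer to configure these settings from the MATLAB command line, you can set the corresponding HDL Coder model properties with hdlset_param. This is a minimal sketch; the values mirror the settings above and must match the names shown in the HDL Code settings dialog, and the Color Space and Pixels per Clock options are still set in the Reference design settings pane.
% Select the IP core generation workflow, target board, and reference design for this model.
hdlset_param('soc_video_frame_pipeline_ycbcr', 'Workflow', 'IP Core Generation');
hdlset_param('soc_video_frame_pipeline_ycbcr', 'TargetPlatform', 'Xilinx Zynq UltraScale+ MPSoC ZCU102 Evaluation Kit');
hdlset_param('soc_video_frame_pipeline_ycbcr', 'ReferenceDesign', 'Video design (requires FMC-HDMI-CAM)');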
On the HDL Code tab, select Target Interface. Map the input frame port for which frame-to-sample conversion is enabled to AXI4-Stream Video Slave, and map the output port to AXI4-Stream Video Master.
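You can also set this mapping programmatically by setting the IOInterface parameter on the DUT port blocks with hdlset_param. This is a minimal sketch; the input port path is taken from the earlier command, and the output port name YCbCrOut is an assumption, so adjust the paths to match your model.
% Map the streaming input and output ports (output port name is hypothetical).
hdlset_param('soc_video_frame_pipeline_ycbcr/VideoPipeline/YCbCrIn', 'IOInterface', 'AXI4-Stream Video Slave');
hdlset_param('soc_video_frame_pipeline_ycbcr/VideoPipeline/YCbCrOut', 'IOInterface', 'AXI4-Stream Video Master');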
Validate the interface mapping, and then click Build Bitstream on the HDL Code tab. This step generates the bitstream in an external shell.
Once the bitstream is generated, you can download it to the board from the Simulink Toolstrip. On the HDL Code tab, select Build Bitstream > Program Target Device.
To download the bitstream at a later stage, use these commands to create a hardware object and deploy the bitstream and device tree to the hardware.
hw = socHardwareBoard('Xilinx Zynq UltraScale+ MPSoC ZCU102 Evaluation Kit','hostname','192.168.1.101','username','root','password','root');
programFPGA(hw,"PROJECT_FOLDER\vivado_ip_prj\vivado_prj.runs\impl_1\design_1_wrapper.bit","devicetree_visionzynq_frame.dtb");
Use Generated Interface Models
To generate the interface models, in the Simulink Toolstrip, on the HDL Code tab, select Build Bitstream > Software Interface Model. This step creates a host IO model and a software interface model. In the host IO model, you can connect the output of the Video Capture HDMI block to a To Video Display block to visualize the output. Similarly, in the software interface model, you can use a Video Viewer block to visualize the output. You can use the software interface model to fully deploy a software design. (The software interface model is generated only if Embedded Coder and the Embedded Coder Support Package for AMD SoC Devices are installed.)