Design and Deploy Workflow

SoC Blockset™ Support Package for Xilinx® Devices provides progressive features depending on which products you have installed. The figure lists the required products for each design goal. To achieve each goal, you must also have the products required for the preceding goals. At a minimum, you must have the Simulink® and Computer Vision Toolbox™ products.

For a complete workflow example, see Developing Vision Algorithms for Zynq-Based Hardware.

Live Video Capture

Using this support package, you can capture live video from your Zynq® device and import the video into Simulink. The video source can be an HDMI FMC camera card, a MIPI® CSI-2® FMC camera card, or an on-chip test pattern generator provided with the HDMI FPGA reference design. You can select the color space and resolution of the input frames. The capture resolution must match that of your input camera.

The hardware data path runs at the same frame rate as the sensor output, while the Simulink capture port works at a slower, best-effort rate: a Simulink model captures and processes a frame, then requests the next frame from the board. When the model includes minimal image processing logic, the frame capture rate for HDMI YCbCr 4:2:2 video at 1080p60 is typically about 20 MB/s, or 5 fps. The capture rate for MIPI CSI-2 RGB video at 1080p is about 3 fps.
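
As a rough check, you can reproduce the HDMI capture-rate estimate from the frame size. This is a minimal sketch of the arithmetic, assuming 2 bytes per pixel for YCbCr 4:2:2 video (the luma and chroma samples average 16 bits per pixel):

    bytesPerFrame = 1920*1080*2;                % ~4.15 MB per 1080p frame
    captureRateFPS = 5;                         % typical best-effort capture rate
    bandwidthMBps = bytesPerFrame*captureRateFPS/1e6
                                                % about 20.7, matching ~20 MB/s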

For an example of live video capture and display in a Simulink model, see Getting Started with Vision Zynq Hardware. For an example of capture and display from a MIPI CSI-2 camera board, see Getting Started with MIPI Sensor.

Frame-Based Design

After you have video frames in Simulink, you can design frame-based video processing algorithms that operate on the live data input. Use blocks from the Computer Vision Toolbox libraries to develop frame-based, floating-point algorithms. When you are satisfied with the results of your design, convert the algorithm to fixed-point data types and pixel-streaming video in preparation for hardware targeting.
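
For example, this minimal sketch filters a full frame in floating point and then quantizes the filter coefficients as a first step toward a fixed-point design. It assumes the Image Processing Toolbox sample image cameraman.tif and the Fixed-Point Designer fi function:

    frame = im2single(imread('cameraman.tif'));   % example grayscale frame
    kernel = [0 -1 0; -1 5 -1; 0 -1 0];           % sharpening kernel
    sharpFrame = conv2(frame, kernel, 'same');    % frame-based, floating point

    % Quantize the coefficients to a signed 8-bit type with 4 fraction bits.
    kernelFixed = fi(kernel, 1, 8, 4);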

To get started with frame-based design, see Design Frame-Based Algorithms.

Pixel-Streaming Design

When you first develop your vision algorithm, frame-based video data lets you develop and debug the algorithm faster than pixel-streaming data, because you can verify your algorithm quickly without the constraints of hardware. However, due to resource constraints, hardware video processing designs operate on pixel-streaming data. Use blocks from the Vision HDL Toolbox™ libraries to build a pixel-streaming algorithm that you can target to the FPGA user logic section of the reference design. These blocks have a standard interface that includes streaming pixel data and a control signal bus. You can use Vision HDL Toolbox blocks to convert between framed and streaming video data.
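
The System object versions of these conversion blocks illustrate the pattern. This sketch serializes a frame into a pixel stream, filters it one pixel at a time, and rebuilds the frame. It assumes Vision HDL Toolbox and, for the resized sample image, the Image Processing Toolbox:

    frm2pix = visionhdl.FrameToPixels('VideoFormat','240p');
    pix2frm = visionhdl.PixelsToFrame('VideoFormat','240p');
    filt    = visionhdl.ImageFilter('Coefficients',ones(3)/9);  % 3x3 average

    frameIn = imresize(imread('rice.png'),[240 320]);  % 240p active frame
    [pixels,ctrl] = frm2pix(frameIn);                  % frame to pixel stream

    pixelsOut = zeros(size(pixels),'like',pixels);
    ctrlOut = ctrl;
    for p = 1:numel(pixels)                            % one pixel per call
        [pixelsOut(p),ctrlOut(p)] = filt(pixels(p),ctrl(p));
    end
    [frameOut,validOut] = pix2frm(pixelsOut,ctrlOut);  % stream back to frame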

Your pixel-streaming design can include an interface to external memory for a frame buffer or random read and write access by using an AXI manager interface. The frame buffer interface is only supported when you use an HDMI FMC card.

To get started with pixel-streaming design, see Design Pixel-Streaming Algorithms for Hardware Targeting.

FPGA Targeting

After you have a pixel-streaming model that meets your requirements, you can generate HDL code from your model and prototype the design on the Zynq device. By running all or part of your code on the hardware, you speed up simulation of your video processing system and can verify the behavior of the system on real hardware.

When you use an HDMI FMC card, your generated HDL code can use the Vision HDL Toolbox custom pixel-streaming interface or an AXI4-Stream Video interface. When you use a MIPI FMC card, the generated HDL code uses the AXI4-Stream Video interface.

The targeted design must not modify the frame size or color format of the video stream. The reference design expects output data in the same format as the input data. The targeting step maps the external memory interface model to the physical memory interface on the board.

After FPGA targeting, you can capture the live output frames from the FPGA user logic back to Simulink for further processing and analysis. When you use an HDMI FMC card, you can also view the output on an HDMI device connected to your board. Using the generated hardware interface model, you can control the video capture options and read and write AXI-Lite ports on the FPGA user logic from Simulink during simulation.

This step requires the HDL Coder™ product and the HDL Coder Support Package for Xilinx FPGA and SoC Devices, as well as Xilinx Vivado® software.
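
For example, before you open the HDL Workflow Advisor, register your Vivado installation with HDL Coder by using hdlsetuptoolpath. The installation path shown here is only an example; substitute the path on your system:

    hdlsetuptoolpath('ToolName','Xilinx Vivado', ...
        'ToolPath','C:\Xilinx\Vivado\2022.1\bin\vivado.bat');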

To learn more about FPGA targeting, see Target FPGA on Zynq Hardware and Models Generated from FPGA Targeting.

ARM Processor Targeting

You can create a model for software targeting using the default FPGA design loaded at setup, or you can customize the FPGA logic and modify the generated software interface model. With an HDMI FMC card, use the Video Capture HDMI block to route the video from the FPGA into the ARM® processor or to control the data path in the FPGA. With a MIPI FMC card, use the Video Capture MIPI block. You can design an algorithm for software targeting to the Zynq hardware and run it by using external mode, processor-in-the-loop (PIL) simulation, or full deployment.

The software interface model is generated by the HDL Workflow Advisor after you load custom logic to the FPGA. This model provides data path control and an interface to any AXI-Lite ports you defined on your FPGA-targeted subsystem. You can generate ARM code from this model that drives or responds to the AXI-Lite ports on the FPGA user logic. You can deploy this code on the board to run alongside the FPGA user logic.
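
For example, this sketch runs a generated software interface model in external mode and then builds it for standalone deployment. The model name vzynq_interface is a placeholder for the model that the HDL Workflow Advisor generates for your design:

    model = 'vzynq_interface';                     % placeholder model name
    open_system(model);
    set_param(model,'SimulationMode','external');
    set_param(model,'SimulationCommand','start');  % run on hardware, tune from Simulink

    % Alternatively, for standalone deployment, generate and build the ARM code:
    slbuild(model);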

This step requires the Embedded Coder® product and the Embedded Coder Support Package for Xilinx Zynq Platform.

To learn more about ARM processor targeting, see Target an ARM Processor on Zynq Hardware and Models Generated from FPGA Targeting.
