Target Deep Learning Processor and Image Preprocessing to FPGA
The SoC Blockset™ Support Package for AMD FPGA and SoC Devices includes the Deep Learning with Preprocessing Interface reference design. You can use this reference design for deep learning (DL) applications that target a DL processor IP core and custom image preprocessing logic to the FPGA.
The reference design connects a DL processor with custom preprocessing logic. These two parts of the design communicate control information over an AXI manager interface, and share video data using a second AXI manager interface to DDR memory. The reference design expects input data over an AXI4-Stream interface and writes the processed data back to DDR memory.
This diagram shows the interfaces in the Deep Learning with Preprocessing Interface reference design.
The FPGA user logic for this reference design must contain two simplified AXI Manager protocol interfaces. One interface interacts with the DL IP core and the other transfers data between the FPGA user logic and DDR.
- AXI-Lite — The MATLAB® host uses AXI-Lite registers to monitor and control the FPGA.
- AXI4-Stream — The MATLAB host provides input data to this interface. Map the input pixel data and pixelcontrol bus of the FPGA user logic to the data port of an AXI4-Stream subordinate interface. The pixel data and control signals are packed into a uint32 data word, as illustrated in the sketch after this list.
- AXI Manager of DDR — The FPGA user logic writes output data to the PL DDR memory through this interface. The deep learning IP then reads the data for processing.
- AXI Manager of DL IP — The FPGA user logic and the deep learning IP communicate control information over this interface. The FPGA user logic must contain logic for the handshaking protocol of the deep learning IP. The Deploy and Verify YOLO v2 Vehicle Detector on FPGA (Vision HDL Toolbox) example includes a subsystem that shows how to model this handshake protocol.
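As an illustration of the AXI4-Stream packing, this MATLAB sketch packs one pixel value and its pixelcontrol signals into a single uint32 word. The bit positions shown are an assumption for illustration only; the actual packing is defined by the interface mapping of the reference design.

```matlab
% Hypothetical packing of pixel data and pixelcontrol signals into a
% uint32 word. The bit layout below is an assumption for illustration;
% check the reference design interface mapping for the actual layout.
pixel = uint32(200);            % 8-bit grayscale pixel value, bits [7:0]
ctrl  = struct('hStart',1,'hEnd',0,'vStart',1,'vEnd',0,'valid',1);

word = pixel;                                        % pixel in bits [7:0]
word = bitor(word, bitshift(uint32(ctrl.hStart), 8));
word = bitor(word, bitshift(uint32(ctrl.hEnd),   9));
word = bitor(word, bitshift(uint32(ctrl.vStart),10));
word = bitor(word, bitshift(uint32(ctrl.vEnd),  11));
word = bitor(word, bitshift(uint32(ctrl.valid), 12));

fprintf('Packed AXI4-Stream word: 0x%08X\n', word);
```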
To use this reference design, you must specify the name and file location of a deep learning processor core generated by using the Deep Learning HDL Toolbox™ tools. The Deploy and Verify YOLO v2 Vehicle Detector on FPGA (Vision HDL Toolbox) example shows how to use this reference design, and how to model the AXI interfaces and the handshaking logic between the preprocessing logic and the DL processor. In the example, the deployed design is controlled by a MATLAB host machine that provides input video data, reads the output data, and verifies the results.
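For reference, this is a minimal sketch of generating a DL processor IP core with Deep Learning HDL Toolbox; the generated core name and file location are what you then specify for the reference design. The property values shown here are placeholders, so choose settings that match your board and network.

```matlab
% Minimal sketch: generate a deep learning processor IP core with
% Deep Learning HDL Toolbox. Property values are placeholders.
hPC = dlhdl.ProcessorConfig;          % default processor configuration
hPC.TargetFrequency = 200;            % target clock frequency in MHz (assumption)
hPC.ProcessorDataType = 'single';     % processor data type (assumption)

% Generate the HDL IP core and supporting files. Note the name and
% location of the generated core for use with the reference design.
dlhdl.buildProcessor(hPC);
```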
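The host-side control can be sketched with the HDL Verifier aximanager object, assuming an AXI manager IP is present in the design for host access. The register and memory addresses below are placeholders, not the actual address map of the reference design.

```matlab
% Hedged sketch of MATLAB host interaction over AXI. All addresses are
% placeholders; use the address map generated for your design.
mem = aximanager('Xilinx');                        % connect to the AXI manager IP

% Write an input frame (packed uint32 words) into PL DDR memory.
inputWords = uint32(randi([0 2^16-1], 1, 1024));   % placeholder input data
writememory(mem, '80000000', inputWords);

% Start the preprocessing logic through a hypothetical AXI-Lite register.
writememory(mem, 'A0000000', 1);

% Poll a hypothetical status register until the result is ready.
while readmemory(mem, 'A0000004', 1) == 0
    pause(0.01);
end

% Read the processed output back from DDR and release the connection.
outputWords = readmemory(mem, '81000000', 1024);
release(mem);
```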
This reference design does not support video capture to Simulink®.
For a reference design that processes live HDMI video and adds postprocessing operations in the ARM® processor, see Deep Learning Processing of Live Video.
Related Examples
- Deploy and Verify YOLO v2 Vehicle Detector on FPGA (Vision HDL Toolbox)
- YOLO v2 Vehicle Detector with Live Camera Input on Zynq-Based Hardware (Vision HDL Toolbox)