Tracking Cars with Zynq-Based Hardware

This example shows how to target a car tracking algorithm to the ARM® processor on the Zynq® hardware.

Setup Prerequisites

This algorithm corresponds to the Computer Vision Toolbox™ example, Tracking Cars Using Foreground Detection (Computer Vision Toolbox). The SoC Blockset™ Support Package for Xilinx® Devices provides a video capture block for the ARM that makes it easy to integrate your targeted algorithm into a vision system. When deployed to the Zynq board, the Video Capture HDMI block routes the video from the HDMI camera input to the ARM processor.

If you have not yet done so, run through the guided setup wizard portion of the SoC Blockset Support Package for Xilinx Devices installation. You might have already completed this step when you installed this support package.

On the MATLAB® Home tab, in the Environment section of the Toolstrip, click Add-Ons > Manage Add-Ons. Locate SoC Blockset Support Package for Xilinx Devices, and click Setup.
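To confirm from the MATLAB command line that the support package is installed, you can list installed support packages, as in this sketch (the name match is an assumption and can vary by release):

% List installed support packages and look for the Xilinx SoC support
% package by name.
installed = matlabshared.supportpkg.getInstalled;
if isempty(installed) || ~any(contains({installed.Name},'Xilinx'))
    disp('SoC Blockset Support Package for Xilinx Devices not found.')
else
    disp('Support package found. Run Setup from the Add-On Manager if needed.')
end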

The guided setup wizard performs a number of initial setup steps and confirms that the target can boot and that the host and target can communicate.

For more information, see Set Up Xilinx Devices.

Frame-Based Model with Video File Input

Start with a frame-based model of the car tracking algorithm.

open_system('vzCarCounting_01_FrameBased')

You can run this simulation without hardware. The video source for this example comes from the From Multimedia File block. This step allows you to verify the frame-based algorithm against known video data.
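You can also run the simulation from the MATLAB command line, as in this minimal sketch:

% Simulate the frame-based model against the recorded video file.
out = sim('vzCarCounting_01_FrameBased');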

Frame-Based Model with Live Camera Acquisition

Algorithms are often sensitive to the specific video input. In this step, you can verify the algorithm against real-world data coming from the camera attached to the HDMI input on the board. To do this, right-click the variant selection icon in the lower-left corner of the Image Source block, choose Label mode active choice, and select HW.
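If you prefer to switch the variant programmatically, a set_param call along these lines works for label-mode variant blocks (the block path is an assumption based on the block name shown in the model):

% Select the hardware (HW) variant of the Image Source block without
% using the right-click menu.
set_param('vzCarCounting_01_FrameBased/Image Source', ...
    'LabelModeActiveChoice','HW')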

When using real-world data, choose a frame size that matches your camera settings. If your camera allows different sizes, you can choose smaller sizes for faster throughput. The minimum size the HDMI input supports is 480p. This model crops the input video frames to 360x640 pixels. You can change the size of the frame by changing the Output Size parameter on the ROI block. Adjust the position of the Region of Interest (ROI) based on your camera setup by changing the x and y position inputs.
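For example, to center the 360x640 ROI in the camera frame, you can compute the x and y position inputs with simple arithmetic, as in this sketch (the 1280x720 frame size is an assumption that depends on your camera settings):

% Compute x and y offsets that center a 360x640 ROI in a 1280x720 frame.
frameWidth = 1280;  frameHeight = 720;   % assumed camera frame size
roiWidth   = 640;   roiHeight   = 360;   % ROI size used by this model
xPos = floor((frameWidth - roiWidth)/2);    % 320
yPos = floor((frameHeight - roiHeight)/2);  % 180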

All of the settings on the Video Capture HDMI block are sent to the target during simulation to properly configure it for capturing the camera video stream.

Now run this model to verify your algorithm on live video captured from the Zynq board into Simulink.

Target the Algorithm to the ARM on the Zynq Board

After you are satisfied with the frame-based simulation, you can target the algorithm to the ARM on the Zynq board. Open the model.

open_system('vzCarCounting_02_SwTargeting')

This software targeting model supports full software targeting to the Zynq when Embedded Coder® and the Embedded Coder Support Package for Xilinx Zynq Platform are installed, enabling External mode simulation, processor-in-the-loop simulation, and full deployment. The model is identical to the frame-based model, except that it uses the ARM software interface version of the Video Capture HDMI block.
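Before building, you can check for an Embedded Coder license from the command line, as in this sketch (the feature name is an assumption and can vary by release):

% Check for an Embedded Coder license before attempting External mode
% simulation or full deployment.
if ~license('test','RTW_Embedded_Coder')
    warning('Embedded Coder license not found; software targeting requires it.')
end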

Before running this model, you must perform additional setup steps to configure the Xilinx cross-compiling tools. For more information, see Setup for ARM Targeting with IP Core Generation Workflow.

To avoid buffering errors when running the Video Viewer in External mode, reduce the duration of the External mode trigger. In the Code menu, select External Mode Control Panel. Click the Signal & Triggering button. In the Trigger options section, set Duration to 1.
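You can apply the same setting programmatically, as in this sketch (it assumes ExtModeTrigDuration is the relevant model parameter in your release):

% Set the External mode trigger duration to 1 to avoid buffering errors
% in the Video Viewer.
set_param('vzCarCounting_02_SwTargeting','ExtModeTrigDuration','1')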

Run the model in External mode. This mode runs the algorithm on the ARM on the Zynq board. You can see the results in the Video Viewer in Simulink. You can adjust the position of the Region of Interest (ROI) by changing the x and y position inputs in Simulink while running the model. The size of the ROI output frame is nontunable while the model is running.
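As one way to move the ROI from the command line while the model runs, you can update the blocks that drive the position inputs, as in this sketch (it assumes the x and y inputs are driven by Constant blocks named 'x pos' and 'y pos', which may not match the model):

% Reposition the ROI during an External mode run by updating the
% Constant blocks that feed the x and y position inputs.
mdl = 'vzCarCounting_02_SwTargeting';
set_param([mdl '/x pos'],'Value','300')
set_param([mdl '/y pos'],'Value','150')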