System Integration of Deep Learning Processor IP Core

Generate the deep learning (DL) processor IP core by using HDL Coder™ and Deep Learning HDL Toolbox™. Integrate the generated DL processor IP core into your system design manually or by using the HDL Coder IP core generation workflow.

You can integrate the deep learning processor IP core into your system by:

  • Generating and integrating the DL processor IP core: Generate a generic deep learning processor IP core by using Deep Learning HDL Toolbox. The generated IP core is a generic HDL Coder IP core with standard AXI4 interfaces, so you can integrate it into your Vivado® or Quartus® design. For the generation and deployment steps, see the sketches after this list.

    Accelerate the integration of the generated DL processor IP core into your system design by:

    • Reading the AXI4 register maps in the generated IP core report. The AXI4 registers allow MATLAB® or other AXI4 Master devices to control and program the DL processor IP core.

    • Using the compiler-generated external memory buffer allocation.

    • Formatting the input and output external memory data.

    Figure: Manually integrate the generic DL processor IP core
  • Reference design based DL processor IP core integration: Generate a generic deep learning processor IP core by using Deep Learning HDL Toolbox, then integrate it into your custom reference design by using HDL Coder. See Create a Custom Hardware Platform (HDL Coder). You can design the preprocessing and postprocessing DUT logic in Simulink® or MATLAB, and use the HDL Coder IP core generation workflow to integrate that logic with the deep learning processor. A sketch of the reference design registration calls appears after the Functions list below.

    Figure: Reference design based deep learning processor IP core integration

    Use MATLAB to run your custom deep learning network on the deep learning processor IP core and retrieve the prediction results from your integrated system design, as shown in the deployment sketch after this list.
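The following is a minimal sketch of generating the generic DL processor IP core from MATLAB. It assumes the default dlhdl.ProcessorConfig settings; in practice you tune the processor configuration for your network and target device before building.

    % Create a deep learning processor configuration with default settings.
    hPC = dlhdl.ProcessorConfig;

    % Generate the generic DL processor IP core. This step runs HDL Coder and
    % produces the IP core together with a report that lists the AXI4 register
    % map and the external memory address map.
    dlhdl.buildProcessor(hPC);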
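Once the DL processor IP core is running on the board, MATLAB can deploy a network to it and retrieve predictions. A minimal sketch follows; the network variable net, the input image inputImg, the bitstream name, and the Ethernet interface are assumptions for illustration.

    % Connect to the target board (vendor and interface are illustrative).
    hTarget = dlhdl.Target('Xilinx', 'Interface', 'Ethernet');

    % Create a workflow object for a pretrained network and a DL processor bitstream.
    hW = dlhdl.Workflow('Network', net, ...
        'Bitstream', 'zcu102_single', ...
        'Target', hTarget);

    % Compile the network into weights, biases, and instructions for the DL processor.
    hW.compile;

    % Program the FPGA and load the compiled network onto the DL processor.
    hW.deploy;

    % Run inference and retrieve the prediction results and profiling data.
    [prediction, speed] = hW.predict(inputImg, 'Profile', 'on');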

Functions

dlhdl.Workflow                           Configure deployment workflow for deep learning neural network
compile                                  Compile workflow object
deploy                                   Deploy the specified neural network to the target FPGA board
predict                                  Predict responses by using deployed network
hdlcoder.ReferenceDesign                 Reference design registration object that describes SoC reference design
registerDeepLearningMemoryAddressSpace   Add memory address space to reference design
registerDeepLearningTargetInterface      Add and register a target interface
validateReferenceDesignForDeepLearning   Checks property values in reference design object
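The reference design functions above are typically called together in a reference design definition file. The sketch below assumes a Vivado based custom reference design; the plugin function name, design name, board name, memory addresses, and JTAG interface are illustrative values, not a complete plugin.

    function hRD = plugin_rd()
    % Sketch of a reference design definition that registers deep learning support.
    hRD = hdlcoder.ReferenceDesign('SynthesisTool', 'Xilinx Vivado');
    hRD.ReferenceDesignName = 'My DL Reference Design';  % illustrative name
    hRD.BoardName = 'Xilinx Zynq UltraScale+ MPSoC ZCU102 Evaluation Kit';

    % Reserve external memory for the DL processor (start address and size are illustrative).
    registerDeepLearningMemoryAddressSpace(hRD, 0x80000000, 0x20000000);

    % Register the interface MATLAB uses to communicate with the deployed network.
    registerDeepLearningTargetInterface(hRD, "JTAG");

    % Check that the reference design properties are consistent for deep learning.
    validateReferenceDesignForDeepLearning(hRD);
    end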

Topics

Generate and Integrate DL Processor IP Core

Reference Design Based DL Processor IP Core Integration