Deep Learning Support from GPU Coder
Generate code from deep learning algorithms and integrate target-specific acceleration libraries
Capabilities and Features
GPU Coder generates optimized CUDA® code for deep learning, embedded vision, and autonomous systems. The generated code calls:
- Optimized NVIDIA® CUDA libraries, and can be used for prototyping on all NVIDIA GPU platforms
- Optimized ARM® libraries, and can be used for prototyping on ARM Mali GPU platforms
You can deploy a variety of trained deep learning networks, such as YOLO, ResNet-50, SegNet, and MobileNet, from Deep Learning Toolbox to NVIDIA GPUs. You can generate optimized code for preprocessing and postprocessing along with your trained deep learning networks to deploy complete algorithms.
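As an illustrative sketch (not taken from this page), deployment typically starts from a MATLAB entry-point function that loads the trained network and runs inference; the function name `resnet_predict` and the 224-by-224 input size below are assumptions for a ResNet-50 workflow:

```matlab
% resnet_predict.m -- hypothetical entry-point function for code generation.
function out = resnet_predict(in)
    % Load the pretrained network once and reuse it across calls.
    persistent net;
    if isempty(net)
        net = coder.loadDeepLearningNetwork('resnet50');
    end
    % Run inference on a single preprocessed input image.
    out = predict(net, in);
end
```

You would then generate CUDA MEX code for this entry point with, for example, `cfg = coder.gpuConfig('mex'); cfg.DeepLearningConfig = coder.DeepLearningConfig('cudnn'); codegen -config cfg resnet_predict -args {ones(224,224,3,'single')}`; check the syntax against your GPU Coder release.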
GPU Coder Interface for Deep Learning Libraries lets you customize the code generated from deep learning algorithms by leveraging target-specific libraries on the embedded target. With this support package, you can integrate with deep learning libraries optimized for specific GPU targets, such as the TensorRT library for NVIDIA GPUs or the ARM Compute Library for ARM Mali GPUs.
GPU Coder Interface for Deep Learning integrates with the following deep learning accelerator libraries and the corresponding GPU architectures:
- cuDNN and TensorRT libraries for NVIDIA GPUs
- ARM Compute Library for ARM Mali GPUs
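As a hedged sketch of how the accelerator library is selected, the choice is typically made on the code configuration object; the option strings below (`'cudnn'`, `'tensorrt'`, `'arm-compute-mali'`) reflect the `coder.DeepLearningConfig` interface, but verify them against your release:

```matlab
% Configure library generation and pick a deep learning accelerator library.
cfg = coder.gpuConfig('lib');                                  % generate a static library
cfg.DeepLearningConfig = coder.DeepLearningConfig('tensorrt'); % NVIDIA TensorRT

% Alternatives (uncomment one):
% cfg.DeepLearningConfig = coder.DeepLearningConfig('cudnn');            % NVIDIA cuDNN
% cfg.DeepLearningConfig = coder.DeepLearningConfig('arm-compute-mali'); % ARM Mali GPUs
```

The same configuration object is then passed to `codegen -config cfg …`, so switching accelerator libraries does not require changes to the entry-point MATLAB code.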
Platform and Release Support
Available on 64-bit Microsoft® Windows® and 64-bit Ubuntu® Linux only.
See the hardware support package system requirements table for version, release, and platform availability, current and prior.