Hardware Support

GPU Coder generates optimized CUDA® code for deep learning, embedded vision, and autonomous systems. The generated code calls optimized NVIDIA® CUDA libraries and is portable across NVIDIA GPUs. You can build and deploy the generated code on NVIDIA GPU platforms such as the NVIDIA DRIVE™ platform.
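For illustration, the minimal sketch below generates a CUDA MEX function from a MATLAB entry-point function with GPU Coder. The function name myFilter, its body, and the input size are hypothetical placeholders, not part of any shipped example.

    % Hypothetical entry-point function (myFilter.m):
    % function out = myFilter(img)
    %     out = imgaussfilt(img, 2);   % example image operation (Image Processing Toolbox)
    % end

    cfg = coder.gpuConfig('mex');    % GPU Coder configuration for a CUDA MEX target
    codegen -config cfg myFilter -args {ones(480,640,'single')}   % generate and build the CUDA code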

You can deploy a variety of trained deep learning networks, such as YOLO, ResNet-50, SegNet, and MobileNet, from Deep Learning Toolbox to NVIDIA GPUs. You can generate optimized code for preprocessing and postprocessing along with your trained deep learning networks to deploy complete algorithms.
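As a rough sketch of that workflow, the example below wraps a pretrained ResNet-50 network in an entry-point function and generates CUDA code that calls the cuDNN library; the file name resnet_predict and the input size are assumptions for illustration.

    % Hypothetical entry-point function (resnet_predict.m):
    % function out = resnet_predict(in)
    %     persistent net;
    %     if isempty(net)
    %         net = coder.loadDeepLearningNetwork('resnet50');  % requires Deep Learning Toolbox
    %     end
    %     out = net.predict(in);
    % end

    cfg = coder.gpuConfig('mex');
    cfg.DeepLearningConfig = coder.DeepLearningConfig('cudnn');   % target the NVIDIA cuDNN library
    codegen -config cfg resnet_predict -args {ones(224,224,3,'single')}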

MATLAB Coder Support Package for NVIDIA GPUs automates the deployment of MATLAB algorithms or Simulink designs on embedded NVIDIA GPUs such as the DRIVE™ platform. Use interactive communication to prototype and develop your MATLAB algorithm, then automatically generate equivalent CUDA code and deploy it to the DRIVE platform to run as a standalone application.

Interactive communication: You can remotely communicate with the NVIDIA target from MATLAB to acquire data from supported sensors and imaging devices connected to the target and then analyze and visualize it in MATLAB. You can log data from supported sensors to help fine-tune your algorithm for early prototyping.
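A minimal sketch of this workflow, assuming the support package is installed and a camera is attached to the target; the board address, credentials, and device name below are placeholders.

    % Connect to the target board over the network (address and credentials are placeholders)
    hwobj = jetson('192.168.1.15', 'ubuntu', 'ubuntu');   % use drive(...) for a DRIVE board

    % Acquire a frame from a camera connected to the target and visualize it in MATLAB
    cam = camera(hwobj, '/dev/video0', [640 480]);
    img = snapshot(cam);
    imshow(img);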

Standalone execution: You can build and deploy the CUDA code generated from your MATLAB algorithm, along with the interfaces to the peripherals and sensors, as a standalone embedded application on the DRIVE platform.
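The sketch below shows one way this deployment might look, assuming the support package is installed; the hardware name string, build directory, entry-point name, and input size are placeholders for illustration.

    % Configure code generation for a standalone CUDA executable on the target
    cfg = coder.gpuConfig('exe');
    cfg.Hardware = coder.hardware('NVIDIA Drive');         % or 'NVIDIA Jetson'
    cfg.Hardware.BuildDir = '~/remoteBuildDir';            % build directory on the target (placeholder path)
    cfg.GenerateExampleMain = 'GenerateCodeAndCompile';    % create an example main for standalone execution

    % Generate, cross-compile, and deploy the executable (entry-point name is a placeholder)
    codegen -config cfg myAlgorithm -args {ones(480,640,3,'uint8')}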

The support package also supports the NVIDIA Jetson® TK1, Jetson TX1, Jetson TX2, Jetson Xavier, and Jetson Nano developer kits.

Platform and Release Support

See the hardware support package system requirements table for the availability of current and prior versions, releases, and platforms.