Deep Learning Toolbox
Create, analyze, and train deep learning networks
Deep Learning Toolbox™ (formerly Neural Network Toolbox™) provides a framework for designing and implementing deep neural networks with algorithms, pretrained models, and apps. You can use convolutional neural networks (ConvNets, CNNs) and long short-term memory (LSTM) networks to perform classification and regression on image, time-series, and text data. Apps and plots help you visualize activations, edit network architectures, and monitor training progress.
For small training sets, you can perform transfer learning with pretrained deep network models (including SqueezeNet, Inception-v3, ResNet-101, GoogLeNet, and VGG-19) and models imported from TensorFlow™-Keras and Caffe.
To speed up training on large datasets, you can distribute computations and data across multicore processors and GPUs on the desktop (with Parallel Computing Toolbox™), or scale up to clusters and clouds, including Amazon EC2® P2, P3, and G3 GPU instances (with MATLAB Distributed Computing Server™).
Networks and Architectures
Use Deep Learning Toolbox to train deep learning networks for classification, regression, and feature learning on image, time-series, and text data.
Convolutional Neural Networks
Learn patterns in images to recognize objects, faces, and scenes. Construct and train convolutional neural networks (CNNs) to perform feature extraction and image recognition.
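As a minimal sketch, a small image classification CNN can be assembled as a layer array and trained with trainNetwork; the layer sizes and the imds image datastore below are placeholders for illustration.

% Minimal CNN sketch (layer sizes and the imds datastore are illustrative)
layers = [
    imageInputLayer([28 28 1])
    convolution2dLayer(3,16,'Padding','same')
    batchNormalizationLayer
    reluLayer
    maxPooling2dLayer(2,'Stride',2)
    convolution2dLayer(3,32,'Padding','same')
    batchNormalizationLayer
    reluLayer
    fullyConnectedLayer(10)
    softmaxLayer
    classificationLayer];

options = trainingOptions('sgdm','MaxEpochs',4,'Plots','training-progress');
net = trainNetwork(imds,layers,options);   % imds: an imageDatastore of labeled images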
Long Short-Term Memory Networks
Learn long-term dependencies in sequential data including signal, audio, text, and other time-series data. Construct and train long short-term memory (LSTM) networks to perform classification and regression.
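For example, a sequence-to-label LSTM classifier can be defined as a layer array; numFeatures, numHiddenUnits, numClasses, and the training data names are illustrative.

% Sequence classification with an LSTM (sizes and data names are illustrative)
numFeatures = 12; numHiddenUnits = 100; numClasses = 9;
layers = [
    sequenceInputLayer(numFeatures)
    lstmLayer(numHiddenUnits,'OutputMode','last')
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer];
options = trainingOptions('adam','MaxEpochs',30);
net = trainNetwork(XTrain,YTrain,layers,options);   % XTrain: cell array of sequences, YTrain: categorical labels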
Use various network structures such as series, directed acyclic graph (DAG), and recurrent architectures to build your deep learning network. DAG architectures offer a wider range of network topologies, including networks with skip connections or layers connected in parallel.
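A DAG network is described by a layer graph; the sketch below adds a skip connection around one convolutional layer with an addition layer (layer names and sizes are illustrative).

% Sketch of a DAG network with a skip connection (layer names are illustrative)
layers = [
    imageInputLayer([28 28 1],'Name','input')
    convolution2dLayer(3,16,'Padding','same','Name','conv_1')
    reluLayer('Name','relu_1')
    convolution2dLayer(3,16,'Padding','same','Name','conv_2')
    additionLayer(2,'Name','add')
    fullyConnectedLayer(10,'Name','fc')
    softmaxLayer('Name','softmax')
    classificationLayer('Name','output')];
lgraph = layerGraph(layers);
lgraph = connectLayers(lgraph,'relu_1','add/in2');   % skip connection around conv_2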
Network Design and Analysis
Create, edit, visualize, and analyze deep learning networks with interactive apps.
Design Deep Learning Networks
Create a deep network from scratch using the Deep Network Designer app. Import a pretrained model, visualize the network structure, edit the layers, and tune parameters.
Analyze Deep Learning Networks
Analyze your network architecture to detect and debug errors, warnings, and layer compatibility issues before training. Visualize the network topology and view details such as learnable parameters and activations.
Transfer Learning and Pretrained Models
Import pretrained models into MATLAB for inference.
Transfer learning is commonly used in deep learning applications. Take a pretrained network and use it as a starting point to learn a new task, quickly transferring the learned features with a smaller number of training images.
Access the latest models from research with a single line of code. Import pretrained models including AlexNet, GoogLeNet, VGG-16, VGG-19, ResNet-101, Inception-v3, and SqueezeNet. See pretrained models for a complete list of models.
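A hedged sketch of the transfer learning workflow: load a pretrained network, replace its final layers to match a new set of classes, and retrain on a small labeled image datastore. The class count and datastore are placeholders, and the layer names shown follow the GoogLeNet naming; images must match the network input size (224-by-224 for GoogLeNet).

% Transfer learning sketch with GoogLeNet (requires the GoogLeNet support package)
net = googlenet;
lgraph = layerGraph(net);
numClasses = 5;                                          % illustrative class count
lgraph = replaceLayer(lgraph,'loss3-classifier',fullyConnectedLayer(numClasses,'Name','fc_new'));
lgraph = replaceLayer(lgraph,'output',classificationLayer('Name','output_new'));
options = trainingOptions('sgdm','InitialLearnRate',1e-4,'MaxEpochs',6);
netTransfer = trainNetwork(imdsTrain,lgraph,options);    % imdsTrain: labeled imageDatastore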
Visualize network topologies, training progress, and activations of the learned features in a deep learning network.
Visualize a network topology with its layers and connections. Use the analyzeNetwork function to analyze the network architecture interactively.
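For instance, calling analyzeNetwork on a pretrained network opens the interactive analysis report (SqueezeNet is shown only as an example and requires its support package):

% Analyze a network's architecture before training
net = squeezenet;
analyzeNetwork(net)   % reports layer sizes, learnable parameters, and any errors or warnings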
View training progress in every iteration with plots of various metrics. Plot the validation metrics against the training metrics to visually analyze whether the network is overfitting.
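Training-progress plots and validation metrics are configured through trainingOptions; a sketch, with the validation datastore as a placeholder:

% Plot training and validation metrics during training (validation data is illustrative)
options = trainingOptions('sgdm', ...
    'ValidationData',imdsValidation, ...
    'ValidationFrequency',30, ...
    'Plots','training-progress');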
Extract activations corresponding to a layer, visualize the learned features, and train a machine learning classifier using the activations. Use the deepDreamImage function to understand and diagnose network behavior by synthesizing images that strongly activate network layers and highlight the learned features.
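A minimal sketch, assuming a pretrained network net and an augmented image datastore: extract features from a deep layer, train a conventional classifier on them, and visualize what a layer has learned with deepDreamImage. The layer names are illustrative, and fitcecoc requires Statistics and Machine Learning Toolbox.

% Feature extraction and feature visualization (layer names are illustrative)
featuresTrain = activations(net,augimdsTrain,'pool5','OutputAs','rows');
classifier = fitcecoc(featuresTrain,YTrain);    % SVM classifier trained on deep features

channels = 1:6;
I = deepDreamImage(net,'conv5',channels);       % images that strongly activate the conv5 layer
montage(I)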
Interoperate with deep learning frameworks from MATLAB.
Import and export ONNX models within MATLAB® for interoperability with other deep learning frameworks. ONNX enables models to be trained in one framework and transferred to another for inference.
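A sketch of ONNX round-tripping (file names are placeholders; the ONNX converter support package is required):

% Export a trained network to ONNX and import an ONNX model back into MATLAB
exportONNXNetwork(net,'myModel.onnx');
netImported = importONNXNetwork('otherModel.onnx','OutputLayerType','classification');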
Import models from TensorFlow-Keras into MATLAB for inference and transfer learning using the importKerasNetwork and importKerasLayers functions.
Import models from Caffe Model Zoo into MATLAB for inference and transfer learning using the importCaffeNetwork and importCaffeLayers functions.
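For example (file names are placeholders; the corresponding importer support packages are required):

% Import a Keras model saved as HDF5 and a Caffe model defined by prototxt and caffemodel files
netKeras = importKerasNetwork('kerasModel.h5');
netCaffe = importCaffeNetwork('deploy.prototxt','weights.caffemodel');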
Speed up deep learning training using GPU, cloud, and distributed computing.
Speed up deep learning training and inference with high-performance NVIDIA® GPUs. You can perform training on a single workstation GPU or scale to multiple GPUs with DGX systems in data centers or on the cloud. You can use MATLAB with Parallel Computing Toolbox and most CUDA®-enabled NVIDIA GPUs that have compute capability 3.0 or higher.
Speed up deep learning training with cloud instances. Use high-performance GPU instances for the best results.
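Hardware selection is a single training option; a sketch, assuming Parallel Computing Toolbox is available:

% Select the execution environment for training
options = trainingOptions('sgdm','ExecutionEnvironment','multi-gpu');   % or 'gpu', 'parallel', 'cpu'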
Code Generation and Deployment
Deploy trained networks to embedded systems or integrate them with a wide range of production environments.
Shallow Neural Networks
Build and train shallow neural networks with a variety of supervised and unsupervised architectures.
Train supervised shallow neural networks to model and control dynamic systems, classify noisy data, and predict future events.
Find relationships within data and automatically define classification schemes by letting the shallow network continually adjust itself to new inputs. Use self-organizing, unsupervised networks, competitive layers, and self-organizing maps.
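For instance, a self-organizing map can cluster unlabeled data with a few commands (the map size and the input matrix x are illustrative):

% Cluster data with a self-organizing map (map dimensions are illustrative)
net = selforgmap([8 8]);
net = train(net,x);        % x: matrix of input column vectors
y = net(x);
classes = vec2ind(y);      % index of the winning neuron for each sample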
Perform unsupervised feature transformation by extracting low-dimensional features from your data set using autoencoders. You can also use stacked autoencoders for supervised learning by training and stacking multiple encoders.
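A sketch of stacked autoencoders for supervised learning, assuming an input matrix X and a one-hot target matrix T (hidden sizes are illustrative):

% Train two autoencoders, a softmax layer on the deepest features, then stack them
autoenc1 = trainAutoencoder(X,100);
feat1    = encode(autoenc1,X);
autoenc2 = trainAutoencoder(feat1,50);
feat2    = encode(autoenc2,feat1);
softnet  = trainSoftmaxLayer(feat2,T);
deepnet  = stack(autoenc1,autoenc2,softnet);   % stacked network for supervised learning
deepnet  = train(deepnet,X,T);                 % fine-tune end to end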
Deep Network Designer
Edit and build deep learning networks
Import and export models using the ONNX model format for interoperability with other deep learning frameworks
Visualize, analyze, and find problems in network architectures before training
Import LSTM and BiLSTM layers from TensorFlow-Keras
Long Short-Term Memory (LSTM) Networks
Solve regression problems with LSTM networks and learn from full sequence context using bidirectional LSTM layers
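A sketch of a sequence-to-one regression network with a bidirectional LSTM layer (sizes and data names are illustrative):

% Sequence regression with a bidirectional LSTM (sizes are illustrative)
numFeatures = 3; numHiddenUnits = 125;
layers = [
    sequenceInputLayer(numFeatures)
    bilstmLayer(numHiddenUnits,'OutputMode','last')
    fullyConnectedLayer(1)
    regressionLayer];
options = trainingOptions('adam','MaxEpochs',60);
net = trainNetwork(XTrain,YTrain,layers,options);   % YTrain: numeric responses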
Deep Learning Optimization
Improve network training using Adam, RMSProp, and gradient clipping
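For example, the solver and the gradient clipping threshold are selected through trainingOptions:

% Adam solver with gradient clipping ('rmsprop' is also available)
options = trainingOptions('adam', ...
    'GradientThreshold',1, ...
    'GradientThresholdMethod','l2norm', ...
    'InitialLearnRate',0.001);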