

Deep Learning Import, Export, and Customization

Import, export, and customize deep learning networks, and customize layers, training loops, and loss functions

Import networks and network architectures from TensorFlow™-Keras, Caffe, and ONNX™ (Open Neural Network Exchange) model formats. You can also export trained Deep Learning Toolbox™ networks to the ONNX model format.
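As a minimal sketch of this workflow (the file names are hypothetical), importing an ONNX model and later exporting a trained network back to the ONNX format might look like:

```matlab
% Import a pretrained classification network from an ONNX model file.
% "model.onnx" is a hypothetical file name; OutputLayerType tells the
% importer which kind of output layer to append.
net = importONNXNetwork("model.onnx", "OutputLayerType", "classification");

% ... use or retrain the network ...

% Export a trained Deep Learning Toolbox network to the ONNX model format.
exportONNXNetwork(net, "exportedModel.onnx");
```

If the imported model contains layers that the importer does not support, use importONNXLayers instead, then locate and replace the resulting placeholder layers with findPlaceholderLayers and replaceLayer.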

You can define your own custom deep learning layers for your problem. You can specify a custom loss function by using a custom output layer, and define custom layers with or without learnable parameters. For example, you can use a custom weighted classification layer with weighted cross-entropy loss for classification problems with an imbalanced class distribution. After defining a custom layer, you can check that the layer is valid, is GPU compatible, and outputs correctly defined gradients.
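A custom layer is a class that inherits from nnet.layer.Layer. As a minimal sketch, a PReLU-style layer with one learnable parameter (the class name and initialization here are illustrative, not the toolbox's own implementation) might look like:

```matlab
classdef preluLayer < nnet.layer.Layer
    % Example custom PReLU layer with a learnable scaling parameter.

    properties (Learnable)
        Alpha   % Learnable scaling coefficient applied to negative inputs
    end

    methods
        function layer = preluLayer(numChannels, name)
            % Constructor: set the layer name, description, and
            % initialize the learnable parameter.
            layer.Name = name;
            layer.Description = "PReLU with " + numChannels + " channels";
            layer.Alpha = rand([1 1 numChannels]);
        end

        function Z = predict(layer, X)
            % PReLU: pass positive values through unchanged and
            % scale negative values by Alpha.
            Z = max(X, 0) + layer.Alpha .* min(X, 0);
        end
    end
end
```

After defining the layer, you can verify it with checkLayer, for example `checkLayer(preluLayer(20,"prelu"), [24 24 20], 'ObservationDimension', 4)`, which checks validity, GPU compatibility, and the gradients.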

If the trainingOptions function does not provide the training options that you need for your task, or a custom output layer does not support the loss function that you need, then you can define a custom training loop. For networks that cannot be created using layer graphs, you can define a custom network as a function. To learn more, see Define Custom Training Loops, Loss Functions, and Networks.
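A minimal custom training loop built from the functions listed below might be sketched as follows; `dlnet` (a dlnetwork whose final operation is assumed to be softmax), the mini-batch data `X` and targets `T`, and `numIterations` are all assumed to exist:

```matlab
% Wrap the data in a dlarray with dimension labels:
% spatial, spatial, channel, batch.
X = dlarray(X, "SSCB");

velocity = [];        % SGDM solver state
learnRate = 0.01;
momentum = 0.9;

for iteration = 1:numIterations
    % Evaluate the model gradients and loss using dlfeval and the
    % model gradients function defined below.
    [gradients, loss] = dlfeval(@modelGradients, dlnet, X, T);

    % Update the network parameters using the SGDM solver.
    [dlnet, velocity] = sgdmupdate(dlnet, gradients, velocity, ...
        learnRate, momentum);
end

function [gradients, loss] = modelGradients(dlnet, X, T)
    % Forward pass in training mode, compute the loss, and use
    % automatic differentiation to get the gradients.
    Y = forward(dlnet, X);
    loss = crossentropy(Y, T);
    gradients = dlgradient(loss, dlnet.Learnables);
end
```

The same pattern applies with adamupdate or rmspropupdate in place of sgdmupdate, with the solver state variables adjusted accordingly.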

Functions


importKerasNetwork - Import a pretrained Keras network and weights
importKerasLayers - Import layers from Keras network
importCaffeNetwork - Import pretrained convolutional neural network models from Caffe
importCaffeLayers - Import convolutional neural network layers from Caffe
importONNXNetwork - Import pretrained ONNX network
importONNXLayers - Import layers from ONNX network
exportONNXNetwork - Export network to ONNX model format
findPlaceholderLayers - Find placeholder layers in network architecture imported from Keras or ONNX
replaceLayer - Replace layer in layer graph
assembleNetwork - Assemble deep learning network from pretrained layers
PlaceholderLayer - Layer replacing an unsupported Keras layer, ONNX layer, or unsupported functionality from functionToLayerGraph
checkLayer - Check validity of custom layer
setLearnRateFactor - Set learn rate factor of layer learnable parameter
setL2Factor - Set L2 regularization factor of layer learnable parameter
getLearnRateFactor - Get learn rate factor of layer learnable parameter
getL2Factor - Get L2 regularization factor of layer learnable parameter
dlnetwork - Deep learning network for custom training loops
forward - Compute deep learning network output for training
predict - Compute deep learning network output for inference
adamupdate - Update parameters using adaptive moment estimation (Adam)
rmspropupdate - Update parameters using root mean squared propagation (RMSProp)
sgdmupdate - Update parameters using stochastic gradient descent with momentum (SGDM)
dlupdate - Update parameters using custom function
dlarray - Deep learning array for custom training loops
dlgradient - Compute gradients for custom training loops using automatic differentiation
dlfeval - Evaluate deep learning model for custom training loops
dlmtimes - (Not recommended) Batch matrix multiplication for deep learning
dims - Dimension labels of dlarray
finddim - Find dimensions with specified label
stripdims - Remove dlarray labels
extractdata - Extract data from dlarray
functionToLayerGraph - Convert deep learning model function to a layer graph
dlconv - Deep learning convolution
dltranspconv - Deep learning transposed convolution
lstm - Long short-term memory
gru - Gated recurrent unit
fullyconnect - Sum all weighted input data and apply a bias
relu - Apply rectified linear unit activation
leakyrelu - Apply leaky rectified linear unit activation
batchnorm - Normalize each channel of mini-batch
crosschannelnorm - Cross channel square-normalize using local responses
avgpool - Pool data to average values over spatial dimensions
maxpool - Pool data to maximum value
maxunpool - Unpool the output of a maximum pooling operation
softmax - Apply softmax activation to channel dimension
crossentropy - Cross-entropy loss for classification tasks
sigmoid - Apply sigmoid activation
mse - Half mean squared error
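The deep learning operations listed above can also be composed directly into a model function, for networks defined as functions rather than layer graphs. As an illustrative sketch, a small fully connected classifier (the `parameters` struct and its field names are assumptions, following the parameters-plus-input convention these operations use) might look like:

```matlab
function Y = model(parameters, X)
    % Simple model function built from deep learning operations.
    % "parameters" is an assumed struct of dlarray weights and biases;
    % "X" is a labeled dlarray input.
    Y = fullyconnect(X, parameters.fc1.Weights, parameters.fc1.Bias);
    Y = relu(Y);
    Y = fullyconnect(Y, parameters.fc2.Weights, parameters.fc2.Bias);
    Y = softmax(Y);   % probabilities over the channel dimension
end
```

A model function like this can be differentiated with dlgradient inside a function evaluated by dlfeval, or converted to a layer graph with functionToLayerGraph where the operations it uses are supported.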

Topics

Custom Layers

Define Custom Deep Learning Layers

Learn how to define custom deep learning layers.

Check Custom Layer Validity

Learn how to check the validity of custom deep learning layers.

Define Custom Deep Learning Layer with Learnable Parameters

This example shows how to define a PReLU layer and use it in a convolutional neural network.

Define Custom Deep Learning Layer with Multiple Inputs

This example shows how to define a custom weighted addition layer and use it in a convolutional neural network.

Define Custom Classification Output Layer

This example shows how to define a custom classification output layer with sum of squares error (SSE) loss and use it in a convolutional neural network.

Define Custom Weighted Classification Layer

This example shows how to define and create a custom weighted classification output layer with weighted cross entropy loss.

Define Custom Regression Output Layer

This example shows how to define a custom regression output layer with mean absolute error (MAE) loss and use it in a convolutional neural network.

Specify Custom Layer Backward Function

This example shows how to define a PReLU layer and specify a custom backward function.

Specify Custom Output Layer Backward Loss Function

This example shows how to define a weighted classification layer and specify a custom backward loss function.

Network Training and Assembly

Train Generative Adversarial Network (GAN)

This example shows how to train a generative adversarial network (GAN) to generate images.

Train Conditional Generative Adversarial Network (CGAN)

This example shows how to train a conditional generative adversarial network (CGAN) to generate images.

Train a Siamese Network for Dimensionality Reduction

This example shows how to train a Siamese network to compare handwritten digits using dimensionality reduction.

Train a Siamese Network to Compare Images

This example shows how to train a Siamese network to identify similar images of handwritten characters.

Define Custom Training Loops, Loss Functions, and Networks

Learn how to define and customize deep learning training loops, loss functions, and networks using automatic differentiation.

Specify Training Options in Custom Training Loop

Learn how to specify common training options in a custom training loop.

Train Network Using Custom Training Loop

This example shows how to train a network that classifies handwritten digits with a custom learning rate schedule.

Update Batch Normalization Statistics in Custom Training Loop

This example shows how to update the network state in a custom training loop.

Make Predictions Using dlnetwork Object

This example shows how to make predictions using a dlnetwork object by splitting data into mini-batches.

Train Network Using Model Function

This example shows how to create and train a deep learning network by using functions rather than a layer graph or a dlnetwork.

Update Batch Normalization Statistics Using Model Function

This example shows how to update the network state in a network defined as a function.

Make Predictions Using Model Function

This example shows how to make predictions using a model function by splitting data into mini-batches.

Compare Layer Weight Initializers

This example shows how to train deep learning networks with different weight initializers.

Specify Custom Weight Initialization Function

This example shows how to create a custom He weight initialization function for convolution layers followed by leaky ReLU layers.

Assemble Network from Pretrained Keras Layers

This example shows how to import the layers from a pretrained Keras network, replace the unsupported layers with custom layers, and assemble the layers into a network ready for prediction.

Multiple-Input and Multiple-Output Networks

Multiple-Input and Multiple-Output Networks

Learn how to define and train deep learning networks with multiple inputs or multiple outputs.

Train Network with Multiple Outputs

This example shows how to train a deep learning network with multiple outputs that predict both labels and angles of rotations of handwritten digits.

Assemble Multiple-Output Network for Prediction

This example shows how to assemble a multiple output network for prediction.

Automatic Differentiation

Automatic Differentiation Background

Learn how automatic differentiation works.

Use Automatic Differentiation In Deep Learning Toolbox

How to use automatic differentiation in deep learning.

List of Functions with dlarray Support

View the list of functions that support dlarray objects.

Grad-CAM Reveals the Why Behind Deep Learning Decisions

This example shows how to use the gradient-weighted class activation mapping (Grad-CAM) technique to understand why a deep learning network makes its classification decisions.
