主要内容

Edit Customizable Neural Network Using Network Editor in Classification Learner or Regression Learner

Since R2026a

In the Classification Learner or Regression Learner app, if you have a Deep Learning Toolbox™ license, you can train neural networks with customized architecture and layer settings. This example shows how to edit a customizable neural network using the Network Editor, and then train the model and use training progress plots to check for overfitting. Although this example uses Classification Learner, you can use the same workflow in Regression Learner on a data set with a numeric response variable.

  1. In the Command Window, load the ionosphere data set.

    load ionosphere.mat

  2. From the Command Window, open the Classification Learner app. Populate the New Session from Arguments dialog box with the predictor matrix X and the response variable Y.

    classificationLearner(X,Y)

    The default validation scheme is 5-fold cross-validation, to protect against overfitting. For this example, do not change the default validation setting. To accept the selections and continue, click Start Session.

    New session from Arguments dialog box
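    If you want to see what the app's default validation scheme corresponds to at the command line, a 5-fold stratified partition can be created with cvpartition. This is an illustrative sketch only; the app manages its partition internally, and you do not need to run this code for the example.

    ```matlab
    % Sketch: a 5-fold partition of the ionosphere response,
    % analogous to the app's default validation scheme.
    load ionosphere.mat
    cvp = cvpartition(Y,"KFold",5);
    disp(cvp)   % summarizes the five training/test folds
    ```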

  3. Create a customizable neural network model. In the Customizable Neural Networks section of the Models gallery, click Fully Connected Customizable Neural Network.

    Customizable neural network classifiers

  4. The Model Hyperparameters section of the model Summary tab contains training and network options. Change the number of fully connected layers to 2. The app updates the display of the network architecture.

    Summary tab of fully connected customizable neural network
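    As a rough command-line counterpart to this step, the fitcnet function accepts a LayerSizes argument that specifies the number of fully connected layers and their sizes. This sketch assumes the app's default of 10 activations per layer; it trains on all of the data rather than using the app's draft-model workflow.

    ```matlab
    % Sketch: a neural network classifier with two fully connected
    % layers of 10 activations each, mirroring the app defaults.
    load ionosphere.mat
    Mdl = fitcnet(X,Y,"LayerSizes",[10 10]);
    ```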

  5. Click the Customize Network button to launch the Network Editor. You can add new layers to a network by dragging blocks from the Layer library pane into the Network pane. To quickly search for layers, use the Filter layers search box in the Layer library pane.

    Network Editor window showing a fully connected customizable neural network

  6. Click the first fully connected layer block (fc_1) to display its properties. To learn more about fully connected layers, you can click the question mark icon in the Properties panel. Increase the number of activations in the first fully connected layer by changing the OutputSize value from 10 to 15.

    Fully connected layer properties

  7. Repeat the previous step for the fc_2 block.
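    The same property change can be made programmatically when you build layers by hand with Deep Learning Toolbox. This sketch creates a fully connected layer whose OutputSize matches the value set in the Network Editor; the layer name is illustrative.

    ```matlab
    % Sketch: a fully connected layer with 15 activations, equivalent
    % to setting OutputSize to 15 in the Network Editor.
    layer = fullyConnectedLayer(15,"Name","fc_1");
    disp(layer.OutputSize)   % 15
    ```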

  8. Drag an additionLayer block from the Combination section of the Layer library pane and place it between the fc_2 and relu_2 blocks.

    Addition layer block

  9. Connect the addition block to the network to create a skip connection. First, click the connector line between the fc_2 and relu_2 blocks and press Delete. Then click the out port of the fc_2 block and drag the connector line to the in port of the addition block.

    Connect fc_2 to the addition layer block

    Click the out port of the addition block and drag the connector line to the in port of the relu_2 block.

    Connect the addition layer block to relu_2

    To complete the connection, click the out port of the relu_1 block and drag the connector line to the in port of the addition block.

    Connect the addition layer block to relu_1
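    The skip connection built in steps 8 and 9 corresponds to the following command-line construction. This is a sketch, not the app's internal representation: the layer names mirror the blocks in the Network Editor, the output sizes assume the edits above, and the ionosphere data determines the 34 inputs and 2 classes.

    ```matlab
    % Sketch: the edited network as a layer graph. layerGraph connects
    % the layers in sequence, routing fc_2 into the addition layer's
    % first input.
    layers = [
        featureInputLayer(34,"Name","input")
        fullyConnectedLayer(15,"Name","fc_1")
        reluLayer("Name","relu_1")
        fullyConnectedLayer(15,"Name","fc_2")
        additionLayer(2,"Name","add")       % two inputs: fc_2 and relu_1
        reluLayer("Name","relu_2")
        fullyConnectedLayer(2,"Name","fc_out")
        softmaxLayer("Name","softmax")];
    lgraph = layerGraph(layers);
    % Route relu_1 into the addition layer's second input to form
    % the skip connection.
    lgraph = connectLayers(lgraph,"relu_1","add/in2");
    ```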

  10. In the Close section of the Designer tab, click Accept Changes to accept your changes and close the Network Editor.

  11. In the Neural Network Results section of the Plots and Results gallery, click Analyze Network.

    Analyze Network button

    The Analyze Network tab displays a network diagram and information about each layer.

    Analyze Network tab

    This network is a simple residual network, because it contains a skip connection between the first ReLU layer and the addition layer. Skip connections help a neural network learn residual functions (the difference between input and output). For more information about layer types, see List of Deep Learning Layers (Deep Learning Toolbox).
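    The same architecture check is available outside the app through the analyzeNetwork function, which opens the Deep Learning Network Analyzer. This short sketch analyzes a plain fully connected network; the layer sizes are illustrative.

    ```matlab
    % Sketch: inspect a layer graph outside the app.
    layers = [
        featureInputLayer(34)
        fullyConnectedLayer(15)
        reluLayer
        fullyConnectedLayer(2)
        softmaxLayer];
    analyzeNetwork(layerGraph(layers))   % opens the Network Analyzer
    ```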

  12. Train the customizable neural network model and the draft fine tree model (the default draft model that the app creates when you start a session) in the Models pane. In the Train section of the Learn tab, click Train All and select Train All.

    The model validation accuracy values are displayed in the Models pane.

    Accuracy values of trained models

    The customizable neural network model has a slightly higher validation accuracy than the fine tree model.
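    A similar comparison can be made at the command line by cross-validating both model types on the same partition and comparing accuracies. This sketch uses default fitcnet and fitctree settings, so the numbers will not exactly match the app's models.

    ```matlab
    % Sketch: compare 5-fold validation accuracy of a default neural
    % network and a default tree on the same partition.
    load ionosphere.mat
    cvp   = cvpartition(Y,"KFold",5);
    nnCV  = fitcnet(X,Y,"CVPartition",cvp);
    trCV  = fitctree(X,Y,"CVPartition",cvp);
    accNN = 1 - kfoldLoss(nnCV);   % validation accuracy, neural network
    accTr = 1 - kfoldLoss(trCV);   % validation accuracy, tree
    ```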

  13. To view plots of training progress at each iteration for the customizable neural network model, click Training Progress in the Neural Network Results section of the Plots and Results gallery. Ensure that the Training loss, Gradient, and Step size check boxes are selected.

    Training progress plots of the neural network model

    The top plot shows that the cross-entropy training loss steadily decreases with successive iterations. A flattening of the loss curve indicates that the model is approaching its optimal performance, and that further training can lead to overfitting. In this example, training terminates when the optimization algorithm reaches the default limit of 30 iterations. You can set the iteration limit of a draft model in the Model Hyperparameters section of the model Summary tab.

    The gradient value rapidly decreases in the first few iterations and then approaches zero, indicating that the algorithm converges to near-optimal values for the network learnable parameters. The step size fluctuates between very small values and 1.5. For more information on the optimization algorithm, see the fitcnet reference page.
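    Similar per-iteration curves can be produced at the command line from the TrainingHistory property of a model returned by fitcnet. This is a sketch: fitcnet cannot represent the skip connection, so it uses a plain two-layer network, and the IterationLimit of 30 mirrors the app's draft-model default.

    ```matlab
    % Sketch: reproduce a training progress curve at the command line.
    load ionosphere.mat
    Mdl = fitcnet(X,Y,"LayerSizes",[15 15],"IterationLimit",30);
    hist = Mdl.TrainingHistory;    % table of per-iteration values
    plot(hist.Iteration,hist.TrainingLoss)
    xlabel("Iteration"); ylabel("Cross-entropy loss")
    ```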
