Create Neural Network Object
This topic is part of the design workflow described in Workflow for Neural Network Design.
The easiest way to create a neural network is to use one of the network creation functions. To investigate how this is done, you can create a simple, two-layer feedforward network using the command feedforwardnet:
net = feedforwardnet
net =

Neural Network

              name: 'Feed-Forward Neural Network'
          userdata: (your custom info)

    dimensions:

         numInputs: 1
         numLayers: 2
        numOutputs: 1
    numInputDelays: 0
    numLayerDelays: 0
 numFeedbackDelays: 0
 numWeightElements: 10
        sampleTime: 1

    connections:

       biasConnect: [1; 1]
      inputConnect: [1; 0]
      layerConnect: [0 0; 1 0]
     outputConnect: [0 1]

    subobjects:

            inputs: {1x1 cell array of 1 input}
            layers: {2x1 cell array of 2 layers}
           outputs: {1x2 cell array of 1 output}
            biases: {2x1 cell array of 2 biases}
      inputWeights: {2x1 cell array of 1 weight}
      layerWeights: {2x2 cell array of 1 weight}

    functions:

          adaptFcn: 'adaptwb'
        adaptParam: (none)
          derivFcn: 'defaultderiv'
         divideFcn: 'dividerand'
       divideParam: .trainRatio, .valRatio, .testRatio
        divideMode: 'sample'
           initFcn: 'initlay'
        performFcn: 'mse'
      performParam: .regularization, .normalization
          plotFcns: {'plotperform', plottrainstate, ploterrhist,
                     plotregression}
        plotParams: {1x4 cell array of 4 params}
          trainFcn: 'trainlm'
        trainParam: .showWindow, .showCommandLine, .show, .epochs,
                    .time, .goal, .min_grad, .max_fail, .mu, .mu_dec,
                    .mu_inc, .mu_max

    weight and bias values:

                IW: {2x1 cell} containing 1 input weight matrix
                LW: {2x2 cell} containing 1 layer weight matrix
                 b: {2x1 cell} containing 2 bias vectors

    methods:

             adapt: Learn while in continuous use
         configure: Configure inputs & outputs
            gensim: Generate Simulink model
              init: Initialize weights & biases
           perform: Calculate performance
               sim: Evaluate network outputs given inputs
             train: Train network with examples
              view: View diagram
       unconfigure: Unconfigure inputs & outputs

    evaluate:       outputs = net(inputs)
This display is an overview of the network object, which is used to store all of the information that defines a neural network. There is a lot of detail here, but there are a few key sections that can help you to see how the network object is organized.
The dimensions section stores the overall structure of the network. Here you can see that there is one input to the network (although the one input can be a vector containing many elements), one network output, and two layers.
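For example, you can read these dimensions directly at the command line; the property names are the ones listed under dimensions in the display above:

net.numInputs   % 1 (a single input, which can be a vector with many elements)
net.numLayers   % 2
net.numOutputs  % 1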
The connections section stores the connections between components of the network. For example, there is a bias connected to each layer, the input is connected to layer 1, and the output comes from layer 2. You can also see that layer 1 is connected to layer 2. (The rows of net.layerConnect represent the destination layer, and the columns represent the source layer. A one in this matrix indicates a connection, and a zero indicates no connection. For this example, there is a single one in element (2,1) of the matrix.)
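You can confirm this connectivity at the command line; the property names are those shown under connections above, and the expected values (described in the comments) follow from the display:

net.layerConnect  % rows: destination layer, columns: source layer
                  % [0 0; 1 0] -- the single 1 in element (2,1) means layer 1 feeds layer 2
net.biasConnect   % [1; 1] -- a bias is connected to each of the two layers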
The key subobjects of the network object are inputs, layers, outputs, biases, inputWeights, and layerWeights. View the layers subobject for the first layer with the command
net.layers{1}
Neural Network Layer

              name: 'Hidden'
        dimensions: 10
       distanceFcn: (none)
     distanceParam: (none)
         distances: []
           initFcn: 'initnw'
       netInputFcn: 'netsum'
     netInputParam: (none)
         positions: []
             range: [10x2 double]
              size: 10
       topologyFcn: (none)
       transferFcn: 'tansig'
     transferParam: (none)
          userdata: (your custom info)
The number of neurons in a layer is given by its size property. In this case, the layer has 10 neurons, which is the default size for the feedforwardnet command. The net input function is netsum (summation) and the transfer function is tansig (hyperbolic tangent sigmoid).
If you wanted to change the transfer function to logsig, for example, you could execute the command:
net.layers{1}.transferFcn = 'logsig';
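You can confirm the change by reading the property back, and, if you like, redisplay the network diagram (view is one of the methods listed in the network display above):

net.layers{1}.transferFcn   % now returns 'logsig'
view(net)                   % optional: redraw the network diagram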
To view the layerWeights subobject for the weight between layer 1 and layer 2, use the command:
net.layerWeights{2,1}
Neural Network Weight

            delays: 0
           initFcn: (none)
        initConfig: .inputSize
             learn: true
          learnFcn: 'learngdm'
        learnParam: .lr, .mc
              size: [0 10]
         weightFcn: 'dotprod'
       weightParam: (none)
          userdata: (your custom info)
The weight function is dotprod, which represents standard matrix multiplication (dot product). Note that the size of this layer weight is 0-by-10. The weight has zero rows because the network has not yet been configured for a particular data set. The number of output neurons equals the number of rows in your target vector. During the configuration process, you provide the network with example inputs and targets, and the number of output neurons can then be assigned.
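As a quick sketch of how configuration fills in this size, you can configure the network with one of the toolbox sample data sets (simplefit_dataset is used here as an example; it has a scalar target), after which the layer weight acquires one row per output neuron:

[x,t] = simplefit_dataset;     % example inputs x and targets t
net = configure(net,x,t);      % assign input and output sizes from the data
net.layerWeights{2,1}.size     % 1-by-10 for this scalar target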
This gives you some idea of how the network object is organized. For many applications, you will not need to be concerned about making changes directly to the network object, since that is taken care of by the network creation functions. It is usually only when you want to override the system defaults that it is necessary to access the network object directly. Other topics will show how this is done for particular networks and training methods.
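For instance, here is a minimal sketch of overriding two of the defaults listed earlier (the values are only illustrative):

net.trainParam.epochs = 500;        % set the maximum number of training epochs
net.divideParam.trainRatio = 0.80;  % fraction of samples used for training
net.divideParam.valRatio   = 0.10;  % fraction used for validation
net.divideParam.testRatio  = 0.10;  % fraction used for testing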
If you want to investigate the network object in more detail, you will find that object listings, such as the one shown above, contain links to help on each subobject. Click the links, and you can selectively investigate those parts of the object that are of interest to you.