How to create a custom neural network?

7 views (last 30 days)
Hello!
I am interested in creating a neural network of the following type: https://www.dropbox.com/s/4x3yn8kchl84vm8/sketch.png - this is a pre-structured network that mimics a certain equation. So far I am having difficulties creating the network structure with the network function for custom networks.
This is the code as I have it now (I decided to start with 3 inputs first):
net = network;
net.numInputs = 3; net.numLayers = 2;
net.layers{1}.size = 3; net.layers{1}.initFcn = 'initnw';
net.layers{2}.size = 1; net.layers{2}.initFcn = 'initnw';
net.inputConnect(1,1) = 1; net.inputConnect(1,2) = 1; net.inputConnect(1,3) = 1;
net.layerConnect(2,1) = 1;
net.outputConnect(2) = 1;
net = init(net);
view(net)
net.IW{1,1} = [1;0;0]; net.IW{1,2} = [0;1;0]; net.IW{1,3} = [0;0;1];
The problem is that view does not show the individual neurons, so I cannot connect each of them to a specific input. The other question is that in one layer I need neurons of different types (the custom activation function is already programmed). Is that possible? Or maybe it is easier to create 4 layers and connect them the way I need?
Thank you in advance! All answers are appreciated.
Alexandra

Accepted Answer

Greg Heath 2013-7-6
Although view(net) shows the 3-input/4-layer network I wanted, I do not know how to tell train that there are 3 separate 1-dimensional inputs.
Therefore, keep the 4 layers but change to one 3-dimensional input. Attach input component 1 to layer 1 by zeroing out the input weights for components 2 and 3; similarly, attach component 2 to layer 2 by zeroing out the weights for components 1 and 3; and so on.
What you now want are
1 3-dimensional input
3 parallel hidden layers
1 output layer
4 biases
1 output
Each input component is connected to its own hidden layer with its own transfer function.
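A minimal sketch of this wiring, assuming one neuron per hidden layer and 'purelin' as a placeholder for the custom transfer functions (the dummy data passed to configure is only there to size the weights):
net = network;
net.numInputs = 1;                        % one 3-dimensional input
net.numLayers = 4;                        % 3 parallel hidden layers + 1 output layer
net.biasConnect = [1; 1; 1; 1];           % 4 biases
net.inputConnect = [1; 1; 1; 0];          % the input feeds the three hidden layers
net.layerConnect = [zeros(3,4); 1 1 1 0]; % hidden layers feed the output layer
net.outputConnect = [0 0 0 1];            % one network output, from layer 4
for k = 1:3
    net.layers{k}.size = 1;                   % one neuron per hidden layer
    net.layers{k}.transferFcn = 'purelin';    % replace with your custom function
end
net.layers{4}.size = 1;
net.layers{4}.transferFcn = 'purelin';
net.trainFcn = 'trainlm';                 % a training function must be set before train
net.performFcn = 'mse';
net = configure(net, rand(3,10), rand(1,10)); % dummy 3-D input / 1-D target data
% Attach input component k to hidden layer k by zeroing the other two weights:
net.IW{1,1} = [1 0 0];
net.IW{2,1} = [0 1 0];
net.IW{3,1} = [0 0 1];
% Optionally keep these weights fixed during training, e.g.
% net.inputWeights{1,1}.learn = false; (and similarly for layers 2 and 3)
view(net)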
Hope this helps.
Thank you for formally accepting my answer
Greg
1 Comment
Greg Heath 2013-7-15
Edited: Greg Heath 2013-7-15
Vertically concatenate the inputs in a cell:
net = train( net,{input1; input2; input3}, output);
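This applies when the network keeps 3 separate inputs (net.numInputs = 3). A hedged usage example with placeholder data (x1, x2, x3 and y are assumed names; the target is wrapped in a cell too so both arguments use the cell-array format):
N  = 20;                               % number of samples (placeholder)
x1 = rand(1,N); x2 = rand(1,N); x3 = rand(1,N);
y  = x1 + 2*x2 - x3;                   % placeholder target
net = train(net, {x1; x2; x3}, {y});   % one cell row per network input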


More Answers (1)

Greg Heath 2013-7-4
The diagram shows 3 neurons in the hidden layer and 1 neuron in the output layer. All transfer functions are 'purelin'. You have no biases.
What you want are
3 inputs
3 parallel hidden layers
1 output layer
4 biases
1 output
Each input is connected to its own hidden layer with its own transfer function.
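A hedged sketch of that structure built with the network function's connection matrices ('purelin' is a placeholder for the custom transfer functions):
net = network(3, 4);                                % 3 inputs, 4 layers
net.biasConnect   = [1; 1; 1; 1];                   % 4 biases
net.inputConnect  = [1 0 0; 0 1 0; 0 0 1; 0 0 0];   % input k -> hidden layer k
net.layerConnect  = [zeros(3,4); 1 1 1 0];          % hidden layers -> output layer
net.outputConnect = [0 0 0 1];                      % 1 output, from layer 4
for k = 1:3
    net.inputs{k}.size = 1;                  % each input is 1-dimensional
    net.layers{k}.size = 1;                  % one neuron per hidden layer
    net.layers{k}.transferFcn = 'purelin';   % replace with the custom function
end
net.layers{4}.size = 1;
net.layers{4}.transferFcn = 'purelin';
view(net)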
Hope this helps.
Thank you for formally accepting my answer
Greg
2 Comments
Alexandra 2013-7-5
Thank you for the answer! So, you suggest making separate layers instead of putting neurons with different activation functions into one layer. Is that possible in general? I ask because I think it may affect the results after training, and I am interested in the weights determined by training. Now I have difficulties with training - it just does not converge (it stops at the maximum number of epochs). I think this may be due to problems with the custom activation function I am using or due to the wrong choice of training algorithm. I would appreciate any other ideas.
Greg Heath 2013-7-5
Edited: Greg Heath 2013-7-6
The idea I gave you is the simplest way to do it. If you are having trouble, show your code.
None of the MATLAB NNDATASETS are of the 3-input/1-output regression/curve-fitting type.
help nndatasets
So I will try it on the 2-input engine data set using only the first output.
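A hedged sketch of what such a test might look like (fitnet here is only a stand-in; the custom network built above could be trained the same way):
[x, t] = engine_dataset;    % 2-row input matrix and 2-row target matrix
t1  = t(1,:);               % keep only the first output row
net = fitnet(3);            % stand-in network with 3 hidden neurons
net = train(net, x, t1);
y   = net(x);               % network predictions for the first output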

