How can you tell the number of hidden layers from neural-network code?

Example (one hidden layer):
P = [-1,-2,3,1;-1,1,5,-3];     % input samples (2 features x 4 samples)
T = [-1,-1,1,1];               % targets
% [3,1]: one hidden layer with 3 neurons, plus 1 output neuron
net = newff(minmax(P),[3,1],{'tansig','purelin'},'traingdm');
inputWeights = net.IW{1,1}; inputbias = net.b{1};   % input-to-hidden weights/bias
layerWeights = net.LW{2,1}; layerbias = net.b{2};   % hidden-to-output weights/bias
net.trainParam.show = 50;      % display interval
net.trainParam.lr = 0.05;      % learning rate
net.trainParam.mc = 0.9;       % momentum constant
net.trainParam.epochs = 1000;  % maximum epochs
net.trainParam.goal = 1e-3;    % target MSE
[net,tr] = train(net,P,T);
A = sim(net,P)
E = T - A;
MSE = mse(E)
figure; plot((1:4),T,'-*',(1:4),A,'-o')
Example (two hidden layers):
P = [-1,-2,3,1;-1,1,5,-3];
T = [-1,-1,1,1];
[pn,minp,maxp,tn,mint,maxt] = premnmx(P,T);   % normalize inputs/targets to [-1,1]
dx = [-1,1;-1,1];                             % input range after normalization
% [2,10,1]: two hidden layers (2 and 10 neurons), plus 1 output neuron
net = newff(dx,[2,10,1],{'tansig','tansig','purelin'},'trainlm');
inputWeights = net.IW{1,1}; inputbias = net.b{1};
layerWeights = net.LW{2,1}; layerbias = net.b{2};
net.trainParam.show = 50;
net.trainParam.lr = 0.05;
net.trainParam.mc = 0.9;
net.trainParam.epochs = 1000;
net.trainParam.goal = 1e-3;
[net,tr] = train(net,pn,tn);    % train on the normalized data
An = sim(net,pn);
A = postmnmx(An,mint,maxt)      % map outputs back to the original scale
E = T - A;
MSE = mse(E)
figure; plot((1:4),T,'-*',(1:4),A,'-o')
(One hidden layer is usually enough. Normalization mainly pays off when the data contain outlying values, e.g. P = [1,2,3,68; 2,3,4,78]; in that case normalize the inputs before training and de-normalize the network's outputs at the end.)
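The normalize/de-normalize round trip that premnmx/postmnmx perform can be sketched in a few lines. This is a hypothetical Python re-implementation for illustration only (the function names `premnmx_like`/`postmnmx_like` are mine, not part of any library); it maps each row to [-1, 1] and then inverts the mapping:

```python
import numpy as np

def premnmx_like(x):
    """Map each row of x to [-1, 1], mimicking MATLAB's premnmx."""
    xmin = x.min(axis=1, keepdims=True)
    xmax = x.max(axis=1, keepdims=True)
    xn = 2 * (x - xmin) / (xmax - xmin) - 1
    return xn, xmin, xmax

def postmnmx_like(xn, xmin, xmax):
    """Reverse the mapping, mimicking MATLAB's postmnmx."""
    return (xn + 1) / 2 * (xmax - xmin) + xmin

# The outlier-laden example from the remark above:
P = np.array([[1, 2, 3, 68], [2, 3, 4, 78]], dtype=float)
Pn, pmin, pmax = premnmx_like(P)
# Every row of Pn now lies in [-1, 1]; postmnmx_like recovers P exactly.
```

The point of the round trip: the network trains on `Pn` (and normalized targets), and the stored `pmin`/`pmax` let you map predictions back to the original scale.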
How do you read off the number of hidden layers in these two programs? I can't figure it out. The question is not how to choose the number of hidden layers, but how to see it in this neural-network code I found.
Also, how do you combine grey prediction with a neural network? I've read some papers. Is the idea to use the grey model to produce fitted values, compute the residuals, and then use those residuals as the training samples for the network?
Any help appreciated.
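The grey-plus-network scheme described above (grey model fits the series, the network learns the residuals) starts with a GM(1,1) fit. Here is a minimal Python sketch, under the assumption that the standard GM(1,1) formulation is what the papers use; `gm11_fit` and the sample series are my own illustration, not from the original thread:

```python
import numpy as np

def gm11_fit(x0):
    """Fit a GM(1,1) grey model and return the fitted series.

    x0 : 1-D array of positive observations.
    """
    n = len(x0)
    x1 = np.cumsum(x0)                    # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])         # background (mean-generated) values
    B = np.column_stack([-z1, np.ones(n - 1)])
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]   # developing coeff a, grey input b
    k = np.arange(n)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a   # time-response sequence
    x0_hat = np.empty(n)
    x0_hat[0] = x0[0]
    x0_hat[1:] = np.diff(x1_hat)          # inverse AGO -> fitted original series
    return x0_hat

x0 = np.array([27.6, 28.5, 29.8, 31.2, 32.9])   # made-up sample data
fit = gm11_fit(x0)
residuals = x0 - fit
# residuals would be the training target for the network; the combined
# forecast is (grey forecast) + (network-predicted residual).
```

So yes, the usual pipeline in those papers is exactly as you describe: fit the grey model, take the residual series `x0 - fit`, train the network on the residuals, and add the network's residual prediction back onto the grey forecast.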

Accepted Answer

higoray 2022-11-22


net = newff(minmax(P),[3,1],{'tansig','purelin'},'traingdm');
net = newff(dx,[2,10,1],{'tansig','tansig','purelin'},'trainlm');
Look at the layer-size vector, the second argument of newff. The input layer does not count as a layer, and the last entry is the output layer.
So everything in that vector except the final output entry is a hidden layer: [3,1] gives one hidden layer (3 neurons); [2,10,1] gives two hidden layers (2 and 10 neurons).
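That counting rule can be made concrete with a minimal forward-pass sketch. This Python illustration (my own, mirroring 'tansig' hidden activations and a 'purelin' output) shows that the size vector [2,10,1] yields exactly three weight layers: two hidden plus one output:

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [2, 2, 10, 1]   # input dimension, then the newff size vector [2, 10, 1]

# One (W, b) pair per entry of [2, 10, 1]; the input layer has no weights.
weights = [rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [rng.standard_normal((m, 1)) for m in sizes[1:]]

def forward(x):
    for i, (W, b) in enumerate(zip(weights, biases)):
        x = W @ x + b
        if i < len(weights) - 1:    # tanh ('tansig') on the hidden layers...
            x = np.tanh(x)
    return x                        # ...identity ('purelin') on the output

x = np.array([[-1.0], [-1.0]])      # one input sample, 2 features
y = forward(x)                      # y has shape (1, 1): the single output neuron
```

Counting `weights` gives 3 parameterized layers, matching the rule: the size vector has 3 entries, the last is the output layer, so 2 are hidden.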
