Training network stops automatically after 3 iterations without showing any error.

tspan = 0:0.001:10;
y0 = 0;
[t,y] = ode45(@(t,y) t^2+2, tspan, y0);
% Split the ODE solution into 90% training and 10% test data
n = round(0.9*numel(t));
T = t(1:n);
Y = y(1:n);
x = t(n+1:end);
v = y(n+1:end);
layer = functionLayer(@(X) X./(1 - X.^2));
layers = [
    sequenceInputLayer(1)
    fullyConnectedLayer(1)
    tanhLayer
    functionLayer(@(t) t./(1 - t.^2), Description="softsign")
    fullyConnectedLayer(1)
    tanhLayer
    functionLayer(@(t) t./(1 - t.^2), Description="softsign")
    regressionLayer];
options = trainingOptions('adam', ...
    'LearnRateSchedule','piecewise', ...
    'LearnRateDropFactor',0.2, ...
    'LearnRateDropPeriod',5, ...
    'MiniBatchSize',20, ...
    'VerboseFrequency',1, ...
    'ValidationPatience',Inf, ...
    'MaxEpochs',100, ...
    'Plots','training-progress');
net = trainNetwork(T',Y',layers,options);

% Predict over the full time span and compare against the ODE solution
ypre = predict(net,tspan);
plot(ypre)
hold on
plot(y)

Answers (1)

Prateek Rai on 22 Feb 2022
Hi,
Training of the network stopped because the training loss became NaN. This implies that the predictions of the network might contain NaN values.
On analyzing the network, I found that the output size of all the layers is 1×1×1, which is why the NaN values appear.
You might want to recheck the dimensions of the layers of the network using:
analyzeNetwork(layers)
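
A minimal sketch of how such NaN outputs could be confirmed, assuming the net and T variables from the question above (the isnan check is an illustration, not part of the original answer):

% Minimal sketch, assuming net and T from the question code above.
% Run the trained network on the training sequence and count NaN outputs.
ypreTrain = predict(net, T');       % 1-by-N sequence prediction
nanCount = nnz(isnan(ypreTrain));   % number of NaN predictions
fprintf('NaN predictions: %d of %d\n', nanCount, numel(ypreTrain));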
