How to use deep learning for interpolation properly
Hi!
What am I doing wrong? I am trying to use deep learning to interpolate the sine function, but it does not learn well: instead of a sine curve it just produces a straight line. This is my code:
clear all;
clc;
close all;
N=100;
T=2*pi;
x=0:T/(N-1):T;
y=sin(x);
%----- normalization ---------------
y=y+abs(min(y));
y=y./max(y);
x=x./max(x);
%---------- end normalization -----
plot(x,y,'o')
%------------------------
layers = [
featureInputLayer(1)
fullyConnectedLayer(10)
fullyConnectedLayer(1)
regressionLayer
];
options = trainingOptions('adam', ...
'MaxEpochs',20, ...
'MiniBatchSize',10, ...
'Shuffle','every-epoch', ...
'Plots','training-progress', ...
'Verbose',false);
[net, info] = trainNetwork(x',y',layers,options);
y1 = predict(net,x');
figure(2); hold on;
plot(x,y,'ok','MarkerSize',20)   % black circles
plot(x,y1,'x','MarkerSize',20)
Accepted Answer
Abolfazl Chaman Motlagh
2021-11-30
Hi,
The sin(x) function is completely nonlinear, while your network is too simple to handle such a nonlinearity (without activation layers, a stack of fully connected layers is still just a linear map, which is why you get a straight line). To overcome this you can:
- Increase the number of layers
- Increase the number of parameters (neurons per layer)
- Add an activation layer to handle the nonlinearity
- Give the network more iterations to learn, e.g. MaxEpochs = 2000, since your data and network are small
Here is a simple solution for your model:
layers = [
featureInputLayer(1)
fullyConnectedLayer(20);
reluLayer;
fullyConnectedLayer(20);
reluLayer;
fullyConnectedLayer(20);
fullyConnectedLayer(1)
regressionLayer
];
Result of the network above (fully connected layers with 20 neurons each).
Here are some other examples:
- 2 fully connected layers, each with 200 neurons (you can see it still does not converge correctly)
- 3 fully connected layers, each with 5 neurons
So you can change these parameters (and even tune each layer's size individually) to see which option is best for your application.
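If you want to compare several widths systematically, here is a minimal sketch (it assumes the x, y, and options variables from the code above are in the workspace; the width values and the RMSE criterion are just illustrative choices):
% Minimal sketch: try a few hidden-layer widths and keep the network
% with the lowest RMSE on the training points.
widths = [5 20 200];           % illustrative values
bestRMSE = Inf;
for w = widths
    layers = [
        featureInputLayer(1)
        fullyConnectedLayer(w)
        reluLayer
        fullyConnectedLayer(w)
        reluLayer
        fullyConnectedLayer(1)
        regressionLayer
        ];
    net = trainNetwork(x',y',layers,options);   % x, y, options from above
    yp = predict(net,x');
    rmse = sqrt(mean((double(yp) - y').^2));
    fprintf('width = %3d, RMSE = %.4f\n', w, rmse);
    if rmse < bestRMSE
        bestRMSE = rmse;
        bestNet = net;                           % keep the best network
    end
end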
6 Comments
Abolfazl Chaman Motlagh
2021-12-5
For the first question, here is an example:
N=100;
T=2*pi;
x=0:T/(N-1):T;
y=sin(x);
y=y+abs(min(y));
y=y./max(y);
x=x./max(x);
layers = [
featureInputLayer(1)
fullyConnectedLayer(20);
reluLayer;
fullyConnectedLayer(20);
reluLayer;
fullyConnectedLayer(20);
reluLayer;
fullyConnectedLayer(20);
reluLayer;
fullyConnectedLayer(20);
fullyConnectedLayer(1)
regressionLayer
];
options = trainingOptions('sgdm', ...
'MaxEpochs',600, ...
'MiniBatchSize',10, ...
'Shuffle','every-epoch', ...
'Plots','training-progress', ...
'InitialLearnRate',0.01, ...
'LearnRateSchedule','piecewise', ...
'LearnRateDropFactor',0.1, ...
'LearnRateDropPeriod',200, ...
'Verbose',false);
[net, info] = trainNetwork(x',y',layers,options);
y1 = predict(net,x');
plot(x,y,'o','LineWidth',2)
hold on
plot(x,y1,'-','LineWidth',2)
legend('Sin(x)','Predicted')
Abolfazl Chaman Motlagh
2021-12-5
For the second question:
When you put a ReLU layer after a layer, all of that layer's outputs pass through the ReLU function. So just by adding a reluLayer to the layers list, you apply the activation to every neuron of the preceding layer.
There are standard implementations of other common activation functions in MATLAB, such as:
- reluLayer
- leakyReluLayer
- clippedReluLayer
- eluLayer
- tanhLayer
- swishLayer
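Any of these can be dropped into the layer array in place of reluLayer; for example, a minimal sketch of the same small network with tanh activations instead of ReLU:
% Same idea as the network above, but with tanh activations.
layers = [
    featureInputLayer(1)
    fullyConnectedLayer(20)
    tanhLayer
    fullyConnectedLayer(20)
    tanhLayer
    fullyConnectedLayer(1)
    regressionLayer
    ];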
But if you want, you can define your own activation function in MATLAB with functionLayer, or even write a complete custom deep learning layer with its own structure and operations.
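For instance, a simple custom activation can be wrapped in a functionLayer like this (a minimal sketch; functionLayer requires R2021b or newer, and the 0.05 slope is an arbitrary example):
% Custom activation: a leaky linear unit with a 0.05 slope (arbitrary example),
% wrapped in a functionLayer so it can be used inside a layer array.
customAct = functionLayer(@(X) max(0.05*X, X), 'Name','leaky005');
layers = [
    featureInputLayer(1)
    fullyConnectedLayer(20)
    customAct
    fullyConnectedLayer(1)
    regressionLayer
    ];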