How to do feedforward backpropagation during neural network training
17 views (last 30 days)
Hi all,
I'm trying to use an ANN to build a model for device current prediction, following the lecture instructions. The attachment contains the raw data and the code that I used.
Because I'm new in this field, I'm not sure whether it's correct or not.
- Network type: feedforward backpropagation (I don't know how to express this in the code)
- Network transfer function: TANSIG
- Training function: TRAINLM
- Hidden layers: 2 layers
Could any experts help me with it? How do I obtain the weights and biases for each hidden layer? Also, do I need to use MAPMINMAX to normalize the dataset before training?
%%% Clear/Close/Clear all %%%
clc
close all;
clear all;
%%% Select the data file %%%
[filename,filedir] = uigetfile('*.csv','Select the raw data CSV');
datapath = fullfile(filedir,filename);   % avoid calling this "path" (shadows the built-in)
%%% Set up the Import Options and import the data %%%
opts = delimitedTextImportOptions("NumVariables", 5);
% Specify range and delimiter
opts.DataLines = [1, Inf];
opts.Delimiter = ",";
% Specify column names and types
opts.VariableNames = ["VarName1", "VarName2", "VarName3", "VarName4", "e06"];
opts.VariableTypes = ["double", "double", "double", "double", "double"];
% Specify file level properties
opts.ExtraColumnsRule = "ignore";
opts.EmptyLineRule = "read";
% Read the table from the selected file
Rawdatacsv = readtable(datapath, opts);
% Convert the table to an array
Rawdata_csv = table2array(Rawdatacsv);
% Read the input columns from the array
W = Rawdata_csv(:,1) * 1e-6;   % scale (assumed um -> m)
L = Rawdata_csv(:,2) * 1e-6;   % scale (assumed um -> m)
Vgs = Rawdata_csv(:,3);
Vds = Rawdata_csv(:,4);
Ids = Rawdata_csv(:,5);
% (The full normalized input matrix p built below is reused as the test input)
%% Normalization with mapminmax (it works row-wise, so transpose to 1-by-N first)
[Ln,Lps]     = mapminmax(L');
[Wn,Wps]     = mapminmax(W');
[Vdsn,Vdsps] = mapminmax(Vds');
[Vgsn,Vgsps] = mapminmax(Vgs');
[Idsn,Idsps] = mapminmax(Ids');
% Assemble the input matrix (4-by-N) and target vector (1-by-N) for training
p = [Ln; Wn; Vdsn; Vgsn];   % inputs: L, W, Vds, Vgs
t = Idsn;                   % target: normalized Ids
% Declare the training function
trainFcn = 'trainlm';
% Create a fitting network (a feedforward network trained with backpropagation)
Nepochs = 10000;
hiddenLayer1Size = 24;
hiddenLayer2Size = 24;
net = fitnet([hiddenLayer1Size hiddenLayer2Size], trainFcn);
% net = feedforwardnet([hiddenLayer1Size hiddenLayer2Size], trainFcn);  % equivalent alternative
% Data division settings
% net.divideParam.trainRatio = 70/100;
% net.divideParam.valRatio   = 15/100;
% net.divideParam.testRatio  = 15/100;
% Training parameter settings
net.trainParam.show = inf;
net.trainParam.epochs = Nepochs;
net.trainParam.goal = 1e-25;
net.trainParam.max_fail = 1000;
net.efficiency.memoryReduction = 1;   % replaces the older trainParam.mem_reduc setting
net.trainParam.min_grad = 1e-23;
net.trainParam.mu = 0.001;
net.trainParam.mu_dec = 0.1;
net.trainParam.mu_inc = 10;
net.trainParam.mu_max = 1e10;
net.trainParam.time = inf;
% Hidden layer activation functions
net.layers{1}.transferFcn = 'tansig';
net.layers{2}.transferFcn = 'tansig';
% Output layer activation function (tansig matches the [-1,1] mapminmax target range)
net.layers{3}.transferFcn = 'tansig';
view(net);
net = train(net, p, t);
save net;   % save the workspace (including the trained net) to net.mat
% Weights and biases of each layer after training
iw  = net.IW{1,1};   % input -> hidden layer 1 weights
lw1 = net.LW{2,1};   % hidden layer 1 -> hidden layer 2 weights
lw2 = net.LW{3,2};   % hidden layer 2 -> output layer weights
b1  = net.b{1};      % hidden layer 1 biases
b2  = net.b{2};      % hidden layer 2 biases
b3  = net.b{3};      % output layer biases
% Simulate on the normalized inputs and map the output back to current units
outputn = sim(net, p);
output  = mapminmax('reverse', outputn, Idsps);
% Overlay predicted and raw Id-Vg curves for two example devices
figure;
semilogy(Vgs(1:89), abs(Ids(1:89))); hold on;
semilogy(Vgs(1:89), abs(output(1:89)'));
semilogy(Vgs(22340:22428), abs(Ids(22340:22428)));
semilogy(Vgs(22340:22428), abs(output(22340:22428)'));
Answer (1)
Krishna
2023-11-17
Hello Chuanjung,
Based on the information provided in your data, you have four independent variables that you are using to predict the current value (the fifth column). If there are no time-series dependencies in the dataset, meaning the current value is not influenced by time or any periodic pattern, then a feedforward neural network is a suitable choice. If there are time dependencies, you may consider a "narxnet" instead, depending on your specific requirements; see the MATLAB documentation on narxnet for more details. A rough sketch is shown below.
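For illustration only, here is a minimal NARX sketch; the delays 1:2 and hidden size 10 are placeholder assumptions, and p/t are the normalized input matrix and target from your script:
net_narx = narxnet(1:2, 1:2, 10);        % input delays, feedback delays, hidden units
X = con2seq(p);                          % 4-by-N input matrix as a cell sequence
T = con2seq(t);                          % 1-by-N target as a cell sequence
[Xs, Xi, Ai, Ts] = preparets(net_narx, X, {}, T);   % shift data for the delay lines
net_narx = train(net_narx, Xs, Ts, Xi, Ai);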
Regarding your second question, you don't need to code feedforward backpropagation from scratch: "feedforwardnet" (and "fitnet", which is a feedforward network preconfigured for function fitting) already implement it. Alternatively, you can build the model with the layer-based workflow ("dlnetwork" and "fullyConnectedLayer"), which gives greater flexibility over the network layers; see the Deep Learning Toolbox documentation for details. A sketch of that approach follows.
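A hedged sketch of the layer-based workflow, assuming a recent Deep Learning Toolbox release; this version uses trainNetwork with a layer array, while a dlnetwork with a custom training loop is the more flexible variant:
layers = [
    featureInputLayer(4, 'Normalization', 'rescale-symmetric')  % scales inputs to [-1,1]
    fullyConnectedLayer(24)
    tanhLayer                          % equivalent of tansig
    fullyConnectedLayer(24)
    tanhLayer
    fullyConnectedLayer(1)             % single regression output (Ids)
    regressionLayer];
options = trainingOptions('adam', 'MaxEpochs', 500, 'Plots', 'training-progress');
% X: nSamples-by-4 numeric matrix, Y: nSamples-by-1 target vector
% trainedNet = trainNetwork(X, Y, layers, options);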
To obtain the weights and biases of each layer after training, read them from the network object: "net.IW" holds the input-to-first-hidden-layer weights, "net.LW" holds the layer-to-layer weights, and "net.b" holds the bias vectors.
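For example, for the two-hidden-layer fitnet configuration in your code (layer 3 is the output layer):
iw  = net.IW{1,1};   % input -> hidden layer 1 weights
lw1 = net.LW{2,1};   % hidden layer 1 -> hidden layer 2 weights
lw2 = net.LW{3,2};   % hidden layer 2 -> output layer weights
b   = net.b;         % cell array of bias vectors, one per layer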
Regarding normalization of the dataset, it depends on the data: if the ranges of the features differ significantly, normalization helps bring them to a common scale, and with "mapminmax" you should reuse the training statistics when applying the network to new data. You can also cross-validate for better accuracy. See the mapminmax documentation for a fuller explanation; a small example follows.
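A short sketch of that pattern (pNew is a hypothetical matrix of new inputs, arranged features-by-samples like p):
[pn, ps] = mapminmax(p);                      % normalize training inputs to [-1,1]
[tn, ts] = mapminmax(t);                      % normalize training target
pNew_n = mapminmax('apply', pNew, ps);        % reuse the training statistics on new data
yPred  = mapminmax('reverse', sim(net, pNew_n), ts);   % map predictions back to original units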
Hope that helps.