How to compute the derivative of a neural network?

12 views (last 30 days)
Hi,
Once you have trained a neural network, is it possible to obtain a derivative of it? I have a neural network "net" in a structure. I would like to know if there is a routine that will provide the derivatives of net (derivative of its outputs with respect to its inputs).
It is probably not difficult: for a feedforward model there are just matrix multiplications and sigmoid functions. But it would be nice to have a routine that does this directly on "net".
Thanks!

Accepted Answer

Greg Heath
Greg Heath 2012-10-20
Differentiate to obtain dyi/dxk.
y = b2 + LW*h
h = tanh(b1 + IW*x)
or, in tensor notation (i.e., summation over repeated indices),
yi = b2i + LWij*hj
hj = tanh(b1j + IWjk*xk)
Now just use the chain rule: since dtanh(u)/du = 1 - tanh(u)^2, this gives dyi/dxk = LWij*(1 - hj^2)*IWjk (summed over j).
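In MATLAB this is only a few lines. A minimal sketch, assuming a single-hidden-layer net with a tanh hidden layer and no input/output processing (the example input x is a placeholder):
% Jacobian of a 1-hidden-layer net, assuming no pre/post-processing
IW = net.IW{1};      % hidden-layer weights, [H x I]
b1 = net.b{1};       % hidden biases,        [H x 1]
LW = net.LW{2,1};    % output-layer weights, [O x H]
x  = [0.3; -1.2];    % example input (placeholder), [I x 1]
h    = tanh(b1 + IW*x);            % hidden activations,  [H x 1]
dydx = LW * diag(1 - h.^2) * IW;   % Jacobian dyi/dxk,    [O x I]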
Hope this helps.
Thank you for formally accepting my answer
Greg
  2 Comments
Tittu Mathew
Tittu Mathew 2018-12-7
Hi Greg and Filipe,
I am following up on your discussion from years ago about getting the partial derivatives of trained ANN outputs w.r.t. each of the input parameters using the MATLAB toolbox.
I am trying to represent a simple function P = f(X,Y,Z), where P is a scalar output and the input to the ANN is a vector with 3 elements, namely X, Y and Z. Using the MATLAB toolbox, I was able to train, test and validate a shallow feedforward ANN. It has 3 layers: input, hidden and output. So the ANN configuration in my case is of the form 3-x-1, where 'x' is the number of neurons in the hidden layer. Apart from the tanh() activation function used in the hidden layer, linear activation functions were used in both the input and output layers. mapminmax() was used to normalize both the inputs and the output of the ANN.
However, after successfully training the ANN, when it comes to calculating the first-order derivative of the ANN output with respect to each of the inputs, my results differ by orders of magnitude from the derivatives obtained from the analytical equation. I tried to understand and implement the code provided in "Approximation of functions and their derivatives: A neural network implementation with applications", but in vain. The code I developed to do this is attached to this message (ANN_FOD.m).
I would greatly appreciate it if you could help me with the implementation of the code for the ANN derivatives. Thank you for your time and kind regards,
Tittu V Mathew
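For anyone hitting the same order-of-magnitude mismatch: a likely culprit is the mapminmax scaling on both ends. A minimal sketch for a trained 3-x-1 fitnet, assuming mapminmax is the only input/output processing function (e.g. net.input.processFcns = {'mapminmax'}); variable names are placeholders:
ps_in  = net.input.processSettings{1};   % mapminmax input settings
ps_out = net.output.processSettings{1};  % mapminmax output settings
xn = mapminmax('apply', x, ps_in);       % normalize the [3x1] input
h  = tanh(net.b{1} + net.IW{1}*xn);      % hidden activations, [H x 1]
% chain rule through both normalizations:
% dP/dx = (1/gain_out) * LW*diag(1-h.^2)*IW * diag(gain_in)
dPdx = (net.LW{2,1} * diag(1 - h.^2) * net.IW{1}) .* ps_in.gain' / ps_out.gain;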


More Answers (4)

Filipe
Filipe 2012-10-20
Thanks Greg!
I was able to make this derivation, but it is not that easy to do on the "net" structure: you need to dig into "net" to account for the pre- and post-processing that MATLAB applies automatically. I managed it, but it took some work.
I think these NN derivatives are very useful in a lot of applications.
Thanks for your help!
  1 Comment
Greg Heath
Greg Heath 2012-10-24
You can try to make life easier by doing the pre- and postprocessing yourself before and after training.
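A sketch of that approach (fitnet(10) and the variable names are placeholders): the automatic processing is switched off, so the trained weights act directly on the data you pass in.
net = fitnet(10);
net.input.processFcns  = {};   % disable automatic input processing
net.output.processFcns = {};   % disable automatic output processing
[xn, psx] = mapminmax(x);      % normalize manually before training
[yn, psy] = mapminmax(y);
net = train(net, xn, yn);
% derivatives computed from net.IW, net.LW, net.b now need only the
% manual psx/psy gains in the chain rule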



trevor
trevor 2013-11-7
Hi Filipe,
Could you possibly share your code for computing the partial derivative of the ANN, or provide some info on the steps you used? That would be immensely useful!
Thanks, Trevor

Muhammad Saif ur Rehman
Hi Filipe,
Can you share your code for computing the partial derivative of the defined cost function w.r.t. the input?
Regards Saif

soo-choon kang
soo-choon kang 2021-8-14
% x and y are assumed to be [nx1] column vectors (single input, single output)
net1 = fitnet(3);                % 3 hidden neurons with tanh (tansig) activation
net1 = train(net1,x',y');
ps_in  = net1.input.processSettings{1,1};   % mapminmax settings for the input
ps_out = net1.output.processSettings{1,1};  % mapminmax settings for the output
% normalize x
nx = (x - ps_in.xmin)*ps_in.gain + ps_in.ymin;
h  = tanh(net1.b{1} + net1.IW{1}*nx');   % h = [3xn], IW{1} = [3x1], nx' = [1xn]
ny = net1.b{2} + net1.LW{2,1}*h;         % ny = [1xn], LW{2,1} = [1x3]
% de-normalize y
ypredict = (ny - ps_out.ymin)/ps_out.gain + ps_out.xmin;
% ypredict above is equivalent to evaluating the network directly: net1(x')
% derivative of the nn at the normalized scale: sum over hidden units of LWj*(1-hj^2)*IWj
dnydnx = sum(net1.LW{2,1}'.*net1.IW{1}.*(1-h.*h),1)'; % dnydnx = [nx1]
% derivative of the nn at the real scale (chain rule through both normalizations)
dydx = dnydnx*ps_in.gain/ps_out.gain;
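To sanity-check dydx, one can compare it against a central finite difference on the network itself (the step size 1e-6 here is illustrative):
hfd = 1e-6;                            % finite-difference step (illustrative)
dydx_fd = (net1((x + hfd)') - net1((x - hfd)'))'/(2*hfd);
max(abs(dydx - dydx_fd))               % should be close to zero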
