Equivalent of Neural ODE for discrete time state space models

12 views (last 30 days)
Dear forum members,
I recently read the interesting example on how to train a neural ODE to identify a dynamical system:
https://mathworks.com/help/deeplearning/ug/dynamical-system-modeling-using-neural-ode.html
This example covers continuous-time models, and I was wondering whether there is an equivalent tutorial for discrete-time models.
I know it is possible to create one with the network function, but I would like to implement my own training loop.
Thanks in advance for any help you can provide!

Answers (1)

Arkadiy Turevskiy 2023-1-31
We added the idNeuralStateSpace object, which supports both continuous- and discrete-time models. Maybe this could be useful. It was created to simplify the code you have to write, so it does not allow you to write your own training loop, though.
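For reference, a minimal sketch of that workflow for a discrete-time model might look like the following. This is not from the original thread; the option names and call signatures are from memory (System Identification Toolbox), so check the `idNeuralStateSpace` and `nlssest` documentation before relying on them:

```matlab
% Sketch (unverified): discrete-time neural state-space identification.
% A positive sample time Ts makes the model discrete time.
nx = 2;                                        % number of states
nu = 2;                                        % number of inputs
nss = idNeuralStateSpace(nx,NumInputs=nu,Ts=0.1);

% Training options; "adam" is one of the supported solvers.
opt = nssTrainingOptions("adam");
opt.MaxEpochs = 100;

% z is assumed to be an iddata object (or timetable) containing the
% measured state and input signals; nlssest estimates the networks.
nssTrained = nlssest(z,nss,opt);
```

The trade-off versus the custom-training-loop approach below is exactly as described: less code to write, but no access to the inner loop.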
  3 comments
Ben 2023-2-2
Hi M.
I'm not sure whether this is possible with the shallow network functions, but it can be done with dlnetwork and a custom training loop, since these allow you to write your own model function that reuses the same network on two different inputs. Here's some example code with dummy data; in practice you may need to tweak the training and network hyperparameters to get good performance.
% share a neural net across multiple calls
% create some fake data
% predict x(t+1) = F(x(1,t),u(1,t)) + F(x(2,t),u(2,t)) for some unknown F
numSteps = 100;
t = linspace(0,2*pi,numSteps);
F = @(x,u) sqrt(x+u+1);
x = [0;1];
u = [cos(t);sin(t)];
for i = 2:numSteps
    x(:,i) = F(x(1,i-1),u(1,i-1)) + F(x(2,i-1),u(2,i-1));
end
% create a network to model F
% it needs to have two inputs, for x and u.
hiddenSize = 5000;
inputSize = 1;
outputSize = 2;
layers = [
    featureInputLayer(inputSize,Name="x")
    concatenationLayer(1,2,Name="concat")
    fullyConnectedLayer(hiddenSize)
    reluLayer
    fullyConnectedLayer(outputSize)];
net = dlnetwork(layers,Initialize=false);
net = addLayers(net,featureInputLayer(1,Name="u"));
net = connectLayers(net,"u","concat/in2");
net = initialize(net);
% train with custom training loop
numEpochs = 1000;
vel = [];
x = dlarray(x,"CB");
u = dlarray(u,"CB");
learnRate = 0.1;
for epoch = 1:numEpochs
    [loss,gradient] = dlfeval(@modelLoss,x,u,net);
    lossValue = extractdata(loss);
    fprintf("Epoch: %d, Loss %.4f\n", epoch, lossValue);
    [net,vel] = sgdmupdate(net,gradient,vel,learnRate);
end
function [loss,gradient] = modelLoss(x,u,net)
    % predict x(:,2:end) from x(:,1:end-1) and u(:,1:end-1)
    xtarget = x(:,2:end);
    xpred = model(x(:,1:end-1),u(:,1:end-1),net);
    loss = mse(xpred,xtarget);
    gradient = dlgradient(loss,net.Learnables);
end

function xpred = model(x,u,net)
    % model x(t+1) = F(x(1,t),u(1,t)) + F(x(2,t),u(2,t)) where F is the neural net
    xpred = forward(net,x(1,:),u(1,:)) + forward(net,x(2,:),u(2,:));
end
Hope that helps.
M. 2023-2-2
Thank you very much.
This is very close to what I am looking for.
The last thing I would like is to write the network function explicitly, as in the continuous-time example inside odeModel:
y = tanh(theta.fc1.Weights*y + theta.fc1.Bias);
y = theta.fc2.Weights*y + theta.fc2.Bias;
I tried, for example, to replace "xpred = forward(net,x(1,:),u(1,:))" in your code with the lines above, substituting net.Learnables.Value for theta, but I get the following error:
One input argument can have dimension labels only when the other input argument is an
unformatted scalar. Use .* for element-wise multiplication.
Any idea why this is not working?
Thanks again.
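[Editor's note: the thread ends here without a reply. A plausible explanation, based on the error message rather than the original discussion: `x` and `u` were formatted as "CB" dlarrays before training, and matrix multiplication (`*`) does not accept a formatted dlarray operand unless the other operand is an unformatted scalar. Two commonly suggested workarounds, shown as an unverified sketch:]

```matlab
% Sketch (assumption, not from the thread): avoid mixing * with a
% "CB"-formatted dlarray y.
% Option 1: strip the dimension labels before the explicit matrix math.
y = stripdims(y);                                  % y becomes unformatted
y = tanh(theta.fc1.Weights*y + theta.fc1.Bias);
y = theta.fc2.Weights*y + theta.fc2.Bias;

% Option 2: keep the labels and use fullyconnect, which is
% format-aware and performs Weights*y + Bias internally.
% y = tanh(fullyconnect(y,theta.fc1.Weights,theta.fc1.Bias));
% y = fullyconnect(y,theta.fc2.Weights,theta.fc2.Bias);
```

Note that indexing `net.Learnables.Value` returns the parameter arrays in table-row order, so the `theta.fc1.Weights` style access above assumes the parameters have been repackaged into a struct first.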

