Problem with exact replication of maglev NARX closed-loop output

I wrote a simple MATLAB script to compute a NARX prediction within the maglev example. The goal was to replicate the closed-loop output from the usual MATLAB expression yc = netc(xc,xic,aic). My code lives inside the maglev example and uses the weights and biases from the trained NARX net, so there shouldn't be any precision issues from reading weights out of another file. The problem is that my code does not perfectly replicate the closed-loop output. It is very close, and sometimes spot-on, but it does not match perfectly every time.
Has anyone had this problem? I am not arrogant enough to seriously suspect a problem with MATLAB, but I have to ask in case anyone else has run into this. It is probably something silly in my code. I can trim it down to a reviewable size if anyone would like to help me out.
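For reference, the closed-loop output I am trying to reproduce comes from the usual maglev workflow, roughly like this (a sketch; variable names follow the example):
[X,T] = maglev_dataset;                    % exogenous input (current) and target (position)
net = narxnet(1:2,1:2,10);                 % open-loop NARX: delays 1:2, 10 hidden neurons
[x,xi,ai,t] = preparets(net,X,{},T);
net = train(net,x,t,xi,ai);
netc = closeloop(net);                     % convert to closed-loop (parallel) form
[xc,xic,aic,tc] = preparets(netc,X,{},T);
yc = netc(xc,xic,aic);                     % this is the output I am trying to match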
Thanks, Cal

Accepted Answer

Calvin 2014-5-18
Problem solved. I'm writing an answer to my own question for the benefit of readers who may have a similar problem. The key is how MATLAB orders the delayed samples in the input vector.
Sidebar: The deploy solution option in ntstool executes genFunction. The resulting m-file hard-codes the architecture and delay info, and the indexing appears unnecessarily complicated. But it was useful for troubleshooting my own code.
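For anyone curious, that deploy option boils down to a genFunction call roughly like this (the file name here is just illustrative):
genFunction(netc,'narxClosedLoopFcn');     % writes a standalone m-file for the closed-loop net
yc2 = narxClosedLoopFcn(xc,xic,aic);       % should reproduce yc = netc(xc,xic,aic)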
The problem with my stand-alone NARX code was my incorrect assumption about the time order of the input vector. Here's an example of part of my simple code that calls my function narxann4:
for time = 1+D:Ntimesteps                  % all training-set timesteps after the longest lag D
    InputVector = Xin(time-inputDelays);   % NOTE THE ORDER: most recent sample first
    Ycalc(time) = narxann4(InputVector', Ycalc(time-feedbackDelays), ...
                           Bias1, WtsInput, WtsContext, Bias2, WtsL2);
end
Notice that InputVector is in reverse chronological order. For example, if inputDelays=1:2 and time=3, then InputVector = [Xin(2) Xin(1)], i.e. the most recent sample comes first.
Once I figured this out, my simple stand-alone code replicated the maglev closed-loop simulation exactly. Hope this helps somebody.
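For completeness, the one-step prediction inside narxann4 boils down to something like this (a sketch, not my exact code; it assumes the default architecture with a tansig hidden layer and a linear output, and that the mapminmax preprocessing has been removed or already applied, as discussed in the comments below):
function y = narxann4(inputVec, feedbackVec, Bias1, WtsInput, WtsContext, Bias2, WtsL2)
% One-step NARX prediction from delayed inputs and delayed outputs.
% Both delay vectors must be ordered most-recent-first so that they line up
% with the columns of the weight matrices taken from the trained net
% (the relevant cells of net.IW, net.LW and net.b; which cell holds the
% feedback weights depends on open- vs closed-loop form).
inputVec    = inputVec(:);                                          % force column vectors
feedbackVec = feedbackVec(:);
hidden = tanh(WtsInput*inputVec + WtsContext*feedbackVec + Bias1);  % tansig is mathematically tanh
y = WtsL2*hidden + Bias2;                                           % purelin output layer
end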
Cal

More Answers (1)

Greg Heath 2014-5-9
Closed-loop designs have the irritating property of propagating errors.
Typically, just closing an open-loop design is not sufficient.
After using the closeloop function to convert from the open-loop form, you should train the closed-loop net starting with the weights obtained from the open-loop design.
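For example, something along these lines (a sketch; it assumes X, T and the trained open-loop net from the maglev example):
netc = closeloop(net);                      % closed-loop net inherits the open-loop weights
[xc,xic,aic,tc] = preparets(netc,X,{},T);   % re-prepare the data for the closed-loop topology
netc = train(netc,xc,tc,xic,aic);           % retrain starting from those weights
yc = netc(xc,xic,aic);
perform(netc,tc,yc)                         % closed-loop performance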
Search on "greg closeloop"; I probably have some examples in the NEWSGROUP and/or ANSWERS.
Hope this helps.
Thank you for formally accepting my answer
Greg
7 Comments
Calvin 2014-7-3
Yes, that helps, Greg. I forgot that I can disable the default mapminmax scaling preprocessing by using:
net.inputs{1}.processFcns = {'removeconstantrows'}; % mapminmax removed
net.inputs{2}.processFcns = {'removeconstantrows'}; % mapminmax removed
net.outputs{2}.processFcns = {'removeconstantrows'}; % mapminmax removed
Can you remind me what the first of the three lines above does? I suspect it may have something to do with open- vs closed-loop mode.
Thanks! Cal
Greg Heath 2014-7-4
It is a little confusing. An open-loop NARX net has 2 inputs and 1 output:
>> net.outputs
ans =
    []    [1x1 nnetOutput]
Therefore, the only output is net.outputs{2}.
>> net.inputs
ans =
    [1x1 nnetInput]
    [1x1 nnetInput]
Therefore, there are 2 inputs ( {1} and {2} ). However,
>> net.inputs{1}
ans = ... name: 'x'
          feedbackOutput: []   % the exogenous input x; not a feedback input
...
and
>> net.inputs{2}
ans = ... name: 'y'
          feedbackOutput: 2    % the feedback input: delayed copies of output {2}
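In other words, the three processFcns settings from your earlier comment map onto the net like this (an illustrative check on a fresh open-loop net):
net = narxnet(1:2,1:2,10);
net.inputs{1}.processFcns    % processing for the exogenous input x
net.inputs{2}.processFcns    % processing for the feedback input y (from output {2})
net.outputs{2}.processFcns   % processing for the single output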
Hope this helps.
Greg
