lsqcurvefit: Function value and YDATA sizes are not equal.

Hi,
I am fitting two data sets from the same experiment to obtain nonlinear fit parameters that describe the whole system. How can I make the value returned by the model function the same size as the YDATA passed to lsqcurvefit?
function [C, D] = kinetics(k,t)
    x0 = [1;1;1;1];
    [T,Cv] = ode45(@DiffEq,t,x0);
    function dC = DiffEq(t,x)
        RHOcat = 0.23552;
        EPS = 0.528;
        xdot = zeros(4,1);
        xdot(1) = (RHOcat/EPS).*(-k(1).*x(1));
        xdot(2) = k(1).*x(1) - k(2).*x(2);
        xdot(3) = k(3).*x(4);
        xdot(4) = k(2).*x(2) - k(3).*x(4);
        dC = xdot;
    end
    C = Cv(:,1);
    D = Cv(:,3);
end
Y = load('Pt330.dat');
t = Y(6:end,1);
cNO = Y(6:end,3);
cNO2 = Y(6:end,4);
c = [cNO,cNO2];
tdata = Y(:,1);
c1data = Y(:,2);
c2data = Y(:,3);
c3data = Y(:,4);
k0 = [1;1;1;1];
options = optimoptions('lsqcurvefit','Algorithm','levenberg-marquardt');
lb = [];
ub = [];
[k,Rsdnrm,Rsd,ExFlg,OptmInfo,Lmda,Jmat] = lsqcurvefit(@kinetics,k0,t,c,lb,ub,options)
PS: Pt330.txt can be renamed as Pt330.dat to run the code.

Accepted Answer

Star Strider 2023-1-14
The output returned by ‘kinetics’ needs to be:
C = Cv(:,[1 3]);
in this instance.
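The reason for the original error is that, by default, lsqcurvefit only uses the first output of the model function, so the posted two-output ‘kinetics’ hands back a 286×1 vector while YDATA (‘c’) is 286×2. A minimal sketch of a size check before calling the solver, assuming ‘t’, ‘c’, and ‘k0’ are defined as in the question:
% Sanity check (sketch only, assuming t, c, and k0 from the question):
% the model output must be exactly the same size as YDATA.
Cmodel = kinetics(k0, t);      % lsqcurvefit only sees this first output
size(Cmodel)                   % 286x1 with the original function, 286x2 after the fix
size(c)                        % 286x2 (YDATA)
assert(isequal(size(Cmodel), size(c)), ...
    'Model output and YDATA sizes differ -- lsqcurvefit will error.')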
The code now works, however the fit is definitely not optimal. I have no idea what the ‘k’ magnitudes should be, so I am using rand here. This gets closer ([NO] is appropriate), however there is still something wrong. (I put lower bounds on the parameters because, in my experience, kinetic constants cannot be negative. If that is not true in this system, change ‘lb’ back to an empty matrix.) Also, check to be certain that ‘DiffEq’ is coded correctly.
Y=readmatrix('https://www.mathworks.com/matlabcentral/answers/uploaded_files/1263580/Pt330.txt');
t = Y(6:end,1);
cNO = Y(6:end,3);
cNO2 = Y(6:end,4);
c = [cNO,cNO2];
tdata = Y(:,1);
c1data = Y(:,2);
c2data = Y(:,3);
c3data = Y(:,4);
% k0=[1;1;1;1];
k0 = rand(4,1)*1E-2;
options = optimoptions('lsqcurvefit','Algorithm','levenberg-marquardt');
lb = zeros(4,1);
ub = [];
[k,Rsdnrm,Rsd,ExFlg,OptmInfo,Lmda,Jmat]=lsqcurvefit(@kinetics,k0,t,c,lb,ub,options)
Local minimum found. Optimization completed because the size of the gradient is less than 1e-4 times the value of the function tolerance.
k = 4×1
    0.5148
         0
         0
    0.0008
Rsdnrm = 274.3861
Rsd = 286×2
    0.0100    0.0050
   -0.1476    0.1029
    0.1019    0.4382
    0.1470    0.6181
    0.1092    0.7020
    0.0674    0.7532
    0.0316    0.7888
    0.0053    0.8188
   -0.0145    0.8409
   -0.0285    0.8573
ExFlg = 1
OptmInfo = struct with fields:
       iterations: 8
        funcCount: 45
         stepsize: 5.0029e+04
     cgiterations: []
    firstorderopt: 5.2194e-07
        algorithm: 'levenberg-marquardt'
          message: 'Local minimum found. Optimization completed because the size of the gradient is less than 1e-4 times the value of the function tolerance. <stopping criteria details> Optimization completed: The relative first-order optimality measure, 8.506067e-11, is less than 1e-4*options.FunctionTolerance = 1.000000e-10.'
Lmda = struct with fields:
    upper: [4×1 double]
    lower: [4×1 double]
Jmat = 572×4
         0         0         0         0
   -0.4185   -0.0001         0         0
   -0.6285    0.0004         0         0
   -0.7066    0.0015         0         0
   -0.7062    0.0000         0         0
   -0.6674   -0.0004         0         0
   -0.6011   -0.0004         0         0
   -0.5270   -0.0003         0         0
   -0.4550   -0.0001         0         0
   -0.3846   -0.0001         0         0
fprintf(1,'\tRate Constants:\n')
Rate Constants:
for k1 = 1:numel(k)
    fprintf(1, '\t\tK(%2d) = %8.5f\n', k1, k(k1))
end
K( 1) = 0.51481
K( 2) = 0.00000
K( 3) = 0.00000
K( 4) = 0.00080
tv = linspace(min(t), max(t));
Cfit = kinetics(k, tv);
figure
hd = plot(t, c, 'p');
for k1 = 1:size(c,2)
    CV(k1,:) = hd(k1).Color;
    hd(k1).MarkerFaceColor = CV(k1,:);
end
hold on
hlp = plot(tv, Cfit);
for k1 = 1:size(c,2)
    hlp(k1).Color = CV(k1,:);
end
hold off
grid
xlabel('Time')
ylabel('Concentration')
legend(hlp, compose('C_%d',1:size(c,2)), 'Location','best')
function C = kinetics(k,t)
    x0 = [1;1;1;1];
    [T,Cv] = ode45(@DiffEq,t,x0);
    function dC = DiffEq(t,x)
        RHOcat = 0.23552;
        EPS = 0.528;
        xdot = zeros(4,1);
        xdot(1) = (RHOcat/EPS).*(-k(1).*x(1));
        xdot(2) = k(1).*x(1) - k(2).*x(2);
        xdot(3) = k(3).*x(4);
        xdot(4) = k(2).*x(2) - k(3).*x(4);
        dC = xdot;
    end
    C = Cv(:,[1 3]);
end
I added the plot.
8 Comments
Matterazzi 2023-1-15
Two more questions:
1) Curiosity killed the "bear wearing a cap" but satisfaction brought it back :)
You are ahead of me in thought. But now I wonder how I can make lsqcurvefit 'see' the second output "D". I think it is needed to describe the system fully, i.e. for NO2.
I think I am wrong in treating lsqcurvefit like any other user-defined function of the form function [y1,...,yN] = myfun(x1,...,xM).
2) Is there an efficient way of evaluating lsqcurvefit's goodness of fit without changing to another solver? I can surely code the usual R2, but I am just checking, given your vast kinetics coding experience. I read that you can combine the residuals and the Jacobian to achieve this.
Star Strider 2023-1-15
1) The lsqcurvefit function only needs to ‘see’ the data it is fitting. Everything else is irrelevant to it. (I did not look closely at ‘DiffEq’ previously. I see only three kinetic constants, so only three need to be defined in ‘k0’ and returned in ‘k’.) If you are fitting data to all four differential equations, then return all the columns, as in ‘kinetics4’. The benefit of that is that it might help define ‘k(3)’, which is not otherwise used in fitting the first two equations. If there are data for them, this simply involves re-defining ‘C’ in ‘kinetics’.
I do not understand "I think I am wrong in treating lsqcurvefit as any user-defined function ...", so I cannot comment on it.
2) The lsqcurvefit function is the only function that I know of that will fit matrix dependent variables (there are others, such as ga, however they do not return the Jacobian and other outputs necessary to calculate statistics on the fit), so it is the only option here. (I do not have the Curve Fitting Toolbox, so I do not know if it could be used for goodness-of-fit estimations involving matrix dependent variables.)
It is possible to get parameter confidence intervals using the Statistics and Machine Learning Toolbox function nlparci, however not confidence intervals on the fitted regression lines, since the nlpredci function only works with nonlinear functions that return one dependent variable. The problem is that fitting a matrix of dependent variables significantly limits the options for calculating other statistics on the fit. (A sketch using the lsqcurvefit outputs from the answer above follows this comment.)
I have only calculated parameter statistics in kinetics fits of matrix dependent variables, not statistics on the fit, since that is a much more complicated problem, and I am usually only interested in the parameter statistics (they tell me everything I usually need to know). I am not certain how to calculate statistics on the fit in matrix dependent variable regressions, since I have never needed to do it.
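For illustration only (this sketch is not part of the original answer), here is one way to act on both points, assuming the outputs ‘k’, ‘Rsdnrm’, ‘Rsd’, and ‘Jmat’ and the data matrix ‘c’ from the lsqcurvefit call above, and that the Statistics and Machine Learning Toolbox is available for nlparci:
% Sketch only -- variable names assume the lsqcurvefit call above.
% If data exist for all four states, the model could instead return every
% column of the ODE solution, e.g. C = Cv(:,1:4), with c extended to match.
SSE = Rsdnrm;                            % lsqcurvefit returns sum(Rsd(:).^2)
SST = sum((c(:) - mean(c(:))).^2);       % total sum of squares of YDATA
Rsq = 1 - SSE/SST                        % overall coefficient of determination
ci  = nlparci(k, Rsd(:), 'jacobian', full(Jmat))   % 95% parameter confidence intervals
A per-species R² (one column of ‘Rsd’ and ‘c’ at a time) may be more informative than the pooled value if the two concentrations have very different scales.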


More Answers (0)
