How to set up the matrix variables to use options = optimoptions('lsqcurvefit','Algorithm','levenberg-marquardt')

I have two vectors of experimental values, M1 and M2 (each 1x20), that correspond to two input vectors, K1 and K2 (each 1x20), respectively. The theoretical values of M1 and M2 (call them M1T and M2T) are related to the inputs such that M1T is a function of K1 and K2, and M2T is also a function of K1 and K2, through a set of coefficients a1 to a6.
I am trying to find those a1 to a6 coefficients by minimizing the error function Total Error = (M1 - M1T)^2 + (M2 - M2T)^2, using
options = optimoptions('lsqcurvefit','Algorithm','levenberg-marquardt'); however, I don't know how to set my variables up for this function (i.e., I don't know the appropriate coding syntax).
As an example, the inputs are:
M1 = [i1 i2 i3 ... i20], M2 = [i1 i2 i3 ... i20], K1 = [i1 i2 i3 ... i20], K2 = [i1 i2 i3 ... i20]
and the relationship between the theoretical values of M1 and M2 and the inputs K1 and K2 is:
M1T(i) = a1*K1(i) + a2*K1(i)*K2(i) + a3*K2(i) + a4*(K1(i))^2
M2T(i) = a2*K2(i) + a2*K1(i) + a5*(K2(i))^2 + a6*K1(i)*K2(i)

Accepted Answer

Star Strider 2024-7-28
I am not certain how ‘M1T’ and ‘M2T’ enter into this; however, with the ‘K’ values as the independent variables and the ‘M’ values as the dependent variables, this is how I would set this up. (I prefer column vectors and column-oriented matrices, so I transposed the original row vectors to column vectors here.)
Try this, with your actual data —
M1 = randn(1,20);                 % placeholder data: substitute your actual experimental vectors
M2 = randn(1,20);
K1 = randn(1,20);
K2 = randn(1,20);
% M1T = randn(1,20);
% M2T = randn(1,20);
M = [M1; M2].';                   % 20x2 matrix of dependent variables
K = [K1; K2].';                   % 20x2 matrix of independent variables
% MT = [M1T; M2T].';              % not needed for the fit (and M1T/M2T are undefined here)
% Model: column 1 = M1T, column 2 = M2T (note that a(2) appears in both, as in the question)
objfcn = @(a,K) [a(1).*K(:,1) + a(2).*K(:,1).*K(:,2) + a(3).*K(:,2) + a(4).*K(:,1).^2, ...
                 a(2).*K(:,2) + a(2).*K(:,1) + a(5).*K(:,2).^2 + a(6).*K(:,1).*K(:,2)];
A0 = rand(6,1);                   % random initial estimates for a1..a6
[A,Rsdnrm,Rsd,ExFlg,OptmInfo,Lmda,Jmat] = lsqcurvefit(objfcn, A0, K, M);
Local minimum found. Optimization completed because the size of the gradient is less than the value of the optimality tolerance.
Estimated_Parameters = A
Estimated_Parameters = 6x1
   -0.0124
   -0.2568
    0.2718
   -0.0384
    0.1853
    0.1014
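As a quick sanity check, you can evaluate the fitted model at the original K values and compare against M:
Mfit = objfcn(A, K);              % 20x2 fitted values: [M1T, M2T] columns
Rsd_check = Mfit - M;             % equals the Rsd output (lsqcurvefit returns fun(x,xdata)-ydata)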
I did not include the options call or options structure here, for simplicity. I usually let lsqcurvefit pick the appropriate algorithm.
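For completeness, wiring in the options call from the question would look something like the sketch below (untested here; note that the levenberg-marquardt algorithm in lsqcurvefit does not accept bound constraints):
options = optimoptions('lsqcurvefit','Algorithm','levenberg-marquardt');
% Pass empty bound arguments ([] for lb and ub) ahead of the options:
[A,Rsdnrm] = lsqcurvefit(objfcn, A0, K, M, [], [], options);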
23 comments
Torsten 2024-8-5
Squaring the difference would be appropriate for some optimisation functions such as fminsearch or fmincon, where it would be:
objfcn = @(Cff) [(M1.'-M1_Theory(Cff,K)).^2 (M2.'-M2_Theory(Cff,K)).^2];
inheriting ‘K’ from the workspace, and that would probably work (note the simple transpose operations denoted by .'). I did not test it with any of those functions.
For "fminsearch" or "fmincon", you would additionally have to sum over the squared differences, since those functions minimize a scalar objective.


