The issue of optimization (minimization) of the average relative error between experimental and calculated data
79 views (last 30 days)
Show older comments
roborrr
2024-8-22
Hello,
I want to share a difficulty I am facing. Can someone help?
Problem statement:
There is an 'x' column containing the values of the independent variable and a 'y' column containing the experimental values of the dependent variable.
The following approximation model is considered:
y_calculate = A*x^B + C,
and based on this model an objective function is created, equal to the mean of the relative deviations between y and y_calculate:
error_function = mean(abs(y - y_calculate) ./ y) = mean(abs(y - (A*x.^B + C)) ./ y);
Our goal is to select the parameters A, B, C so that 'error_function' attains its global minimum.
I calculated the optimal values of A, B, C and got:
A = 85.5880, B = -0.0460, C = 4.8824,
at which the error function value is 0.0285.
But I know in advance specific values of A, B, C:
A = 1005.6335852931, B = -1.59745963925582, C = 73.54149744754400,
at which the error function value is 0.002680472178434,
which is much better than the optimized result.
Below is the code with visualization, which confirms the above.
clear
close all
% Data
x = [7.3392, 14.6784, 22.0176, 29.3436, 36.6828, 44.0088, 51.3348, 58.674, 66, 73.3392, 80.6652, 88.0044, 95.3304, 102.6696, 109.9956, 117.3348, 124.6608, 132];
y = [115.1079, 87.7698, 80.5755, 78.1611, 76.5743, 75.7074, 74.9375, 74.9453, 74.59, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990];
% Initial guesses for parameters A, B, C
initial_guess = [1, 1, 1];
% Error function
error_function = @(params) mean(abs(y - (params(1) * x.^params(2) + params(3))) ./ y);
% Optimization of parameters
optimized_params = fminsearch(error_function, initial_guess);
% Results of optimization
A_optimized = optimized_params(1);
B_optimized = optimized_params(2);
C_optimized = optimized_params(3);
% Calculation of the fitted function for optimized parameters
y_calculate_optimized = A_optimized * x.^B_optimized + C_optimized;
% Calculate and display the error function value for optimized parameters
value_error_optimized = error_function(optimized_params);
fprintf('Optimized parameters:\nA = %.4f\nB = %.4f\nC = %.4f\n', A_optimized, B_optimized, C_optimized);
fprintf(' error function value for optimized parameters: %.4f\n', value_error_optimized);
% Other specific parameters A, B, C
A_specific = 1005.63358529310;
B_specific = -1.59745963925582;
C_specific = 73.541497447544;
% Calculation of the fitted function for specific parameters
y_calculate_specific = A_specific * x.^B_specific + C_specific;
% Calculate and display the error function value for specific parameters
value_error_specific = error_function([A_specific, B_specific, C_specific]);
fprintf('Specific parameters:\nA = %.10f\nB = %.14f\nC = %.14f\n', A_specific, B_specific, C_specific);
fprintf(' error function value for specific parameters: %.4f\n', value_error_specific);
% Visualization
figure;
plot(x, y, 'bo-', 'DisplayName', 'Experimental data');
hold on;
plot(x, y_calculate_optimized, 'r--', 'DisplayName', 'Fitted model (Optimized)');
plot(x, y_calculate_specific, 'g-.', 'DisplayName', 'Fitted model (Specific)');
xlabel('x');
ylabel('y');
legend('Location', 'best');
title('Approximation of experimental data');
grid on;
Obviously, my optimization code does not lead to the global minimum of the objective function, since there is a better approximation for the specific values of A, B, C. Maybe this is caused by an arbitrary choice of the initial values A = 1, B = 1, C = 1, so my code gets stuck in a local minimum?
Who can write code that selects the parameters A, B, C so as to reach the global minimum of the objective function 'error_function', for any initial values of A, B, C? Test criterion: the value of 'error_function' should be no worse (that is, no larger) than 0.002680472178434, which is obtained with the specific values A = 1005.6335852931, B = -1.59745963925582, C = 73.54149744754400.
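For reference, a minimal multi-start sketch along these lines (it assumes the x, y data and error_function from the script above are in the workspace; the ranges used for the random starting guesses are arbitrary assumptions, and restarting fminsearch from several random points reduces, but does not eliminate, the risk of ending in a local minimum):
% Multi-start sketch: restart fminsearch from random initial guesses and keep the best result
rng(0);                                          % reproducible restarts
best_fval = inf; best_params = [];
for k = 1:50
    guess = [1000*rand, -3*rand, 100*rand];      % random [A, B, C] starting point (assumed ranges)
    [p, fv] = fminsearch(error_function, guess, optimset('MaxFunEvals', 5000));
    if fv < best_fval
        best_fval = fv; best_params = p;
    end
end
fprintf('Best multi-start result: A = %.4f, B = %.4f, C = %.4f, error = %.6f\n', ...
    best_params, best_fval);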
Accepted Answer
Matt J
2024-8-22
Edited: Matt J
2024-9-5
If you download minL1lin from the File Exchange, then you can do the following:
% Data
x = [7.3392, 14.6784, 22.0176, 29.3436, 36.6828, 44.0088, 51.3348, 58.674, 66, 73.3392, 80.6652, 88.0044, 95.3304, 102.6696, 109.9956, 117.3348, 124.6608, 132];
y = [115.1079, 87.7698, 80.5755, 78.1611, 76.5743, 75.7074, 74.9375, 74.9453, 74.59, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990];
% Initial guess for parameter B
initial_guess = 1;
% Error function
error_function = @(B) objFun(B, x,y);
% Optimization of parameters
B = fminsearch(error_function, initial_guess);
[fval,ABC]=error_function(B);
[A,B,C]=deal(ABC{:})
fval
function [fval,p]=objFun(theta, x,y)
arguments
theta; x (:,1); y(:,1);
end
B=theta;
d=ones(size(y));
Q=[x.^B, d]./y;
[AC,fval]=minL1lin(Q,d,[],[],[],[],[],[],[],optimoptions('linprog','Display','off'));
fval=fval/numel(y);
A=AC(1); C=AC(2);
p={A, B, C};
end
42 Comments
roborrr
2024-8-23
Edited: roborrr
2024-8-23
Matt J, thank you for your response.
I ran your code in my MATLAB and saw the following error:
"Error: Line: 26 Column: 2
Function definitions in a script must appear at the end of the file.
Move all statements after the "objFun" function definition to before the first local function definition."
As I found out, the reason for this error is that my version of MATLAB (R2018b) does not support the following construction:
arguments
B; x (:,1); y(:,1);
end
The arguments block was introduced in R2019b.
Please rework this part of the code so that it works in my version of MATLAB, R2018b.
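A possible R2018b-compatible rewrite of objFun, shown here as an untested sketch: it drops the arguments block and reshapes the inputs explicitly, the same way the later code in this thread does.
function [fval,p]=objFun(B, x, y)
% R2018b-compatible variant: reshape inputs instead of using an arguments block
x = x(:); y = y(:);
d = ones(size(y));
Q = [x.^B, d]./y;                 % y_calculate./y = Q*[A;C]
[AC,fval] = minL1lin(Q,d,[],[],[],[],[],[],[],optimoptions('linprog','Display','off'));
fval = fval/numel(y);             % mean of the absolute relative deviations
A = AC(1); C = AC(2);
p = {A, B, C};
end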
roborrr
2024-9-4
Edited: roborrr
2024-9-4
Dear Matt J, I checked the code you suggested and got a stunning result for the model
y=A*x^B+C
However, I did not understand much about how it works.
My goal is to check other models on the same experimental data x, y and choose the best model among them. I tried to do it myself, following your example, but I did not succeed and gave up.
Here are the models that I want to check:
Model 1
y_calculate = eta_inf + (eta_0 - eta_inf) / (1 + (lambda * x)^m);
In this model, two cases need to be considered:
1.1 - all parameters are optimized: eta_0, eta_inf, lambda, m
1.2 - we assume m = 1.5 is known and optimize only the parameters eta_0, eta_inf, lambda
Model 2
y_calculate = eta_inf + (eta_0 - eta_inf) / (1 + (lambda * x)^a)^((1 - n) / a);
In this model, as in Model 1, two cases are considered:
2.1 - all parameters are optimized: eta_0, eta_inf, lambda, n, a
2.2 - we assume the parameters eta_0, eta_inf, lambda, n are known and optimize only the parameter a.
Matt J
2024-9-4
Dear Matt J, I checked the code you suggested and got a stunning result for the model
Can I assume "stunning" means that it did what you want? If so, please Accept-click the answer to indicate that it resolved your question.
I tried to do it myself, following your example, but I did not succeed and gave up.
To see what went wrong, I would need to see what you tried.
all parameters need to be optimized - eta_inf , eta_inf , lambda , m
Here and elsewhere you have listed eta_inf twice, so I'm not sure what the model unknowns are.
roborrr
2024-9-5
Can I assume "stunning" means that it did what you want? If so, please Accept-click the answer to indicate that it resolved your question.
Yes, "stunning" does mean that you did the initial task perfectly, which I did not even expect, for which I am very grateful. I simply forgot to click the "Accept" button, for which I apologize.
Here and elsewhere you have listed eta_inf twice, so I'm not sure what the model unknowns are.
I made a mistake in that comment, where eta_inf was repeated twice; in place of the first eta_inf there should be eta_0. I edited the comment immediately; apparently you read the old version. Please read the new version of the previous comment.
To see what went wrong, I would need to see what you tried.
I didn't try anything because I couldn't figure out how it all worked. How do you solve nonlinear problems with 'minL1lin', which solves the linear problem C*x = d? I tried to figure it out, but I couldn't. Is it possible to write similar code, as you did for the model y = A*x^B + C, for the models I presented in the previous comment?
Matt J
2024-9-5
Edited: Matt J
2024-9-6
Your objective function is, in general,
mean( abs( y_calculate./y -1 ) );
Originally, you had
y_calculate./y = A*x.^B./y + C./y
              = ([x.^B, d]./y) * [A;C]
              = Q * [A;C]
where we have defined the matrices
d = ones(size(y));
Q = [x.^B, d]./y;
Now take one of your new models. It can be put in the same form:
y_calculate./y = A*1./(1 + (lambda * x).^m)./y + C./y
              = Q * [A;C]
but notice that Q now has the new form,
Q = [ 1./(1 + (lambda * x).^m), d]./y;
So,
function [fval,p]=objFun(theta, x,y)
x=x(:);y=y(:); theta=theta(:)';
lambda=theta(1); m=theta(2);
d=ones(size(y));
Q=[ 1./(1 + (lambda * x).^m), d]./y;
[AC,fval]=minL1lin(Q,d,[],[],[],[],[],[],[],optimoptions('linprog','Display','off'));
fval=fval/numel(y);
A=AC(1); C=AC(2);
p={A, theta, C};
end
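Following the same pattern, here is an untested sketch of how the objective for Model 2 from the earlier comment could look; eta_0 and eta_inf take the role of the linear parameters solved by minL1lin, while lambda, n and a are the nonlinear parameters passed in theta:
function [fval,p]=objFun2(theta, x, y)
% Sketch for Model 2: y_calculate = eta_inf + (eta_0 - eta_inf)./(1 + (lambda*x).^a).^((1-n)/a)
x=x(:); y=y(:); theta=theta(:)';
lambda=theta(1); n=theta(2); a=theta(3);
g = 1./(1 + (lambda*x).^a).^((1-n)/a);   % y_calculate = eta_0*g + eta_inf*(1-g)
d = ones(size(y));
Q = [g, 1-g]./y;                         % y_calculate./y = Q*[eta_0; eta_inf]
[E,fval]=minL1lin(Q,d,[],[],[],[],[],[],[],optimoptions('linprog','Display','off'));
fval=fval/numel(y);
eta_0=E(1); eta_inf=E(2);
p={eta_0, eta_inf, theta};
end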
roborrr
2024-9-6
Following your tips I wrote a program for the model
y_calculate = A * (1 + (lambda * x).^m) + C,
where it is necessary to find the best parameters
A, lambda, m, C
by optimizing the objective function
mean(abs(y_calculate - y) ./ y).
Here is the code that I wrote following your tips:
clear
% Data
x = [7.3392, 14.6784, 22.0176, 29.3436, 36.6828, 44.0088, 51.3348, 58.674, 66, 73.3392, 80.6652, 88.0044, 95.3304, 102.6696, 109.9956, 117.3348, 124.6608, 132];
y = [115.1079, 87.7698, 80.5755, 78.1611, 76.5743, 75.7074, 74.9375, 74.9453, 74.59, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990];
% Initial guess for parameter theta
initial_guess = 1;
% Error function
error_function = @(theta) objFun(theta, x,y);
% Optimization of parameters
theta = fminsearch(error_function, initial_guess);
[fval,AthetaC]=error_function(theta);
[A, theta,C]=deal(AthetaC{:})
fval
function [fval,p]=objFun(theta, x,y)
x=x(:);y=y(:); theta=theta(:)';
lambda=theta(1); m=theta(2);
d=ones(size(y));
Q=[ 1./(1 + (lambda * x).^m), d]./y;
[AC,fval]=minL1lin(Q,d,[],[],[],[],[],[],[],optimoptions('linprog','Display','off'));
fval=fval/numel(y);
A=AC(1); C=AC(2);
p={A, theta, C};
end
This code gives errors in my MATLAB. Where am I making mistakes?
Torsten
2024-9-6
You give an initial guess for only one parameter, but you try to access two parameters (lambda and m) in objFun.
Matt J
2024-9-6
Yes, an initial guess is needed for all parameters that y_calculated has a nonlinear dependence on, in this case lambda and m.
roborrr
2024-9-11
I corrected the code according to your tips and received:
clear
% Data
x = [7.3392, 14.6784, 22.0176, 29.3436, 36.6828, 44.0088, 51.3348, 58.674, 66, 73.3392, 80.6652, 88.0044, 95.3304, 102.6696, 109.9956, 117.3348, 124.6608, 132];
y = [115.1079, 87.7698, 80.5755, 78.1611, 76.5743, 75.7074, 74.9375, 74.9453, 74.59, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990];
% Initial guess for parameter theta
initial_guess = [1,1];
% Error function
error_function = @(theta) objFun(theta, x,y);
% Optimization of parameters
theta = fminsearch(error_function, initial_guess);
[fval,AthetaC]=error_function(theta);
[A, theta,C]=deal(AthetaC{:})
fval
function [fval,p]=objFun(theta, x,y)
x=x(:);y=y(:); theta=theta(:)';
lambda=theta(1); m=theta(2);
d=ones(size(y));
Q=[ 1./(1 + (lambda * x).^m), d]./y;
[AC,fval]=minL1lin(Q,d,[],[],[],[],[],[],[],optimoptions('linprog','Display','off'));
fval=fval/numel(y);
A=AC(1); C=AC(2);
p={A, theta, C};
end
For the values of the variables x, y specified in the code, the result is fine.
But I encountered values of x, y for which the code gives an error, for example the following:
x = [7.3392, 14.6784, 22.0176, 29.3436, 36.6828, 44.0088, 51.3348, 58.674, 66, 73.3392, 80.6652, 88.0044, 95.3304, 102.6696, 109.9956, 117.3348, 124.6608, 132];
y = [21.58079135, 17.26809355, 15.3488609, 14.1074809, 13.81896005, 13.43865625, 13.49266905, 13.44, 13.44, 13.5349172, 13.4323965, 13.439328, 13.36635835, 13.365, 13.29, 13.29, 13.51, 13.365];
the code gives the following error:
Error using linprog (line 373)
A must be a real matrix.
Error in comment_3255504>minL1lin (line 147)
[xz,~, varargout{3:nargout-1}]=linprog(f,A,b,Aeq,beq,LB,UB,args{:});
Error in comment_3255504>objFun (line 23)
[AC,fval]=minL1lin(Q,d,[],[],[],[],[],[],[],optimoptions('linprog','Display','off'));
Error in comment_3255504>@(theta)objFun(theta,x,y) (line 8)
error_function = @(theta) objFun(theta, x,y);
Error in fminsearch (line 336)
fxr = funfcn(x,varargin{:});
How can I make the code work correctly for all values of the x, y variables?
roborrr
2024-9-11
Use Q=[ 1./(1 + (lambda^2 * x).^m), d]./y;
great, this trick prevents complex numbers from appearing when negative lambda values are used. I modified your idea a bit and instead of 'lambda^2' I used 'abs(lambda)' and got the lambda value directly, without the need to square it.
Torsten
2024-9-11
I modified your idea a bit and instead of 'lambda^2' I used 'abs(lambda)' and got the lambda value directly, without the need to square it.
Usually, this is not a good idea since abs() is not differentiable, and most solvers for optimization need differentiability with respect to the parameters. But you are in luck: fminsearch does not belong to this class of solvers.
roborrr
2024-9-12
Usually, this is not a good idea since abs() is not differentiable, and most solvers for optimization need differentiability with respect to the parameters. But you are in luck: fminsearch does not belong to this class of solvers.
Thank you, I understand. I want the code to be universal and not depend on luck, so I will rework the code exactly as you advised.
And one more question: is it possible to impose restrictions on the parameters? For example, for the parameter C impose the restriction:
C > 0 and abs(C - min(y))/min(y) < 0.1
roborrr
2024-9-12
In the call to minL1lin,
minL1lin(C,d,A,b,Aeq,beq,lb,ub,x0,options)
define the lower bound (lb) for C as 0.9*min(y) and the upper bound (ub) for C as 1.1*min(y).
I already call 'minL1lin' once in the code:
[AC,fval]=minL1lin(Q,d,[],[],[],[],[],[],[],optimoptions('linprog','Display','off'));
Do I need to call 'minL1lin' again? Where in the code should I call minL1lin(C,d,A,b,Aeq,beq,lb,ub,x0,options)? Should I define lb and ub before or after the call?
Matt J
2024-9-12
No, you do not need to call minL1lin again. You must modify the call you have:
lb=[-inf,0.9*min(y)];
ub=[+inf,1.1*min(y)];
[AC,fval]=minL1lin(Q,d,[],[],[],[],lb,ub,[],optimoptions('linprog','Display','off'));
Torsten
2024-9-12
Edited: Torsten
2024-9-12
function [fval,p]=objFun(theta, x,y)
x=x(:);y=y(:); theta=theta(:)';
lambda=theta(1); m=theta(2);
d=ones(size(y));
Q=[ 1./(1 + (lambda^2 * x).^m), d]./y;
%Q=[ 1./(1 + (lambda * x).^m), d]./y;
%Since you also want C>0, the below setting assumes min(y)>0.
lb = [-Inf,0.9*min(y)];
ub = [Inf,1.1*min(y)];
[AC,fval]=minL1lin(Q,d,[],[],[],[],lb,ub,[],optimoptions('linprog','Display','off'));
%[AC,fval]=minL1lin(Q,d,[],[],[],[],[],[],[],optimoptions('linprog','Display','off'));
fval=fval/numel(y);
A=AC(1); C=AC(2);
p={A, theta, C};
end
roborrr
2024-9-12
Edited: roborrr
2024-9-12
No, you do not need to call minL1lin again. You must modify the call you have:
lb=[-inf,0.9*min(y)];
ub=[+inf,1.1*min(y)];
[AC,fval]=minL1lin(Q,d,[],[],[],[],lb,ub,[],optimoptions('linprog','Display','off'));
Thank you for your answer. Please make changes to this design so that it also imposes restrictions on the rest of the parameters:
A>0, lambda0<lambda<lambda1, m0<m<m1
where the lambda0,lambda1,m0,m1 numbers will be preset in the code.
Torsten
2024-9-12
Edited: Torsten
2024-9-12
Please make changes to this design so that it also imposes restrictions on the rest of the parameters:
A>0, lambda0<lambda<lambda1, m0<m<m1
where the lambda0,lambda1,m0,m1 numbers will be preset in the code.
For A>0, set
lb=[0,0.9*min(y)];
ub=[+inf,1.1*min(y)];
For the other two, switch from "fminsearch" to an optimizer that allows you to set bounds on the parameters (e.g. "fmincon"). This way, the artificial trick of replacing lambda by lambda^2 can also be avoided.
roborrr
2024-9-12
For A>0, set
lb=[0,0.9*min(y)];
ub=[+inf,1.1*min(y)];
But we used lb, ub to impose restrictions on parameter C; where are the restrictions on parameter A? And can't the restrictions on all parameters (A, lambda, m) be written into the following construction:
[AC,fval]=minL1lin(Q,d,[],[],[],[],lb,ub,[],optimoptions('linprog','Display','off'));
as Matt J did for parameter C?
Torsten
2024-9-12
Edited: Torsten
2024-9-12
But we used lb, ub to impose restrictions on parameter C; where are the restrictions on parameter A?
The restrictions on A are the 0 and the +inf in
lb=[0,0.9*min(y)];
ub=[+inf,1.1*min(y)];
A and C are solved for by minL1lin - so you have to set bounds for them in the call to minL1lin.
lambda and m are solved for by fminsearch - so you have to set bounds for them in the call to fminsearch. But since fminsearch does not have the option to impose bounds on the parameters, you have to use a different optimizer here:
theta = fminsearch(error_function, initial_guess);
I suggested "fmincon":
theta = fmincon(error_function, initial_guess, [],[],[],[],[lambda0,m0],[lambda1,m1]);
You should try to understand the code you are using.
roborrr
2024-9-13
Edited: roborrr
2024-9-13
A and C are solved for by minL1lin - so you have to set bounds for them in the call to minL1lin.
We wrote the restrictions for parameter C (lb and ub) in place of the fifth and sixth empty square brackets:
[AC,fval]=minL1lin(Q,d,[],[],[],[],lb,ub,[],optimoptions('linprog','Display','off'));
And where do I write the restrictions for parameter A? I don't know the syntax for how to write it.
Torsten
2024-9-13
Edited: Torsten
2024-9-13
The code requires "fminsearchbnd" from the File Exchange:
clear
% Data
x = [7.3392, 14.6784, 22.0176, 29.3436, 36.6828, 44.0088, 51.3348, 58.674, 66, 73.3392, 80.6652, 88.0044, 95.3304, 102.6696, 109.9956, 117.3348, 124.6608, 132];
y = [115.1079, 87.7698, 80.5755, 78.1611, 76.5743, 75.7074, 74.9375, 74.9453, 74.59, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990];
% Initial guess for parameter theta
initial_guess = [1,1];
% Error function
error_function = @(theta) objFun(theta, x,y);
% Optimization of parameters
%Bounds on lambda and m
lambda0 = 0;
lambda1 = inf;
m0 = -inf;
m1 = inf;
%Call the optimizer
theta = fminsearchbnd(error_function, initial_guess,[lambda0,m0],[lambda1,m1]);
%theta = fminsearch(error_function, initial_guess);
[fval,AthetaC]=error_function(theta);
[A, theta,C]=deal(AthetaC{:})
A = 131.5017
theta = 1×2
0.2036 1.9539
C = 73.9207
fval
fval = 0.0015
function [fval,p]=objFun(theta, x,y)
x=x(:);y=y(:); theta=theta(:)';
lambda=theta(1); m=theta(2);
d=ones(size(y));
Q=[ 1./(1 + (lambda * x).^m), d]./y;
%Bounds on A and C
A0 = 0;
A1 = inf;
C0 = 0.9*min(y);
C1 = 1.1*min(y);
lb=[A0,C0];
ub=[A1,C1];
%Call the optimizer
[AC,fval]=minL1lin(Q,d,[],[],[],[],lb,ub,[],optimoptions('linprog','Display','off'));
%[AC,fval]=minL1lin(Q,d,[],[],[],[],[],[],[],optimoptions('linprog','Display','off'));
fval=fval/numel(y);
A=AC(1); C=AC(2);
p={A, theta, C};
end
DGM
2024-9-24
roborrr
2024-9-24
Edited: roborrr
2024-9-24
Thanks to Torsten for perfecting the code; it was truly amazing. Also thanks to Matt J, who wrote the initial version of the code, which was also excellent.
The main beauty of this code is that it greatly reduces the chance of getting stuck in local minima during optimization, which can happen when using the built-in MATLAB functions directly.
In this code, the dependent variable 'y' depends on only one independent variable 'x' (a one-dimensional problem). Is it possible to write code for a two-dimensional problem, where the dependent variable depends nonlinearly on two independent variables at once, using the same scheme you used for the one-dimensional problem, so that local minima are again avoided? Here is an example of such a problem:
The experiments were carried out on the following unique values of the variables: x_un of length 5 and y_un of length 10:
x_un = [0.4, 0.5, 0.6, 0.7, 0.8];
y_un = [283.15, 288.15, 290.65, 293.15, 295.15, 298.15, 300.65, 303.15, 313.15, 323.15];
Based on x_un and y_un, new variables x and y of length 50 (5*10) are created so that the pairs (x(i), y(i)), i = 1..50, exhaust all combinations of pairs from (x_un, y_un):
x = [0.4, 0.5, 0.6, 0.7, 0.8, 0.4, 0.5, 0.6, 0.7, 0.8, 0.4, 0.5, 0.6, 0.7, 0.8, ...
0.4, 0.5, 0.6, 0.7, 0.8, 0.4, 0.5, 0.6, 0.7, 0.8, 0.4, 0.5, 0.6, 0.7, 0.8, ...
0.4, 0.5, 0.6, 0.7, 0.8, 0.4, 0.5, 0.6, 0.7, 0.8, 0.4, 0.5, 0.6, 0.7, 0.8, ...
0.4, 0.5, 0.6, 0.7, 0.8];
y = [283.15, 283.15, 283.15, 283.15, 283.15, 288.15, 288.15, 288.15, 288.15, 288.15, ...
290.65, 290.65, 290.65, 290.65, 290.65, 293.15, 293.15, 293.15, 293.15, 293.15, ...
295.15, 295.15, 295.15, 295.15, 295.15, 298.15, 298.15, 298.15, 298.15, 298.15, ...
300.65, 300.65, 300.65, 300.65, 300.65, 303.15, 303.15, 303.15, 303.15, 303.15, ...
313.15, 313.15, 313.15, 313.15, 313.15, 323.15, 323.15, 323.15, 323.15, 323.15];
A variable z, also of length 50, is created so that z(i) contains the experimental value corresponding to (x(i), y(i)), i = 1..50:
z = [12.61323, 15.96843, 22.20326, 30.15335, 60.71139, 10.97453, 14.54062, 18.84638, 24.82916, ...
53.02121, 9.57412, 13.34543, 17.50987, 22.78405, 46.60942, 8.74127, 11.59837, 14.62923, ...
21.16607, 42.31916, 7.93542, 10.73998, 13.71756, 19.05432, 39.08479, 7.33475, 10.12368, ...
12.67264, 18.14767, 35.87183, 6.52910, 9.07151, 11.60863, 16.87576, 32.95969, 5.43683, ...
8.19517, 10.54588, 15.03453, 29.99945, 4.25370, 6.86437, 9.20690, 12.29640, 24.78083, ...
2.99082, 5.05884, 6.68976, 9.54918, 18.76493];
The approximation model is:
z_calculate = A * exp(E / (8.314 * y)) * (1 + k * x^n);
and based on this model an objective function is created, equal to the mean of the relative deviations between z and z_calculate:
error_function = mean(abs(z - z_calculate) ./ z)
Our goal is to select the parameters A, E, k, n so that 'error_function' attains its global minimum.
If possible, set lower and upper limits for the parameters:
lb_A = 10^(-3); ub_A = 100;
lb_E = 2000; ub_E = 50000;
lb_k = 0.1; ub_k = 10;
lb_n = 1; ub_n = 3;
roborrr
2024-9-26
and construct Q this time using x,y, and z.
In the linear part of the model z_calculate = A * exp(E/(8.314 * y)) * (1 + k * x^n) there is only the coefficient A; the free term C is missing. The nonlinear part of the model depends on two variables at once. How do I construct Q, and how do I replace the commands where C appears? I don't know the syntax for how to do this.
Matt J
2024-9-26
Edited: Matt J
2024-9-26
The role of minL1lin is to solve for the linear parameters, for a fixed and given set of values of the nonlinear parameters. So, if A is your only linear parameter, then Q should contain coefficients only for A. Bounds on the nonlinear parameters should be given to fminsearchbnd as before.
function [fval,p]=objFun(theta, x,y,z)
x=x(:);y=y(:);z=z(:); theta=theta(:)';
E=theta(1); k=theta(2); n=theta(3);
M=numel(z);
d=ones(M,1);
Q= exp(E./(8.314 .* y)) .* (1 + k .* x.^n) ./z;
lb_A=___;
ub_A=___;
%Solve the linear part of the model
[A,fval]=minL1lin(Q,d,[],[],[],[],lb_A,ub_A,[],optimoptions('linprog','Display','off'));
fval=fval/M;
p={A, theta};
end
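An untested driver sketch for this two-variable case: it assumes fminsearchbnd and minL1lin are on the path, that the x, y, z data from the earlier comment are in the workspace, that the ___ placeholders in objFun are filled with lb_A = 1e-3 and ub_A = 100 as stated there, and that the bounds on E, k and n are those given in the question; the initial guess is an arbitrary assumption.
% Driver sketch (untested); assumes x, y, z and objFun as defined above
error_function = @(theta) objFun(theta, x, y, z);
initial_guess  = [10000, 1, 2];             % arbitrary starting point for [E, k, n]
lb_theta = [2000, 0.1, 1];                  % bounds on E, k, n from the question
ub_theta = [50000, 10, 3];
theta = fminsearchbnd(error_function, initial_guess, lb_theta, ub_theta);
[fval, Atheta] = error_function(theta);
[A, theta] = deal(Atheta{:})
fval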
roborrr
2024-10-18
Moved: Torsten
2024-10-18
Thanks, Matt J, for the answer.
How do you optimize a model that doesn't have a linear part? For example, in the model
y = A.*(1 + (lambda * x).^m) + C,
where A and C are vectors whose values are already known, only the parameters of the nonlinear part, lambda and m, need to be optimized.
If we introduce the designation
y1 = (y - C)./A - 1,
then the original model reduces to the model
y1 = (lambda * x).^m, which doesn't have a linear part. What should we do in this case?
This model is a one-dimensional projection of a three-dimensional model onto the x-axis, so the values in the vector x are repeated, with different corresponding values of y.
roborrr
2024-10-19
Edited: roborrr
2024-10-19
Thanks Torsten for the answer, but I'm again confused by the code syntax. Here is the code you created for the model, which contains linear coefficients:
clear
% Data
x = [7.3392, 14.6784, 22.0176, 29.3436, 36.6828, 44.0088, 51.3348, 58.674, 66, 73.3392, 80.6652, 88.0044, 95.3304, 102.6696, 109.9956, 117.3348, 124.6608, 132];
y = [115.1079, 87.7698, 80.5755, 78.1611, 76.5743, 75.7074, 74.9375, 74.9453, 74.59, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990];
% Initial guess for parameter theta
initial_guess = [1,1];
% Error function
error_function = @(theta) objFun(theta, x,y);
% Optimization of parameters
%Bounds on lambda and m
lambda0 = 0;
lambda1 = inf;
m0 = -inf;
m1 = inf;
%Call the optimizer
theta = fminsearchbnd(error_function, initial_guess,[lambda0,m0],[lambda1,m1]);
%theta = fminsearch(error_function, initial_guess);
[fval,AthetaC]=error_function(theta);
[A, theta,C]=deal(AthetaC{:})
fval
function [fval,p]=objFun(theta, x,y)
x=x(:);y=y(:); theta=theta(:)';
lambda=theta(1); m=theta(2);
d=ones(size(y));
Q=[ 1./(1 + (lambda * x).^m), d]./y;
%Bounds on A and C
A0 = 0;
A1 = inf;
C0 = 0.9*min(y);
C1 = 1.1*min(y);
lb=[A0,C0];
ub=[A1,C1];
%Call the optimizer
[AC,fval]=minL1lin(Q,d,[],[],[],[],lb,ub,[],optimoptions('linprog','Display','off'));
%[AC,fval]=minL1lin(Q,d,[],[],[],[],[],[],[],optimoptions('linprog','Display','off'));
fval=fval/numel(y);
A=AC(1); C=AC(2);
p={A, theta, C};
end
As I understand it, you suggest replacing "minL1lin" with "fminsearchbnd" in the "objFun" subroutine. But "fminsearchbnd" is already used in the main part of the code to impose the bounds on the "lambda" and "m" parameters. I don't know the syntax for combining these two uses of "fminsearchbnd", or how to output the optimized values of "lambda" and "m" and the error function "fval", for a model that does not have a linear part.
If it is not too much trouble, please rework the code you sent, which was written for the model y = A.*(1 + (lambda * x).^m) + C, for the model y1 = (lambda * x).^m.
Torsten
2024-10-19
Edited: Torsten
2024-10-20
Why do you still solve for A and C if you say their values are known?
clear
% Data
x = [7.3392, 14.6784, 22.0176, 29.3436, 36.6828, 44.0088, 51.3348, 58.674, 66, 73.3392, 80.6652, 88.0044, 95.3304, 102.6696, 109.9956, 117.3348, 124.6608, 132];
y = [115.1079, 87.7698, 80.5755, 78.1611, 76.5743, 75.7074, 74.9375, 74.9453, 74.59, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990];
% Initial guess for parameter theta
initial_guess = [1,1];
% Error function
error_function = @(theta) objFun(theta, x,y);
% Optimization of parameters
%Bounds on lambda and m
lambda0 = 0;
lambda1 = inf;
m0 = -inf;
m1 = inf;
%Call the optimizer
theta = fminsearchbnd(error_function, initial_guess,[lambda0,m0],[lambda1,m1]);
fval=error_function(theta);
fval
function fval=objFun(theta, x,y)
x=x(:);y=y(:); theta=theta(:);
lambda=theta(1); m=theta(2);
% Set A and C to appropriate values!
A = 1;
C = 1;
fval = sum(abs((y - (A.* (1 + (lambda * x).^m)+ C))./y));
end
roborrr
2024-10-28,17:40
Moved: Torsten
2024-10-28,18:06
You guessed correctly: I wanted only lambda and m to be optimized, with A and C known, and you sent the code I wanted.
But I tested your code as follows:
First I solved the optimization problem for the model y = A.*(1+(lambda*x).^m)+C using minL1lin, assuming that A and C are unknown and should be optimized jointly with lambda and m:
clear
% Data
x = [7.3392, 14.6784, 22.0176, 29.3436, 36.6828, 44.0088, 51.3348, 58.674, 66, 73.3392, 80.6652, 88.0044, 95.3304, 102.6696, 109.9956, 117.3348, 124.6608, 132];
y = [115.1079, 87.7698, 80.5755, 78.1611, 76.5743, 75.7074, 74.9375, 74.9453, 74.59, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990];
initial_guess = [1,1];
% Error function
error_function = @(theta) objFun(theta, x,y);
% Optimization of parameters
%Bounds on lambda and m
lambda0 =0;
lambda1=inf;
m0 =0;
m1 =inf;
%Call the optimizer
theta = fminsearchbnd(error_function, initial_guess,[lambda0,m0],[lambda1,m1]);
%theta = fminsearch(error_function, initial_guess);
[fval,AthetaC]=error_function(theta);
[A, theta,C]=deal(AthetaC{:})
fval
function [fval,p]=objFun(theta, x,y)
x=x(:);y=y(:); theta=theta(:)';
lambda=theta(1); m=theta(2);
d=ones(size(y));
Q=[ 1./(1 + (lambda * x).^m), d]./y;
%Bounds on A and C
A0 = 0;
A1 = inf;
C0 = 0.2*min(y);
C1 = 1.1*min(y);
lb=[A0,C0];
ub=[A1,C1];
%Call the optimizer
[AC,fval]=minL1lin(Q,d,[],[],[],[],lb,ub,[],optimoptions('linprog','Display','off'));
%[AC,fval]=minL1lin(Q,d,[],[],[],[],[],[],[],optimoptions('linprog','Display','off'));
fval=fval/numel(y);
A=AC(1); C=AC(2);
p={A, theta, C};
end
and I got the following results:
A = 131.5161945547369
theta = 0.2037 1.9538
C = 73.9206126555175
fval = 0.0015
After that, I used the optimized values A = 131.5161945547369 and C = 73.9206126555175 (obtained using minL1lin) as the previously known coefficients and optimized only the lambda and m parameters using fminsearchbnd:
clear
% Data
x = [7.3392, 14.6784, 22.0176, 29.3436, 36.6828, 44.0088, 51.3348, 58.674, 66, 73.3392, 80.6652, 88.0044, 95.3304, 102.6696, 109.9956, 117.3348, 124.6608, 132];
y = [115.1079, 87.7698, 80.5755, 78.1611, 76.5743, 75.7074, 74.9375, 74.9453, 74.59, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990];
% Initial guess for parameter theta
initial_guess = [1,1];
% Error function
error_function = @(theta) objFun(theta, x,y);
% Optimization of parameters
%Bounds on lambda and m
lambda0 = 0;
lambda1 = inf;
m0 = 0;
m1 = inf;
%Call the optimizer
theta = fminsearchbnd(error_function, initial_guess,[lambda0,m0],[lambda1,m1]);
fval=error_function(theta);
fval
function fval=objFun(theta, x,y)
x=x(:);y=y(:); theta=theta(:);
lambda=theta(1); m=theta(2);
% Set A and C to appropriate values!
A = 131.5161945547369;
C = 73.9206126555175;
fval = sum(abs((y - (A.* (1 + (lambda * x).^m)+ C))./y));
fval= fval/numel(y);
end
and got a much worse result for fval:
fval = 3.3576
Why are the result of the first script, fval = 0.0015, and the result of the second script, fval = 3.3576, not equal? Where is the error in the second script?
Torsten
2024-10-28,19:04
Edited: Torsten
2024-10-29,0:03
I think the error is in the first script. If you take
y=A.*(1+(lambda*x).^m)+C
as your model equation, you should take
Q=[ 1 + (lambda * x).^m, d]./y;
instead of
Q=[ 1./(1 + (lambda * x).^m), d]./y;
shouldn't you?
Or you change the objective of your second script to
function fval=objFun(theta, x,y)
x=x(:);y=y(:); theta=theta(:);
lambda=theta(1); m=theta(2);
% Set A and C to appropriate values!
A = 131.5161945547369;
C = 73.9206126555175;
fval = sum(abs((y - (A./(1 + (lambda * x).^m)+ C))./y));
fval= fval/numel(y);
end
In this case, your model function is changed to
y=A./(1+(lambda*x).^m)+C
roborrr
2024-11-1,14:08
Thank you, dear Torsten, for correcting the error in my second script: instead of the model function y=A.*(1+(lambda*x).^m)+C, there should indeed be the function y=A./(1+(lambda*x).^m)+C:
clear
% Data
x = [7.3392, 14.6784, 22.0176, 29.3436, 36.6828, 44.0088, 51.3348, 58.674, 66, 73.3392, 80.6652, 88.0044, 95.3304, 102.6696, 109.9956, 117.3348, 124.6608, 132];
y = [115.1079, 87.7698, 80.5755, 78.1611, 76.5743, 75.7074, 74.9375, 74.9453, 74.59, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990];
% Initial guess for parameter theta
initial_guess = [1,1];
% Error function
error_function = @(theta) objFun(theta, x,y);
% Optimization of parameters
%Bounds on lambda and m
lambda0 =0;
lambda1=inf;
m0 =0;
m1 =inf;
A0 = 0;
A1 = inf;
C0 = 0.2*min(y);
C1 = 1.1*min(y);
%Call the optimizer
theta = fminsearchbnd(error_function, initial_guess,[lambda0,m0],[lambda1,m1]);
fval=error_function(theta);
fval
function fval=objFun(theta, x,y)
x=x(:);y=y(:); theta=theta(:);
lambda=theta(1); m=theta(2);
% Set A and C to appropriate values!
A = 131.5161945547369;
C = 73.9206126555175;
fval = sum(abs((y - (A./(1 + (lambda * x).^m)+ C))./y));
fval= fval/numel(y);
end
This construction does give the best result, fval = 0.015, when the best coefficients A and C are known in advance. But when I tried to use this construction to optimize all the coefficients A, C, lambda and m, I got a worse result, fval = 0.0240. Here is the script I wrote:
clear
% Data
x = [7.3392, 14.6784, 22.0176, 29.3436, 36.6828, 44.0088, 51.3348, 58.674, 66, 73.3392, 80.6652, 88.0044, 95.3304, 102.6696, 109.9956, 117.3348, 124.6608, 132];
y = [115.1079, 87.7698, 80.5755, 78.1611, 76.5743, 75.7074, 74.9375, 74.9453, 74.59, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990];
% Initial guess for parameter theta
initial_guess = [1,1,1,1];
% Error function
error_function = @(theta) objFun(theta, x,y);
% Optimization of parameters
%Bounds on lambda and m
lambda0 =0;
lambda1=inf;
m0 =0;
m1 =inf;
A0 = 0;
A1 = inf;
C0 = 0.2*min(y);
C1 = 1.1*min(y);
%Call the optimizer
theta = fminsearchbnd(error_function, initial_guess,[lambda0,m0,A0,C0],[lambda1,m1,A1,C1]);
fval=error_function(theta);
fval
function fval=objFun(theta, x,y)
x=x(:);y=y(:); theta=theta(:);
lambda=theta(1); m=theta(2);
A=theta(3);
C=theta(4);
% the best values of A and C that should be obtained in this script
% A = 131.5161945547369;
% C = 73.9206126555175;
fval = sum(abs((y - (A./(1 + (lambda * x).^m)+ C))./y));
fval= fval/numel(y);
end
Why does the value fval = 0.015 obtained from the first script not match the value fval = 0.0240 obtained from the second script? Where am I making a mistake in the second script?
How universal is this construction? Is it possible to use it for models of any complexity, or is the class of models limited?
Torsten
2024-11-1,14:22
Edited: Torsten
2024-11-1,14:30
You tried two solution strategies: Solving for all 4 parameters simultaneously (first strategy) and solving for the 2 linear and the 2 nonlinear coefficients separately with result coupling (second strategy). The second strategy turned out to be better and more robust - especially because no initial guesses for A and C are needed (your values A = 1 and C = 1 used for the first strategy most probably are too far off).
That's life in numerical analysis: usually, you have to go the way of trial and error for each problem you want to solve, and generalizations are not possible.
roborrr
2024-11-16,9:28
I use the construction:
[AC,fval]=minL1lin(Q,d,[],[],[],[],lb,ub,[],optimoptions('linprog','Display','off'));
and MATLAB gives the message:
Exiting: Maximum number of function evaluations has been exceeded
- increase MaxFunEvals option.
Current function value: 0.012493
What is the default maximum number of function evaluations, and how do I increase it?
Also, in some cases when using this construction MATLAB gives the following errors:
Error using linprog (line 346)
LINPROG encountered an internal error that caused the solution to lose feasibility. We are sorry for the inconvenience.
Please contact technical support for assistance with your problem, quoting the code "-2_4".
Error in minL1lin (line 116)
[xz,~, varargout{3:nargout-1}]=linprog(f,A,b,Aeq,beq,LB,UB,args{:});
Error in A_A_optimize_problem_A_B_C_lambda_m/objFun (line 42)
[AC,fval]=minL1lin(Q,d,[],[],[],[],lb,ub,[],optimoptions('linprog','Display','off'));
Error in A_A_optimize_problem_A_B_C_lambda_m>@(theta)objFun(theta,x,y) (line 9)
error_function = @(theta) objFun(theta, x,y);
Error in fminsearchbnd>intrafun (line 259)
fval = feval(params.fun,reshape(xtrans,params.xsize),params.args{:});
Error in fminsearch (line 324)
x(:) = xr; fxr = funfcn(x,varargin{:});
Error in fminsearchbnd (line 229)
[xu,fval,exitflag,output] = fminsearch(@intrafun,x0u,options,params);
Error in A_A_optimize_problem_A_B_C_lambda_m (line 19)
theta = fminsearchbnd(error_function, initial_guess,[lambda0,m0],[lambda1,m1]);
Matt J
2024-11-16,14:35
Error in A_A_optimize_problem_A_B_C_lambda_m/objFun (line 42)
[AC,fval]=minL1lin(Q,d,[],[],[],[],lb,ub,[],optimoptions('linprog','Display','off'));
We need to examine Q,d, lb,ub when this happens.
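Regarding the earlier "MaxFunEvals exceeded" message: that warning is produced by fminsearch / fminsearchbnd rather than by minL1lin, and fminsearch's default limit is 200 times the number of variables. An untested sketch of raising the limits, assuming fminsearchbnd accepts an optimset structure as its fifth argument the way its File Exchange documentation describes:
% Raise the fminsearch limits and pass the options through fminsearchbnd
opts = optimset('MaxFunEvals', 5000, 'MaxIter', 5000);
theta = fminsearchbnd(error_function, initial_guess, [lambda0, m0], [lambda1, m1], opts);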
More Answers (0)