fmincon with function handle input

9 views (last 30 days)
Hello, I have a question about the usage of the fmincon routine. I've read the documentation, and the call requires an explicit function with a certain number of variables, plus coefficients that determine the equality and inequality constraints. I wanted to ask if, instead of an explicit function, it's possible to pass in a function handle, such as a numerical integrator (for example, one that varies initial conditions to minimize the integration output). In the documentation, the only example that shows the use of a function handle still wraps an explicit function inside the handle.
  20 comments
Yousef
Yousef 2023-10-22
Edited: Walter Roberson 2023-10-22
Yes, you can definitely use a function handle as an input to `fmincon`, and the function that the handle refers to can be quite complex. In fact, many optimization problems involve non-trivial functions, including numerical integrators.
Let's break down your question a bit:
1. **Function Handle for Objective Function**: Your objective function can be a numerical integrator or any other complex routine. Here's a simple way to set it up:
```matlab
function cost = myObjectiveFunction(x)
% Here, x is the vector of variables you're optimizing.
% You can use x as initial conditions or parameters for your numerical integrator.
% ... Your complex routine, e.g., numerical integration
result = myIntegrator(x); % This is just a hypothetical function
% The objective is to minimize 'result'
cost = result;
end
% Call fmincon (pass [] for constraint arguments you are not using)
x0 = [initial_guesses]; % replace with your initial guesses
[x_optimal, fval] = fmincon(@myObjectiveFunction, x0, [], []);
```
2. **Equality and Inequality Constraints**: If you have constraints that involve complex operations or other integrations, you can use function handles for those as well.
```matlab
function [c, ceq] = myConstraints(x)
% c(x) <= 0 constraints
c = ...; % your inequality constraints
% ceq(x) = 0 constraints
ceq = ...; % your equality constraints
end
% Call fmincon
options = optimoptions('fmincon','SpecifyConstraintGradient',false);
[x_optimal, fval] = fmincon(@myObjectiveFunction, x0, [], [], [], [], [], [], @myConstraints, options);
```
3. **Using Integrators within the Optimization Routine**: If you're using MATLAB's built-in numerical integrators (like `ode45`, etc.), you can easily embed them inside your objective function or constraints:
```matlab
function cost = myObjectiveFunction(x)
[~, Y] = ode45(@(t,y) myODE(t, y, x), tspan, initial_conditions);
% Process Y to get the cost, if necessary
cost = ...; % some function of Y
end
```
Here, `myODE` would be another function that defines your differential equations. The point is, `fmincon` doesn't care how the function value is computed. It just needs a function handle to obtain values for given inputs.
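For illustration, a hypothetical `myODE` compatible with the `@(t,y) myODE(t, y, x)` wrapper above might look like this (the dynamics and coefficient roles are made up, not from the original post):
```matlab
function dydt = myODE(t, y, x)
% Hypothetical parameterized dynamics: the optimization variables x
% act as coefficients (e.g., damping x(1) and stiffness x(2)).
dydt = [y(2);
        -x(1)*y(2) - x(2)*y(1)];
end
```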
In essence, as long as you can define your problem in terms of functions (handles) that `fmincon` can call to get objective values, constraints, and optionally gradients, you can use any complex logic within those functions, including numerical integrators.
Walter Roberson
Walter Roberson 2023-10-22
"In essence, as long as you can define your problem in terms of functions (handles) that `fmincon` can call to get objective values, constraints, and optionally gradients, you can use any complex logic within those functions, including numerical integrators."
Both fmincon() and ode45() have the restriction that the functions being invoked by them must have continuous first derivatives (and second derivatives too in some cases). Therefore you cannot use any complex logic in the functions.
The particular system being modeled has four phases. If there is a smooth transition (continuous second derivatives) between each of the phases, then fmincon() and ode45() should be able to handle it, but if there is an abrupt transition anywhere in the mix then you would need to break up the logic into multiple parts, as sketched below.
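A minimal sketch of what breaking up the logic could look like, assuming hypothetical per-phase right-hand-side functions and known switch times (neither comes from the original problem):
```matlab
function xEnd = integrateInPhases(x0, tSwitch)
% Integrate each smooth phase separately and chain the states, so that
% no single ode45 call straddles an abrupt transition.
% tSwitch is a vector of 5 phase-boundary times for the 4 phases.
phases = {@phase1RHS, @phase2RHS, @phase3RHS, @phase4RHS}; % hypothetical RHS functions
x = x0(:);
for k = 1:numel(phases)
    [~, X] = ode45(phases{k}, [tSwitch(k) tSwitch(k+1)], x);
    x = X(end, :).';  % final state of this phase seeds the next phase
end
xEnd = x;
end
```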


Accepted Answer

Sam Chak
Sam Chak 2023-9-11
I'm unsure if this is what you want to minimize using fmincon(). This is just a simple example without constraints. It returns a solution that is very close to the true solution (the equilibrium point).
% search the initial states that minimize the norm using fmincon
initial_guess = [10 -10];
[x0sol, fval] = fmincon(@objfun, initial_guess)
Local minimum possible. Constraints satisfied. fmincon stopped because the size of the current step is less than the value of the step size tolerance and constraints are satisfied to within the value of the constraint tolerance.
x0sol = 1×2
1.0e-10 * 0.1119 -0.8979
fval = 1.1892e-10
% objective function <-- solve the Lander ODE with ode45 and find the norm
function J = objfun(x0)
tspan = [0 10];
init = [x0(1) x0(2)]; % initial states (equilibrium)
[t, x] = ode45(@odefcn, tspan, init);
J = norm(x);
end
% system <-- you already have this one (the Lander ODE)
function dxdt = odefcn(t, x)
A = [0 1; -1 -sqrt(3)];
dxdt = A*x;
end

More Answers (3)

Catalytic
Catalytic 2023-9-9
Edited: Catalytic 2023-9-9
No, the objective function that you give to fmincon can contain anything, but fmincon's algorithms will assume that the input-to-output mapping is differentiable and, in some cases, twice differentiable.
  1 comment
Hynod
Hynod 2023-9-11
Can you please take a look at my comment with the script? Maybe my question is clearer now.



Matt J
Matt J 2023-9-12
Edited: Matt J 2023-9-12
"The problem is that it is not an explicit function, so it seems that fmincon never stops finding the minimum"
It is possible that fmincon is struggling to find the minimum, but that shouldn't be related to the "explicitness" of the function. There certainly is no requirement that fun(p) be implemented in a single-line formula, if that's what worries you.
Note however, that the solution to an ODE can be a discontinuous (and therefore non-differentiable) function of the initial state. That can be a problem, since fmincon assumes fun(p) to be continuous and differentiable. Note also that numerical ODE solvers like ode113() have certain fragilities. This is why MathWorks has posted some relevant guidelines about problems involving ODEs, which you should read.
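As a practical side note (my sketch, not Matt J's code): when the objective wraps an ODE solver, a common recommendation along those lines is to make the integration more accurate than the optimizer's finite-difference steps. A hedged example, where `objfun` and `p0` are hypothetical placeholders:
```matlab
% Tighten the ODE tolerances so the objective is smooth at the scale of
% the finite-difference steps fmincon takes (values are illustrative).
odeOpts = odeset('RelTol', 1e-9, 'AbsTol', 1e-11);
fun  = @(p) objfun(p, odeOpts);   % hypothetical objective that forwards odeOpts to the ODE solver
opts = optimoptions('fmincon', 'FiniteDifferenceStepSize', 1e-4);
p    = fmincon(fun, p0, [], [], [], [], [], [], [], opts);  % p0: your initial guess
```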
"so it seems that fmincon never stops finding the minimum, how can I limitate the iterations?"
You can limit the number of iterations by setting the MaxIterations option,
opts=optimoptions('fmincon','MaxIterations',100);
p=fmincon(fun,____,opts)
Generally, though, if you have to cut fmincon off after some number of iterations, it means the optimization is failing.
  13 comments
Sam Chak
Sam Chak 2023-9-14
Hi @Hynod,
As you can see in my example below, we can work on a vector (state x) and return a scalar representing the Euclidean norm. This scalar can be used as the objective function value. However, in your code, you take the norm of the final state value, which also returns a scalar. Nevertheless, there is a significant difference in the physical meaning between the vector norm and the end-state norm.
% solving the 1st-order ODE
[t, x] = ode45(@(t, x) - x, [0 10], 1);
% check the size of vector, x(t)
Sx = size(x)
Sx = 1×2
53 1
% vector-based norm
Nx = norm(x)
Nx = 2.3371
% end-state norm
Ne = norm(x(end))
Ne = 4.5600e-05
% exponential decay solution
plot(t, x, 'linewidth', 1.5), grid on
Walter Roberson
Walter Roberson 2023-9-16
Unfortunately, at the moment I do not really understand the difference between gamultiobj and paretosearch. It looks like they use different algorithms, with gamultiobj() using genetic algorithm approaches and paretosearch() using pattern searching; see https://www.mathworks.com/help/gads/paretosearch-algorithm.html



Hynod
Hynod 2023-9-16
@Matt J @Catalytic @Torsten @Walter Roberson @Sam Chak I thank you all for the support. fmincon reached the minimum I wanted by minimizing the norm of the difference between the orbital-parameter vector I wanted and the vector that was actually computed by the integration.
  2 comments
Hynod
Hynod 2023-9-16
Edited: Hynod 2023-9-16
Indeed, I would be very curious about how paretosearch and gamultiobj work, because they do not seem to require any initial guess and I don't know why.
Walter Roberson
Walter Roberson 2023-10-22
gamultiobj(), for one, internally generates a number of starting points (number == "population size") based upon the upper and lower bounds. You can use the options for gamultiobj to supply a particular starting population, as in the sketch below.
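For what it's worth, a minimal sketch of both points (the two-objective function `myObjectives`, the bounds, and the seed population are made up):
```matlab
nvars = 2;                      % number of design variables
lb = [-5 -5];  ub = [5 5];      % gamultiobj spreads its initial population over these bounds
P0 = [0 0; 1 -1; -2 3];         % optionally seed part of the population yourself (rows = individuals)
opts = optimoptions('gamultiobj', 'InitialPopulationMatrix', P0);
[xPareto, fPareto] = gamultiobj(@myObjectives, nvars, [], [], [], [], lb, ub, opts);
```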

