Current and Legacy Option Names
Many option names changed in R2016a. optimset uses only legacy option names. optimoptions accepts both legacy names and current names. However, when you set an option using a legacy name-value pair, optimoptions displays its current equivalent name. For example, the legacy TolX option is equivalent to the current StepTolerance option.
```
options = optimoptions("fsolve",TolX=1e-4)

options =

  fsolve options:

   Options used by current Algorithm ('trust-region-dogleg'):
   (Other available algorithms: 'levenberg-marquardt', 'trust-region')

   Set properties:
                StepTolerance: 1.0000e-04

   Default properties:
                    Algorithm: 'trust-region-dogleg'
                      Display: 'final'
     FiniteDifferenceStepSize: 'sqrt(eps)'
         FiniteDifferenceType: 'forward'
            FunctionTolerance: 1.0000e-06
       MaxFunctionEvaluations: '100*numberOfVariables'
                MaxIterations: 400
          OptimalityTolerance: 1.0000e-06
                    OutputFcn: []
                      PlotFcn: []
     SpecifyObjectiveGradient: 0
                     TypicalX: 'ones(numberOfVariables,1)'
                  UseParallel: 0

   Show options not used by current Algorithm ('trust-region-dogleg')
```

The following two tables provide the same information. The first table lists the options alphabetically by legacy name; the second lists them alphabetically by current name. The tables include only those names that differ, or that have different values, and list only the values that differ between the legacy and current options. For changes in Global Optimization Toolbox solvers, see Options Changes in R2016a (Global Optimization Toolbox).
Options Table, Ordered by Legacy Name
| Legacy Name | Current Name | Legacy Values | Current Values |
|---|---|---|---|
| AlwaysHonorConstraints | HonorBounds | "bounds", "none" | true, false |
| FinDiffRelStep | FiniteDifferenceStepSize | | |
| FinDiffType | FiniteDifferenceType | | |
| GoalsExactAchieve | EqualityGoalCount | | |
| GradConstr | SpecifyConstraintGradient | "on", "off" | true, false |
| GradObj | SpecifyObjectiveGradient | "on", "off" | true, false |
| Hessian | HessianApproximation | "user-supplied", "bfgs", "lbfgs", "fin-diff-grads", "on", "off" | "bfgs", "lbfgs", "finite-difference"; when Hessian is "user-supplied", use HessianFcn or HessianMultiplyFcn instead |
| HessFcn | HessianFcn | | |
| HessMult | HessianMultiplyFcn | | |
| HessUpdate (changed for fminunc in R2022a) | HessianApproximation | "bfgs", "lbfgs", {"lbfgs",Positive Integer}, "dfp", "steepdesc" | "bfgs", "lbfgs", {"lbfgs",Positive Integer} |
| Jacobian | SpecifyObjectiveGradient | | |
| JacobMult | JacobianMultiplyFcn | | |
| MaxFunEvals | MaxFunctionEvaluations | | |
| MaxIter | MaxIterations | | |
| MinAbsMax | AbsoluteMaxObjectiveCount | | |
| PlotFcns | PlotFcn | | |
| ScaleProblem | ScaleProblem | "obj-and-constr", "none" | true, false |
| SubproblemAlgorithm | SubproblemAlgorithm | "cg", "ldl-factorization" | "cg", "factorization" |
| TolCon | ConstraintTolerance | | |
| TolFun (usage 1) | OptimalityTolerance | | |
| TolFun (usage 2) | FunctionTolerance | | |
| TolGapAbs | AbsoluteGapTolerance | | |
| TolGapRel | RelativeGapTolerance | | |
| TolX | StepTolerance | | |
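For scripts that still build option sets with legacy names, the renaming in the table above amounts to a plain lookup. The following is a minimal sketch in Python (the helper name `rename_options` is hypothetical, and the mapping is transcribed from a few rows of the table, not exhaustive):

```python
# Legacy -> current option-name mapping, transcribed from the table above
# (abbreviated; extend with the remaining rows as needed).
LEGACY_TO_CURRENT = {
    "FinDiffRelStep": "FiniteDifferenceStepSize",
    "GradConstr": "SpecifyConstraintGradient",
    "GradObj": "SpecifyObjectiveGradient",
    "MaxFunEvals": "MaxFunctionEvaluations",
    "MaxIter": "MaxIterations",
    "PlotFcns": "PlotFcn",
    "TolCon": "ConstraintTolerance",
    "TolX": "StepTolerance",
}

def rename_options(opts):
    """Return a copy of opts with legacy option names replaced by current ones.

    Names already using the current spelling pass through unchanged, mirroring
    how optimoptions accepts both forms but displays the current name.
    """
    return {LEGACY_TO_CURRENT.get(name, name): value
            for name, value in opts.items()}
```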
Options Table, Ordered by Current Name
| Current Name | Legacy Name | Current Values | Legacy Values |
|---|---|---|---|
| AbsoluteGapTolerance | TolGapAbs | | |
| AbsoluteMaxObjectiveCount | MinAbsMax | | |
| ConstraintTolerance | TolCon | | |
| EqualityGoalCount | GoalsExactAchieve | | |
| FiniteDifferenceStepSize | FinDiffRelStep | | |
| FiniteDifferenceType | FinDiffType | | |
| FunctionTolerance | TolFun (usage 2) | | |
| HessianApproximation (fmincon) | Hessian | "bfgs", "lbfgs", "finite-difference"; when Hessian is "user-supplied", use HessianFcn or HessianMultiplyFcn instead | "user-supplied", "bfgs", "lbfgs", "fin-diff-grads", "on", "off" |
| HessianApproximation (fminunc) (changed for fminunc in R2022a) | HessUpdate | "bfgs", "lbfgs", {"lbfgs",Positive Integer} | "bfgs", "lbfgs", {"lbfgs",Positive Integer}, "dfp", "steepdesc" |
| HessianFcn | HessFcn | | |
| HessianMultiplyFcn | HessMult | | |
| HonorBounds | AlwaysHonorConstraints | true, false | "bounds", "none" |
| JacobianMultiplyFcn | JacobMult | | |
| MaxFunctionEvaluations | MaxFunEvals | | |
| MaxIterations | MaxIter | | |
| OptimalityTolerance | TolFun (usage 1) | | |
| PlotFcn | PlotFcns | | |
| RelativeGapTolerance | TolGapRel | | |
| ScaleProblem | ScaleProblem | true, false | "obj-and-constr", "none" |
| SpecifyConstraintGradient | GradConstr | true, false | "on", "off" |
| SpecifyObjectiveGradient | GradObj or Jacobian | true, false | "on", "off" |
| StepTolerance | TolX | | |
| SubproblemAlgorithm | SubproblemAlgorithm | "cg", "factorization" | "cg", "ldl-factorization" |
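Several rows above change values as well as names: "on"/"off" becomes logical true/false, and a few string values are renamed outright (for example, "ldl-factorization" becomes "factorization"). That value translation can be sketched the same way (Python; `convert_value` is a hypothetical helper, and the map lists only the value pairs shown in the tables):

```python
# Per-option legacy-value -> current-value translation, taken from the
# Current Values / Legacy Values columns above (not exhaustive).
VALUE_MAP = {
    "HonorBounds": {"bounds": True, "none": False},
    "SpecifyConstraintGradient": {"on": True, "off": False},
    "SpecifyObjectiveGradient": {"on": True, "off": False},
    "ScaleProblem": {"obj-and-constr": True, "none": False},
    "SubproblemAlgorithm": {"ldl-factorization": "factorization"},
}

def convert_value(current_name, legacy_value):
    """Map a legacy option value to its current equivalent.

    Values with no listed change (and options not in VALUE_MAP) pass through
    unchanged, since the tables list only the values that differ.
    """
    return VALUE_MAP.get(current_name, {}).get(legacy_value, legacy_value)
```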