
Current and Legacy Option Names

Many option names changed in R2016a. optimset uses only legacy option names. optimoptions accepts both legacy and current names. However, when you set an option using a legacy name-value pair, optimoptions displays the current equivalent name. For example, the legacy TolX option is equivalent to the current StepTolerance option.

options = optimoptions('fsolve','TolX',1e-4)
options = 

  fsolve options:

   Options used by current Algorithm ('trust-region-dogleg'):
   (Other available algorithms: 'levenberg-marquardt', 'trust-region-reflective')

   Set properties:
               StepTolerance: 1.0000e-04

   Default properties:
                   Algorithm: 'trust-region-dogleg'
              CheckGradients: 0
                     Display: 'final'
    FiniteDifferenceStepSize: 'sqrt(eps)'
        FiniteDifferenceType: 'forward'
           FunctionTolerance: 1.0000e-06
      MaxFunctionEvaluations: '100*numberOfVariables'
               MaxIterations: 400
         OptimalityTolerance: 1.0000e-06
                   OutputFcn: []
                     PlotFcn: []
    SpecifyObjectiveGradient: 0
                    TypicalX: 'ones(numberOfVariables,1)'
                 UseParallel: 0

   Show options not used by current Algorithm ('trust-region-dogleg')
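To see the correspondence from the command line, the following minimal sketch contrasts the two interfaces (fsolve is used only as an example solver, and reading back StepTolerance is just one way to confirm the conversion):

% optimset recognizes only the legacy option names.
oldopts = optimset('TolX',1e-4,'MaxIter',400);

% optimoptions accepts the legacy name as well, but stores the value
% under the current name, so it can be read back as StepTolerance.
newopts = optimoptions('fsolve','TolX',1e-4);
newopts.StepTolerance    % 1.0000e-04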

The following two tables give the same information. The first table lists the options alphabetically by legacy option name; the second lists them alphabetically by current option name. The tables include only the names that differ, or that take different values, and show only the values that differ between the legacy and current options. For changes in Global Optimization Toolbox solvers, see Options Changes in R2016a (Global Optimization Toolbox).
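As a quick illustration of how the name and value changes in these tables line up, the following sketch sets the same gradient-related options through both interfaces; fminunc is chosen here only as an example solver.

% Legacy style: legacy names with 'on'/'off' values.
oldopts = optimset('GradObj','on','DerivativeCheck','on','MaxFunEvals',2000);

% Current style: current names with true/false values.
newopts = optimoptions('fminunc', ...
    'SpecifyObjectiveGradient',true, ...
    'CheckGradients',true, ...
    'MaxFunctionEvaluations',2000);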

Options Table in Legacy Name Order

| Old Name | Current Name | Old Values | Current Values |
|---|---|---|---|
| AlwaysHonorConstraints | HonorBounds | 'bounds', 'none' | true, false |
| BranchingRule | BranchRule | | |
| CutGenMaxIter | CutMaxIterations | | |
| DerivativeCheck | CheckGradients | 'on', 'off' | true, false |
| FinDiffRelStep | FiniteDifferenceStepSize | | |
| FinDiffType | FiniteDifferenceType | | |
| GoalsExactAchieve | EqualityGoalCount | | |
| GradConstr | SpecifyConstraintGradient | 'on', 'off' | true, false |
| GradObj | SpecifyObjectiveGradient | 'on', 'off' | true, false |
| Hessian | HessianApproximation | 'user-supplied', 'bfgs', 'lbfgs', 'fin-diff-grads', 'on', 'off' | 'bfgs', 'lbfgs', 'finite-difference'. Ignored when HessianFcn or HessianMultiplyFcn is nonempty. |
| HessFcn | HessianFcn | | |
| HessMult | HessianMultiplyFcn | | |
| HessUpdate (changed for fminunc in R2022a) | HessianApproximation | "bfgs", "lbfgs", {"lbfgs",Positive Integer}, "dfp", "steepdesc" | "bfgs", "lbfgs", {"lbfgs",Positive Integer} |
| IPPreprocess | IntegerPreprocess | | |
| Jacobian | SpecifyObjectiveGradient | | |
| JacobMult | JacobianMultiplyFcn | | |
| LPMaxIter | LPMaxIterations | | |
| MaxFunEvals | MaxFunctionEvaluations | | |
| MaxIter | MaxIterations | | |
| MaxNumFeasPoints | MaxFeasiblePoints | | |
| MinAbsMax | AbsoluteMaxObjectiveCount | | |
| PlotFcns | PlotFcn | | |
| RelObjThreshold | ObjectiveImprovementThreshold | | |
| RootLPMaxIter | RootLPMaxIterations | | |
| ScaleProblem | ScaleProblem | 'obj-and-constr', 'none' | true, false |
| SubproblemAlgorithm | SubproblemAlgorithm | 'cg', 'ldl-factorization' | 'cg', 'factorization' |
| TolCon | ConstraintTolerance | | |
| TolFun (usage 1) | OptimalityTolerance | | |
| TolFun (usage 2) | FunctionTolerance | | |
| TolFunLP | LPOptimalityTolerance | | |
| TolGapAbs | AbsoluteGapTolerance | | |
| TolGapRel | RelativeGapTolerance | | |
| TolInteger | IntegerTolerance | | |
| TolX | StepTolerance | | |
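Per the HessUpdate row above, fminunc accepts HessianApproximation in place of the legacy HessUpdate starting in R2022a. A minimal sketch, assuming R2022a or later; the 'lbfgs' value is chosen only as an example:

% Quasi-Newton fminunc with a limited-memory BFGS approximation,
% set through the current HessianApproximation option (R2022a or later).
opts = optimoptions('fminunc','Algorithm','quasi-newton', ...
    'HessianApproximation','lbfgs');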

Options Table in Current Name Order

| Current Name | Old Name | Current Values | Old Values |
|---|---|---|---|
| AbsoluteGapTolerance | TolGapAbs | | |
| AbsoluteMaxObjectiveCount | MinAbsMax | | |
| BranchRule | BranchingRule | | |
| CheckGradients | DerivativeCheck | true, false | 'on', 'off' |
| ConstraintTolerance | TolCon | | |
| CutMaxIterations | CutGenMaxIter | | |
| EqualityGoalCount | GoalsExactAchieve | | |
| FiniteDifferenceStepSize | FinDiffRelStep | | |
| FiniteDifferenceType | FinDiffType | | |
| FunctionTolerance | TolFun (usage 2) | | |
| HessianApproximation (fmincon) | Hessian | 'bfgs', 'lbfgs', 'finite-difference'. Ignored when HessianFcn or HessianMultiplyFcn is nonempty. | 'user-supplied', 'bfgs', 'lbfgs', 'fin-diff-grads', 'on', 'off' |
| HessianApproximation (fminunc; changed for fminunc in R2022a) | HessUpdate | "bfgs", "lbfgs", {"lbfgs",Positive Integer} | "bfgs", "lbfgs", {"lbfgs",Positive Integer}, "dfp", "steepdesc" |
| HessianFcn | HessFcn | | |
| HessianMultiplyFcn | HessMult | | |
| HonorBounds | AlwaysHonorConstraints | true, false | 'bounds', 'none' |
| IntegerPreprocess | IPPreprocess | | |
| IntegerTolerance | TolInteger | | |
| JacobianMultiplyFcn | JacobMult | | |
| LPMaxIterations | LPMaxIter | | |
| LPOptimalityTolerance | TolFunLP | | |
| MaxFeasiblePoints | MaxNumFeasPoints | | |
| MaxFunctionEvaluations | MaxFunEvals | | |
| MaxIterations | MaxIter | | |
| ObjectiveImprovementThreshold | RelObjThreshold | | |
| OptimalityTolerance | TolFun (usage 1) | | |
| PlotFcn | PlotFcns | | |
| RelativeGapTolerance | TolGapRel | | |
| RootLPMaxIterations | RootLPMaxIter | | |
| ScaleProblem | ScaleProblem | true, false | 'obj-and-constr', 'none' |
| SpecifyConstraintGradient | GradConstr | true, false | 'on', 'off' |
| SpecifyObjectiveGradient | GradObj, Jacobian | true, false | 'on', 'off' |
| StepTolerance | TolX | | |
| SubproblemAlgorithm | SubproblemAlgorithm | 'cg', 'factorization' | 'cg', 'ldl-factorization' |
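One detail visible in this table is that two legacy names, GradObj and Jacobian, both map to SpecifyObjectiveGradient, depending on whether the solver takes an objective gradient or a Jacobian. A brief sketch with fsolve, chosen only as an example of a Jacobian-based solver:

% The legacy Jacobian option maps to the current SpecifyObjectiveGradient.
opts = optimoptions('fsolve','Jacobian','on');
opts.SpecifyObjectiveGradient    % logical 1 (true)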
