
Bayesian Optimization Workflow

What Is Bayesian Optimization?

Optimization, in its most general form, is the process of locating a point that minimizes a real-valued function called the objective function. Bayesian optimization is the name of one such process. Bayesian optimization internally maintains a Gaussian process model of the objective function, and uses objective function evaluations to train the model. One innovation in Bayesian optimization is the use of an acquisition function, which the algorithm uses to determine the next point to evaluate. The acquisition function balances sampling at points that have low modeled objective function values against exploring areas that have not yet been modeled well. For details, see Bayesian Optimization Algorithm.

Bayesian optimization is part of Statistics and Machine Learning Toolbox™ because it is well-suited to optimizing hyperparameters of classification and regression algorithms. A hyperparameter is an internal parameter of a classifier or regression function, such as the box constraint of a support vector machine, or the learning rate of a robust classification ensemble. These parameters can strongly affect the performance of a classifier or regressor, and yet it is typically difficult or time-consuming to optimize them. See Bayesian Optimization Characteristics.

Typically, optimizing the hyperparameters means that you try to minimize the cross-validation loss of a classifier or regression model.
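
For instance, the following minimal sketch (assuming the built-in ionosphere data set and arbitrary hyperparameter values) computes the kind of cross-validation loss that the optimization tries to minimize:

    % Cross-validation loss for one fixed choice of SVM hyperparameters.
    % Bayesian optimization searches for values that make this loss small.
    load ionosphere
    rng default                             % for reproducibility
    CVMdl = fitcsvm(X,Y,'KFold',5,'BoxConstraint',1,'KernelScale',1);
    L = kfoldLoss(CVMdl)                    % objective value for this hyperparameter choice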

Ways to Perform Bayesian Optimization

You can perform a Bayesian optimization in several ways:

  • fitcauto and fitrauto — Pass predictor and response data to the fitcauto or fitrauto function to optimize across a selection of model types and hyperparameter values. Unlike other approaches, using fitcauto or fitrauto does not require you to specify a single model before the optimization; model selection is part of the optimization process. The optimization minimizes cross-validation loss, which is modeled using a multi-TreeBagger model in fitcauto and a multi-RegressionGP model in fitrauto, rather than a single Gaussian process regression model as used in other approaches. For a minimal calling sketch, see the example after this list. See Bayesian Optimization for fitcauto and Bayesian Optimization for fitrauto.

  • Classification Learner and Regression Learner apps — Choose Optimizable models in the machine learning apps and automatically tune their hyperparameter values by using Bayesian optimization. The optimization minimizes the model loss based on the selected validation options. This approach has fewer tuning options than using a fit function, but allows you to perform Bayesian optimization directly in the apps. See Hyperparameter Optimization in Classification Learner App and Hyperparameter Optimization in Regression Learner App.

  • Fit function — Include the OptimizeHyperparameters name-value argument in many fitting functions to apply Bayesian optimization automatically. The optimization minimizes cross-validation loss. This approach gives you fewer tuning options than using bayesopt, but enables you to perform Bayesian optimization more easily. See Bayesian Optimization Using a Fit Function.

  • bayesopt — Exert the most control over your optimization by calling bayesopt directly. This approach requires you to write an objective function, which does not have to represent cross-validation loss. See Bayesian Optimization Using bayesopt.
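
For example, the fitcauto approach in the first item needs only predictor and response data. This is a minimal sketch, assuming the built-in fisheriris data set; the optimization option values are illustrative, not prescriptive.

    % Let fitcauto choose both the model type and its hyperparameter values.
    load fisheriris
    rng default                             % for reproducibility
    Mdl = fitcauto(meas,species, ...
        'HyperparameterOptimizationOptions', ...
        struct('MaxObjectiveEvaluations',30,'ShowPlots',false));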

Bayesian Optimization Using a Fit Function

To perform Bayesian optimization using the error in a cross-validated response or the compact size of the model as the objective, follow these steps.

  1. Choose your classification or regression solver among the fit functions that accept the OptimizeHyperparameters name-value argument.

  2. Decide on the hyperparameters to optimize, and pass them in the OptimizeHyperparameters name-value argument. For each fit function, you can choose from a set of hyperparameters. See Eligible Hyperparameters for Fit Functions, or use the hyperparameters function, or consult the fit function reference page. For a sketch of calling hyperparameters, see the example after these steps.

    You can pass a cell array of parameter names. You can also set 'auto' as the OptimizeHyperparameters value, which chooses a typical set of hyperparameters to optimize, or 'all' to optimize all available parameters.

  3. For ensemble fit functions fitcecoc, fitcensemble, and fitrensemble, also include parameters of the weak learners in the OptimizeHyperparameters cell array.

  4. Optionally, create an options structure or object for the HyperparameterOptimizationOptions name-value argument. For example, you can specify to optimize on the cross-validated loss (default) or the compact size of the model. You can also perform a set of optimization problems that have the same options but different constraint bounds. For more information, see the HyperparameterOptimizationOptions name-value argument description on the fit function reference pages.

  5. Call the fit function with the appropriate name-value arguments.
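
As mentioned in step 2, the hyperparameters function lists the parameters that a fit function can optimize. A minimal sketch, assuming the built-in ionosphere data set:

    % List the optimizable hyperparameters for fitcsvm on this data.
    load ionosphere
    params = hyperparameters('fitcsvm',X,Y);   % array of optimizableVariable objects
    disp({params.Name}')                       % names of the eligible hyperparameters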

For examples, see Optimize Classifier Fit Using Bayesian Optimization and Optimize a Boosted Regression Ensemble. Also, every fit function reference page contains a Bayesian optimization example.
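
The following minimal sketch puts the steps together for fitcsvm, assuming the built-in ionosphere data set; the hyperparameter choices and option values are illustrative.

    % Steps 1-2: choose fitcsvm and the hyperparameters to optimize.
    % Step 4: set a few optimization options.
    % Step 5: call the fit function; the optimization minimizes
    % cross-validation loss (5-fold by default).
    load ionosphere
    rng default                             % for reproducibility
    Mdl = fitcsvm(X,Y, ...
        'OptimizeHyperparameters',{'BoxConstraint','KernelScale'}, ...
        'HyperparameterOptimizationOptions', ...
        struct('MaxObjectiveEvaluations',30,'ShowPlots',false));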

Bayesian Optimization Using bayesopt

To perform a Bayesian optimization using bayesopt, follow these steps.

  1. Prepare your variables. See Variables for a Bayesian Optimization.

  2. Create your objective function. See Bayesian Optimization Objective Functions. If necessary, create constraints, too. See Constraints in Bayesian Optimization. To include extra parameters in an objective function, see Parameterizing Functions.

  3. Decide on options, meaning the bayesopt name-value arguments. You are not required to pass any options to bayesopt, but you typically do, especially when trying to improve a solution.

  4. Call bayesopt.

  5. Examine the solution. You can decide to resume the optimization by using resume, or restart the optimization, usually with modified options.

For an example, see Optimize Cross-Validated Classifier Using bayesopt.
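
The following minimal sketch follows these steps for a cross-validated KNN classifier, assuming the built-in ionosphere data set; the variables, objective function, and options are illustrative choices.

    % Step 1: prepare the variables to optimize.
    num = optimizableVariable('n',[1,30],'Type','integer');
    dst = optimizableVariable('dst',{'chebychev','euclidean','minkowski'}, ...
        'Type','categorical');

    % Step 2: create an objective function that returns cross-validation loss.
    load ionosphere
    rng default                                   % for reproducibility
    c = cvpartition(Y,'KFold',5);                 % fixed partition so evaluations are comparable
    fun = @(x)kfoldLoss(fitcknn(X,Y,'CVPartition',c, ...
        'NumNeighbors',x.n,'Distance',char(x.dst)));

    % Steps 3-4: choose options and call bayesopt.
    results = bayesopt(fun,[num,dst], ...
        'AcquisitionFunctionName','expected-improvement-plus','Verbose',0);

    % Step 5: examine the solution, and resume if you want more evaluations.
    bestPoint(results)
    % results = resume(results,'MaxObjectiveEvaluations',10);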

Bayesian Optimization Characteristics

Bayesian optimization algorithms are best suited to these problem types.

Low dimension

Bayesian optimization works best in a low number of dimensions, typically 10 or fewer. While Bayesian optimization can solve some problems with a few dozen variables, it is not recommended for dimensions higher than about 50.

Expensive objective

Bayesian optimization is designed for objective functions that are slow to evaluate. It has considerable overhead, typically several seconds for each iteration.

Low accuracy

Bayesian optimization does not necessarily give very accurate results. If you have a deterministic objective function, you can sometimes improve the accuracy by starting a standard optimization algorithm from the bayesopt solution, as in the sketch after this table.

Global solution

Bayesian optimization is a global technique. Unlike many other algorithms, to search for a global solution you do not have to start the algorithm from various initial points.

Hyperparameters

Bayesian optimization is well-suited to optimizing hyperparameters of another function. A hyperparameter is a parameter that controls the behavior of a function. For example, the fitcsvm function fits an SVM model to data. It has hyperparameters BoxConstraint and KernelScale for its 'rbf' KernelFunction. For an example of Bayesian optimization applied to hyperparameters, see Optimize Cross-Validated Classifier Using bayesopt.
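
As noted in the Low accuracy entry, you can refine a bayesopt solution with a standard optimizer when the objective is deterministic. This is a minimal sketch; results is a BayesianOptimization object, and myDeterministicObjective, x1, and x2 are hypothetical names for a two-variable problem.

    % Refine the best point found by bayesopt with a local, derivative-free optimizer.
    xbest = bestPoint(results);                     % table of the best observed point
    x0 = [xbest.x1, xbest.x2];                      % convert the table row to a vector
    fun = @(v)myDeterministicObjective(v(1),v(2));  % myDeterministicObjective is hypothetical
    [xrefined,fval] = fminsearch(fun,x0);           % start the local search from the bayesopt solution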

Parameters Available for Fit Functions

Eligible Hyperparameters for Fit Functions

fitcdiscr
    Delta, Gamma, DiscrimType

fitcecoc
    Coding
    Eligible fitcdiscr parameters for 'Learners','discriminant'
    Eligible fitckernel parameters for 'Learners','kernel'
    Eligible fitcknn parameters for 'Learners','knn'
    Eligible fitclinear parameters for 'Learners','linear'
    Eligible fitcsvm parameters for 'Learners','svm'
    Eligible fitctree parameters for 'Learners','tree'

fitcensemble
    Method, NumLearningCycles, LearnRate
    Eligible fitcdiscr parameters for 'Learners','discriminant'
    Eligible fitcknn parameters for 'Learners','knn'
    Eligible fitctree parameters for 'Learners','tree'

fitcgam
    InitialLearnRateForInteractions, InitialLearnRateForPredictors, Interactions, MaxNumSplitsPerInteraction, MaxNumSplitsPerPredictor, NumTreesPerInteraction, NumTreesPerPredictor

fitckernel
    Learner, KernelScale, Lambda, NumExpansionDimensions

fitcknn
    NumNeighbors, Distance, DistanceWeight, Exponent, Standardize

fitclinear
    Lambda, Learner, Regularization

fitcnb
    DistributionNames, Width, Kernel

fitcnet
    Activations, Lambda, LayerBiasesInitializer, LayerWeightsInitializer, LayerSizes, Standardize

fitcsvm
    BoxConstraint, KernelScale, KernelFunction, PolynomialOrder, Standardize

fitctree
    MinLeafSize, MaxNumSplits, SplitCriterion, NumVariablesToSample

fitrensemble
    Method, NumLearningCycles, LearnRate
    Eligible fitrtree parameters for 'Learners','tree': MinLeafSize, MaxNumSplits, NumVariablesToSample

fitrgam
    InitialLearnRateForInteractions, InitialLearnRateForPredictors, Interactions, MaxNumSplitsPerInteraction, MaxNumSplitsPerPredictor, NumTreesPerInteraction, NumTreesPerPredictor

fitrgp
    Sigma, BasisFunction, KernelFunction, KernelScale, Standardize

fitrkernel
    Learner, KernelScale, Lambda, NumExpansionDimensions, Epsilon

fitrlinear
    Lambda, Learner, Regularization

fitrnet
    Activations, Lambda, LayerBiasesInitializer, LayerWeightsInitializer, LayerSizes, Standardize

fitrsvm
    BoxConstraint, KernelScale, Epsilon, KernelFunction, PolynomialOrder, Standardize

fitrtree
    MinLeafSize, MaxNumSplits, NumVariablesToSample
