fitrgp: hyperparameter optimization method (maximum likelihood & cross-validation)

28 views (last 30 days)
Hi,
I am wondering how fitrgp optimizes hyperparameters.
According to the instructions at this link: https://kr.mathworks.com/help/stats/fitrgp.html
gprMdl2 = fitrgp(x,y,'KernelFunction','squaredexponential',...
'KernelParameters',kparams0,'Sigma',sigma0);
The documentation for this code says: 'The marginal log likelihood that fitrgp maximizes to estimate GPR parameters has multiple local solutions.'
That means fitrgp uses maximum likelihood estimation (MLE) to optimize the hyperparameters.
But in this code,
gprMdl2 = fitrgp(x,y,'KernelFunction','squaredexponential',...
'OptimizeHyperparameters','auto','HyperparameterOptimizationOptions',...
struct('AcquisitionFunctionName','expected-improvement-plus'));
The documentation for this code says: 'Find hyperparameters that minimize five-fold cross-validation loss by using automatic hyperparameter optimization.'
That means fitrgp uses cross-validation (CV) to optimize the hyperparameters. Is this right?
So when training GPR models, there are both MLE and CV methods for optimizing the hyperparameters: a plain fitrgp call uses MLE, and if I pass the extra options shown above, fitrgp optimizes the hyperparameters using CV instead of MLE. Did I get that right?
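To make the question concrete, here is a small sketch of both calls on toy data (the x, y, kparams0, and sigma0 values here are made up, not taken from the documentation), plus where the fitted hyperparameters show up in the model object:

% Toy data, just for illustration
x = linspace(0,10,50)';
y = sin(x) + 0.2*randn(50,1);
kparams0 = [1, 1];   % initial [length scale, signal std] for the squared exponential kernel
sigma0 = 0.2;        % initial noise standard deviation

% Case 1: kernel parameters and Sigma estimated by maximizing the marginal log likelihood
gprMdl1 = fitrgp(x,y,'KernelFunction','squaredexponential',...
    'KernelParameters',kparams0,'Sigma',sigma0);
gprMdl1.KernelInformation.KernelParameters   % fitted kernel parameters
gprMdl1.Sigma                                % fitted noise standard deviation

% Case 2: hyperparameters chosen by Bayesian optimization of cross-validation loss
gprMdl2 = fitrgp(x,y,'KernelFunction','squaredexponential',...
    'OptimizeHyperparameters','auto','HyperparameterOptimizationOptions',...
    struct('AcquisitionFunctionName','expected-improvement-plus'));
gprMdl2.HyperparameterOptimizationResults    % record of the Bayesian optimization search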

Accepted Answer

Don Mathis 2019-1-10
The hyperparameters and the objective function are different in the two cases.
  • When you do not pass 'OptimizeHyperparameters', fitrgp estimates the GPR parameters (the kernel parameters and Sigma) by maximizing the marginal log likelihood, as described in the documentation you quoted.
  • When you do pass 'OptimizeHyperparameters', it will optimize the parameters you specify, which is some subset of {'BasisFunction','KernelFunction','KernelScale','Sigma','Standardize'}, using Bayesian optimization to minimize the out-of-sample MSE as measured by cross-validation.
There is some overlap between the two sets of parameters: Sigma is the same, and KernelScale corresponds to the length scales inside the kernel functions.
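For example, here is a rough sketch (with placeholder x and y) of choosing which hyperparameters the Bayesian optimization searches over, versus reading off the corresponding quantities from a maximum-likelihood fit:

% Bayesian optimization over KernelScale and Sigma only, minimizing cross-validated loss
gprMdlCV = fitrgp(x,y,'KernelFunction','squaredexponential',...
    'OptimizeHyperparameters',{'KernelScale','Sigma'},...
    'HyperparameterOptimizationOptions',...
    struct('AcquisitionFunctionName','expected-improvement-plus'));

% Corresponding quantities in the maximum-likelihood route: the kernel length
% scale is stored in KernelParameters, and Sigma is the noise standard deviation
gprMdlML = fitrgp(x,y,'KernelFunction','squaredexponential');
gprMdlML.KernelInformation.KernelParameters
gprMdlML.Sigma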
1 Comment
Mahmoud 2024-4-3
Thanks for the answer. Is there a way to specify how many cross-validation folds are used in the optimization process? I have a small data set (around 14 points), and using 5 of them in the cross-validation process removes a lot of the useful information. Any idea what the best way is to deal with such a small dataset? Also, what is the objective function being optimized in this case?
Thanks
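If I understand the 'HyperparameterOptimizationOptions' struct correctly, the number of folds can be changed through its KFold field, and a leave-one-out scheme can be supplied through a cvpartition object in its CVPartition field, which may suit a 14-point dataset better; as far as I know, the objective minimized for regression is based on the cross-validation loss (log(1 + loss) by default), but check the OptimizeHyperparameters documentation. A rough sketch with placeholder x and y:

% Use 7-fold instead of the default 5-fold cross-validation
gprMdlK = fitrgp(x,y,'KernelFunction','squaredexponential',...
    'OptimizeHyperparameters','auto',...
    'HyperparameterOptimizationOptions',struct('KFold',7));

% Leave-one-out cross-validation via a cvpartition object,
% so each training fold keeps all but one observation
c = cvpartition(numel(y),'LeaveOut');
gprMdlLOO = fitrgp(x,y,'KernelFunction','squaredexponential',...
    'OptimizeHyperparameters','auto',...
    'HyperparameterOptimizationOptions',struct('CVPartition',c));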


More Answers (0)
