K-Fold validation with hyperparameter optimization doesn't yield a ClassificationPartitionedModel

Hello, I have a problem performing cross-validation with the fitctree function when I enable hyperparameter optimization. Normally, I use
CTModel = fitctree(trainData,trainLabels,'KFold', 50 ,'ClassNames',[0 1]);
which yields a ClassificationPartitionedModel object. If I try to use the ('OptimizeHyperparameters','all') option, I cannot use the 'KFold', 50 pair and must use 'HyperparameterOptimizationOptions', struct('KFold',50) instead; otherwise I get the error "When optimizing parameters, validation arguments may only appear in the 'HyperparameterOptimizationOptions' argument". The problem is that with this option the call produces a plain ClassificationTree, not a ClassificationPartitionedModel. What should I do to get the same output as the first call while enabling hyperparameter optimization?
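For clarity, here is a minimal sketch of the optimized call I am describing (same trainData and trainLabels as above):
% Cross-validation is only used internally as the optimization objective,
% so this returns a single ClassificationTree fitted on all of trainData.
CTModel = fitctree(trainData, trainLabels, 'ClassNames', [0 1], ...
    'OptimizeHyperparameters', 'all', ...
    'HyperparameterOptimizationOptions', struct('KFold', 50));
class(CTModel)   % 'ClassificationTree', not 'ClassificationPartitionedModel'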

Accepted Answer

Don Mathis 2018-7-12
You will need to take the model you got and run crossval on it:
M = crossval(CTModel, 'KFold',50)
When you passed
'HyperparameterOptimizationOptions',struct('KFold',50),
you were telling fitctree to use the 50-fold cross-validation loss as the objective function of the optimization. After the optimization, fitctree refits the entire dataset using the best hyperparameters found and returns that single model.
To get a partitioned model that uses those hyperparameters (which are now stored inside the model), you need to run the 50-fold cross-validation again.
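A brief sketch of how the returned model can then be used (CTModel is the optimized tree returned by fitctree; the kfoldLoss call is just one way to inspect the partitioned model):
% crossval refits 50 folds using the hyperparameters stored in CTModel and
% returns a ClassificationPartitionedModel, like the original 'KFold' call did.
M = crossval(CTModel, 'KFold', 50);
cvLoss = kfoldLoss(M);   % 50-fold cross-validation classification error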

More Answers (1)

sanjeev kumar T M
Hello,
I am using SVR to develop a model for motor positioning, and I need help on how to use cross-validation to build a good model. I am using the following approaches to find a good model:
1) I trained the model without partitioning the data and later used the same data set for validation. During optimization I also minimized the 5-fold and 10-fold cross-validation loss on the complete data set.
2) I first partitioned the data set into training and test sets, then used the training set to develop a model with and without hyperparameter optimization, and finally used the test set to validate the model. In this case, when I use 5-fold or 10-fold cross-validation without optimization I get a high error. When I use Bayesian optimization for hyperparameter tuning, I then train the complete model with the tuned parameters and test it on the test set, but when I cross-validate on this data set I again get large errors. (A rough sketch of this procedure is shown below.)
I followed the same steps for both approaches with 2 to 10 folds, and in each case I get high losses after optimizing the respective model. However, when I compare the models, the 7-, 9- and 10-fold cross-validation losses are reduced when I use the tuned hyperparameters. Is the procedure I am following right or wrong? Can I select the better model by comparing the two conditions above, and is that a suitable way to choose the best model? Please, can anyone help me with this? Thank you.
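For reference, a rough sketch of the second procedure described above, assuming fitrsvm and a simple hold-out split (the variable names X and Y are illustrative):
% Hold out 20% of the data as a test set.
cvp    = cvpartition(numel(Y), 'HoldOut', 0.2);
Xtrain = X(training(cvp), :);  Ytrain = Y(training(cvp));
Xtest  = X(test(cvp), :);      Ytest  = Y(test(cvp));
% Bayesian optimization of the SVR hyperparameters, using 5-fold CV loss as the objective.
mdl = fitrsvm(Xtrain, Ytrain, ...
    'OptimizeHyperparameters', 'auto', ...
    'HyperparameterOptimizationOptions', struct('KFold', 5));
% Evaluate the tuned model on the held-out test set (mean squared error).
testMSE = loss(mdl, Xtest, Ytest);
% Separately cross-validate the tuned model on the training data.
cvmdl = crossval(mdl, 'KFold', 10);
cvMSE = kfoldLoss(cvmdl);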
