Classification Learner app vs. training and testing a model programmatically: is there a hidden magical step in the Classification Learner app?

I am trying to find a good model to explain my dataset. The problem is that I want to do leave-one-person-out cross-validation, which is not available in the app. So I trained different models (e.g., tree, SVM, KNN, LDA) using functions such as fitctree, fitcsvm, fitcknn, and fitcdiscr. Following the leave-one-person-out procedure, I found an average classification accuracy of about 70% for the best model. However, when I use the app to model the data with 10-fold cross-validation, it gives much better accuracy, with a TPR and TNR of about 98%. It is really confusing why this is happening! I was wondering if there are some steps I am missing when I do the modeling programmatically. Or is there a way to do what the app does by writing scripts, and perhaps to customize the cross-validation scheme to leave-one-person-out?
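A minimal sketch of what I mean by the programmatic leave-one-person-out procedure (the names X, y, and subj are hypothetical placeholders for my predictor matrix, label vector, and per-observation subject IDs):

% Leave-one-person-out cross-validation sketch
subjects = unique(subj);
acc = zeros(numel(subjects), 1);
for i = 1:numel(subjects)
    testIdx = (subj == subjects(i));        % hold out one person entirely
    mdl  = fitcsvm(X(~testIdx, :), y(~testIdx));
    pred = predict(mdl, X(testIdx, :));
    acc(i) = mean(pred == y(testIdx));      % accuracy on the held-out person
end
fprintf('Average leave-one-person-out accuracy: %.1f%%\n', 100 * mean(acc));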

Answers (1)

Stephan
Stephan 2018-7-16
Edited: Stephan 2018-7-16
Hi,
A possible way to do this is to work with the app and then, once you get a good result, export the code to MATLAB. That lets you see the "magic" steps that are performed and modify the generated code if needed.
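For example, the app's 10-fold estimate can be reproduced in a script with crossval, and the partition can then be replaced by your own subject-wise loop. This is only a sketch under the assumption that you chose an SVM in the app; X and y are placeholder training data:

% Reproduce an app-style 10-fold cross-validation estimate
mdl = fitcsvm(X, y);                  % same model type as selected in the app
cvmdl = crossval(mdl, 'KFold', 10);   % 10-fold partitioned model
kfoldAcc = 1 - kfoldLoss(cvmdl);      % cross-validated accuracy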
There is some more information here.
I could imagine that this procedure will solve your problem.
Best regards
Stephan
6 Comments
RZM
RZM 2018-7-16
Thank you very much, Stephan. I would rather choose the one with higher performance, but I am afraid that this k-fold CV does not take inter-subject variability into account. In other words, I am not sure which of these two CV approaches has more power in terms of generalization.
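To illustrate the concern (a hypothetical check, with y and subj again standing in for the labels and subject IDs): a plain 10-fold partition splits individual observations, so samples from the same person can land in both the training and test folds, which may inflate accuracy when one person's samples are correlated:

% Check whether observation-level 10-fold CV mixes one person's samples
% across the training and test sets of a fold
c = cvpartition(y, 'KFold', 10);
leaky = intersect(subj(training(c, 1)), subj(test(c, 1)));
fprintf('%d subjects appear in both train and test of fold 1\n', numel(leaky));

Regards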


Release

R2018a
