Classification Learner App vs. training and testing a model programmatically: is there any hidden magical step in the Classification Learner App?
I am trying to find a good model to explain my dataset. The problem is that I want to do leave-one-person-out cross-validation, which is not available in the App. So I trained different models (e.g. Tree, SVM, KNN, LDA) using functions such as fitctree, fitcsvm, fitcknn, and fitcdiscr. Following the leave-one-person-out procedure, the best model reaches an average classification accuracy of about 70%. However, when I model the same data in the App using 10-fold cross-validation, it reports much better results, with accuracy, TPR, and TNR around 98%. It is really confusing why this is happening! I was wondering if there are some steps I am missing when I do the modeling programmatically. Or is there any way to do what the App does by writing scripts, perhaps customizing the cross-validation scheme to leave-one-person-out?
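For reference, leave-one-person-out cross-validation can be scripted with a plain loop over subjects; no special toolbox feature is needed. The sketch below uses hypothetical variable names (X for the predictor matrix, y for the labels, subjectID marking which person each row belongs to) and fitcsvm as a stand-in for any of the fitc* functions. Note that the App's 10-fold scheme partitions individual rows, so samples from the same person can land in both the training and test folds, which is one plausible reason it reports much higher accuracy than a per-person split.

```matlab
% Leave-one-person-out cross-validation sketch.
% Hypothetical inputs: X (numObs-by-numFeatures), y (labels),
% subjectID (numObs-by-1, identifies the person for each row).
subjects = unique(subjectID);
acc = zeros(numel(subjects), 1);

for i = 1:numel(subjects)
    testIdx  = (subjectID == subjects(i));   % all rows of one person
    trainIdx = ~testIdx;                     % everyone else

    % Any fitc* function can be substituted here (fitctree, fitcknn, ...)
    mdl  = fitcsvm(X(trainIdx, :), y(trainIdx), 'Standardize', true);
    yhat = predict(mdl, X(testIdx, :));

    acc(i) = mean(yhat == y(testIdx));       % per-person accuracy
end

fprintf('Leave-one-person-out accuracy: %.1f%%\n', 100 * mean(acc));
```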
Answers (1)
Stephan
2018-7-16
Edited: Stephan
2018-7-16
Hi,
A possible way to do this is to work with the App and then, once you get a good result, export the code to MATLAB. This lets you see the "magic" steps that are performed and modify the generated code if needed.
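As a sanity check before exporting, the App's 10-fold validation can also be reproduced programmatically, which makes it easier to compare like with like. A minimal sketch, again assuming hypothetical X and y from the question:

```matlab
% Roughly what the Classification Learner App's 10-fold validation does
% (hypothetical inputs X and y; fitctree as an example model).
mdl   = fitctree(X, y);                 % train on all data
cvmdl = crossval(mdl, 'KFold', 10);     % 10-fold partition over rows
kfAcc = 1 - kfoldLoss(cvmdl);           % 10-fold validation accuracy

fprintf('10-fold accuracy: %.1f%%\n', 100 * kfAcc);
```

If this number matches the App and the leave-one-person-out loop gives ~70%, the gap comes from the validation scheme rather than from any hidden training step.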
I could imagine that this procedure will solve your problem.
Best regards
Stephan