How can I assess the reliability of my machine learning model on unseen data?
I have a model of a system that can detect certain abnormalities and then react accordingly.
Now I want to analyze how reliable our model is at predicting these abnormalities.
So far, I have manually analysed certain situations and assessed whether the system reacted correctly or incorrectly. This is very time-consuming, and I would like to know how we could adopt supervised machine learning to train a neural network to make this assessment automatically.
Accepted Answer
MathWorks Support Team
2018-6-14
In general, to create a machine learning model, you would:
1. Collect data.
2. Split the data into training, test and validation sets.
3. Train a machine learning model using both the training and test sets.
4. Validate your trained model on the validation set to verify that it can still reliably predict "unseen" data.
5. Use the model to predict real world data.
From the workflow above, you can see that we can only assess the accuracy of the model (before actually using it on real-world data) by evaluating the predictions it outputs on the validation set.
If the predictions on the validation set are within the accuracy you require, then you can use the model to predict real-world data, under the assumption that it will predict these new data with the same level of accuracy.
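For concreteness, here is a minimal MATLAB sketch of steps 2–4, not taken from this answer: it assumes you already have a labeled feature matrix X (one observation per row) and a two-class label vector Y, and it uses an SVM (fitcsvm) purely as an example classifier.

```matlab
% A rough sketch, not MathWorks' code: X is a labeled feature matrix
% (one observation per row) and Y is the corresponding label vector.
rng(0)                                       % reproducible split

% Step 2: split into ~70% training, ~15% validation, ~15% test.
n        = size(X, 1);
cv1      = cvpartition(n, 'HoldOut', 0.3);   % 70% / 30%
idxTrain = find(training(cv1));
idxHeld  = find(test(cv1));
cv2      = cvpartition(numel(idxHeld), 'HoldOut', 0.5);
idxTest  = idxHeld(training(cv2));           % half of the held-out 30%
idxVal   = idxHeld(test(cv2));               % the other half

% Step 3: train a classifier (an SVM, chosen only as an example).
mdl = fitcsvm(X(idxTrain, :), Y(idxTrain));

% Step 4: estimate accuracy on data the model has never seen.
valAccuracy = 1 - loss(mdl, X(idxVal, :), Y(idxVal));
fprintf('Validation accuracy: %.1f%%\n', 100 * valAccuracy)
```

If the validation accuracy meets your target, the same loss call on the other held-out indices gives a final check before using the model on real-world data.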
Yet the validation set itself first had to be manually collected and labeled.
Furthermore, it is counter-intuitive (if not impossible) to *automatically* assess the accuracy of your model on new, unseen (and unlabeled) data. If you had another model that could assess whether your existing model is predicting new data correctly or incorrectly, you would certainly have used that model instead.
More Answers (1)
Greg Heath
2018-6-22
THE ABOVE IS INCORRECT FOR NEURAL NETWORKS. FOR NNs:
DESIGN = TRAIN + VALIDATE
1. Collect data.
2. a. Split the data into DESIGN and TEST subsets.
b. Split the design data into TRAINING and VALIDATION subsets.
i. Weight values are calculated from the TRAINING subset.
ii. The VALIDATION subset is used to verify good performance on NONTRAINING DATA via "EARLY STOPPING":
If, DURING TRAINING, the VALIDATION subset error increases for 6 (default) CONSECUTIVE EPOCHS, TRAINING IS STOPPED!
FOR OBVIOUS REASONS I prefer the term "VALIDATION STOPPING"!
3. UNBIASED ESTIMATES of performance are obtained using the TEST subset, which, of course, was not used in any way for design.
4. MATLAB default values for the trn/val/tst split are 0.7/0.15/0.15.
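For illustration, here is a minimal sketch of this DESIGN = TRAIN + VALIDATE scheme in MATLAB's Deep Learning Toolbox (formerly Neural Network Toolbox), assuming inputs x (features in rows, samples in columns) and targets t; the hidden-layer size of 10 is a placeholder, not a recommendation.

```matlab
% A rough sketch of the design/test split described above, assuming
% inputs x (features in rows, samples in columns) and targets t.
net = patternnet(10);               % hidden-layer size 10 is a placeholder

% MATLAB's default random split: trn/val/tst = 0.7/0.15/0.15.
net.divideFcn              = 'dividerand';
net.divideParam.trainRatio = 0.70;
net.divideParam.valRatio   = 0.15;
net.divideParam.testRatio  = 0.15;

% "Validation stopping": training halts after max_fail (default 6)
% consecutive epochs of rising validation error.
net.trainParam.max_fail = 6;

[net, tr] = train(net, x, t);       % tr records which samples went where

% Unbiased performance estimate from the TEST subset only.
yTest    = net(x(:, tr.testInd));
testPerf = perform(net, t(:, tr.testInd), yTest);
fprintf('Test-subset performance: %.4f\n', testPerf)
```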
Hope this helps
Thank you for formally accepting my answer
Greg