
I am working with the Case Study for the Credit Scorecard Analysis

Why is this model so poor? In validatemodel.html you cite the following reference: Basel Committee on Banking Supervision, "Studies on the Validation of Internal Rating Systems," Working Paper No. 14, February 2005.

The Basel Committee states that the Accuracy Ratio should be between 0.5 and 0.8, but this model has 0.32. The ROC curve indicates that the model is only slightly better than a random one. According to the paper, the area under the ROC curve should be as close to 1 as possible, and a naive model has an area of 0.5; this model has 0.66. Does the data have a high number of defaults?
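For reference, those numbers come from the validation step of the case study. A minimal sketch of that step, assuming the CreditCardData.mat dataset that ships with the Financial Toolbox example (variable names are taken from that dataset):

% Build and validate the example scorecard (assumes CreditCardData.mat,
% which contains a table 'data' with customer ID 'CustID' and a binary
% default indicator 'status', as in the shipped case study).
load CreditCardData

sc = creditscorecard(data, 'IDVar', 'CustID');
sc = autobinning(sc);     % automatic initial binning of the predictors
sc = fitmodel(sc);        % stepwise logistic regression on the WOE values

% Stats reports the Accuracy Ratio, Area under the ROC curve, and the
% KS statistic; 'Plot' also draws the CAP and ROC curves discussed above.
Stats = validatemodel(sc, 'Plot', {'cap','roc'});
disp(Stats)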

Answers (1)

Adam Barber on 13 Aug 2015
Hey Vernon,
The bottom line is that the data set just does not have great predictors. I think the point of the example is to show how to use the tools, not to show off a perfect data set.
Finding a good model typically requires some trial and error: manually adjusting the bins and re-fitting the logistic model (a rough sketch of that loop is below). The acceptable levels for the validation statistics also vary depending on the source and on the final use of the model. Overall, I agree that the model in the case study isn't great.
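As an illustration of that trial-and-error loop, continuing from a fitted creditscorecard object sc as in the sketch above (the cut points below are arbitrary, not tuned values):

% Inspect the current bins for one predictor, adjust its cut points,
% and re-fit the logistic model (cut points here are only illustrative).
bi = bininfo(sc, 'CustIncome');   % WOE, counts, and default odds per bin
disp(bi)

sc = modifybins(sc, 'CustIncome', 'CutPoints', [20000 35000 50000]);
sc = fitmodel(sc);                % re-fit after changing the binning

Stats = validatemodel(sc);        % re-check the Accuracy Ratio and AUROC
disp(Stats)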
One thing to note is that an AUROC too close to 1 may actually be considered suspicious. Very high AUROCs sometimes come from predictors that "know about the future", that is, information that implicitly or explicitly reveals whether a customer defaulted. If that predictor is a score from a different source, that may be acceptable, but in other cases it is a predictor that should not be allowed in the model.
Hope this helps,
-Adam
