loss
Classification loss for classification ensemble model
Description
L = loss(ens,tbl,ResponseVarName) returns the Classification Loss L for the trained classification ensemble model ens using the predictor data in table tbl and the true class labels in tbl.ResponseVarName. The interpretation of L depends on the loss function (LossFun) and weighting scheme (Weights). In general, better classifiers yield smaller classification loss values. The default LossFun value is "classiferror" (misclassification rate in decimal).
L = loss(ens,X,Y) returns the classification loss for the ensemble ens using the predictor data in matrix X and the true class labels in Y.
L = loss(___,Name=Value) specifies options using one or more name-value arguments in addition to any of the input argument combinations in the previous syntaxes. For example, you can specify the indices of weak learners in the ensemble to use for calculating loss, specify a classification loss function, and perform computations in parallel.
Note
If the predictor data X or the predictor variables in tbl contain any missing values, the loss function might return NaN. For more details, see loss might return NaN for predictor data with missing values.
Examples
Estimate Classification Error
Load Fisher's iris data set.
load fisheriris
Train a classification ensemble of 100 decision trees using AdaBoostM2. Specify tree stumps as the weak learners.
t = templateTree(MaxNumSplits=1);
ens = fitcensemble(meas,species,Method="AdaBoostM2",Learners=t);
Estimate the classification error of the model using the training observations.
L = loss(ens,meas,species)
L = 0.0333
Alternatively, if ens is not compact, then you can estimate the training-sample classification error by passing ens to resubLoss.
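For instance, a minimal sketch (not part of the original example) that checks this equivalence; fitcensemble returns a full, non-compact ensemble, so resubLoss applies here:
% Training-sample loss computed from the stored training data;
% this matches the loss call above
Lresub = resubLoss(ens)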
Assess Performance of Ensemble of Boosted Trees
Create an ensemble of boosted trees and inspect the importance of each predictor. Using test data, assess the classification accuracy of the ensemble.
Load the arrhythmia data set. Determine the class representations in the data.
load arrhythmia
Y = categorical(Y);
tabulate(Y)
  Value    Count   Percent
      1      245     54.20%
      2       44      9.73%
      3       15      3.32%
      4       15      3.32%
      5       13      2.88%
      6       25      5.53%
      7        3      0.66%
      8        2      0.44%
      9        9      1.99%
     10       50     11.06%
     14        4      0.88%
     15        5      1.11%
     16       22      4.87%
The data set contains 16 classes, but not all classes are represented (for example, class 13). Most observations are classified as not having arrhythmia (class 1), so the class distribution is highly imbalanced.
Combine all observations with arrhythmia (classes 2 through 15) into one class. Remove those observations with an unknown arrhythmia status (class 16) from the data set.
idx = (Y ~= "16");
Y = Y(idx);
X = X(idx,:);
Y(Y ~= "1") = "WithArrhythmia";
Y(Y == "1") = "NoArrhythmia";
Y = removecats(Y);
Create a partition that evenly splits the data into training and test sets.
rng("default") % For reproducibility
cvp = cvpartition(Y,"Holdout",0.5);
idxTrain = training(cvp);
idxTest = test(cvp);
cvp is a cross-validation partition object that specifies the training and test sets.
Train an ensemble of 100 boosted classification trees using AdaBoostM1. Specify to use tree stumps as the weak learners. Also, because the data set contains missing values, specify to use surrogate splits.
t = templateTree("MaxNumSplits",1,"Surrogate","on");
numTrees = 100;
mdl = fitcensemble(X(idxTrain,:),Y(idxTrain),"Method","AdaBoostM1", ...
    "NumLearningCycles",numTrees,"Learners",t);
mdl is a trained ClassificationEnsemble model.
Inspect the importance measure for each predictor.
predImportance = predictorImportance(mdl);
bar(predImportance)
title("Predictor Importance")
xlabel("Predictor")
ylabel("Importance Measure")
Identify the top ten predictors in terms of their importance.
[~,idxSort] = sort(predImportance,"descend");
idx10 = idxSort(1:10)
idx10 = 1×10
228 233 238 93 15 224 91 177 260 277
Classify the test set observations. View the results using a confusion matrix. Blue values indicate correct classifications, and red values indicate misclassified observations.
predictedValues = predict(mdl,X(idxTest,:));
confusionchart(Y(idxTest),predictedValues)
Compute the accuracy of the model on the test data.
error = loss(mdl,X(idxTest,:),Y(idxTest), ...
    "LossFun","classiferror");
accuracy = 1 - error
accuracy = 0.7731
accuracy estimates the fraction of correctly classified observations.
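As a follow-up sketch (not in the original example), you can track how the test error evolves as trees are added by using the Mode name-value argument described below:
% Test-set error as a function of the number of trees
Lcum = loss(mdl,X(idxTest,:),Y(idxTest),"Mode","cumulative");
plot(Lcum)
xlabel("Number of trees")
ylabel("Test classification error")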
Input Arguments
ens — Classification ensemble model
ClassificationEnsemble model object | ClassificationBaggedEnsemble model object | CompactClassificationEnsemble model object
Classification ensemble model, specified as a ClassificationEnsemble or ClassificationBaggedEnsemble model object trained with fitcensemble, or a CompactClassificationEnsemble model object created with compact.
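For example, a compact version of a trained ensemble is also a valid input (a minimal sketch using ens from the first example):
% compact discards the training data but keeps everything loss needs
cens = compact(ens);
L = loss(cens,meas,species)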
tbl — Sample data
table
Sample data, specified as a table. Each row of tbl corresponds to one observation, and each column corresponds to one predictor variable. tbl must contain all of the predictors used to train the model. Multicolumn variables and cell arrays other than cell arrays of character vectors are not allowed.
If you trained ens using sample data contained in a table, then the input data for loss must also be in a table.
Data Types: table
ResponseVarName — Response variable name
name of variable in tbl
Response variable name, specified as the name of a variable in tbl. If tbl contains the response variable used to train ens, then you do not need to specify ResponseVarName.
If you specify ResponseVarName, you must specify it as a character vector or string scalar. For example, if the response variable Y is stored as tbl.Y, then specify it as "Y". Otherwise, the software treats all columns of tbl, including Y, as predictors.
The response variable must be a categorical, character, or string array, a logical or numeric vector, or a cell array of character vectors. If the response variable is a character array, then each element must correspond to one row of the array.
Data Types: char | string
Y — Class labels
categorical array | character array | string array | logical vector | numeric vector | cell array of character vectors
Class labels, specified as a categorical, character, or string array, a logical or numeric vector, or a cell array of character vectors. (The software treats string arrays as cell arrays of character vectors.)
Y must be of the same type as the class labels used to train ens, and its number of elements must equal the number of rows of tbl or X.
Data Types: categorical | char | string | logical | single | double | cell
X — Predictor data
numeric matrix
Predictor data, specified as a numeric matrix. Each row of X corresponds to one observation, and each column corresponds to one variable. The variables in the columns of X must be the same as the variables used to train ens.
The number of rows in X must equal the number of rows in Y.
If you trained ens using sample data contained in a matrix, then the input data for loss must also be in a matrix.
Data Types: double | single
Name-Value Arguments
Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.
Before R2021a, use commas to separate each name and value, and enclose Name in quotes.
Example: loss(ens,X,Y,LossFun="exponential",UseParallel=true) specifies to use an exponential loss function and to run in parallel.
Learners — Indices of weak learners
[1:ens.NumTrained] (default) | vector of positive integers
Indices of the weak learners in the ensemble to use with loss, specified as a vector of positive integers in the range [1:ens.NumTrained]. By default, the function uses all learners.
Example: Learners=[1 2 4]
Data Types: single | double
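For example (a sketch reusing ens from the first example), you can evaluate the loss of a truncated ensemble:
% Loss using only the first 25 of the 100 weak learners
L25 = loss(ens,meas,species,Learners=1:25)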
LossFun — Loss function
"classiferror" (default) | "binodeviance" | "classifcost" | "exponential" | "hinge" | "logit" | "mincost" | "quadratic" | function handle
Loss function, specified as a built-in loss function name or a function handle. The following table describes the values for the built-in loss functions.
Value | Description |
---|---|
"binodeviance" | Binomial deviance |
"classifcost" | Observed misclassification cost |
"classiferror" | Misclassified rate in decimal |
"exponential" | Exponential loss |
"hinge" | Hinge loss |
"logit" | Logistic loss |
"mincost" | Minimal expected misclassification cost (for classification scores that are posterior probabilities) |
"quadratic" | Quadratic loss |
"mincost"
is appropriate for classification scores that are posterior probabilities.Bagged and subspace ensembles return posterior probabilities by default (
ens.Method
is"Bag"
or"Subspace"
).To use posterior probabilities as classification scores when the ensemble method is
"AdaBoostM1"
,"AdaBoostM2"
,"GentleBoost"
, or"LogitBoost"
, you must specify the double-logit score transform by entering the following:ens.ScoreTransform = "doublelogit";
For all other ensemble methods, the software does not support posterior probabilities as classification scores.
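For instance, this sketch (not part of the original page) applies the transform to the AdaBoostM2 ensemble from the first example so that "mincost" becomes valid:
% Convert scores to posterior probabilities, then compute the
% minimal expected misclassification cost
ens.ScoreTransform = "doublelogit";
Lmin = loss(ens,meas,species,LossFun="mincost")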
You can specify your own function using function handle notation. Suppose that n is the number of observations in X, and K is the number of distinct classes (numel(ens.ClassNames), where ens is the input model). Your function must have the signature
lossvalue = lossfun(C,S,W,Cost)
- The output argument lossvalue is a scalar.
- You specify the function name (lossfun).
- C is an n-by-K logical matrix with rows indicating the class to which the corresponding observation belongs. The column order corresponds to the class order in ens.ClassNames. Create C by setting C(p,q) = 1, if observation p is in class q, for each row. Set all other elements of row p to 0.
- S is an n-by-K numeric matrix of classification scores, similar to the output of predict. The column order corresponds to the class order in ens.ClassNames.
- W is an n-by-1 numeric vector of observation weights. If you pass W, the software normalizes the weights to sum to 1.
- Cost is a K-by-K numeric matrix of misclassification costs. For example, Cost = ones(K) - eye(K) specifies a cost of 0 for correct classification and 1 for misclassification.
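As an illustration (the function name customLoss is hypothetical, not from the documentation), a handle-based loss that reproduces the misclassification rate might look like this sketch:
function lossvalue = customLoss(C,S,W,Cost) %#ok<INUSD>
% Weighted misclassification rate: W is normalized to sum to 1, so the
% loss is the total weight of the misclassified observations.
[~,trueIdx] = max(C,[],2);   % true class index (one 1 per row of C)
[~,predIdx] = max(S,[],2);   % class with the maximal score
lossvalue = sum(W(predIdx ~= trueIdx));
end
You can then pass the handle to loss, for example L = loss(ens,meas,species,LossFun=@customLoss).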
For more details on loss functions, see Classification Loss.
Example: LossFun="binodeviance"
Example: LossFun=@lossfun
Data Types: char | string | function_handle
Mode — Aggregation level for output
"ensemble" (default) | "individual" | "cumulative"
Aggregation level for the output, specified as "ensemble", "individual", or "cumulative".
Value | Description |
---|---|
"ensemble" | The output is a scalar value, the loss for the entire ensemble. |
"individual" | The output is a vector with one element per trained learner. |
"cumulative" | The output is a vector in which element J is obtained by using learners 1:J from the input list of learners. |
Example: Mode="individual"
Data Types: char | string
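For instance (a sketch using the 100-learner ensemble ens from the first example), the three levels produce these output shapes:
Lens = loss(ens,meas,species,Mode="ensemble")    % scalar
Lind = loss(ens,meas,species,Mode="individual"); % 100-by-1 vector
Lcum = loss(ens,meas,species,Mode="cumulative"); % 100-by-1 vector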
UseObsForLearner — Option to use observations for learners
true(N,T) (default) | logical matrix
Option to use observations for learners, specified as an N-by-T logical matrix, where N is the number of observations (rows of X) and T is the number of weak learners.
When UseObsForLearner(i,j) is true (default), learner j is used in predicting the class of row i of X.
Example: UseObsForLearner=logical([1 1; 0 1; 1 0])
Data Types: logical matrix
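For example, this sketch (not from the original page) evaluates each observation with a random subset of the learners in ens from the first example:
N = size(meas,1);          % number of observations
T = ens.NumTrained;        % number of weak learners
useObs = rand(N,T) > 0.5;  % each observation uses about half of the learners
L = loss(ens,meas,species,UseObsForLearner=useObs)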
UseParallel — Flag to run in parallel
false or 0 (default) | true or 1
Flag to run in parallel, specified as a numeric or logical 1 (true) or 0 (false). If you specify UseParallel=true, the loss function executes for-loop iterations by using parfor. The loop runs in parallel when you have Parallel Computing Toolbox™.
Example: UseParallel=true
Data Types: logical
Weights — Observation weights
ones(size(X,1),1) (default) | numeric vector | name of variable in tbl
Observation weights, specified as a numeric vector or the name of a variable in tbl. If you supply weights, loss normalizes them so that the observation weights in each class sum to the prior probability of that class.
If you specify Weights as a numeric vector, the size of Weights must be equal to the number of observations in X or tbl.
If you specify Weights as the name of a variable in tbl, you must specify it as a character vector or string scalar. For example, if the weights are stored as tbl.W, then specify Weights as "W". Otherwise, the software treats all columns of tbl, including tbl.W, as predictors.
Example: Weights="W"
Data Types: single | double | char | string
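For example (a sketch; the upweighting choice is hypothetical), using ens and the fisheriris variables from the first example:
w = ones(size(meas,1),1);
w(strcmp(species,"virginica")) = 2;  % upweight one class
Lw = loss(ens,meas,species,Weights=w)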
Output Arguments
L — Classification loss
numeric scalar | numeric column vector
Classification loss, returned as a numeric scalar or numeric column vector.
- If Mode is "ensemble", then L is a scalar value, the loss for the entire ensemble.
- If Mode is "individual", then L is a vector with one element per trained learner.
- If Mode is "cumulative", then L is a vector in which element J is obtained by using learners 1:J from the input list of learners.
When computing the loss, the loss function normalizes the class probabilities in ResponseVarName or Y to the class probabilities used for training, which are stored in the Prior property of ens.
More About
Classification Loss
Classification loss functions measure the predictive inaccuracy of classification models. When you compare the same type of loss among many models, a lower loss indicates a better predictive model.
Consider the following scenario.
- $L$ is the weighted average classification loss.
- $n$ is the sample size.
- For binary classification:
  - $y_j$ is the observed class label. The software codes it as –1 or 1, indicating the negative or positive class (or the first or second class in the ClassNames property), respectively.
  - $f(X_j)$ is the positive-class classification score for observation (row) $j$ of the predictor data $X$.
  - $m_j = y_j f(X_j)$ is the classification score for classifying observation $j$ into the class corresponding to $y_j$. Positive values of $m_j$ indicate correct classification and do not contribute much to the average loss. Negative values of $m_j$ indicate incorrect classification and contribute significantly to the average loss.
- For algorithms that support multiclass classification (that is, $K \ge 3$):
  - $y_j^*$ is a vector of $K - 1$ zeros, with 1 in the position corresponding to the true, observed class $y_j$. For example, if the true class of the second observation is the third class and $K = 4$, then $y_2^* = [0\ 0\ 1\ 0]^{\prime}$. The order of the classes corresponds to the order in the ClassNames property of the input model.
  - $f(X_j)$ is the length-$K$ vector of class scores for observation $j$ of the predictor data $X$. The order of the scores corresponds to the order of the classes in the ClassNames property of the input model.
  - $m_j = y_j^{*\prime} f(X_j)$. Therefore, $m_j$ is the scalar classification score that the model predicts for the true, observed class.
- The weight for observation $j$ is $w_j$. The software normalizes the observation weights so that they sum to the corresponding prior class probability stored in the Prior property. The software also normalizes the prior probabilities so that they sum to 1. Therefore, $\sum_{j=1}^{n} w_j = 1$.
Given this scenario, the following table describes the supported loss functions that you can specify by using the LossFun name-value argument.
Loss Function | Value of LossFun | Equation |
---|---|---|
Binomial deviance | "binodeviance" | $L = \sum_{j=1}^{n} w_j \log\{1 + \exp[-2 m_j]\}$ |
Observed misclassification cost | "classifcost" | $L = \sum_{j=1}^{n} w_j c_{y_j \hat{y}_j}$, where $\hat{y}_j$ is the class label corresponding to the class with the maximal score, and $c_{y_j \hat{y}_j}$ is the user-specified cost of classifying an observation into class $\hat{y}_j$ when its true class is $y_j$. |
Misclassified rate in decimal | "classiferror" | $L = \sum_{j=1}^{n} w_j I\{\hat{y}_j \ne y_j\}$, where $\hat{y}_j$ is the class label corresponding to the class with the maximal score, and $I\{\cdot\}$ is the indicator function. |
Cross-entropy loss | "crossentropy" | $L = -\sum_{j=1}^{n} \tilde{w}_j \log(m_j)$ is the weighted cross-entropy loss, where the weights $\tilde{w}_j$ are normalized to sum to $n$ instead of 1. |
Exponential loss | "exponential" | $L = \sum_{j=1}^{n} w_j \exp(-m_j)$ |
Hinge loss | "hinge" | $L = \sum_{j=1}^{n} w_j \max\{0, 1 - m_j\}$ |
Logit loss | "logit" | $L = \sum_{j=1}^{n} w_j \log(1 + \exp(-m_j))$ |
Minimal expected misclassification cost | "mincost" | The software computes the weighted minimal expected classification cost using this procedure for observations $j = 1,\dots,n$: (1) estimate the expected misclassification cost of classifying observation $X_j$ into class $k$, $\gamma_{jk} = (f(X_j)^{\prime} C)_k$, where $f(X_j)$ is the column vector of class posterior probabilities and $C$ is the cost matrix; (2) predict the class label with the minimal expected misclassification cost, $\hat{y}_j = \arg\min_{k=1,\dots,K} \gamma_{jk}$; (3) using $C$, identify the cost incurred ($c_j$) for making the prediction. The weighted average of the minimal expected misclassification cost loss is $L = \sum_{j=1}^{n} w_j c_j$. |
Quadratic loss | "quadratic" | $L = \sum_{j=1}^{n} w_j (1 - m_j)^2$ |
If you use the default cost matrix (whose element value is 0 for correct classification and 1 for incorrect classification), then the loss values for "classifcost", "classiferror", and "mincost" are identical. For a model with a nondefault cost matrix, the "classifcost" loss is equivalent to the "mincost" loss most of the time. These losses can be different if prediction into the class with maximal posterior probability is different from prediction into the class with minimal expected cost. Note that "mincost" is appropriate only if classification scores are posterior probabilities.
This figure compares the loss functions (except "classifcost", "crossentropy", and "mincost") over the score m for one observation. Some functions are normalized to pass through the point (0,1).
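A quick way to reproduce such a comparison (a sketch, not the documentation's own figure code) is to plot the formulas above with $w_j = 1$; the binomial deviance and logit losses are divided by log(2) so that they pass through (0,1):
m = linspace(-3,3,200);
plot(m, log(1+exp(-2*m))/log(2), ...  % binomial deviance (normalized)
     m, double(m<0), ...              % misclassification indicator
     m, exp(-m), ...                  % exponential
     m, max(0,1-m), ...               % hinge
     m, log(1+exp(-m))/log(2), ...    % logit (normalized)
     m, (1-m).^2)                     % quadratic
legend("Binomial deviance","Classification error","Exponential", ...
    "Hinge","Logit","Quadratic")
xlabel("Score m")
ylabel("Loss")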
Extended Capabilities
Tall Arrays
Calculate with arrays that have more rows than fit in memory.
Usage notes and limitations:
- You cannot use the UseParallel name-value argument with tall arrays.
For more information, see Tall Arrays.
Automatic Parallel Support
Accelerate code by automatically running computation in parallel using Parallel Computing Toolbox™.
To run in parallel, set the UseParallel name-value argument to true in the call to this function.
For more general information about parallel computing, see Run MATLAB Functions with Automatic Parallel Support (Parallel Computing Toolbox).
You cannot use the UseParallel name-value argument with tall arrays, GPU arrays, or code generation.
GPU Arrays
Accelerate code by running on a graphics processing unit (GPU) using Parallel Computing Toolbox™.
Usage notes and limitations:
- The loss function does not support ensembles trained using decision tree learners with surrogate splits.
For more information, see Run MATLAB Functions on a GPU (Parallel Computing Toolbox).
Version History
Introduced in R2011a
R2022a: loss returns a different value for a model with a nondefault cost matrix
If you specify a nondefault cost matrix when you train the input model object, the loss function returns a different value compared to previous releases.
The loss function uses the prior probabilities stored in the Prior property to normalize the observation weights of the input data. Also, the function uses the cost matrix stored in the Cost property if you specify the LossFun name-value argument as "classifcost" or "mincost". The way the function uses the Prior and Cost property values has not changed. However, the property values stored in the input model object have changed for a model with a nondefault cost matrix, so the function might return a different value.
For details about the property value changes, see Cost property stores the user-specified cost matrix.
If you want the software to handle the cost matrix, prior probabilities, and observation weights in the same way as in previous releases, adjust the prior probabilities and observation weights for the nondefault cost matrix, as described in Adjust Prior Probabilities and Observation Weights for Misclassification Cost Matrix. Then, when you train a classification model, specify the adjusted prior probabilities and observation weights by using the Prior and Weights name-value arguments, respectively, and use the default cost matrix.
R2022a: loss might return NaN for predictor data with missing values
The loss function no longer omits an observation with a NaN score when computing the weighted average classification loss. Therefore, loss might return NaN when the predictor data X or the predictor variables in tbl contain any missing values. In most cases, if the test set observations do not contain missing predictors, the loss function does not return NaN.
This change improves the automatic selection of a classification model when you use fitcauto. Before this change, the software might select a model (expected to best classify new data) with few non-NaN predictors.
If loss in your code returns NaN, you can update your code to avoid this result. Remove or replace the missing values by using rmmissing or fillmissing, respectively.
The following table shows the classification models for which the loss function might return NaN. For more details, see the Compatibility Considerations for each loss object function.
Model Type | Full or Compact Model Object | loss Object Function |
---|---|---|
Discriminant analysis classification model | ClassificationDiscriminant, CompactClassificationDiscriminant | loss |
Ensemble of learners for classification | ClassificationEnsemble, CompactClassificationEnsemble | loss |
Gaussian kernel classification model | ClassificationKernel | loss |
k-nearest neighbor classification model | ClassificationKNN | loss |
Linear classification model | ClassificationLinear | loss |
Neural network classification model | ClassificationNeuralNetwork, CompactClassificationNeuralNetwork | loss |
Support vector machine (SVM) classification model | ClassificationSVM, CompactClassificationSVM | loss |