loss

Classification error for XGBoost model

Since R2026a

    Description

    L = loss(mdl,tbl,ResponseVarName) returns the classification loss L for the compact classification XGBoost model mdl, using the predictor data in table tbl and the true class labels in tbl.ResponseVarName. The interpretation of L depends on the loss function (LossFun) and weighting scheme (Weights). In general, better classifiers yield smaller classification loss values. The default LossFun value is "classiferror" (misclassification rate in decimal).

    L = loss(mdl,tbl,Y) uses the predictor data in table tbl and the true class labels in Y.

    L = loss(mdl,X,Y) uses the predictor data in matrix X and the true class labels in Y.

    L = loss(___,Name=Value) specifies options using one or more name-value arguments in addition to any of the input argument combinations in the previous syntaxes. For example, you can specify a classification loss function and perform computations in parallel.

    Examples

    Import a pretrained XGBoost classification model trained using the ionosphere dataset. The pretrained model is provided with this example.

    load ionosphere
    modelfile = "trainedXGBoostModel.json";
    Mdl = importModelFromXGBoost(modelfile)
    Mdl = 
      CompactClassificationXGBoost
                   ResponseName: 'Y'
                     ClassNames: [0 1]
                 ScoreTransform: 'logit'
                     NumTrained: 30
        ImportedModelParameters: [1×1 struct]

      Properties, Methods

    The model is imported as a CompactClassificationXGBoost model object.

    Convert the response data to a logical array to match the imported model.

    Y = (Y=="g");

    Use the loss function to evaluate the classification cost of the model.

    loss(Mdl,X,Y,LossFun="classifcost")
    ans = single
        0.0476
    
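    For comparison, calling loss without the LossFun name-value argument uses the default "classiferror" loss (misclassification rate in decimal). This is a minimal sketch assuming the same Mdl, X, and Y from the steps above.

```matlab
% Misclassification rate with the default loss function.
Lerr = loss(Mdl,X,Y);

% Equivalent explicit call.
Lerr2 = loss(Mdl,X,Y,LossFun="classiferror");
```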

    Input Arguments

    Compact classification XGBoost model, specified as a CompactClassificationXGBoost model object created with importModelFromXGBoost.

    Sample data, specified as a table. Each row of tbl corresponds to one observation, and each column corresponds to one predictor variable. tbl must contain all of the predictors used to train the model. Multicolumn variables and cell arrays other than cell arrays of character vectors are not allowed.

    Data Types: table

    Response variable name, specified as the name of a variable in tbl. If mdl.ResponseName is the response variable name, then you do not need to specify ResponseVarName.

    If you specify ResponseVarName, you must specify it as a character vector or string scalar. For example, if the response variable Y is stored as tbl.Y, then specify it as "Y". Otherwise, the software treats all columns of tbl, including Y, as predictors.

    The response variable must be a logical or numeric vector.

    Data Types: char | string
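    As a sketch of the table syntax described above, you can pass the response by name, assuming the model was trained on predictors whose names match the table's variable names. The names used here (tbl, "Y") are illustrative.

```matlab
% Hypothetical sketch: pack predictors and response into a table,
% then reference the response variable by name.
tbl = array2table(X);    % predictor columns (names must match training)
tbl.Y = Y;               % append the response variable
L = loss(Mdl,tbl,"Y");   % true labels taken from tbl.Y
```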

    Class labels, specified as a logical or numeric vector. Y must have the same data type as the response data used to train mdl, and its number of elements must equal the number of rows of tbl or X.

    Data Types: logical | single | double

    Predictor data, specified as a numeric matrix.

    Each row of X corresponds to one observation, and each column corresponds to one variable. The number of rows in X must equal the number of rows in Y.

    The variables that make up the columns of X must have the same order as the predictor variables used to train mdl.

    Data Types: double | single

    Name-Value Arguments

    Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

    Before R2021a, use commas to separate each name and value, and enclose Name in quotes.

    Example: loss(mdl,X,Y,LossFun="exponential",UseParallel=true) specifies to use an exponential loss function and to run in parallel.

    Loss function, specified as a built-in loss function name or a function handle.

    The following table describes the values for the built-in loss functions.

    Value            Description
    "binodeviance"   Binomial deviance
    "classifcost"    Observed misclassification cost
    "classiferror"   Misclassification rate in decimal
    "exponential"    Exponential loss
    "hinge"          Hinge loss
    "logit"          Logistic loss
    "mincost"        Minimal expected misclassification cost (for classification scores that are posterior probabilities)
    "quadratic"      Quadratic loss

    • "mincost" is appropriate for classification scores that are posterior probabilities.

    • Bagged and subspace ensembles return posterior probabilities by default (that is, when the Method property of the model is "Bag" or "Subspace").

    • To use posterior probabilities as classification scores when the ensemble method is "AdaBoostM1", "AdaBoostM2", "GentleBoost", or "LogitBoost", you must specify the double-logit score transform by entering the following:

      mdl.ScoreTransform = "doublelogit";

    • For all other ensemble methods, the software does not support posterior probabilities as classification scores.

    You can specify your own function using function handle notation. Suppose that n is the number of observations in X, and K is the number of distinct classes (numel(mdl.ClassNames), where mdl is the input model). Your function must have the signature

    lossvalue = lossfun(C,S,W,Cost)
    where:

    • The output argument lossvalue is a scalar.

    • You specify the function name (lossfun).

    • C is an n-by-K logical matrix with rows indicating the class to which the corresponding observation belongs. The column order corresponds to the class order in mdl.ClassNames.

      Create C by setting C(p,q) = 1, if observation p is in class q, for each row. Set all other elements of row p to 0.

    • S is an n-by-K numeric matrix of classification scores. The column order corresponds to the class order in mdl.ClassNames. S is a matrix of classification scores, similar to the output of predict.

    • W is an n-by-1 numeric vector of observation weights. If you pass W, the software normalizes the weights to sum to 1.

    • Cost is a K-by-K numeric matrix of misclassification costs. For example, Cost = ones(K) - eye(K) specifies a cost of 0 for correct classification and 1 for misclassification.

    For more details on loss functions, see Classification Loss.
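    As an illustration of the custom loss signature, the following hypothetical function computes the weighted misclassification rate from C, S, and W (the Cost input is unused here).

```matlab
function lossvalue = weightedError(C,S,W,~)
% Weighted misclassification rate, matching the required signature
% lossvalue = lossfun(C,S,W,Cost).
    [~,idx] = max(S,[],2);                    % predicted class per observation
    n = size(C,1);
    pred = false(size(C));
    pred(sub2ind(size(C),(1:n)',idx)) = true; % one-hot predicted classes
    wrong = ~any(C & pred,2);                 % misclassified observations
    lossvalue = sum(W(wrong));                % W is normalized to sum to 1
end
```

    You could then pass this function as LossFun=@weightedError.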

    Example: LossFun="binodeviance"

    Example: LossFun=@lossfun

    Data Types: char | string | function_handle

    Flag to run in parallel, specified as a numeric or logical 1 (true) or 0 (false). If you specify UseParallel=true, the loss function executes for-loop iterations by using parfor. The loop runs in parallel when you have Parallel Computing Toolbox™.

    Example: UseParallel=true

    Data Types: logical

    Observation weights, specified as a numeric vector or the name of a variable in tbl. If you supply weights, loss normalizes them so that the observation weights in each class sum to the prior probability of that class.

    If you specify Weights as a numeric vector, the length of Weights must equal the number of observations in X or tbl.

    If you specify Weights as the name of a variable in tbl, you must specify it as a character vector or string scalar. For example, if the weights are stored as tbl.W, then specify Weights as "W". Otherwise, the software treats all columns of tbl, including tbl.W, as predictors.

    Example: Weights="W"

    Data Types: single | double | char | string
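    A minimal sketch of the numeric-vector form, assuming the Mdl, X, and Y from the earlier example; the weighting choice is purely illustrative.

```matlab
% Up-weight observations of class 1 (hypothetical choice).
w = ones(size(Y));
w(Y == 1) = 2;
L = loss(Mdl,X,Y,Weights=w);   % loss normalizes w within each class
```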

    Output Arguments

    Classification loss, returned as a numeric scalar representing the loss over all observations for the entire ensemble of trees.

    When computing the loss, the loss function normalizes the class probabilities in ResponseVarName or Y to the class probabilities used for training, which are stored in the Prior property of mdl.

    More About

    Extended Capabilities

    Version History

    Introduced in R2026a