
loss

Regression error for XGBoost model

Since R2026a

    Description

    L = loss(mdl,tbl,ResponseVarName) returns the mean squared error L between the predictions of mdl for the predictor data in tbl and the true responses tbl.ResponseVarName. The interpretation of L depends on the loss function (LossFun) and weighting scheme (Weights). In general, better regression models yield smaller loss values. The formula for loss is described in the section Weighted Mean Squared Error.

    L = loss(mdl,tbl,Y) returns the mean squared error between the predictions of mdl for the predictor data in tbl and the true responses Y.

    L = loss(mdl,X,Y) returns the mean squared error between the predictions of mdl for the predictor data in X and the true responses Y.


    L = loss(___,Name=Value) specifies options using one or more name-value arguments in addition to any of the input argument combinations in the previous syntaxes. For example, you can specify the loss function and whether to perform calculations in parallel.

    Examples


    Find the loss of an XGBoost model using the carsmall data set. An XGBoost model trained using Cylinders, Displacement, Horsepower, and Weight as predictors is provided with this example.

    load carsmall
    modelfile = "trainedRegressionXGBoostModel.json";
    Mdl = importModelFromXGBoost(modelfile)
    Mdl = 
      CompactRegressionXGBoost
                   ResponseName: 'Y'
              ResponseTransform: 'none'
                     NumTrained: 30
        ImportedModelParameters: [1×1 struct]
    
    
    
    

    Find the regression error for predicting the fuel economy MPG.

    X = [Cylinders Displacement Horsepower Weight];
    L = loss(Mdl,X,MPG)
    L = 
    15.7129
    

    Input Arguments


    Compact regression XGBoost model, specified as a CompactRegressionXGBoost model object created with importModelFromXGBoost.

    Sample data, specified as a table. Each row of tbl corresponds to one observation, and each column corresponds to one predictor variable. tbl must contain all of the predictors used to train the model. Multicolumn variables and cell arrays other than cell arrays of character vectors are not allowed.

    Data Types: table

    Response variable name, specified as the name of a variable in tbl. If mdl.ResponseName is the response variable name, then you do not need to specify ResponseVarName.

    If you specify ResponseVarName, you must specify it as a character vector or string scalar. For example, if the response variable Y is stored as tbl.Y, then specify it as "Y". Otherwise, the software treats all columns of tbl, including Y, as predictors.

    Data Types: char | string
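    The table-based syntax can be sketched as follows, assuming the carsmall variables and the imported model Mdl from the earlier example are in the workspace (the table name carTbl is hypothetical):

    ```matlab
    % Collect the trained predictors and the response into one table
    carTbl = table(Cylinders,Displacement,Horsepower,Weight,MPG);
    % Name the response column explicitly so the remaining columns
    % are treated as predictors
    L = loss(Mdl,carTbl,"MPG");
    ```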

    Response data, specified as a numeric column vector with the same number of rows as tbl or X. Each entry in Y is the true response to the data in the corresponding row of tbl or X.

    The software treats NaN values in Y as missing values. Observations with missing values for Y are not used in the loss calculation.

    Data Types: double | single

    Predictor data, specified as a numeric matrix.

    Each row of X corresponds to one observation, and each column corresponds to one variable. The number of rows in X must equal the number of rows in Y.

    The variables that make up the columns of X must have the same order as the predictor variables used to train mdl.

    Data Types: double | single

    Name-Value Arguments


    Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

    Before R2021a, use commas to separate each name and value, and enclose Name in quotes.

    Example: loss(mdl,X,Y,UseParallel=true) specifies to run the loss computation in parallel.

    Loss function, specified as "mse" (mean squared error) or as a function handle. If you pass a function handle fun, loss calls it as

    fun(Y,Yfit,W)

    where Y, Yfit, and W are numeric vectors of the same length.

    • Y is the observed response.

    • Yfit is the predicted response.

    • W is the observation weights.

    The returned value of fun(Y,Yfit,W) must be a scalar.

    Example: LossFun="mse"

    Example: LossFun=@lossfun

    Data Types: char | string | function_handle
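    A custom loss function must accept (Y,Yfit,W) and return a scalar. A minimal sketch, using a hypothetical weighted mean absolute error saved as lossfun.m:

    ```matlab
    function L = lossfun(Y,Yfit,W)
    % LOSSFUN Hypothetical custom loss: weighted mean absolute error.
    %   Y    - observed responses (numeric vector)
    %   Yfit - predicted responses (numeric vector, same length)
    %   W    - observation weights (numeric vector, same length)
    W = W./sum(W);               % normalize the weights to sum to 1
    L = sum(W.*abs(Y - Yfit));   % return a scalar loss value
    end
    ```

    You could then pass the handle to loss, for example L = loss(Mdl,X,MPG,LossFun=@lossfun).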

    Flag to run in parallel, specified as a numeric or logical 1 (true) or 0 (false). If you specify UseParallel=true, the loss function executes for-loop iterations by using parfor. The loop runs in parallel when you have Parallel Computing Toolbox™.

    Example: UseParallel=true

    Data Types: logical

    Observation weights, specified as a numeric vector or the name of a variable in tbl. The software weighs the observations in each row of X or tbl with the corresponding weight in Weights. The formula for loss with Weights is described in the section Weighted Mean Squared Error.

    If you specify Weights as a numeric vector, then the size of Weights must be equal to the number of rows in X or tbl.

    If you specify Weights as the name of a variable in tbl, you must do so as a character vector or string scalar. For example, if the weights are stored as tbl.W, then specify Weights as "W". Otherwise, the software treats all columns of tbl, including tbl.W, as predictors.

    If you do not specify your own loss function, then the software normalizes Weights to sum to 1.

    Example: Weights="W"

    Data Types: single | double | char | string
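    A sketch of the numeric-vector form, assuming X, MPG, and Mdl from the earlier carsmall example are in the workspace. Uniform weights reproduce the default (unweighted) loss:

    ```matlab
    W = ones(size(X,1),1);          % one weight per observation
    L = loss(Mdl,X,MPG,Weights=W);  % uniform weights match the unweighted call
    ```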

    More About

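    The Weighted Mean Squared Error referenced by this page has the standard form below, stated here as the conventional definition since the section body is not included in this excerpt:

    ```latex
    L = \frac{\sum_{j=1}^{n} w_j \bigl( y_j - f(x_j) \bigr)^2}{\sum_{j=1}^{n} w_j}
    ```

    where y_j is the observed response for observation j, f(x_j) is the model prediction for that observation, and w_j is the observation weight. When the weights are normalized to sum to 1, the denominator equals 1.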

    Extended Capabilities


    Version History

    Introduced in R2026a