resubLoss
Resubstitution regression loss
Description
L = resubLoss(Mdl) returns the regression loss by resubstitution (L), or the in-sample regression loss, for the trained regression model Mdl using the training data stored in Mdl.X and the corresponding responses stored in Mdl.Y.
The interpretation of L depends on the loss function (LossFun) and weighting scheme (Mdl.W). In general, better models yield smaller loss values. The default LossFun value is "mse" (mean squared error).
L = resubLoss(Mdl,Name=Value) specifies additional options using one or more name-value arguments. For example, IncludeInteractions=false specifies to exclude interaction terms from a generalized additive model Mdl.
Examples
Resubstitution Loss
Train a generalized additive model (GAM), then calculate the resubstitution loss using the mean squared error (MSE).
Load the patients data set.
load patients
Create a table that contains the predictor variables (Age, Diastolic, Smoker, Weight, Gender, SelfAssessedHealthStatus) and the response variable (Systolic).
tbl = table(Age,Diastolic,Smoker,Weight,Gender,SelfAssessedHealthStatus,Systolic);
Train a univariate GAM that contains the linear terms for the predictors in tbl.
Mdl = fitrgam(tbl,"Systolic")
Mdl = 
  RegressionGAM
            PredictorNames: {'Age'  'Diastolic'  'Smoker'  'Weight'  'Gender'  'SelfAssessedHealthStatus'}
              ResponseName: 'Systolic'
     CategoricalPredictors: [3 5 6]
         ResponseTransform: 'none'
                 Intercept: 122.7800
    IsStandardDeviationFit: 0
           NumObservations: 100
Mdl is a RegressionGAM model object.
Calculate the resubstitution loss using the mean squared error (MSE).
L = resubLoss(Mdl)
L = 4.1957
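As an optional follow-up (not part of the original example), you can convert the MSE to a root mean squared error, which is in the same units as the response (mm Hg for Systolic).

rmse = sqrt(L)   % roughly 2.05 for the loss above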
Compute Custom Resubstitution Loss
Load the sample data and store it in a table.
load fisheriris
tbl = table(meas(:,1),meas(:,2),meas(:,3),meas(:,4),species,...
    'VariableNames',{'meas1','meas2','meas3','meas4','species'});
Fit a GPR model using the first measurement as the response and the other variables as the predictors.
mdl = fitrgp(tbl,'meas1');
Predict the responses using the trained model.
ypred = predict(mdl,tbl);
Compute the mean absolute error.
n = height(tbl);
y = tbl.meas1;
fun = @(y,ypred,w) sum(abs(y-ypred))/n;
L = resubLoss(mdl,'lossfun',fun)
L = 0.2345
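A more general version of the same loss (a sketch, not from the original example) uses the weight vector w that resubLoss passes to the function handle, rather than the workspace variable n. When all observation weights are equal, the two forms give the same result.

fun2 = @(y,ypred,w) sum(w.*abs(y-ypred))/sum(w);   % weighted mean absolute error
L2 = resubLoss(mdl,'lossfun',fun2)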
Compare GAMs by Examining Regression Loss
Train a generalized additive model (GAM) that contains both linear and interaction terms for predictors, and estimate the regression loss (mean squared error, MSE) with and without interaction terms for the training data and test data. Specify whether to include interaction terms when estimating the regression loss.
Load the carbig data set, which contains measurements of cars made in the 1970s and early 1980s.
load carbig
Specify Acceleration, Displacement, Horsepower, and Weight as the predictor variables (X) and MPG as the response variable (Y).
X = [Acceleration,Displacement,Horsepower,Weight];
Y = MPG;
Partition the data set into two sets: one containing training data, and the other containing new, unobserved test data. Reserve 10 observations for the new test data set.
rng('default') % For reproducibility
n = size(X,1);
newInds = randsample(n,10);
inds = ~ismember(1:n,newInds);
XNew = X(newInds,:);
YNew = Y(newInds);
Train a generalized additive model that contains all the available linear and interaction terms in X.
Mdl = fitrgam(X(inds,:),Y(inds),'Interactions','all');
Mdl is a RegressionGAM model object.
Compute the resubstitution MSEs (that is, the in-sample MSEs) both with and without interaction terms in Mdl. To exclude interaction terms, specify 'IncludeInteractions',false.
resubl = resubLoss(Mdl)
resubl = 0.0292
resubl_nointeraction = resubLoss(Mdl,'IncludeInteractions',false)
resubl_nointeraction = 4.7330
Compute the regression MSEs both with and without interaction terms for the test data set. Use a memory-efficient model object for the computation.
CMdl = compact(Mdl);
CMdl is a CompactRegressionGAM model object.
l = loss(CMdl,XNew,YNew)
l = 12.8604
l_nointeraction = loss(CMdl,XNew,YNew,'IncludeInteractions',false)
l_nointeraction = 15.6741
Including the interaction terms achieves a smaller error for both the training data set and the test data set.
Input Arguments
Mdl — Regression machine learning model
full regression model object
Regression machine learning model, specified as a full regression model object, as given in the following table of supported models.
| Model | Regression Model Object |
| --- | --- |
| Gaussian process regression model | RegressionGP |
| Generalized additive model (GAM) | RegressionGAM |
| Neural network model | RegressionNeuralNetwork |
Name-Value Arguments
Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Before R2021a, use commas to separate each name and value, and enclose Name in quotes.

Example: resubLoss(Mdl,IncludeInteractions=false) excludes interaction terms from a generalized additive model Mdl.
IncludeInteractions — Flag to include interaction terms
true | false

Flag to include interaction terms of the model, specified as true or false. This argument is valid only for a generalized additive model. That is, you can specify this argument only when Mdl is RegressionGAM.

The default value is true if Mdl contains interaction terms. The value must be false if the model does not contain interaction terms.
Example: IncludeInteractions=false
Data Types: logical
LossFun — Loss function
"mse" (default) | function handle

Loss function, specified as "mse" or a function handle.

- "mse" — Weighted mean squared error.
- Function handle — To specify a custom loss function, use a function handle. The function must have this form:

  lossval = lossfun(Y,YFit,W)

  - The output argument lossval is a floating-point scalar.
  - You specify the function name (lossfun).
  - If Mdl is a model with one response variable, then Y is a length-n numeric vector of observed responses, where n is the number of observations in Tbl or X. If Mdl is a model with multiple response variables, then Y is an n-by-k numeric matrix of observed responses, where k is the number of response variables.
  - YFit is a length-n numeric vector or an n-by-k numeric matrix of corresponding predicted responses. The size of YFit must match the size of Y.
  - W is an n-by-1 numeric vector of observation weights.

Example: LossFun=@lossfun

Data Types: char | string | function_handle
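If you prefer a named function to an anonymous function, the following is a hedged sketch; the function name weightedMAE is hypothetical. Save it on the MATLAB path (or at the end of a script) and pass its handle to resubLoss.

function lossval = weightedMAE(Y,YFit,W)
% Weighted mean absolute error (illustrative custom loss)
lossval = sum(W.*abs(Y - YFit))/sum(W);
end

For example, L = resubLoss(Mdl,LossFun=@weightedMAE).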
OutputType — Type of output loss
"average" (default) | "per-response"

Since R2024b

Type of output loss, specified as "average" or "per-response". This argument is valid only for a neural network model. That is, you can specify this argument only when Mdl is a RegressionNeuralNetwork object.

| Value | Description |
| --- | --- |
| "average" | resubLoss averages the loss values across all response variables and returns a scalar value. |
| "per-response" | resubLoss returns a vector, where each element is the loss for one response variable. |

Example: OutputType="per-response"

Data Types: char | string
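A minimal sketch of the two output types, assuming R2024b or later and a neural network model trained on two response variables (the carsmall variables below are illustrative choices, not from this page):

load carsmall
X = [Acceleration Displacement Horsepower];
Y = [MPG Weight];                       % two response variables
idx = all(~isnan([X Y]),2);             % keep complete rows only
Mdl = fitrnet(X(idx,:),Y(idx,:),"Standardize",true);
avgLoss = resubLoss(Mdl)                               % scalar ("average", the default)
perResp = resubLoss(Mdl,OutputType="per-response")     % one element per response variable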
PredictionForMissingValue — Predicted response value to use for observations with missing predictor values
"median" | "mean" | "omitted" | numeric scalar

Since R2023b

Predicted response value to use for observations with missing predictor values, specified as "median", "mean", "omitted", or a numeric scalar. This argument is valid only for a Gaussian process regression or neural network model. That is, you can specify this argument only when Mdl is a RegressionGP or RegressionNeuralNetwork object.

| Value | Description |
| --- | --- |
| "median" | resubLoss uses the median of the observed response values in the training data as the predicted response value for observations with missing predictor values. This value is the default. |
| "mean" | resubLoss uses the mean of the observed response values in the training data as the predicted response value for observations with missing predictor values. |
| "omitted" | resubLoss excludes observations with missing predictor values from the loss computation. |
| Numeric scalar | resubLoss uses this value as the predicted response value for observations with missing predictor values. |

If an observation is missing an observed response value or an observation weight, then resubLoss does not use the observation in the loss computation.

Example: PredictionForMissingValue="omitted"

Data Types: single | double | char | string
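A hedged syntax sketch: for a trained GPR or neural network model Mdl (R2023b or later), the option changes the result only if the stored training data Mdl.X contains missing predictor values; otherwise the two calls below return the same value.

lossDefault = resubLoss(Mdl)                                         % uses the "median" behavior by default
lossOmitted = resubLoss(Mdl,PredictionForMissingValue="omitted")     % skip incomplete observations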
StandardizeResponses — Flag to standardize response data
false or 0 (default) | true or 1

Since R2024b

Flag to standardize the response data before computing the loss, specified as a numeric or logical 0 (false) or 1 (true). This argument is valid only for a neural network model. That is, you can specify this argument only when Mdl is a RegressionNeuralNetwork object.

If you set StandardizeResponses to true, then the software centers and scales each response variable in Mdl.Y by the corresponding column mean and standard deviation. Specify StandardizeResponses as true when you have multiple response variables with very different scales and OutputType is "average". Do not standardize the response data when you have only one response variable.

Example: StandardizeResponses=true

Data Types: single | double | logical
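A short sketch of when standardization matters, assuming a multiple-response neural network model such as the one in the OutputType sketch above, where MPG is in tens and Weight is in thousands:

Lraw = resubLoss(Mdl)                               % average dominated by the larger-scale response
Lstd = resubLoss(Mdl,StandardizeResponses=true)     % each response contributes on a comparable scale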
Output Arguments
L — Regression loss
numeric scalar | numeric vector

Regression loss, returned as a numeric scalar or vector. The type of regression loss depends on LossFun.

When Mdl is a model with one response variable, L is a numeric scalar. When Mdl is a model with multiple response variables, the size and interpretation of L depend on OutputType.
More About
Weighted Mean Squared Error
The weighted mean squared error measures the predictive inaccuracy of regression models. When you compare the same type of loss among many models, a lower error indicates a better predictive model.
The weighted mean squared error is calculated as follows:

$$\mathrm{mse} = \frac{\sum_{j=1}^{n} w_j \left( f(x_j) - y_j \right)^2}{\sum_{j=1}^{n} w_j}$$

where:

- n is the number of rows of data.
- $x_j$ is the jth row of data.
- $y_j$ is the true response to $x_j$.
- $f(x_j)$ is the response prediction of the model Mdl to $x_j$.
- $w$ is the vector of observation weights, with elements $w_j$.
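The following sketch reproduces the formula directly from a trained model, assuming the model stores observation weights in Mdl.W and responses in Mdl.Y (for models trained without weights, the stored weights are typically all ones):

yFit = resubPredict(Mdl);                        % in-sample predictions f(x_j)
w = Mdl.W;
mseManual = sum(w.*(Mdl.Y - yFit).^2)/sum(w)     % should match resubLoss(Mdl) with the default "mse" loss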
Algorithms
resubLoss computes the regression loss according to the corresponding loss function of the object (Mdl). For a model-specific description, see the loss function reference pages in the following table.

| Model | Regression Model Object (Mdl) | loss Object Function |
| --- | --- | --- |
| Gaussian process regression model | RegressionGP | loss |
| Generalized additive model | RegressionGAM | loss |
| Neural network model | RegressionNeuralNetwork | loss |
Alternative Functionality
To compute the regression loss for new predictor data, use the corresponding loss function of the object (Mdl).
Extended Capabilities
GPU Arrays
Accelerate code by running on a graphics processing unit (GPU) using Parallel Computing Toolbox™. (since R2024b)
This function fully supports GPU arrays for RegressionNeuralNetwork model objects. For more information, see Run MATLAB Functions on a GPU (Parallel Computing Toolbox).
Version History
Introduced in R2015b

R2024b: Compute loss for neural network regression model trained with multiple response variables
You can create a neural network regression model with multiple response variables by using the fitrnet function. Regardless of the number of response variables, the function returns a RegressionNeuralNetwork object. You can use the resubLoss object function to compute the resubstitution regression loss.

In the call to resubLoss, you can specify whether to return the average loss or the loss for each response variable by using the OutputType name-value argument. You can also specify whether to standardize the response data before computing the loss by using the StandardizeResponses name-value argument.
R2024b: Specify GPU arrays (requires Parallel Computing Toolbox)
resubLoss fully supports GPU arrays for RegressionNeuralNetwork model objects.
R2023b: Specify predicted response value to use for observations with missing predictor values
Starting in R2023b, when you predict or compute the loss, some regression models allow you to specify the predicted response value for observations with missing predictor values. Specify the PredictionForMissingValue name-value argument to use a numeric scalar, the training set median, or the training set mean as the predicted value. When computing the loss, you can also specify to omit observations with missing predictor values.

This table lists the object functions that support the PredictionForMissingValue name-value argument. By default, the functions use the training set median as the predicted response value for observations with missing predictor values.
| Model Type | Model Objects | Object Functions |
| --- | --- | --- |
| Gaussian process regression (GPR) model | RegressionGP, CompactRegressionGP | loss, predict, resubLoss, resubPredict |
| | RegressionPartitionedGP | kfoldLoss, kfoldPredict |
| Gaussian kernel regression model | RegressionKernel | loss, predict |
| | RegressionPartitionedKernel | kfoldLoss, kfoldPredict |
| Linear regression model | RegressionLinear | loss, predict |
| | RegressionPartitionedLinear | kfoldLoss, kfoldPredict |
| Neural network regression model | RegressionNeuralNetwork, CompactRegressionNeuralNetwork | loss, predict, resubLoss, resubPredict |
| | RegressionPartitionedNeuralNetwork | kfoldLoss, kfoldPredict |
| Support vector machine (SVM) regression model | RegressionSVM, CompactRegressionSVM | loss, predict, resubLoss, resubPredict |
| | RegressionPartitionedSVM | kfoldLoss, kfoldPredict |
In previous releases, the regression model loss and predict functions listed above used NaN predicted response values for observations with missing predictor values. The software omitted observations with missing predictor values from the resubstitution ("resub") and cross-validation ("kfold") computations for prediction and loss.