fitlm
Fit linear regression model
Syntax
Description
mdl = fitlm(tbl,ResponseVarName) specifies which table variable contains the response data.

mdl = fitlm(___,Name,Value) specifies additional options using one or more name-value pair arguments. For example, you can specify which variables are categorical, perform robust regression, or use observation weights.
Examples
Fit Linear Regression Using Data in Matrix
Fit a linear regression model using a matrix input data set.
Load the carsmall
data set, a matrix input data set.
load carsmall
X = [Weight,Horsepower,Acceleration];
Fit a linear regression model by using fitlm
.
mdl = fitlm(X,MPG)
mdl = 
Linear regression model:
    y ~ 1 + x1 + x2 + x3

Estimated Coefficients:
                    Estimate         SE         tStat       pValue  
                   __________    _________    _________    __________

    (Intercept)        47.977       3.8785        12.37    4.8957e-21
    x1             -0.0065416    0.0011274      -5.8023    9.8742e-08
    x2              -0.042943     0.024313      -1.7663       0.08078
    x3              -0.011583      0.19333    -0.059913       0.95236

Number of observations: 93, Error degrees of freedom: 89
Root Mean Squared Error: 4.09
R-squared: 0.752,  Adjusted R-Squared: 0.744
F-statistic vs. constant model: 90, p-value = 7.38e-27
The model display includes the model formula, estimated coefficients, and model summary statistics.
The model formula in the display, y ~ 1 + x1 + x2 + x3, corresponds to y = β0 + β1x1 + β2x2 + β3x3 + ε.
The model display also shows the estimated coefficient information, which is stored in the Coefficients
property. Display the Coefficients
property.
mdl.Coefficients
ans=4×4 table
Estimate SE tStat pValue
__________ _________ _________ __________
(Intercept) 47.977 3.8785 12.37 4.8957e-21
x1 -0.0065416 0.0011274 -5.8023 9.8742e-08
x2 -0.042943 0.024313 -1.7663 0.08078
x3 -0.011583 0.19333 -0.059913 0.95236
The Coefficients property includes these columns:
Estimate — Coefficient estimates for each corresponding term in the model. For example, the estimate for the constant term (Intercept) is 47.977.

SE — Standard error of the coefficients.

tStat — t-statistic for each coefficient to test the null hypothesis that the corresponding coefficient is zero against the alternative that it is different from zero, given the other predictors in the model. Note that tStat = Estimate/SE. For example, the t-statistic for the intercept is 47.977/3.8785 = 12.37 (see the check after this list).

pValue — p-value for the t-statistic of the two-sided hypothesis test. For example, the p-value of the t-statistic for x2 is greater than 0.05, so this term is not significant at the 5% significance level given the other terms in the model.
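As a quick check of the tStat column (a minimal sketch that assumes mdl is the model fitted above), you can recompute the t-statistics directly from the Coefficients table:

% Recompute the t-statistics from the estimates and standard errors
tRecomputed = mdl.Coefficients.Estimate ./ mdl.Coefficients.SE;
disp([mdl.Coefficients.tStat tRecomputed])   % the two columns match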
The summary statistics of the model are:
Number of observations — Number of rows without any NaN values. For example, Number of observations is 93 because the MPG data vector has six NaN values and the Horsepower data vector has one NaN value for a different observation, where the number of rows in X and MPG is 100.

Error degrees of freedom — n – p, where n is the number of observations, and p is the number of coefficients in the model, including the intercept. For example, the model has four coefficients (the intercept and three predictor terms), so the Error degrees of freedom is 93 – 4 = 89.

Root Mean Squared Error — Square root of the mean squared error, which estimates the standard deviation of the error distribution.

R-squared and Adjusted R-squared — Coefficient of determination and adjusted coefficient of determination, respectively. For example, the R-squared value suggests that the model explains approximately 75% of the variability in the response variable MPG.

F-statistic vs. constant model — Test statistic for the F-test on the regression model, which tests whether the model fits significantly better than a degenerate model consisting of only a constant term.

p-value — p-value for the F-test on the model. For example, the model is significant with a p-value of 7.3816e-27.
You can find these statistics in the model properties (NumObservations
, DFE
, RMSE
, and Rsquared
) and by using the anova
function.
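For example (a quick sketch, assuming mdl is the model fitted above), you can read these statistics directly from the model properties by using dot notation:

% Summary statistics stored in the model properties
mdl.NumObservations        % number of observations used in the fit
mdl.DFE                    % error degrees of freedom
mdl.RMSE                   % root mean squared error
mdl.Rsquared.Ordinary      % R-squared
mdl.Rsquared.Adjusted      % adjusted R-squared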
anova(mdl,'summary')
ans=3×5 table
SumSq DF MeanSq F pValue
______ __ ______ ______ __________
Total 6004.8 92 65.269
Model 4516 3 1505.3 89.987 7.3816e-27
Residual 1488.8 89 16.728
Use plot
to create an added variable plot (partial regression leverage plot) for the whole model except the constant (intercept) term.
plot(mdl)
Fit Linear Regression Using Data in Table
Load the sample data.
load carsmall
Store the variables in a table.
tbl = table(Weight,Acceleration,MPG,'VariableNames',{'Weight','Acceleration','MPG'});
Display the first five rows of the table.
tbl(1:5,:)
ans=5×3 table
Weight Acceleration MPG
______ ____________ ___
3504 12 18
3693 11.5 15
3436 11 18
3433 12 16
3449 10.5 17
Fit a linear regression model for miles per gallon (MPG). Specify the model formula by using Wilkinson notation.
lm = fitlm(tbl,'MPG~Weight+Acceleration')
lm = 
Linear regression model:
    MPG ~ 1 + Weight + Acceleration

Estimated Coefficients:
                     Estimate          SE         tStat       pValue  
                    __________    __________    _______    __________

    (Intercept)         45.155        3.4659     13.028    1.6266e-22
    Weight          -0.0082475    0.00059836    -13.783    5.3165e-24
    Acceleration       0.19694       0.14743      1.3359      0.18493

Number of observations: 94, Error degrees of freedom: 91
Root Mean Squared Error: 4.12
R-squared: 0.743,  Adjusted R-Squared: 0.738
F-statistic vs. constant model: 132, p-value = 1.38e-27
The model 'MPG~Weight+Acceleration' in this example is equivalent to setting the model specification as 'linear'. For example,
lm2 = fitlm(tbl,'linear');
If you use a character vector for model specification and you do not specify the response variable, then fitlm
accepts the last variable in tbl
as the response variable and the other variables as the predictor variables.
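For example (a minimal sketch, assuming tbl from this example), you can make the response explicit with the 'ResponseVar' name-value pair argument and obtain the same fit, because MPG is the last variable in tbl:

lm3 = fitlm(tbl,'linear','ResponseVar','MPG');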
Fit Linear Regression Using Specified Model Formula
Fit a linear regression model using a model formula specified by Wilkinson notation.
Load the sample data.
load carsmall
Store the variables in a table.
tbl = table(Weight,Acceleration,Model_Year,MPG,'VariableNames',{'Weight','Acceleration','Model_Year','MPG'});
Fit a linear regression model for miles per gallon (MPG) with weight and acceleration as the predictor variables.
lm = fitlm(tbl,'MPG~Weight+Acceleration')
lm = 
Linear regression model:
    MPG ~ 1 + Weight + Acceleration

Estimated Coefficients:
                     Estimate          SE         tStat       pValue  
                    __________    __________    _______    __________

    (Intercept)         45.155        3.4659     13.028    1.6266e-22
    Weight          -0.0082475    0.00059836    -13.783    5.3165e-24
    Acceleration       0.19694       0.14743      1.3359      0.18493

Number of observations: 94, Error degrees of freedom: 91
Root Mean Squared Error: 4.12
R-squared: 0.743,  Adjusted R-Squared: 0.738
F-statistic vs. constant model: 132, p-value = 1.38e-27
The p-value of 0.18493 indicates that Acceleration
does not have a significant impact on MPG
.
Remove Acceleration
from the model, and try improving the model by adding the predictor variable Model_Year
. First define Model_Year
as a categorical variable.
tbl.Model_Year = categorical(tbl.Model_Year);
lm = fitlm(tbl,'MPG~Weight+Model_Year')
lm = 
Linear regression model:
    MPG ~ 1 + Weight + Model_Year

Estimated Coefficients:
                      Estimate          SE         tStat       pValue  
                     __________    __________    _______    __________

    (Intercept)           40.11        1.5418     26.016    1.2024e-43
    Weight           -0.0066475    0.00042802    -15.531    3.3639e-27
    Model_Year_76        1.9291       0.74761     2.5804      0.011488
    Model_Year_82        7.9093       0.84975     9.3078    7.8681e-15

Number of observations: 94, Error degrees of freedom: 90
Root Mean Squared Error: 2.92
R-squared: 0.873,  Adjusted R-Squared: 0.868
F-statistic vs. constant model: 206, p-value = 3.83e-40
Specifying modelspec
using Wilkinson notation enables you to update the model without having to change the design matrix. fitlm
uses only the variables that are specified in the formula. It also creates the necessary two dummy indicator variables for the categorical variable Model_Year
.
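To see the terms and dummy variables that fitlm actually used (a quick check, assuming lm is the model fitted above), you can list the coefficient names:

lm.CoefficientNames   % includes Model_Year_76 and Model_Year_82, but not the reference level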
Fit Linear Regression Using Terms Matrix
Fit a linear regression model using a terms matrix.
Terms Matrix for Table Input
If the model variables are in a table, then a column of 0
s in a terms matrix represents the position of the response variable.
Load the hospital
data set.
load hospital
Store the variables in a table.
t = table(hospital.Sex,hospital.BloodPressure(:,1),hospital.Age,hospital.Smoker, ... 'VariableNames',{'Sex','BloodPressure','Age','Smoker'});
Represent the linear model 'BloodPressure ~ 1 + Sex + Age + Smoker'
using a terms matrix. The response variable is in the second column of the table, so the second column of the terms matrix must be a column of 0
s for the response variable.
T = [0 0 0 0;1 0 0 0;0 0 1 0;0 0 0 1]
T = 4×4
0 0 0 0
1 0 0 0
0 0 1 0
0 0 0 1
Fit a linear model.
mdl1 = fitlm(t,T)
mdl1 = 
Linear regression model:
    BloodPressure ~ 1 + Sex + Age + Smoker

Estimated Coefficients:
                   Estimate       SE         tStat       pValue  
                   ________    ________    ________    __________

    (Intercept)      116.14      2.6107      44.485    7.1287e-66
    Sex_Male       0.050106     0.98364    0.050939       0.95948
    Age            0.085276    0.066945      1.2738        0.2058
    Smoker_1           9.87      1.0346      9.5395    1.4516e-15

Number of observations: 100, Error degrees of freedom: 96
Root Mean Squared Error: 4.78
R-squared: 0.507,  Adjusted R-Squared: 0.492
F-statistic vs. constant model: 33, p-value = 9.91e-15
Terms Matrix for Matrix Input
If the predictor and response variables are in a matrix and column vector, then you must include 0
for the response variable at the end of each row in a terms matrix.
Load the carsmall
data set and define the matrix of predictors.
load carsmall
X = [Acceleration,Weight];
Specify the model 'MPG ~ Acceleration + Weight + Acceleration:Weight + Weight^2'
using a terms matrix. This model includes the main effect and two-way interaction terms for the variables Acceleration
and Weight
, and a second-order term for the variable Weight
.
T = [0 0 0;1 0 0;0 1 0;1 1 0;0 2 0]
T = 5×3
0 0 0
1 0 0
0 1 0
1 1 0
0 2 0
Fit a linear model.
mdl2 = fitlm(X,MPG,T)
mdl2 = 
Linear regression model:
    y ~ 1 + x1*x2 + x2^2

Estimated Coefficients:
                    Estimate          SE         tStat       pValue  
                   ___________    __________    _______    __________

    (Intercept)         48.906        12.589     3.8847    0.00019665
    x1                 0.54418       0.57125    0.95261       0.34337
    x2               -0.012781     0.0060312    -2.1192      0.036857
    x1:x2          -0.00010892    0.00017925    -0.6076         0.545
    x2^2            9.7518e-07    7.5389e-07     1.2935       0.19917

Number of observations: 94, Error degrees of freedom: 89
Root Mean Squared Error: 4.1
R-squared: 0.751,  Adjusted R-Squared: 0.739
F-statistic vs. constant model: 67, p-value = 4.99e-26
Only the intercept and x2
term, which corresponds to the Weight
variable, are significant at the 5% significance level.
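Acting on this result (a sketch, not part of the original example), you can refit the model using only the Weight (x2) term, since the remaining terms are not significant at the 5% level:

% Refit with an intercept and the x2 (Weight) term only
mdl3 = fitlm(X,MPG,'y ~ x2');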
Linear Regression with Categorical Predictor
Fit a linear regression model that contains a categorical predictor. Reorder the categories of the categorical predictor to control the reference level in the model. Then, use anova
to test the significance of the categorical variable.
Model with Categorical Predictor
Load the carsmall
data set and create a linear regression model of MPG
as a function of Model_Year
. To treat the numeric vector Model_Year
as a categorical variable, identify the predictor using the 'CategoricalVars'
name-value pair argument.
load carsmall
mdl = fitlm(Model_Year,MPG,'CategoricalVars',1,'VarNames',{'Model_Year','MPG'})
mdl = 
Linear regression model:
    MPG ~ 1 + Model_Year

Estimated Coefficients:
                     Estimate      SE       tStat       pValue  
                     ________    ______    ______    __________

    (Intercept)         17.69    1.0328    17.127    3.2371e-30
    Model_Year_76      3.8839    1.4059    2.7625     0.0069402
    Model_Year_82       14.02    1.4369    9.7571    8.2164e-16

Number of observations: 94, Error degrees of freedom: 91
Root Mean Squared Error: 5.56
R-squared: 0.531,  Adjusted R-Squared: 0.521
F-statistic vs. constant model: 51.6, p-value = 1.07e-15
The model formula in the display, MPG ~ 1 + Model_Year, corresponds to

MPG = β0 + β1·I(Model_Year = 76) + β2·I(Model_Year = 82) + ε,

where I(Model_Year = 76) and I(Model_Year = 82) are indicator variables whose value is one if the value of Model_Year is 76 and 82, respectively. The Model_Year variable includes three distinct values, which you can check by using the unique function.
unique(Model_Year)
ans = 3×1
70
76
82
fitlm chooses the smallest value in Model_Year as a reference level ('70') and creates two indicator variables I(Model_Year = 76) and I(Model_Year = 82). The model includes only two indicator variables because the design matrix becomes rank deficient if the model includes three indicator variables (one for each level) and an intercept term.
Model with Full Indicator Variables
You can interpret the model formula of mdl as a model that has three indicator variables without an intercept term:

MPG = β0·I(Model_Year = 70) + (β0 + β1)·I(Model_Year = 76) + (β0 + β2)·I(Model_Year = 82) + ε.
Alternatively, you can create a model that has three indicator variables without an intercept term by manually creating indicator variables and specifying the model formula.
temp_Year = dummyvar(categorical(Model_Year));
Model_Year_70 = temp_Year(:,1);
Model_Year_76 = temp_Year(:,2);
Model_Year_82 = temp_Year(:,3);
tbl = table(Model_Year_70,Model_Year_76,Model_Year_82,MPG);
mdl = fitlm(tbl,'MPG ~ Model_Year_70 + Model_Year_76 + Model_Year_82 - 1')
mdl = 
Linear regression model:
    MPG ~ Model_Year_70 + Model_Year_76 + Model_Year_82

Estimated Coefficients:
                     Estimate      SE        tStat       pValue  
                     ________    _______    ______    __________

    Model_Year_70      17.69      1.0328    17.127    3.2371e-30
    Model_Year_76     21.574     0.95387    22.617    4.0156e-39
    Model_Year_82      31.71     0.99896    31.743    5.2234e-51

Number of observations: 94, Error degrees of freedom: 91
Root Mean Squared Error: 5.56
Choose Reference Level in Model
You can choose a reference level by modifying the order of categories in a categorical variable. First, create a categorical variable Year
.
Year = categorical(Model_Year);
Check the order of categories by using the categories
function.
categories(Year)
ans = 3x1 cell
{'70'}
{'76'}
{'82'}
If you use Year
as a predictor variable, then fitlm
chooses the first category '70'
as a reference level. Reorder Year
by using the reordercats
function.
Year_reordered = reordercats(Year,{'76','70','82'});
categories(Year_reordered)
ans = 3x1 cell
{'76'}
{'70'}
{'82'}
The first category of Year_reordered
is '76'
. Create a linear regression model of MPG
as a function of Year_reordered
.
mdl2 = fitlm(Year_reordered,MPG,'VarNames',{'Model_Year','MPG'})
mdl2 = 
Linear regression model:
    MPG ~ 1 + Model_Year

Estimated Coefficients:
                     Estimate      SE        tStat       pValue  
                     ________    _______    _______    __________

    (Intercept)        21.574    0.95387     22.617    4.0156e-39
    Model_Year_70     -3.8839     1.4059    -2.7625     0.0069402
    Model_Year_82      10.136     1.3812     7.3385    8.7634e-11

Number of observations: 94, Error degrees of freedom: 91
Root Mean Squared Error: 5.56
R-squared: 0.531,  Adjusted R-Squared: 0.521
F-statistic vs. constant model: 51.6, p-value = 1.07e-15
mdl2 uses '76' as a reference level and includes two indicator variables I(Model_Year = 70) and I(Model_Year = 82).
Evaluate Categorical Predictor
The model display of mdl2 includes a p-value for each term, which tests whether the corresponding coefficient is equal to zero. These p-values examine each indicator variable individually. To examine the categorical variable Model_Year
as a group of indicator variables, use anova
. Use the 'components'
(default) option to return a component ANOVA table that includes ANOVA statistics for each variable in the model except the constant term.
anova(mdl2,'components')
ans=2×5 table
SumSq DF MeanSq F pValue
______ __ ______ _____ __________
Model_Year 3190.1 2 1595.1 51.56 1.0694e-15
Error 2815.2 91 30.936
The component ANOVA table includes the p-value of the Model_Year
variable, which is smaller than the p-values of the indicator variables.
Specify Response and Predictor Variables for Linear Model
Fit a linear regression model to sample data. Specify the response and predictor variables, and include only pairwise interaction terms in the model.
Load sample data.
load hospital
Fit a linear model with interaction terms to the data. Specify weight as the response variable, and sex, age, and smoking status as the predictor variables. Also, specify that sex and smoking status are categorical variables.
mdl = fitlm(hospital,'interactions','ResponseVar','Weight',...
    'PredictorVars',{'Sex','Age','Smoker'},...
    'CategoricalVar',{'Sex','Smoker'})
mdl = 
Linear regression model:
    Weight ~ 1 + Sex*Age + Sex*Smoker + Age*Smoker

Estimated Coefficients:
                         Estimate       SE        tStat       pValue  
                         ________    _______    ________    __________

    (Intercept)             118.7     7.0718      16.785     6.821e-30
    Sex_Male               68.336     9.7153      7.0339    3.3386e-10
    Age                   0.31068    0.18531      1.6765      0.096991
    Smoker_1               3.0425     10.446     0.29127       0.77149
    Sex_Male:Age         -0.49094    0.24764     -1.9825      0.050377
    Sex_Male:Smoker_1      0.9509     3.8031     0.25003       0.80312
    Age:Smoker_1         -0.07288    0.26275    -0.27737       0.78211

Number of observations: 100, Error degrees of freedom: 93
Root Mean Squared Error: 8.75
R-squared: 0.898,  Adjusted R-Squared: 0.892
F-statistic vs. constant model: 137, p-value = 6.91e-44
The weight of the patients does not seem to differ significantly according to age, smoking status, or the interaction of these factors with patient sex at the 5% significance level.
Fit Robust Linear Regression Model
Load the hald
data set, which measures the effect of cement composition on its hardening heat.
load hald
This data set includes the variables ingredients
and heat
. The matrix ingredients
contains the percent composition of four chemicals present in the cement. The vector heat
contains the values for the heat hardening after 180 days for each cement sample.
Fit a robust linear regression model to the data.
mdl = fitlm(ingredients,heat,'RobustOpts','on')
mdl = 
Linear regression model (robust fit):
    y ~ 1 + x1 + x2 + x3 + x4

Estimated Coefficients:
                   Estimate      SE        tStat       pValue 
                   ________    _______    ________    ________

    (Intercept)       60.09     75.818     0.79256      0.4509
    x1               1.5753    0.80585      1.9548    0.086346
    x2               0.5322    0.78315     0.67957     0.51596
    x3              0.13346     0.8166     0.16343     0.87424
    x4             -0.12052     0.7672    -0.15709     0.87906

Number of observations: 13, Error degrees of freedom: 8
Root Mean Squared Error: 2.65
R-squared: 0.979,  Adjusted R-Squared: 0.969
F-statistic vs. constant model: 94.6, p-value = 9.03e-07
For more details, see the topic Reduce Outlier Effects Using Robust Regression, which compares the results of a robust fit to a standard least-squares fit.
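For a quick side-by-side comparison (a sketch, not part of the original example), you can fit the same data by ordinary least squares and compare the coefficient estimates of the two fits:

% Compare robust and ordinary least-squares estimates
mdlOLS = fitlm(ingredients,heat);
disp([mdl.Coefficients.Estimate mdlOLS.Coefficients.Estimate])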
Compute Mean Absolute Error Using Cross-Validation
Compute the mean absolute error of a regression model by using 10-fold cross-validation.
Load the carsmall
data set. Specify the Acceleration
and Displacement
variables as predictors and the Weight
variable as the response.
load carsmall
X1 = Acceleration;
X2 = Displacement;
y = Weight;
Create the custom function regf
(shown at the end of this example). This function fits a regression model to training data and then computes predicted car weights on a test set. The function compares the predicted car weight values to the true values, and then computes the mean absolute error (MAE) and the MAE adjusted to the range of the test set car weights.
Note: If you use the live script file for this example, the regf
function is already included at the end of the file. Otherwise, you need to create this function at the end of your .m file or add it as a file on the MATLAB® path.
By default, crossval
performs 10-fold cross-validation. For each of the 10 training and test set partitions of the data in X1
, X2
, and y
, compute the MAE and adjusted MAE values using the regf
function. Find the mean MAE and mean adjusted MAE.
rng('default') % For reproducibility
values = crossval(@regf,X1,X2,y)
values = 10×2
319.2261 0.1132
342.3722 0.1240
214.3735 0.0902
174.7247 0.1128
189.4835 0.0832
249.4359 0.1003
194.4210 0.0845
348.7437 0.1700
283.1761 0.1187
210.7444 0.1325
mean(values)
ans = 1×2
252.6701 0.1129
This code creates the function regf
.
function errors = regf(X1train,X2train,ytrain,X1test,X2test,ytest)
tbltrain = table(X1train,X2train,ytrain, ...
    'VariableNames',{'Acceleration','Displacement','Weight'});
tbltest = table(X1test,X2test,ytest, ...
    'VariableNames',{'Acceleration','Displacement','Weight'});
mdl = fitlm(tbltrain,'Weight ~ Acceleration + Displacement');
yfit = predict(mdl,tbltest);
MAE = mean(abs(yfit-tbltest.Weight));
adjMAE = MAE/range(tbltest.Weight);
errors = [MAE adjMAE];
end
Input Arguments
tbl
— Input data
table | dataset array
Input data including predictor and response variables, specified as a table or dataset array. The predictor variables can be numeric, logical, categorical, character, or string. The response variable must be numeric or logical.
By default, fitlm takes the last variable as the response variable and the others as the predictor variables.

To set a different column as the response variable, use the ResponseVar name-value pair argument.

To use a subset of the columns as predictors, use the PredictorVars name-value pair argument.

To define a model specification, set the modelspec argument using a formula or terms matrix. The formula or terms matrix specifies which columns to use as the predictor or response variables. (A brief sketch of these options appears after this list.)
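A minimal sketch of these options, assuming a table built from the carsmall variables used elsewhere on this page:

load carsmall
tbl = table(Weight,Acceleration,MPG);                              % MPG is the last variable
mdl1 = fitlm(tbl,'ResponseVar','MPG');                             % set the response explicitly
mdl2 = fitlm(tbl,'PredictorVars',{'Weight'},'ResponseVar','MPG');  % use a subset of predictors
mdl3 = fitlm(tbl,'MPG ~ Weight + Acceleration');                   % define the model with a formula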
The variable names in a table do not have to be valid MATLAB® identifiers, but the names must not contain leading or trailing blanks. If the names are not valid, you cannot use a formula when you fit or adjust a model; for example:

You cannot specify modelspec using a formula.

You cannot use a formula to specify the terms to add or remove when you use the addTerms function or the removeTerms function, respectively.

You cannot use a formula to specify the lower and upper bounds of the model when you use the step or stepwiselm function with the name-value pair arguments 'Lower' and 'Upper', respectively.
You can verify the variable names in tbl
by using the isvarname
function. If the variable names are
not valid, then you can convert them by using the matlab.lang.makeValidName
function.
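A minimal sketch of this check, assuming tbl is a table you plan to pass to fitlm:

% Verify the variable names, and repair them if any name is not a valid identifier
names = tbl.Properties.VariableNames;
if ~all(cellfun(@isvarname,names))
    tbl.Properties.VariableNames = matlab.lang.makeValidName(names);
end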
ResponseVarName
— Name of variable to use as response
string scalar | character vector
Name of the variable to use as the response, specified as a string scalar or character vector.
ResponseVarName
indicates which variable in
tbl
contains the response data. When you specify
ResponseVarName
, you must also specify the
tbl
input argument.
Data Types: char
| string
X
— Predictor variables
matrix
Predictor variables, specified as an n-by-p matrix,
where n is the number of observations and p is
the number of predictor variables. Each column of X
represents
one variable, and each row represents one observation.
By default, there is a constant term in the model, unless you
explicitly remove it, so do not include a column of 1s in X
.
Data Types: single
| double
modelspec
— Model specification
'linear'
(default) | character vector or string scalar naming the model | t-by-(p + 1) terms matrix | character vector or string scalar formula in the form 'y ~ terms'
Model specification, specified as one of these values.

A character vector or string scalar naming the model:

Value | Model Type |
---|---|
'constant' | Model contains only a constant (intercept) term. |
'linear' | Model contains an intercept and linear term for each predictor. |
'interactions' | Model contains an intercept, linear term for each predictor, and all products of pairs of distinct predictors (no squared terms). |
'purequadratic' | Model contains an intercept term and linear and squared terms for each predictor. |
'quadratic' | Model contains an intercept term, linear and squared terms for each predictor, and all products of pairs of distinct predictors. |
'polyijk' | Model is a polynomial with all terms up to degree i in the first predictor, degree j in the second predictor, and so on. Specify the maximum degree for each predictor by using numerals 0 through 9. The model contains interaction terms, but the degree of each interaction term does not exceed the maximum value of the specified degrees. For example, 'poly13' has an intercept and x1, x2, x2^2, x2^3, x1*x2, and x1*x2^2 terms, where x1 and x2 are the first and second predictors, respectively. |

A t-by-(p + 1) matrix, or a Terms Matrix, specifying terms in the model, where t is the number of terms and p is the number of predictor variables, and +1 accounts for the response variable. A terms matrix is convenient when the number of predictors is large and you want to generate the terms programmatically.

A character vector or string scalar Formula in the form 'y ~ terms', where the terms are in Wilkinson Notation. The variable names in the formula must be variable names in tbl or variable names specified by VarNames. Also, the variable names must be valid MATLAB identifiers.

The software determines the order of terms in a fitted model by using the order of terms in tbl or X. Therefore, the order of terms in the model can be different from the order of terms in the specified formula.

(A sketch comparing a formula and an equivalent terms matrix appears at the end of this argument description.)
When you specify modelspec
, you cannot use the
PredictorVars
name-value argument to specify the
predictor variables.
Example: 'quadratic'
Example: 'y ~ x1 + x2^2 + x1:x2'
Data Types: single
| double
| char
| string
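For example (a sketch using the carsmall matrix from the earlier examples), a Wilkinson formula and a terms matrix can specify the same model:

load carsmall
X = [Acceleration,Weight];
mdlFormula = fitlm(X,MPG,'y ~ x1 + x2 + x2^2');   % Wilkinson notation with default names x1, x2, y
T = [0 0 0; 1 0 0; 0 1 0; 0 2 0];                 % intercept, x1, x2, x2^2; last column is the response
mdlTerms = fitlm(X,MPG,T);                        % same terms as mdlFormula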
Name-Value Arguments
Specify optional pairs of arguments as
Name1=Value1,...,NameN=ValueN
, where Name
is
the argument name and Value
is the corresponding value.
Name-value arguments must appear after other arguments, but the order of the
pairs does not matter.
Before R2021a, use commas to separate each name and value, and enclose
Name
in quotes.
Example: 'Intercept',false,'PredictorVars',[1,3],'ResponseVar',5,'RobustOpts','logistic' specifies a robust regression model with no constant term, where the algorithm uses the logistic weighting function with the default tuning constant, the first and third variables are the predictor variables, and the fifth variable is the response variable.
CategoricalVars
— Categorical predictor list
string array | cell array of character vectors | logical or numeric index vector
Categorical predictor list, specified as the comma-separated pair consisting of
'CategoricalVars'
and either a string array or cell array of
character vectors containing categorical predictor names in the table or dataset array
tbl
, or a logical or numeric index vector indicating which
predictor columns are categorical.
If data is in a table or dataset array tbl, then, by default, fitlm treats all categorical values, logical values, character arrays, string arrays, and cell arrays of character vectors as categorical predictors.

If data is in matrix X, then the default value of 'CategoricalVars' is an empty matrix []. That is, no predictor is categorical unless you specify it as categorical.
For example, you can specify the second and third variables out of six as categorical using either of the following:
Example: 'CategoricalVars',[2,3]
Example: 'CategoricalVars',logical([0 1 1 0 0 0])
Data Types: single
| double
| logical
| string
| cell
Exclude
— Observations to exclude
logical or numeric index vector
Observations to exclude from the fit, specified as the comma-separated
pair consisting of 'Exclude'
and a logical or numeric
index vector indicating which observations to exclude from the fit.
For example, you can exclude observations 2 and 3 out of 6 using either of the following examples.
Example: 'Exclude',[2,3]
Example: 'Exclude',logical([0 1 1 0 0 0])
Data Types: single
| double
| logical
Intercept
— Indicator for constant term
true
(default) | false
Indicator for the constant term (intercept) in the fit, specified as the comma-separated pair
consisting of 'Intercept'
and either true
to
include or false
to remove the constant term from the model.
Use 'Intercept'
only when specifying the model using a character vector or
string scalar, not a formula or matrix.
Example: 'Intercept',false
PredictorVars
— Predictor variables
string array | cell array of character vectors | logical or numeric index vector
Predictor variables to use in the fit, specified as the comma-separated pair consisting of
'PredictorVars'
and either a string array or cell array of
character vectors of the variable names in the table or dataset array
tbl
, or a logical or numeric index vector indicating which
columns are predictor variables.
The string values or character vectors should be among the names in tbl
, or
the names you specify using the 'VarNames'
name-value pair
argument.
The default is all variables in X
, or all
variables in tbl
except for ResponseVar
.
For example, you can specify the second and third variables as the predictor variables using either of the following examples.
When you specify PredictorVars
, you cannot use the
modelspec
input argument to specify a terms matrix.
Example: 'PredictorVars',[2,3]
Example: 'PredictorVars',logical([0 1 1 0 0 0])
Data Types: single
| double
| logical
| string
| cell
ResponseVar
— Response variable
last column in tbl
(default) | character vector or string scalar containing variable name | logical or numeric index vector
Response variable to use in the fit, specified as the comma-separated pair consisting of
'ResponseVar'
and either a character vector or string scalar
containing the variable name in the table or dataset array tbl
, or a
logical or numeric index vector indicating which column is the response variable. You
typically need to use 'ResponseVar'
when fitting a table or dataset
array tbl
.
For example, you can specify the fourth variable, say yield
,
as the response out of six variables, in one of the following ways.
Example: 'ResponseVar','yield'
Example: 'ResponseVar',[4]
Example: 'ResponseVar',logical([0 0 0 1 0 0])
Data Types: single
| double
| logical
| char
| string
RobustOpts
— Indicator of robust fitting type
'off'
(default) | 'on'
| character vector | string scalar | structure
Indicator of the robust fitting type to use, specified as the comma-separated pair consisting
of 'RobustOpts'
and one of these values.
'off' — No robust fitting. fitlm uses ordinary least squares.

'on' — Robust fitting using the 'bisquare' weight function with the default tuning constant.

Character vector or string scalar — Name of a robust fitting weight function from the following table. fitlm uses the corresponding default tuning constant specified in the table.

Structure with the two fields RobustWgtFun and Tune (see the sketch at the end of this argument description):

The RobustWgtFun field contains the name of a robust fitting weight function from the following table or a function handle of a custom weight function.

The Tune field contains a tuning constant. If you do not set the Tune field, fitlm uses the corresponding default tuning constant.
Weight Function | Description | Default Tuning Constant |
---|---|---|
'andrews' | w = (abs(r)<pi) .* sin(r) ./ r | 1.339 |
'bisquare' | w = (abs(r)<1) .* (1 - r.^2).^2 (also called biweight) | 4.685 |
'cauchy' | w = 1 ./ (1 + r.^2) | 2.385 |
'fair' | w = 1 ./ (1 + abs(r)) | 1.400 |
'huber' | w = 1 ./ max(1, abs(r)) | 1.345 |
'logistic' | w = tanh(r) ./ r | 1.205 |
'ols' | Ordinary least squares (no weighting function) | None |
'talwar' | w = 1 * (abs(r)<1) | 2.795 |
'welsch' | w = exp(-(r.^2)) | 2.985 |
function handle | Custom weight function that accepts a vector r of scaled residuals, and returns a vector of weights the same size as r | 1 |

The default tuning constants of built-in weight functions give coefficient estimates that are approximately 95% as statistically efficient as the ordinary least-squares estimates, provided the response has a normal distribution with no outliers. Decreasing the tuning constant increases the downweight assigned to large residuals; increasing the tuning constant decreases the downweight assigned to large residuals.
The value r in the weight functions is

r = resid/(tune*s*sqrt(1-h))

where resid is the vector of residuals from the previous iteration, tune is the tuning constant, h is the vector of leverage values from a least-squares fit, and s is an estimate of the standard deviation of the error term given by s = MAD/0.6745. MAD is the median absolute deviation of the residuals from their median. The constant 0.6745 makes the estimate unbiased for the normal distribution. If X has p columns, the software excludes the smallest p absolute deviations when computing the median.
For robust fitting, fitlm
uses
M-estimation to formulate estimating equations and solves them using the method of Iteratively Reweighted Least Squares (IRLS).
Example: 'RobustOpts','andrews'
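A minimal sketch of the structure form (the weight function below reproduces the 'huber' weights and is only illustrative):

load hald
wfun = @(r) 1 ./ max(1,abs(r));                    % custom weight function of the scaled residuals
opts = struct('RobustWgtFun',wfun,'Tune',1.345);   % set the tuning constant explicitly
mdlRobust = fitlm(ingredients,heat,'RobustOpts',opts);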
VarNames
— Names of variables
{'x1','x2',...,'xn','y'}
(default) | string array | cell array of character vectors
Names of variables, specified as the comma-separated pair consisting of
'VarNames'
and a string array or cell array of character vectors
including the names for the columns of X
first, and the name for the
response variable y
last.
'VarNames'
is not applicable to variables in a table or dataset
array, because those variables already have names.
The variable names do not have to be valid MATLAB identifiers, but the names must not contain leading or trailing blanks. If the names are not valid, you cannot use a formula when you fit or adjust a model; for example:
You cannot use a formula to specify the terms to add or remove when you use the addTerms function or the removeTerms function, respectively.

You cannot use a formula to specify the lower and upper bounds of the model when you use the step or stepwiselm function with the name-value pair arguments 'Lower' and 'Upper', respectively.
Before specifying 'VarNames',varNames
, you can verify the variable
names in varNames
by using the isvarname
function. If the variable names are not valid, then you can
convert them by using the matlab.lang.makeValidName
function.
Example: 'VarNames',{'Horsepower','Acceleration','Model_Year','MPG'}
Data Types: string
| cell
Weights
— Observation weights
ones(n,1)
(default) | n-by-1 vector of nonnegative scalar values
Observation weights, specified as the comma-separated pair consisting
of 'Weights'
and an n-by-1 vector
of nonnegative scalar values, where n is the number
of observations.
Data Types: single
| double
Output Arguments
mdl
— Linear model
LinearModel
object
Linear model representing a least-squares fit of the response to the data, returned as a
LinearModel
object.
If the value of the 'RobustOpts'
name-value
pair is not []
or 'ols'
, the
model is not a least-squares fit, but uses the robust fitting function.
More About
Terms Matrix
A terms matrix T
is a
t-by-(p + 1) matrix specifying terms in a model,
where t is the number of terms, p is the number of
predictor variables, and +1 accounts for the response variable. The value of
T(i,j)
is the exponent of variable j
in term
i
.
For example, suppose that an input includes three predictor variables x1
,
x2
, and x3
and the response variable
y
in the order x1
, x2
,
x3
, and y
. Each row of T
represents one term:
[0 0 0 0] — Constant term or intercept

[0 1 0 0] — x2; equivalently, x1^0 * x2^1 * x3^0

[1 0 1 0] — x1*x3

[2 0 0 0] — x1^2

[0 1 2 0] — x2*(x3^2)
The 0
at the end of each term represents the response variable. In
general, a column vector of zeros in a terms matrix represents the position of the response
variable. If you have the predictor and response variables in a matrix and column vector,
then you must include 0
for the response variable in the last column of
each row.
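The list above written out as a terms matrix (a small sketch for three predictors x1, x2, x3 and response y):

T = [0 0 0 0    % constant term (intercept)
     0 1 0 0    % x2
     1 0 1 0    % x1*x3
     2 0 0 0    % x1^2
     0 1 2 0];  % x2*(x3^2)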
Formula
A formula for model specification is a character vector or string scalar of the form 'y ~ terms'.

y is the response name.

terms represents the predictor terms in a model using Wilkinson notation.
To represent predictor and response variables, use the variable names of the table
input tbl
or the variable names specified by using
VarNames
. The default value of
VarNames
is
{'x1','x2',...,'xn','y'}
.
For example:
'y ~ x1 + x2 + x3' specifies a three-variable linear model with intercept.

'y ~ x1 + x2 + x3 - 1' specifies a three-variable linear model without intercept. Note that formulas include a constant (intercept) term by default. To exclude a constant term from the model, you must include -1 in the formula.

A formula includes a constant term unless you explicitly remove the term using -1.
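For example (a sketch reusing the carsmall variables from the earlier examples), the trailing -1 removes the intercept from the fit:

load carsmall
tbl = table(Weight,Acceleration,MPG);
lmNoIntercept = fitlm(tbl,'MPG ~ Weight + Acceleration - 1');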
Wilkinson Notation
Wilkinson notation describes the terms in a model. The notation relates to the terms included in the model, not to the multipliers (coefficients) of those terms.
Wilkinson notation uses these symbols:
+ means include the next variable.

- means do not include the next variable.

: defines an interaction, which is a product of the terms.

* defines an interaction and all lower order terms.

^ raises the predictor to a power, exactly as in * repeated, so ^ includes lower order terms as well.

() groups the terms.
This table shows typical examples of Wilkinson notation.
Wilkinson Notation | Terms in Standard Notation |
---|---|
1 | Constant (intercept) term |
x1^k, where k is a positive integer | x1, x1^2, ..., x1^k |
x1 + x2 | x1, x2 |
x1*x2 | x1, x2, x1*x2 |
x1:x2 | x1*x2 only |
-x2 | Do not include x2 |
x1*x2 + x3 | x1, x2, x3, x1*x2 |
x1 + x2 + x3 + x1:x2 | x1, x2, x3, x1*x2 |
x1*x2*x3 - x1:x2:x3 | x1, x2, x3, x1*x2, x1*x3, x2*x3 |
x1*(x2 + x3) | x1, x2, x3, x1*x2, x1*x3 |
For more details, see Wilkinson Notation.
Tips
To access the model properties of the LinearModel object mdl, you can use dot notation. For example, mdl.Residuals returns a table of the raw, Pearson, Studentized, and standardized residual values for the model (see the sketch after this list).

After training a model, you can generate C/C++ code that predicts responses for new data. Generating C/C++ code requires MATLAB Coder™. For details, see Introduction to Code Generation.
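A minimal sketch of the dot-notation access above (mdl can be any fitted LinearModel; here it is refit from the carsmall data used earlier on this page):

load carsmall
mdl = fitlm([Weight,Horsepower,Acceleration],MPG);
mdl.Residuals(1:5,:)         % first rows of the residuals table
histogram(mdl.Residuals.Raw) % distribution of the raw residuals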
Algorithms
The main fitting algorithm is QR decomposition. For robust fitting, fitlm uses M-estimation to formulate estimating equations and solves them using the method of Iteratively Reweighted Least Squares (IRLS).

fitlm treats a categorical predictor as follows:

A model with a categorical predictor that has L levels (categories) includes L – 1 indicator variables. The model uses the first category as a reference level, so it does not include the indicator variable for the reference level. If the data type of the categorical predictor is categorical, then you can check the order of categories by using categories and reorder the categories by using reordercats to customize the reference level. For more details about creating indicator variables, see Automatic Creation of Dummy Variables.

fitlm treats the group of L – 1 indicator variables as a single variable. If you want to treat the indicator variables as distinct predictor variables, create indicator variables manually by using dummyvar. Then use the indicator variables, except the one corresponding to the reference level of the categorical variable, when you fit a model. For the categorical predictor X, if you specify all columns of dummyvar(X) and an intercept term as predictors, then the design matrix becomes rank deficient.

Interaction terms between a continuous predictor and a categorical predictor with L levels consist of the element-wise product of the L – 1 indicator variables with the continuous predictor.

Interaction terms between two categorical predictors with L and M levels consist of the (L – 1)*(M – 1) indicator variables to include all possible combinations of the two categorical predictor levels.

You cannot specify higher-order terms for a categorical predictor because the square of an indicator is equal to itself.

fitlm considers NaN, '' (empty character vector), "" (empty string), <missing>, and <undefined> values in tbl, X, and Y to be missing values. fitlm does not use observations with missing values in the fit. The ObservationInfo property of a fitted model indicates whether or not fitlm uses each observation in the fit (see the sketch after this list).
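A minimal sketch of the missing-value bookkeeping described above (assuming mdl is a fitted LinearModel, for example from the carsmall fits earlier on this page):

sum(mdl.ObservationInfo.Missing)   % number of observations excluded for missing values
mdl.ObservationInfo(1:5,:)         % per-observation weights, exclusion, and usage flags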
Alternative Functionality
For reduced computation time on high-dimensional data sets, fit a linear regression model using the fitrlinear function.

To regularize a regression, use fitrlinear, lasso, ridge, or plsregress.

fitrlinear regularizes a regression for high-dimensional data sets using lasso or ridge regression.

lasso removes redundant predictors in linear regression using lasso or elastic net.

ridge regularizes a regression with correlated terms using ridge regression.

plsregress regularizes a regression with correlated terms using partial least squares.
References
[1] DuMouchel, W. H., and F. L. O'Brien. “Integrating a Robust Option into a Multiple Regression Computing Environment.” Computer Science and Statistics: Proceedings of the 21st Symposium on the Interface. Alexandria, VA: American Statistical Association, 1989.
[2] Holland, P. W., and R. E. Welsch. “Robust Regression Using Iteratively Reweighted Least-Squares.” Communications in Statistics: Theory and Methods, A6, 1977, pp. 813–827.
[3] Huber, P. J. Robust Statistics. Hoboken, NJ: John Wiley & Sons, Inc., 1981.
[4] Street, J. O., R. J. Carroll, and D. Ruppert. “A Note on Computing Robust Regression Estimates via Iteratively Reweighted Least Squares.” The American Statistician. Vol. 42, 1988, pp. 152–154.
Extended Capabilities
Tall Arrays
Calculate with arrays that have more rows than fit in memory.
This function supports tall arrays for out-of-memory data with some limitations.
If any input argument to fitlm is a tall array, then all of the other inputs must be tall arrays as well. This includes nonempty variables supplied with the 'Weights' and 'Exclude' name-value pairs.

The 'RobustOpts' name-value pair is not supported with tall arrays.

For tall data, fitlm returns a CompactLinearModel object that contains most of the same properties as a LinearModel object. The main difference is that the compact object is sensitive to memory requirements. The compact object does not include properties that include the data, or that include an array of the same size as the data. The compact object does not contain these LinearModel properties:

Diagnostics
Fitted
ObservationInfo
ObservationNames
Residuals
Steps
Variables

You can compute the residuals directly from the compact object returned by LM = fitlm(X,Y) using

RES = Y - predict(LM,X);
S = LM.RMSE;
histogram(RES,linspace(-3*S,3*S,51))

If the CompactLinearModel object is missing lower order terms that include categorical factors:

The plotEffects and plotInteraction methods are not supported.

The anova method with the 'components' option is not supported.
For more information, see Tall Arrays for Out-of-Memory Data.
GPU Arrays
Accelerate code by running on a graphics processing unit (GPU) using Parallel Computing Toolbox™.
This function fully supports GPU arrays. For more information, see Run MATLAB Functions on a GPU (Parallel Computing Toolbox).
Version History
Introduced in R2013b

R2024a: Specify penalized likelihood estimation

fitlm now supports penalized likelihood estimation by Jeffreys prior. You can use Jeffreys prior to reduce the coefficient estimate bias when fitting a generalized linear regression model to a separable data set or small number of samples. To penalize the likelihood estimate, set the LikelihoodPenalty name-value argument to "jeffreys-prior".