You can tune hyperparameters iteratively: choose values for the hyperparameters, cross-validate a model trained with those values, and repeat. This process yields multiple models, and the best model among them is the one with the smallest estimated generalization error. For example, to tune an SVM model, choose a set of box constraints and kernel scales, cross-validate a model for each pair of values, and then compare their 10-fold cross-validation mean squared error estimates.
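The tuning loop just described can be sketched in a few lines. This is a minimal NumPy illustration, not the MATLAB workflow itself: kernel ridge regression stands in for the SVM, and the synthetic data, the regularization strengths `lam`, and the kernel scales `scale` are all assumptions of the sketch. The pattern is the same — evaluate each hyperparameter pair by 10-fold cross-validation MSE and keep the minimizer.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 1-D regression data (a stand-in for a real dataset).
X = rng.uniform(-3, 3, size=(120, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(120)

def rbf(A, B, scale):
    """RBF (Gaussian) kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * scale ** 2))

def cv_mse(X, y, lam, scale, k=10):
    """k-fold cross-validation MSE for kernel ridge regression."""
    idx = np.arange(len(X))
    errs = []
    for f in np.array_split(idx, k):
        tr = np.setdiff1d(idx, f)
        K = rbf(X[tr], X[tr], scale)
        alpha = np.linalg.solve(K + lam * np.eye(len(tr)), y[tr])
        pred = rbf(X[f], X[tr], scale) @ alpha
        errs.append(np.mean((pred - y[f]) ** 2))
    return np.mean(errs)

# Grid of candidate pairs (analogous to box constraint / kernel scale pairs).
grid = [(lam, s) for lam in (1e-3, 1e-2, 1e-1, 1.0) for s in (0.3, 1.0, 3.0)]
scores = {p: cv_mse(X, y, *p) for p in grid}
best = min(scores, key=scores.get)   # smallest estimated generalization error
print("best (lambda, scale):", best, "CV MSE:", scores[best])
```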
|Regression Learner|Train regression models to predict data using supervised machine learning|
|Sequential feature selection using custom criterion|
|Rank importance of predictors using ReliefF or RReliefF algorithm|
|Create partial dependence plot (PDP) and individual conditional expectation (ICE) plots|
|Fit linear regression model using stepwise regression|
|Create generalized linear regression model by stepwise regression|
|Confidence intervals of coefficient estimates of linear regression model|
|Linear hypothesis test on linear regression model coefficients|
|Durbin-Watson test with linear regression model object|
|Scatter plot or added variable plot of linear regression model|
|Added variable plot of linear regression model|
|Adjusted response plot of linear regression model|
|Plot observation diagnostics of linear regression model|
|Plot main effects of predictors in linear regression model|
|Plot interaction effects of two predictors in linear regression model|
|Plot residuals of linear regression model|
|Plot of slices through fitted linear regression surface|
|Confidence intervals of coefficient estimates of generalized linear model|
|Linear hypothesis test on generalized linear regression model coefficients|
|Analysis of deviance|
|Plot diagnostics of generalized linear regression model|
|Plot residuals of generalized linear regression model|
|Plot of slices through fitted generalized linear regression surface|
|Confidence intervals of coefficient estimates of nonlinear regression model|
|Linear hypothesis test on nonlinear regression model coefficients|
|Plot diagnostics of nonlinear regression model|
|Plot residuals of nonlinear regression model|
|Plot of slices through fitted nonlinear regression surface|
Workflow for training, comparing, and improving regression models, including automated, manual, and parallel training.
In Regression Learner, automatically train a selection of models, or compare and tune options of linear regression models, regression trees, support vector machines, Gaussian process regression models, and ensembles of regression trees.
Identify useful predictors using plots, manually select features to include, and transform features using PCA in Regression Learner.
Compare model statistics and visualize results.
Learn about feature selection algorithms and explore the functions available for feature selection.
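Sequential feature selection with a custom criterion, listed among the tools above, can be sketched as a greedy forward search. This is a minimal NumPy sketch under stated assumptions: the dataset is synthetic (only features 0 and 2 carry signal by construction), and the custom criterion here is the cross-validated MSE of an ordinary least squares fit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: only features 0 and 2 matter (an assumption of this sketch).
X = rng.standard_normal((200, 6))
y = 2.0 * X[:, 0] - 3.0 * X[:, 2] + 0.1 * rng.standard_normal(200)

def cv_criterion(Xs, y, k=5):
    """Custom criterion: k-fold cross-validated MSE of an OLS fit."""
    idx = np.arange(len(y))
    errs = []
    for f in np.array_split(idx, k):
        tr = np.setdiff1d(idx, f)
        A = np.column_stack([np.ones(len(tr)), Xs[tr]])
        beta = np.linalg.lstsq(A, y[tr], rcond=None)[0]
        B = np.column_stack([np.ones(len(f)), Xs[f]])
        errs.append(np.mean((B @ beta - y[f]) ** 2))
    return np.mean(errs)

# Greedy forward search: add the feature that most improves the criterion.
selected, remaining = [], list(range(X.shape[1]))
best_score = np.inf
while remaining:
    trial = {j: cv_criterion(X[:, selected + [j]], y) for j in remaining}
    j_best = min(trial, key=trial.get)
    if trial[j_best] >= best_score:   # stop when no candidate improves it
        break
    best_score = trial[j_best]
    selected.append(j_best)
    remaining.remove(j_best)
print("selected features:", selected)
```

The same loop works with any criterion function — swap `cv_criterion` for AIC, a robust loss, or a domain-specific score.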
Perform Bayesian optimization using a fit function or by calling bayesopt directly.
Create variables for Bayesian optimization.
Create the objective function for Bayesian optimization.
Set different types of constraints for Bayesian optimization.
Minimize cross-validation loss of a regression ensemble.
Visually monitor a Bayesian optimization.
Monitor a Bayesian optimization.
Understand the underlying algorithms for Bayesian optimization.
How Bayesian optimization works in parallel.
Speed up cross-validation using parallel computing.
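The core loop these topics describe — fit a Gaussian process surrogate to past evaluations, maximize an acquisition function, evaluate the objective at the proposal, repeat — can be sketched compactly. This is a toy 1-D illustration in NumPy, not MATLAB's implementation: the RBF length scale, the toy objective (standing in for a cross-validation loss), and all settings are assumptions of the sketch; the acquisition function is expected improvement.

```python
import numpy as np
from math import erf

rng = np.random.default_rng(0)

def rbf(a, b, ls=0.2):
    """RBF kernel between 1-D point sets a and b."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    """GP posterior mean and standard deviation at test points Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = 1.0 - np.einsum("ij,ij->j", Ks, np.linalg.solve(K, Ks))
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, best):
    """EI acquisition for minimization under a Gaussian posterior."""
    z = (best - mu) / sigma
    Phi = 0.5 * (1.0 + np.vectorize(erf)(z / np.sqrt(2.0)))
    phi = np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi)
    return (best - mu) * Phi + sigma * phi

def objective(x):   # toy objective standing in for a CV loss
    return (x - 0.65) ** 2 + 0.05 * np.sin(20.0 * x)

X = rng.uniform(0.0, 1.0, 3)          # a few random initial evaluations
y = objective(X)
grid = np.linspace(0.0, 1.0, 201)
for _ in range(15):                   # propose the EI maximizer, evaluate, repeat
    mu, sigma = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X = np.append(X, x_next)
    y = np.append(y, objective(x_next))
print("best x:", X[np.argmin(y)], "best value:", y.min())
```

Parallel Bayesian optimization follows the same structure, except several pending proposals are evaluated at once and the surrogate is updated as results arrive.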
Display and interpret linear regression output statistics.
Fit a linear regression model and examine the result.
Construct and analyze a linear regression model with interaction effects and interpret the results.
Evaluate a fitted model by using model properties and object functions.
In linear regression, the F-statistic is the test statistic for the analysis of variance (ANOVA) approach to test the significance of the model or the components in the model. The t-statistic is useful for making inferences about the regression coefficients.
The coefficient of determination (R-squared) indicates the proportion of the variation in the response variable y that is explained by the independent variables X in the linear regression model.
Estimated coefficient variances and covariances capture the precision of regression coefficient estimates.
Residuals are useful for detecting outlying y values and checking the linear regression assumptions with respect to the error term in the regression model.
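The quantities described above — t-statistics, the ANOVA F-statistic, R-squared, the coefficient covariance matrix, and residuals — all follow from the standard OLS closed forms. A minimal NumPy sketch on synthetic data (the data and true coefficients are assumptions of the sketch, not taken from the toolbox):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data: intercept plus two predictors, one with a zero coefficient.
n = 100
X = np.column_stack([np.ones(n), rng.standard_normal((n, 2))])
beta_true = np.array([1.0, 2.0, 0.0])
y = X @ beta_true + 0.5 * rng.standard_normal(n)

p = X.shape[1]                      # number of parameters, incl. intercept
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y            # OLS coefficient estimates
resid = y - X @ beta                # residuals
df = n - p
sigma2 = resid @ resid / df         # estimated error variance
cov_beta = sigma2 * XtX_inv         # coefficient covariance matrix
se = np.sqrt(np.diag(cov_beta))    # standard errors
t_stats = beta / se                 # t-statistic per coefficient

ss_res = resid @ resid
ss_tot = ((y - y.mean()) ** 2).sum()
r2 = 1 - ss_res / ss_tot            # coefficient of determination
F = ((ss_tot - ss_res) / (p - 1)) / (ss_res / df)  # overall ANOVA F-statistic

print("beta:", beta.round(3))
print("t:", t_stats.round(2), "R^2:", round(r2, 3), "F:", round(F, 1))
```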
The Durbin-Watson test assesses whether there is autocorrelation among the residuals of time series data.
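The Durbin-Watson statistic is d = Σ(e_t − e_{t−1})² / Σe_t²; values near 2 suggest no lag-1 autocorrelation, while values well below 2 suggest positive autocorrelation. A small NumPy sketch on simulated residuals (the AR(1) coefficient 0.8 is an assumption of the example):

```python
import numpy as np

def durbin_watson(resid):
    """Durbin-Watson statistic of a residual sequence."""
    resid = np.asarray(resid, dtype=float)
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

rng = np.random.default_rng(3)
white = rng.standard_normal(500)     # uncorrelated residuals -> d near 2
ar = np.empty(500)
ar[0] = white[0]
for t in range(1, 500):              # positively autocorrelated residuals
    ar[t] = 0.8 * ar[t - 1] + white[t]
print(durbin_watson(white), durbin_watson(ar))
```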
Cook's distance is useful for identifying outliers in the X values (observations for predictor variables).
The hat matrix provides a measure of leverage.
Delete-1 change in covariance (CovRatio) identifies the observations that are influential in the regression.
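The three diagnostics above — leverage from the hat matrix, Cook's distance, and the delete-1 covariance ratio — can be computed directly from an OLS fit using their standard closed forms. A NumPy sketch with one planted high-leverage outlier (the data are synthetic assumptions of the example):

```python
import numpy as np

rng = np.random.default_rng(4)

# Simple regression with one planted high-leverage, outlying observation.
n = 60
x = rng.standard_normal(n)
x[0] = 6.0                                   # extreme predictor -> high leverage
y = 1.0 + 2.0 * x + 0.3 * rng.standard_normal(n)
y[0] += 5.0                                  # and an outlying response

X = np.column_stack([np.ones(n), x])
p = X.shape[1]
H = X @ np.linalg.inv(X.T @ X) @ X.T         # hat matrix; diag(H) is leverage
h = np.diag(H)
resid = y - H @ y
s2 = resid @ resid / (n - p)                 # MSE of the full fit

# Cook's distance: influence of each observation on the fitted coefficients.
cooks_d = resid ** 2 / (p * s2) * h / (1 - h) ** 2

# Delete-1 change in covariance (CovRatio): ratio of the determinant of the
# coefficient covariance with observation i removed to the full-fit one.
s2_i = ((n - p) * s2 - resid ** 2 / (1 - h)) / (n - p - 1)  # delete-1 variance
cov_ratio = (s2_i / s2) ** p / (1 - h)

print("max leverage at obs", h.argmax(), "- max Cook's D at obs", cooks_d.argmax())
```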
Generalized linear models use linear methods to describe a potentially nonlinear relationship between predictor terms and a response variable.
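As a concrete instance, here is a minimal sketch of one such model — Poisson regression with a log link — fit by iteratively reweighted least squares (IRLS) in NumPy. The data are synthetic and this is not the toolbox implementation; it only illustrates how a linear predictor and a link function describe a nonlinear mean response.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic Poisson-distributed counts with a log-linear mean.
n = 300
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
beta_true = np.array([0.5, 0.8])
y = rng.poisson(np.exp(X @ beta_true))

beta = np.array([np.log(y.mean()), 0.0])   # start from an intercept-only fit
for _ in range(25):                        # IRLS iterations
    mu = np.exp(X @ beta)                  # inverse log link
    W = mu                                 # Poisson weights: (dmu/deta)^2 / Var = mu
    z = X @ beta + (y - mu) / mu           # working response
    WX = X * W[:, None]
    beta = np.linalg.solve(X.T @ WX, WX.T @ z)
print("estimated beta:", beta.round(2))
```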
Parametric nonlinear models represent the relationship between a continuous response variable and one or more continuous predictor variables.
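A parametric nonlinear fit can be sketched with Gauss-Newton iterations: linearize the model around the current parameters and solve a least-squares problem for the step. The exponential model y = a·exp(b·x), the synthetic data, and the log-linear starting values are all assumptions of this NumPy illustration.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic data for the model y = a * exp(b * x).
x = np.linspace(0.0, 2.0, 80)
y = 2.0 * np.exp(1.3 * x) + 0.2 * rng.standard_normal(80)

# Log-linear starting values: log y ~ log a + b x (valid here since all y > 0).
A = np.column_stack([np.ones_like(x), x])
c = np.linalg.lstsq(A, np.log(y), rcond=None)[0]
theta = np.array([np.exp(c[0]), c[1]])

# Gauss-Newton refinement of (a, b).
for _ in range(50):
    a, b = theta
    f = a * np.exp(b * x)
    J = np.column_stack([np.exp(b * x), a * x * np.exp(b * x)])  # d f / d(a, b)
    step = np.linalg.lstsq(J, y - f, rcond=None)[0]
    theta = theta + step
    if np.linalg.norm(step) < 1e-10:
        break
print("estimated (a, b):", theta.round(3))
```

Good starting values matter: Gauss-Newton can overshoot from a poor start, which is why the sketch seeds it with the log-linear fit.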