
Train Kernel Approximation Model Using Regression Learner App

This example shows how to create and compare various kernel approximation regression models using the Regression Learner app, and export trained models to the workspace to make predictions for new data. Kernel approximation models are typically useful for performing nonlinear regression with many observations. For large in-memory data, kernel approximation models tend to train and predict faster than SVM models with Gaussian kernels.

  1. In the MATLAB® Command Window, load the carbig data set, and create a table containing the different variables.

    load carbig
    cartable = table(Acceleration,Cylinders,Displacement, ...
        Horsepower,Model_Year,Weight,Origin,MPG);
  2. Click the Apps tab, and then click the Show more arrow on the right to open the apps gallery. In the Machine Learning and Deep Learning group, click Regression Learner.

  3. On the Learn tab, in the File section, click New Session and select From Workspace.

  4. In the New Session from Workspace dialog box, select the table cartable from the Data Set Variable list.

    As shown in the dialog box, the app selects MPG as the response and the other variables as predictors. For this example, do not change the selections.

    New Session from Workspace dialog box

  5. To accept the default validation scheme and continue, click Start Session. The default validation option is 5-fold cross-validation, to protect against overfitting.

    Regression Learner creates a plot of the response with the record number on the x-axis.

  6. Use the response plot to investigate which variables are useful for predicting the response. To visualize the relation between different predictors and the response, select different variables in the X list under X-axis to the right of the plot. Observe which variables are correlated most clearly with the response.
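
    To look at one of these relationships outside the app, a minimal command-line sketch (assuming you choose Horsepower as the example predictor) is:

    figure
    scatter(cartable.Horsepower,cartable.MPG,"filled") % one marker per car
    xlabel("Horsepower")
    ylabel("MPG")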

  7. Create a selection of kernel approximation models. On the Learn tab, in the Models section, click the arrow to open the gallery. In the Kernel Approximation Regression Models group, click All Kernels.

  8. In the Train section, click Train All and select Train All.

    Note

    • If you have Parallel Computing Toolbox™, then the Use Parallel button is selected by default. After you click Train All and select Train All or Train Selected, the app opens a parallel pool of workers. During this time, you cannot interact with the software. After the pool opens, you can continue to interact with the app while models train in parallel.

    • If you do not have Parallel Computing Toolbox, then the Use Background Training check box in the Train All menu is selected by default. After you select an option to train models, the app opens a background pool. After the pool opens, you can continue to interact with the app while models train in the background.

    Regression Learner trains one of each kernel approximation option in the gallery, as well as the default fine tree model. In the Models pane, the app outlines the RMSE (Validation) (root mean squared error) of the best model.
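
    You can approximate what the app does for a single kernel model at the command line with fitrkernel. The following is a minimal sketch, assuming default kernel settings stand in for the gallery presets and converting the Origin character array to a categorical variable so the table can be passed directly:

    tbl = cartable;
    tbl.Origin = categorical(cellstr(tbl.Origin)); % char matrix -> categorical
    cvMdl = fitrkernel(tbl,"MPG","CrossVal","on","KFold",5); % 5-fold cross-validation
    rmse = sqrt(kfoldLoss(cvMdl)) % kfoldLoss returns the mean squared error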

  9. Select a model in the Models pane to view the results. On the Learn tab, in the Plots and Results section, click the arrow to open the gallery, and then click Response in the Validation Results group. Examine the response plot for the trained model. True responses are blue, and predicted responses are yellow.

    Response plot of car data modeled by a kernel approximation model

    Note

    Validation introduces some randomness into the results. Your model validation results can vary from the results shown in this example.

  10. Under X-axis, select Horsepower and examine the response plot. Both the true and predicted responses are now plotted. Show the prediction errors, drawn as vertical lines between the predicted and true responses, by selecting the Errors check box under Plot to the right of the plot.

  11. For more information on the currently selected model, consult the Summary tab. Check and compare additional model characteristics, such as R-squared (coefficient of determination), MAE (mean absolute error), and prediction speed. To learn more, see View Model Metrics in Summary Tab and Models Pane. In the Summary tab, you can also find details on the currently selected model type, such as options used for training the model.
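
    For comparison, you can compute the same kinds of validation metrics from the command-line sketch model (assuming cvMdl from the earlier fitrkernel sketch):

    yhat = kfoldPredict(cvMdl); % cross-validated predictions
    resid = cvMdl.Y - yhat;
    rmse = sqrt(mean(resid.^2)) % root mean squared error
    mae = mean(abs(resid)) % mean absolute error
    rsq = 1 - sum(resid.^2)/sum((cvMdl.Y - mean(cvMdl.Y)).^2) % R-squared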

  12. Plot the predicted response versus the true response. On the Learn tab, in the Plots and Results section, click the arrow to open the gallery, and then click Predicted vs. Actual (Validation) in the Validation Results group. Use this plot to determine how well the regression model makes predictions for different response values.

    Plot of the predicted response versus the true response for a kernel approximation model

    A perfect regression model has predicted responses equal to the true responses, so all the points lie on a diagonal line. The vertical distance from the line to any point is the error of the prediction for that point. A good model has small errors, so the predictions are scattered near the line. Typically, a good model has points scattered roughly symmetrically around the diagonal line. If you can see any clear patterns in the plot, you can most likely improve your model.
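
    A command-line sketch of the same plot (assuming yhat and cvMdl from the earlier sketches) is:

    figure
    scatter(cvMdl.Y,yhat,"filled")
    hold on
    lims = [min(cvMdl.Y) max(cvMdl.Y)];
    plot(lims,lims,"k--") % points on this line are perfect predictions
    hold off
    xlabel("True response")
    ylabel("Predicted response")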

  13. For each remaining model, select the model in the Models pane, open the predicted versus actual plot, and then compare the results across the models. For more information, see Compare Model Plots by Changing Layout.

  14. To try to improve the models, include different features. In the Models gallery, select All Kernels again. See if you can improve the models by removing features with low predictive power. In the Summary tab, click Feature Selection to expand the section.

    In the Feature Selection section, clear the check boxes for Acceleration and Cylinders to exclude them from the predictors. The response plot shows that these variables are not highly correlated with the response variable.

    In the Train section, click Train All and select Train All or Train Selected to train the kernel approximation models using the new set of features.
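
    The same feature selection can be sketched at the command line (assuming tbl from the earlier training sketch) by removing the two variables and retraining:

    reducedTbl = removevars(tbl,["Acceleration" "Cylinders"]);
    cvReduced = fitrkernel(reducedTbl,"MPG","CrossVal","on","KFold",5);
    rmseReduced = sqrt(kfoldLoss(cvReduced)) % compare with the full-predictor RMSE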

  15. Observe the new models in the Models pane. These models are the same kernel approximation models as before, but trained using only five of the seven predictors. For each model, the app displays how many predictors are used. To check which predictors are used, click a model in the Models pane and consult the Feature Selection section of the Summary tab.

    The models with the two features removed perform comparably to the models trained with all seven predictors; using every predictor does not yield better predictions than using the reduced set. If data collection is expensive or difficult, you might prefer a model that performs satisfactorily without some predictors.

  16. Select the model in the Models pane with the lowest validation RMSE (best model), and view the residuals plot. On the Learn tab, in the Plots and Results section, click the arrow to open the gallery, and then click Residuals (Validation) in the Validation Results group. The residuals plot displays the difference between the predicted and true responses. To display the residuals as a line graph, select Lines under Style.

    Under X-axis, select the variable to plot on the x-axis. Choose the true response, predicted response, record number, or one of the predictors.

    Plot of the residuals for a kernel approximation model

    Typically, a good model has residuals scattered roughly symmetrically around 0. If you can see any clear patterns in the residuals, you can most likely improve your model.
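
    A command-line sketch of a residuals plot (assuming yhat and cvMdl from the earlier sketches) is:

    figure
    scatter(yhat,cvMdl.Y - yhat,"filled") % residual = true minus predicted response
    yline(0,"k--")
    xlabel("Predicted response")
    ylabel("Residuals")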

  17. You can try to further improve the best model in the Models pane by changing its hyperparameters. First, duplicate the model. Right-click the model and select Duplicate.

    Then, in the Summary tab of the duplicated model, try changing some of the hyperparameter settings, like the kernel scale parameter or the regularization strength. Train the new model by clicking Train All and selecting Train Selected.

    To learn more about kernel approximation model settings, see Kernel Approximation Models.
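
    At the command line, these settings correspond to the KernelScale and Lambda name-value arguments of fitrkernel. The values below are placeholders for illustration, not tuned choices:

    cvTuned = fitrkernel(tbl,"MPG","CrossVal","on","KFold",5, ...
        "KernelScale",2,"Lambda",1e-3);
    rmseTuned = sqrt(kfoldLoss(cvTuned))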

  18. You can export a compact version of the trained model to the workspace. On the Learn tab, click Export, click Export Model, and select Export Model. In the Export Regression Model dialog box, the check box to include the training data is disabled because kernel approximation models do not store training data. In the dialog box, click OK to accept the default variable name.
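
    The exported structure contains a predictFcn function handle that you can use to predict responses for a table with the same predictor variables. For example, with the default variable name trainedModel:

    yfit = trainedModel.predictFcn(cartable); % predicted MPG for each row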

  19. To examine the code for training this model, click Generate Function in the Export section.
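
    The generated function (named trainRegressionModel by default) retrains the model on data with the same format and returns the trained model along with its validation RMSE, for example:

    [retrainedModel,validationRMSE] = trainRegressionModel(cartable);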

Tip

Use the same workflow to evaluate and compare the other regression model types you can train in Regression Learner.

To train all the nonoptimizable regression model presets available for your data set:

  1. On the Learn tab, in the Models section, click the arrow to open the gallery of regression models.

  2. In the Get Started group, click All.

    Option selected for training all available model types

  3. In the Train section, click Train All and select Train All.

To learn about other regression model types, see Train Regression Models in Regression Learner App.
