
resume

Resume training support vector machine (SVM) classifier

Description

UpdatedSVMModel = resume(SVMModel,numIter) returns an updated support vector machine (SVM) classifier UpdatedSVMModel by training the SVM classifier SVMModel for numIter more iterations. Like SVMModel, the updated SVM classifier is a ClassificationSVM classifier.

resume continues applying the training options set when SVMModel was trained with fitcsvm.


UpdatedSVMModel = resume(SVMModel,numIter,Name,Value) returns UpdatedSVMModel with additional options specified by one or more name-value pair arguments. For example, you can specify the verbosity level.
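For instance, the following minimal sketch assumes SVMModel is a partially trained classifier returned by fitcsvm; the iteration count is an arbitrary choice.

% Train for up to 1000 more iterations and display diagnostic
% messages at every iteration ('Verbose',2).
UpdatedSVMModel = resume(SVMModel,1000,'Verbose',2);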


Examples


Train an SVM classifier and intentionally cause the solver to fail to converge to a solution. Then resume training the classifier without restarting the entire learning process.

Load the ionosphere data set.

load ionosphere
rng(1); % For reproducibility

Train an SVM classifier. Specify that the optimization routine uses at most 50 iterations.

SVMModel = fitcsvm(X,Y,'IterationLimit',50);
DidConverge = SVMModel.ConvergenceInfo.Converged
DidConverge = logical
   0

Reason = SVMModel.ConvergenceInfo.ReasonForConvergence
Reason = 
'NoConvergence'

DidConverge = 0 indicates that the optimization routine did not converge to a solution. Reason gives the reason for the lack of convergence. Therefore, SVMModel is a partially trained SVM classifier.

Resume training the SVM classifier for another 1500 iterations.

UpdatedSVMModel = resume(SVMModel,1500);
DidConverge = UpdatedSVMModel.ConvergenceInfo.Converged
DidConverge = logical
   1

Reason = UpdatedSVMModel.ConvergenceInfo.ReasonForConvergence
Reason = 
'DeltaGradient'

DidConverge indicates that the optimization routine converged to a solution. Reason indicates that the gradient difference (DeltaGradient) reached its tolerance level (DeltaGradientTolerance). Therefore, UpdatedSVMModel is a fully trained SVM classifier.
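If you do not know in advance how many additional iterations the solver needs, you can call resume in a loop until the classifier converges. The following is a minimal sketch; the 500-iteration increment and the safety cap are arbitrary choices, not part of the resume interface.

% Resume training in fixed increments until the solver converges.
maxResumes = 20; % arbitrary safety cap to avoid looping indefinitely
k = 0;
while ~SVMModel.ConvergenceInfo.Converged && k < maxResumes
    SVMModel = resume(SVMModel,500); % 500 more iterations per call
    k = k + 1;
end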

Train an SVM classifier and intentionally cause the solver to fail to converge to a solution. Then resume training the classifier without restarting the entire learning process. Compare the values of the resubstitution loss for the partially trained classifier and the fully trained classifier.

Load the ionosphere data set.

load ionosphere

Train an SVM classifier. Specify that the optimization routine uses at most 100 iterations, and monitor the optimization by specifying that the software print diagnostic information every 50 iterations.

SVMModel = fitcsvm(X,Y,'IterationLimit',100,'Verbose',1,'NumPrint',50);
|===================================================================================================================================|
|   Iteration  | Set  |   Set Size   |  Feasibility  |     Delta     |      KKT      |  Number of   |   Objective   |   Constraint  |
|              |      |              |      Gap      |    Gradient   |   Violation   |  Supp. Vec.  |               |   Violation   |
|===================================================================================================================================|
|            0 |active|          351 |  9.971591e-01 |  2.000000e+00 |  1.000000e+00 |            0 |  0.000000e+00 |  0.000000e+00 |
|           50 |active|          351 |  8.064425e-01 |  3.736929e+00 |  2.161317e+00 |           60 | -3.628863e+01 |  1.110223e-16 |

 SVM optimization did not converge to the required tolerance.

The software prints an iterative display to the Command Window. The printout indicates that the optimization routine has not converged to a solution.

Estimate the resubstitution loss of the partially trained SVM classifier.

partialLoss = resubLoss(SVMModel)
partialLoss = 
0.1054

The training sample misclassification error is approximately 11%.

Resume training the classifier for another 1500 iterations. Specify that the software print diagnostic information every 250 iterations.

UpdatedSVMModel = resume(SVMModel,1500,'NumPrint',250)
|===================================================================================================================================|
|   Iteration  | Set  |   Set Size   |  Feasibility  |     Delta     |      KKT      |  Number of   |   Objective   |   Constraint  |
|              |      |              |      Gap      |    Gradient   |   Violation   |  Supp. Vec.  |               |   Violation   |
|===================================================================================================================================|
|          250 |active|          351 |  1.137406e-01 |  1.688486e+00 |  1.064098e+00 |          100 | -7.654307e+01 |  1.477984e-15 |
|          500 |active|          351 |  2.458986e-03 |  8.900780e-02 |  5.353919e-02 |          103 | -7.819650e+01 |  1.570792e-15 |
|          750 |active|          351 |  6.861149e-04 |  2.041818e-02 |  1.045385e-02 |          103 | -7.820930e+01 |  1.499668e-15 |
|         1000 |active|          351 |  5.992844e-05 |  1.878806e-03 |  1.095583e-03 |          103 | -7.820958e+01 |  1.606354e-15 |
|         1072 |active|          351 |  3.992245e-05 |  9.877142e-04 |  5.324559e-04 |          103 | -7.820959e+01 |  1.823194e-15 |

 Exiting Active Set upon convergence due to DeltaGradient.
UpdatedSVMModel = 
  ClassificationSVM
             ResponseName: 'Y'
    CategoricalPredictors: []
               ClassNames: {'b'  'g'}
           ScoreTransform: 'none'
          NumObservations: 351
                    Alpha: [103x1 double]
                     Bias: -3.8829
         KernelParameters: [1x1 struct]
           BoxConstraints: [351x1 double]
          ConvergenceInfo: [1x1 struct]
          IsSupportVector: [351x1 logical]
                   Solver: 'SMO'


The software resumes at iteration 100, where the initial training stopped, and uses the same verbosity level as the one set when you trained the model using fitcsvm. The printout indicates that the algorithm converged at iteration 1072. Therefore, UpdatedSVMModel is a fully trained ClassificationSVM classifier.

updatedLoss = resubLoss(UpdatedSVMModel)
updatedLoss = 
0.0769

The training sample misclassification error of the fully trained classifier is approximately 8%.
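To quantify the benefit of the additional iterations, you can compare the two resubstitution losses directly. Given the values above, the difference is about 0.0285.

% Reduction in training sample misclassification error
lossReduction = partialLoss - updatedLoss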

Input Arguments


SVMModel — Full, trained SVM classifier

Full, trained SVM classifier, specified as a ClassificationSVM model trained with fitcsvm.

numIter — Number of iterations to continue training

Number of iterations to continue training the SVM classifier, specified as a positive integer.

Data Types: double

Name-Value Arguments

Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Before R2021a, use commas to separate each name and value, and enclose Name in quotes.

Example: resume(SVMModel,500,'Verbose',2) trains SVMModel for 500 more iterations and specifies displaying diagnostic messages and saving convergence criteria at every iteration.

Verbose — Verbosity level

Verbosity level, specified as the comma-separated pair consisting of 'Verbose' and 0, 1, or 2. Verbose controls the amount of optimization information displayed in the Command Window and saved as a structure to SVMModel.ConvergenceInfo.History.

This table summarizes the verbosity level values.

Value   Description
0       The software does not display or save convergence information.
1       The software displays diagnostic messages and saves convergence criteria every numprint iterations, where numprint is the value of the 'NumPrint' name-value pair argument.
2       The software displays diagnostic messages and saves convergence criteria at every iteration.

By default, Verbose is the value that fitcsvm uses to train SVMModel.

Example: 'Verbose',1

Data Types: single | double

NumPrint — Number of iterations between diagnostic message printouts

Number of iterations between diagnostic message printouts, specified as the comma-separated pair consisting of 'NumPrint' and a nonnegative integer.

If you set 'Verbose',1 and 'NumPrint',numprint, then the software displays all optimization diagnostic messages from SMO [1] and ISDA [2] every numprint iterations in the Command Window.

By default, NumPrint is the value that fitcsvm uses to train SVMModel.

Example: 'NumPrint',500

Data Types: single | double

Tips

If optimization does not converge and the solver is 'SMO' or 'ISDA', then try to resume training the SVM classifier.
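For example, the following sketch resumes training only when the solver supports it and the initial training stopped early. The 1000-iteration increment is an arbitrary choice.

% Resume only if training did not converge and the solver is SMO or ISDA.
if ~SVMModel.ConvergenceInfo.Converged && ...
        any(strcmp(SVMModel.Solver,{'SMO','ISDA'}))
    SVMModel = resume(SVMModel,1000);
end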

References

[1] Fan, R.-E., P.-H. Chen, and C.-J. Lin. “Working set selection using second order information for training support vector machines.” Journal of Machine Learning Research, Vol. 6, 2005, pp. 1889–1918.

[2] Kecman, V., T.-M. Huang, and M. Vogt. “Iterative Single Data Algorithm for Training Kernel Machines from Huge Data Sets: Theory and Performance.” In Support Vector Machines: Theory and Applications, edited by Lipo Wang, 255–274. Berlin: Springer-Verlag, 2005.


Version History

Introduced in R2014a