templateSVM
Support vector machine template
Description
t = templateSVM returns a support vector machine (SVM) learner template suitable for training classification or regression models.
t = templateSVM(Name,Value) returns a template with additional options specified by one or more name-value arguments.
For example, you can specify the box constraint, the kernel function, or whether to standardize the predictors.
If you specify the type of model by using the Type
name-value argument, then the display of t
in the Command Window
shows all options as empty ([]
), except those that you specify
using name-value arguments. If you do not specify the type of model, then the
display suppresses the empty options. During training, the software uses default
values for empty options.
Examples
Create Default SVM Template
Create a default SVM template using the templateSVM
function.
t = templateSVM
t = 
Fit template for SVM.
t
is a template object for an SVM learner. All of the properties of t
are empty. When you pass t
to a training function, such as fitcecoc
for ECOC multiclass classification, the software sets the empty properties to their respective default values. For example, the software sets KernelFunction
to "linear"
and Type
to "classification"
. For details on other default values, see fitcsvm
and fitrsvm
.
Create SVM Template for ECOC Multiclass Learning
Create a nondefault SVM template for use in fitcecoc
.
Load Fisher's iris data set.
load fisheriris
Create a template for an SVM classifier and specify to use a Gaussian kernel function.
t = templateSVM("KernelFunction","gaussian","Type","classification")
t = 
Fit template for classification SVM.
                         Alpha: [0x1 double]
                 BoxConstraint: []
                     CacheSize: []
                 CachingMethod: ''
                    ClipAlphas: []
        DeltaGradientTolerance: []
                       Epsilon: []
                  GapTolerance: []
                  KKTTolerance: []
                IterationLimit: []
                KernelFunction: 'gaussian'
                   KernelScale: []
                  KernelOffset: []
         KernelPolynomialOrder: []
                      NumPrint: []
                            Nu: []
               OutlierFraction: []
              RemoveDuplicates: []
               ShrinkagePeriod: []
                        Solver: ''
               StandardizeData: []
            SaveSupportVectors: []
                VerbosityLevel: []
                       Version: 2
                        Method: 'SVM'
                          Type: 'classification'
All properties of the template object are empty except for KernelFunction, Method, and Type. When you train a model using this template, the software fills in the empty properties with their respective default values.
Specify t
as a learner for an ECOC multiclass model.
Mdl = fitcecoc(meas,species,"Learners",t);
Mdl
is a ClassificationECOC
multiclass classifier. By default, the software trains Mdl
using the one-versus-one coding design.
Display the in-sample (resubstitution) misclassification error.
L = resubLoss(Mdl,"LossFun","classiferror")
L = 0.0200
Retain and Discard Support Vectors of SVM Binary Learners
When you train an ECOC model with linear SVM binary learners, fitcecoc
empties the Alpha
, SupportVectorLabels
, and SupportVectors
properties of the binary learners by default. You can choose instead to retain the support vectors and related values, and then discard them from the model later.
Load Fisher's iris data set.
load fisheriris
rng(1); % For reproducibility
Train an ECOC model using the entire data set. Specify retaining the support vectors by passing in the appropriate SVM template.
t = templateSVM('SaveSupportVectors',true);
MdlSV = fitcecoc(meas,species,'Learners',t);
MdlSV
is a trained ClassificationECOC
model with linear SVM binary learners. By default, fitcecoc
implements a one-versus-one coding design, which requires three binary learners for three-class learning.
Access the estimated α (alpha) values using dot notation.
alpha = cell(3,1);
alpha{1} = MdlSV.BinaryLearners{1}.Alpha;
alpha{2} = MdlSV.BinaryLearners{2}.Alpha;
alpha{3} = MdlSV.BinaryLearners{3}.Alpha;
alpha
alpha=3×1 cell array
{ 3x1 double}
{ 3x1 double}
{23x1 double}
alpha
is a 3-by-1 cell array that stores the estimated values of α.
Discard the support vectors and related values from the ECOC model.
Mdl = discardSupportVectors(MdlSV);
Mdl
is similar to MdlSV
, except that the Alpha
, SupportVectorLabels
, and SupportVectors
properties of all the linear SVM binary learners are empty ([]
).
areAllEmpty = @(x)isempty([x.Alpha x.SupportVectors x.SupportVectorLabels]);
cellfun(areAllEmpty,Mdl.BinaryLearners)
ans = 3x1 logical array
1
1
1
Compare the sizes of the two ECOC models.
vars = whos('Mdl','MdlSV');
100*(1 - vars(1).bytes/vars(2).bytes)
ans = 4.9037
Mdl
is about 5% smaller than MdlSV
.
Reduce your memory usage by compacting Mdl
and then clearing Mdl
and MdlSV
from the workspace.
CompactMdl = compact(Mdl);
clear Mdl MdlSV;
Predict the label for a random row of the training data using the more efficient SVM model.
idx = randsample(size(meas,1),1)
idx = 63
predictedLabel = predict(CompactMdl,meas(idx,:))
predictedLabel = 1x1 cell array
{'versicolor'}
trueLabel = species(idx)
trueLabel = 1x1 cell array
{'versicolor'}
Input Arguments
Name-Value Arguments
Specify optional pairs of arguments as
Name1=Value1,...,NameN=ValueN
, where Name
is
the argument name and Value
is the corresponding value.
Name-value arguments must appear after other arguments, but the order of the
pairs does not matter.
Before R2021a, use commas to separate each name and value, and enclose
Name
in quotes.
Example: 'BoxConstraint',0.1,'KernelFunction','gaussian','Standardize',1
specifies
a box constraint of 0.1
, to use the Gaussian (RBF)
kernel, and to standardize the predictors.
BoxConstraint
— Box constraint
1 (default) | positive scalar
Box constraint,
specified as the comma-separated pair consisting of 'BoxConstraint'
and
a positive scalar.
For one-class learning, the software always sets the box constraint
to 1
.
For more details on the relationships and algorithmic behavior
of BoxConstraint
, Cost
, Prior
, Standardize
,
and Weights
, see Algorithms.
Example: 'BoxConstraint',100
Data Types: double
| single
CacheSize
— Cache size
1000
(default) | 'maximal'
| positive scalar
Cache size, specified as the comma-separated pair consisting
of 'CacheSize'
and 'maximal'
or
a positive scalar.
If CacheSize
is 'maximal'
,
then the software reserves enough memory to hold the entire n-by-n Gram matrix.
If CacheSize
is a positive scalar, then the
software reserves CacheSize
megabytes of memory
for training the model.
Example: 'CacheSize','maximal'
Data Types: double
| single
| char
| string
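To get a feel for what 'maximal' implies, note that an n-by-n Gram matrix of double-precision values occupies about 8n² bytes. The following Python sketch is a back-of-the-envelope helper for illustration only, not MATLAB's actual memory accounting:

```python
def gram_cache_megabytes(n):
    """Approximate memory (in MB) needed to hold a full n-by-n Gram matrix
    of double-precision (8-byte) values."""
    return n * n * 8 / 2**20

# For 10,000 observations, the full Gram matrix needs roughly 763 MB.
print(round(gram_cache_megabytes(10_000), 1))
```

Estimates like this can help you decide between 'maximal' and an explicit CacheSize in megabytes.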
ClipAlphas
— Flag to clip alpha coefficients
true
(default) | false
Flag to clip alpha coefficients, specified as the comma-separated
pair consisting of 'ClipAlphas'
and either true
or false
.
Suppose that the alpha coefficient for observation j is αj and the box constraint of observation j is Cj, j = 1,...,n, where n is the training sample size.
Value | Description |
---|---|
true | At each iteration, if αj is near 0 or near Cj, then MATLAB® sets αj to 0 or to Cj, respectively. |
false | MATLAB does not change the alpha coefficients during optimization. |
MATLAB stores the final values of α in
the Alpha
property of the trained SVM model object.
ClipAlphas
can affect SMO and ISDA convergence.
Example: 'ClipAlphas',false
Data Types: logical
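The clipping rule in the table can be sketched as follows (a Python illustration; the snapping threshold tol is a hypothetical stand-in for the solver's internal tolerance):

```python
def clip_alpha(alpha, c, tol=1e-12):
    """Snap an alpha coefficient to its bound when it lies within tol of
    0 or of the box constraint C, as ClipAlphas=true does at each
    iteration. Interior values are left unchanged."""
    if abs(alpha) <= tol:
        return 0.0
    if abs(alpha - c) <= tol:
        return c
    return alpha

# Coefficients near the bounds are snapped; interior values are untouched.
print(clip_alpha(1e-15, 1.0), clip_alpha(1.0 - 1e-15, 1.0), clip_alpha(0.4, 1.0))
```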
DeltaGradientTolerance
— Tolerance for gradient difference
nonnegative scalar
Tolerance for the gradient difference between upper and lower
violators obtained by Sequential Minimal Optimization (SMO) or Iterative
Single Data Algorithm (ISDA), specified as the comma-separated pair
consisting of 'DeltaGradientTolerance'
and a nonnegative
scalar.
If DeltaGradientTolerance
is 0
,
then the software does not use the tolerance for the gradient difference
to check for optimization convergence.
The default values are:
- 1e-3 if the solver is SMO (for example, you set 'Solver','SMO')
- 0 if the solver is ISDA (for example, you set 'Solver','ISDA')
Example: 'DeltaGradientTolerance',1e-2
Data Types: double
| single
GapTolerance
— Feasibility gap tolerance
0
(default) | nonnegative scalar
Feasibility gap tolerance obtained by SMO or ISDA, specified
as the comma-separated pair consisting of 'GapTolerance'
and
a nonnegative scalar.
If GapTolerance
is 0
,
then the software does not use the feasibility gap tolerance to check
for optimization convergence.
Example: 'GapTolerance',1e-2
Data Types: double
| single
IterationLimit
— Maximal number of numerical optimization iterations
1e6
(default) | positive integer
Maximal number of numerical optimization iterations, specified
as the comma-separated pair consisting of 'IterationLimit'
and
a positive integer.
The software returns a trained model regardless of whether the
optimization routine successfully converges. Mdl.ConvergenceInfo
contains
convergence information.
Example: 'IterationLimit',1e8
Data Types: double
| single
KernelFunction
— Kernel function
'linear'
| 'gaussian'
| 'rbf'
| 'polynomial'
| function name
Kernel function used to compute the elements of the Gram
matrix, specified as the comma-separated pair consisting of
'KernelFunction'
and a kernel function name. Suppose
G(xj,xk)
is element (j,k) of the Gram matrix, where
xj and
xk are
p-dimensional vectors representing observations j
and k in X
. This table describes supported
kernel function names and their functional forms.
Kernel Function Name | Description | Formula |
---|---|---|
'gaussian' or 'rbf' | Gaussian or Radial Basis Function (RBF) kernel, default for one-class learning | G(xj,xk) = exp(−‖xj − xk‖²) |
'linear' | Linear kernel, default for two-class learning | G(xj,xk) = xj′xk |
'polynomial' | Polynomial kernel. Use 'PolynomialOrder',q to specify a polynomial kernel of order q. | G(xj,xk) = (1 + xj′xk)^q |
You can set your own kernel function, for example,
kernel
, by setting 'KernelFunction','kernel'
.
The value kernel
must have this form.
function G = kernel(U,V)
- U is an m-by-p matrix. Columns correspond to predictor variables, and rows correspond to observations.
- V is an n-by-p matrix. Columns correspond to predictor variables, and rows correspond to observations.
- G is an m-by-n Gram matrix of the rows of U and V.
kernel.m
must be on the MATLAB path.
It is a good practice to avoid using generic names for kernel functions. For example, call a
sigmoid kernel function 'mysigmoid'
rather than
'sigmoid'
.
Example: 'KernelFunction','gaussian'
Data Types: char
| string
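A custom kernel in any language must honor the same contract: given an m-by-p matrix U and an n-by-p matrix V, return the m-by-n Gram matrix of their rows. This Python sketch mirrors that contract with a hypothetical sigmoid kernel (the gamma and c parameters are illustrative assumptions, not MATLAB defaults):

```python
import math

def mysigmoid(U, V, gamma=0.5, c=-1.0):
    """Python analog of the required MATLAB contract G = kernel(U,V):
    U is m-by-p, V is n-by-p, and the result is the m-by-n Gram matrix
    of the rows of U and V. The sigmoid form tanh(gamma*u'v + c) and
    its parameters are illustrative assumptions."""
    return [[math.tanh(gamma * sum(ui * vi for ui, vi in zip(u, v)) + c)
             for v in V]
            for u in U]

G = mysigmoid([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]])
print(len(G), len(G[0]))  # 3 2 -- an m-by-n result
```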
KernelOffset
— Kernel offset parameter
nonnegative scalar
Kernel offset parameter, specified as the comma-separated pair
consisting of 'KernelOffset'
and a nonnegative
scalar.
The software adds KernelOffset
to each element
of the Gram matrix.
The defaults are:
- 0 if the solver is SMO (that is, you set 'Solver','SMO')
- 0.1 if the solver is ISDA (that is, you set 'Solver','ISDA')
Example: 'KernelOffset',0
Data Types: double
| single
KernelScale
— Kernel scale parameter
1
(default) | 'auto'
| positive scalar
Kernel scale parameter, specified as the comma-separated pair
consisting of 'KernelScale'
and 'auto'
or
a positive scalar. The software divides all elements of the predictor
matrix X
by the value of KernelScale
.
Then, the software applies the appropriate kernel norm to compute
the Gram matrix.
- If you specify 'auto', then the software selects an appropriate scale factor using a heuristic procedure. This heuristic procedure uses subsampling, so estimates can vary from one call to another. Therefore, to reproduce results, set a random number seed using rng before training.
- If you specify KernelScale and your own kernel function, for example, 'KernelFunction','kernel', then the software throws an error. You must apply scaling within kernel.
Example: 'KernelScale','auto'
Data Types: double
| single
| char
| string
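The order of operations matters: the software first divides every element of the predictor matrix by KernelScale, and only then applies the kernel. A Python sketch of that sequence for the linear kernel:

```python
def scaled_linear_gram(X1, X2, kernel_scale):
    """Divide every predictor value by KernelScale first, then apply the
    kernel (here, the linear kernel) to form the Gram matrix -- the order
    of operations described above."""
    s = float(kernel_scale)
    return [[sum((a / s) * (b / s) for a, b in zip(x1, x2)) for x2 in X2]
            for x1 in X1]

# Doubling the scale divides each linear-kernel entry by 4.
print(scaled_linear_gram([[2.0, 2.0]], [[2.0, 2.0]], 1))  # [[8.0]]
print(scaled_linear_gram([[2.0, 2.0]], [[2.0, 2.0]], 2))  # [[2.0]]
```

This also shows why a custom kernel function must do its own scaling: the software never sees inside kernel to apply KernelScale there.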
KKTTolerance
— Karush-Kuhn-Tucker complementarity conditions violation tolerance
nonnegative scalar
Karush-Kuhn-Tucker
(KKT) complementarity conditions violation tolerance, specified
as the comma-separated pair consisting of 'KKTTolerance'
and
a nonnegative scalar.
If KKTTolerance
is 0
,
then the software does not use the KKT complementarity conditions
violation tolerance to check for optimization convergence.
The default values are:
0
if the solver is SMO (for example, you set'Solver','SMO'
)1e-3
if the solver is ISDA (for example, you set'Solver','ISDA'
)
Example: 'KKTTolerance',1e-2
Data Types: double
| single
NumPrint
— Number of iterations between optimization diagnostic message output
1000
(default) | nonnegative integer
Number of iterations between optimization diagnostic message
output, specified as the comma-separated pair consisting of 'NumPrint'
and
a nonnegative integer.
If you specify 'Verbose',1
and 'NumPrint',numprint
, then
the software displays all optimization diagnostic messages from SMO and ISDA every
numprint
iterations in the Command Window.
Example: 'NumPrint',500
Data Types: double
| single
OutlierFraction
— Expected proportion of outliers in training data
0
(default) | numeric scalar in the interval [0,1)
Expected proportion of outliers in the training data, specified
as the comma-separated pair consisting of 'OutlierFraction'
and
a numeric scalar in the interval [0,1).
Suppose that you set 'OutlierFraction',outlierfraction, where outlierfraction is a value greater than 0.
- For two-class learning, the software implements robust learning. In other words, the software attempts to remove 100*outlierfraction% of the observations when the optimization algorithm converges. The removed observations correspond to gradients that are large in magnitude.
- For one-class learning, the software finds an appropriate bias term such that outlierfraction of the observations in the training set have negative scores.
Example: 'OutlierFraction',0.01
Data Types: double
| single
PolynomialOrder
— Polynomial kernel function order
3
(default) | positive integer
Polynomial kernel function order, specified as the comma-separated
pair consisting of 'PolynomialOrder'
and a positive
integer.
If you set 'PolynomialOrder'
and KernelFunction
is
not 'polynomial'
, then the software throws an error.
Example: 'PolynomialOrder',2
Data Types: double
| single
SaveSupportVectors
— Store support vectors, their labels, and the estimated α coefficients
true
| false
Store support vectors, their labels, and the estimated α coefficients
as properties of the resulting model, specified as the comma-separated
pair consisting of 'SaveSupportVectors'
and true
or false
.
If SaveSupportVectors
is true
, the resulting model
stores the support vectors in the SupportVectors
property, their labels in the SupportVectorLabels
property, and the estimated
α coefficients in the Alpha
property
of the compact SVM learners.
If SaveSupportVectors
is false
and KernelFunction
is 'linear'
,
the resulting model does not store the support vectors and the related
estimates.
To reduce memory consumption by compact SVM models, set SaveSupportVectors to false.
For linear SVM binary learners in an ECOC model, the default value is false. Otherwise, the default value is true.
Example: 'SaveSupportVectors',true
Data Types: logical
ShrinkagePeriod
— Number of iterations between reductions of active set
0
(default) | nonnegative integer
Number of iterations between reductions of the active set, specified as the
comma-separated pair consisting of 'ShrinkagePeriod'
and a
nonnegative integer.
If you set 'ShrinkagePeriod',0
, then the software does not shrink
the active set.
Example: 'ShrinkagePeriod',1000
Data Types: double
| single
Solver
— Optimization routine
'ISDA'
| 'L1QP'
| 'SMO'
Optimization routine, specified as the comma-separated pair consisting of 'Solver'
and a value in this table.
Value | Description |
---|---|
'ISDA' | Iterative Single Data Algorithm (see [4]) |
'L1QP' | Uses quadprog (Optimization Toolbox) to implement L1 soft-margin minimization by quadratic programming. This option requires an Optimization Toolbox™ license. For more details, see Quadratic Programming Definition (Optimization Toolbox). |
'SMO' | Sequential Minimal Optimization (see [2]) |
The default value is 'ISDA'
if you set 'OutlierFraction'
to a positive value for two-class learning, and 'SMO'
otherwise.
Example: 'Solver','ISDA'
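The default-solver rule can be stated compactly (a Python paraphrase of the text above, for illustration):

```python
def default_solver(outlier_fraction, two_class_learning):
    """Default Solver choice: 'ISDA' when OutlierFraction is positive for
    two-class learning, and 'SMO' otherwise."""
    if two_class_learning and outlier_fraction > 0:
        return "ISDA"
    return "SMO"

print(default_solver(0.05, True), default_solver(0.0, True))  # ISDA SMO
```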
Standardize
— Flag to standardize predictor data
false
(default) | true
Flag to standardize the predictor data, specified as the comma-separated
pair consisting of 'Standardize'
and true
(1
)
or false
(0)
.
If you set 'Standardize',true
:
- The software centers and scales each column of the predictor data (X) by the weighted column mean and standard deviation, respectively (for details on weighted standardizing, see Algorithms). MATLAB does not standardize the data contained in the dummy variable columns generated for categorical predictors.
- The software trains the classifier using the standardized predictor matrix, but stores the unstandardized data in the classifier property X.
Example: 'Standardize',true
Data Types: logical
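Conceptually, each predictor column is centered and scaled by its weighted mean and weighted standard deviation. A Python sketch of one column (the exact weighting convention MATLAB uses is documented under Algorithms; this version assumes weights normalized by their sum):

```python
def weighted_standardize(x, w):
    """Center and scale one predictor column by its weighted mean and
    weighted standard deviation. The weighting convention (dividing by
    the weight sum) is an assumption here; see the Algorithms section
    for MATLAB's exact definition."""
    wsum = sum(w)
    mean = sum(wi * xi for wi, xi in zip(w, x)) / wsum
    var = sum(wi * (xi - mean) ** 2 for wi, xi in zip(w, x)) / wsum
    std = var ** 0.5
    return [(xi - mean) / std for xi in x]

# With equal weights this reduces to ordinary (population) standardization.
print(weighted_standardize([0.0, 2.0], [1.0, 1.0]))  # [-1.0, 1.0]
```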
Type
— SVM model type
"classification"
| "regression"
Since R2023b
SVM model type, specified as "classification"
or
"regression"
.
Value | Description |
---|---|
"classification" | Create a classification SVM learner template. If you do not specify Type as "classification", the fitting functions fitcecoc, testckfold, and fitsemiself set this value when you pass t to them. |
"regression" | Create a regression SVM learner template. If you do not specify Type as "regression", the fitting function directforecaster sets this value when you pass t to it. |
Example: "Type","classification"
Data Types: char
| string
RemoveDuplicates
— Flag to replace duplicate observations with single observations
false
(default) | true
Flag to replace duplicate observations with single observations in the
training data, specified as true
or
false
.
If RemoveDuplicates
is true
,
then the software replaces duplicate observations in the training data
with a single observation of the same value. The weight of the single
observation is equal to the sum of the weights of the corresponding
removed duplicates (see Weights
for
classification and Weights
for
regression).
Tip
If your data set contains many duplicate observations, then
specifying "RemoveDuplicates",true
can decrease
convergence time considerably.
Data Types: logical
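The weight-summing behavior can be illustrated as follows (a Python sketch; MATLAB's actual duplicate detection compares full observation rows in the training data):

```python
def merge_duplicates(rows, weights):
    """Replace duplicate observations with a single observation of the
    same value whose weight is the sum of the duplicates' weights, as
    RemoveDuplicates=true does."""
    merged = {}
    for row, w in zip(rows, weights):
        key = tuple(row)
        merged[key] = merged.get(key, 0.0) + w
    return [list(k) for k in merged], list(merged.values())

rows, w = merge_duplicates([[1, 2], [1, 2], [3, 4]], [1.0, 2.0, 5.0])
print(rows, w)  # [[1, 2], [3, 4]] [3.0, 5.0]
```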
Verbose
— Verbosity level
0
(default) | 1
| 2
Verbosity level, specified as the comma-separated pair consisting of
'Verbose'
and 0
, 1
, or
2
. The value of Verbose
controls the amount of
optimization information that the software displays in the Command Window and saves the
information as a structure to Mdl.ConvergenceInfo.History
.
This table summarizes the available verbosity level options.
Value | Description |
---|---|
0 | The software does not display or save convergence information. |
1 | The software displays diagnostic messages and saves convergence criteria every numprint iterations, where numprint is the value of the name-value pair argument 'NumPrint'. |
2 | The software displays diagnostic messages and saves convergence criteria at every iteration. |
Example: 'Verbose',1
Data Types: double
| single
Since R2023b
Epsilon
— Half the width of epsilon-insensitive band
iqr(Y)/13.49
(default) | nonnegative scalar value
Half the width of the epsilon-insensitive band, specified as a nonnegative scalar value.
The default Epsilon
value is
iqr(Y)/13.49
, which is an estimate of a tenth of
the standard deviation using the interquartile range of the response
variable Y
. If iqr(Y)
is equal to
zero, then the default Epsilon
value is 0.1.
Example: "Epsilon",0.3
Data Types: single
| double
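The default can be reproduced conceptually (a Python sketch; Python's quantile interpolation may differ slightly from MATLAB's iqr on small samples):

```python
import statistics

def default_epsilon(y):
    """Default Epsilon = iqr(Y)/13.49, an estimate of a tenth of the
    standard deviation via the interquartile range, falling back to 0.1
    when the IQR is zero."""
    q1, _, q3 = statistics.quantiles(y, n=4)
    iqr = q3 - q1
    return 0.1 if iqr == 0 else iqr / 13.49

print(default_epsilon([5.0, 5.0, 5.0, 5.0]))  # 0.1 (zero IQR fallback)
```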
Output Arguments
t
— SVM learner template
template object
SVM learner template suitable for training classification or regression models, returned as a template object. During training, the software uses default values for empty options.
Tips
By default and for efficiency, fitcecoc
empties the Alpha
, SupportVectorLabels
,
and SupportVectors
properties
for all linear SVM binary learners. fitcecoc
lists Beta
, rather than
Alpha
, in the model display.
To store Alpha
, SupportVectorLabels
, and
SupportVectors
, pass a linear SVM template that specifies storing
support vectors to fitcecoc
. For example,
enter:
t = templateSVM('SaveSupportVectors',true)
Mdl = fitcecoc(X,Y,'Learners',t);
You can remove the support vectors and related values by passing the resulting
ClassificationECOC
model to
discardSupportVectors
.
References
[1] Christianini, N., and J. C. Shawe-Taylor. An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods. Cambridge, UK: Cambridge University Press, 2000.
[2] Fan, R.-E., P.-H. Chen, and C.-J. Lin. “Working set selection using second order information for training support vector machines.” Journal of Machine Learning Research, Vol 6, 2005, pp. 1889–1918.
[3] Hastie, T., R. Tibshirani, and J. Friedman. The Elements of Statistical Learning, Second Edition. NY: Springer, 2008.
[4] Kecman V., T. -M. Huang, and M. Vogt. “Iterative Single Data Algorithm for Training Kernel Machines from Huge Data Sets: Theory and Performance.” In Support Vector Machines: Theory and Applications. Edited by Lipo Wang, 255–274. Berlin: Springer-Verlag, 2005.
[5] Scholkopf, B., J. C. Platt, J. C. Shawe-Taylor, A. J. Smola, and R. C. Williamson. “Estimating the Support of a High-Dimensional Distribution.” Neural Comput., Vol. 13, Number 7, 2001, pp. 1443–1471.
[6] Scholkopf, B., and A. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization and Beyond, Adaptive Computation and Machine Learning. Cambridge, MA: The MIT Press, 2002.
Version History
Introduced in R2014b
R2023b: Support for regression learner templates
SVM supports the creation of regression learner templates. Specify the Type name-value argument as "regression" in the call to templateSVM. When you create a regression learner template, you can additionally specify the Epsilon name-value argument.
See Also
fitcecoc
| ClassificationECOC
| ClassificationSVM
| RegressionSVM
| fitcsvm
| fitrsvm