CompactClassificationSVM
Compact support vector machine (SVM) for one-class and binary classification
Description
CompactClassificationSVM is a compact version of the support vector machine (SVM) classifier. The compact classifier does not include the data used for training the SVM classifier. Therefore, you cannot perform some tasks, such as cross-validation, using the compact classifier. Use a compact SVM classifier for tasks such as predicting the labels of new data.
Creation
Create a CompactClassificationSVM model from a full, trained ClassificationSVM classifier by using compact.
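For example, a minimal sketch that trains a full SVM classifier on the built-in fisheriris data set and then compacts it (variable names are illustrative):

```matlab
% Keep two of the three classes so that fitcsvm performs binary classification.
load fisheriris
inds = ~strcmp(species,'setosa');
Mdl  = fitcsvm(meas(inds,3:4),species(inds));
CMdl = compact(Mdl);   % CompactClassificationSVM without the training data
```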
Properties
SVM Properties
Alpha — Trained classifier coefficients
numeric vector
This property is read-only.
Trained classifier coefficients, specified as an s-by-1 numeric vector. s is the number of support vectors in the trained classifier, sum(Mdl.IsSupportVector).
Alpha contains the trained classifier coefficients from the dual problem, that is, the estimated Lagrange multipliers. If you remove duplicates by using the RemoveDuplicates name-value pair argument of fitcsvm, then for a given set of duplicate observations that are support vectors, Alpha contains one coefficient corresponding to the entire set. That is, MATLAB® attributes a nonzero coefficient to one observation from the set of duplicates and a coefficient of 0 to all other duplicate observations in the set.
Data Types: single | double
Beta — Linear predictor coefficients
numeric vector
This property is read-only.
Linear predictor coefficients, specified as a numeric vector. The length of Beta is equal to the number of predictors used to train the model.
MATLAB expands categorical variables in the predictor data using full dummy encoding. That is, MATLAB creates one dummy variable for each level of each categorical variable. Beta stores one value for each predictor variable, including the dummy variables. For example, if there are three predictors, one of which is a categorical variable with three levels, then Beta is a numeric vector containing five values.
If KernelParameters.Function is 'linear', then the classification score for the observation x is

f(x) = (x/s)β + b.

Mdl stores β, b, and s in the properties Beta, Bias, and KernelParameters.Scale, respectively.
To estimate classification scores manually, you must first apply any transformations to the predictor data that were applied during training. Specifically, if you specify 'Standardize',true when using fitcsvm, then you must standardize the predictor data manually by using the mean Mdl.Mu and standard deviation Mdl.Sigma, and then divide the result by the kernel scale in Mdl.KernelParameters.Scale.
All SVM functions, such as resubPredict and predict, apply any required transformation before estimation.
If KernelParameters.Function is not 'linear', then Beta is empty ([]).
Data Types: single | double
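As a sketch, the standardization and scaling steps described above can be written as follows. This assumes the model was trained with 'Standardize',true and a linear kernel; Xnew is a hypothetical matrix of new observations with the same predictors as the training data.

```matlab
% Manual linear-kernel score sketch (assumptions noted above).
Xs = (Xnew - Mdl.Mu) ./ Mdl.Sigma;        % apply the training standardization
Xs = Xs / Mdl.KernelParameters.Scale;     % divide by the kernel scale
score = Xs*Mdl.Beta + Mdl.Bias;           % positive-class score f(x)
% For comparison, [~,s] = predict(Mdl,Xnew) applies the same transformations.
```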
Bias — Bias term
scalar
This property is read-only.
Bias term, specified as a scalar.
Data Types: single | double
KernelParameters — Kernel parameters
structure array
This property is read-only.
Kernel parameters, specified as a structure array. The kernel parameters property contains the fields listed in this table.
Field | Description |
---|---|
Function | Kernel function used to compute the elements of the Gram matrix. |
Scale | Kernel scale parameter used to scale all elements of the predictor data on which the model is trained. |
To display the values of KernelParameters, use dot notation. For example, Mdl.KernelParameters.Scale displays the kernel scale parameter value.
The software accepts KernelParameters as inputs and does not modify them.
Data Types: struct
SupportVectorLabels — Support vector class labels
s-by-1 numeric vector
This property is read-only.
Support vector class labels, specified as an s-by-1 numeric vector. s is the number of support vectors in the trained classifier, sum(Mdl.IsSupportVector).
A value of +1 in SupportVectorLabels indicates that the corresponding support vector is in the positive class (ClassNames{2}). A value of –1 indicates that the corresponding support vector is in the negative class (ClassNames{1}).
If you remove duplicates by using the RemoveDuplicates name-value pair argument of fitcsvm, then for a given set of duplicate observations that are support vectors, SupportVectorLabels contains one unique support vector label.
Data Types: single | double
SupportVectors — Support vectors
s-by-p numeric matrix
This property is read-only.
Support vectors in the trained classifier, specified as an s-by-p numeric matrix. s is the number of support vectors in the trained classifier, sum(Mdl.IsSupportVector), and p is the number of predictor variables in the predictor data.
SupportVectors contains rows of the predictor data X that MATLAB considers to be support vectors. If you specify 'Standardize',true when training the SVM classifier using fitcsvm, then SupportVectors contains the standardized rows of X.
If you remove duplicates by using the RemoveDuplicates name-value pair argument of fitcsvm, then for a given set of duplicate observations that are support vectors, SupportVectors contains one unique support vector.
Data Types: single | double
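Together with Alpha and SupportVectorLabels, this property determines the dual-form score, the sum over support vectors of alpha_j*y_j*G(x_j,x) plus Bias. A hedged sketch for a Gaussian kernel follows; check Mdl.KernelParameters.Function before relying on it, and note that x is assumed to be a single row vector of raw predictor values.

```matlab
% Dual-form score sketch for one observation x, assuming a Gaussian kernel
% and a model trained with 'Standardize',true.
x  = (x - Mdl.Mu) ./ Mdl.Sigma;              % standardize as during training
s  = Mdl.KernelParameters.Scale;
sv = Mdl.SupportVectors;                     % standardized support vectors
G  = exp(-sum((sv/s - x/s).^2, 2));          % RBF Gram vector against x
score = sum(Mdl.Alpha .* Mdl.SupportVectorLabels .* G) + Mdl.Bias;
```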
Other Classification Properties
CategoricalPredictors — Categorical predictor indices
vector of positive integers | []
This property is read-only.
Categorical predictor indices, specified as a vector of positive integers. CategoricalPredictors contains index values indicating that the corresponding predictors are categorical. The index values are between 1 and p, where p is the number of predictors used to train the model. If none of the predictors are categorical, then this property is empty ([]).
Data Types: double
ClassNames — Unique class labels
categorical array | character array | logical vector | numeric vector | cell array of character vectors
This property is read-only.
Unique class labels used in training, specified as a categorical or character array, logical or numeric vector, or cell array of character vectors. ClassNames has the same data type as the class labels Y. (The software treats string arrays as cell arrays of character vectors.) ClassNames also determines the class order.
Data Types: single | double | logical | char | cell | categorical
Cost — Misclassification cost
numeric square matrix
This property is read-only.
Misclassification cost, specified as a numeric square matrix.
For two-class learning, the Cost property stores the misclassification cost matrix specified by the Cost name-value argument of the fitting function. The rows correspond to the true class and the columns correspond to the predicted class. That is, Cost(i,j) is the cost of classifying a point into class j if its true class is i. The order of the rows and columns of Cost corresponds to the order of the classes in ClassNames.
For one-class learning, Cost = 0.
Data Types: double
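For example, a sketch of specifying a nondefault two-class cost matrix at training time. X and Y are hypothetical training data; the matrix below makes misclassifying a true class-1 observation twice as costly as the reverse error.

```matlab
% Hypothetical data X (numeric matrix) and labels Y (two classes).
C   = [0 2; 1 0];              % Cost(1,2) = 2, Cost(2,1) = 1
Mdl = fitcsvm(X,Y,'Cost',C);
Mdl.Cost                        % stores the specified matrix (R2022a and later)
```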
ExpandedPredictorNames — Expanded predictor names
cell array of character vectors
This property is read-only.
Expanded predictor names, specified as a cell array of character vectors.
If the model uses dummy variable encoding for categorical variables, then ExpandedPredictorNames includes the names that describe the expanded variables. Otherwise, ExpandedPredictorNames is the same as PredictorNames.
Data Types: cell
Mu — Predictor means
numeric vector | []
This property is read-only.
Predictor means, specified as a numeric vector. If you specify 'Standardize',1 or 'Standardize',true when you train an SVM classifier using fitcsvm, the length of Mu is equal to the number of predictors.
MATLAB expands categorical variables in the predictor data using dummy variables. Mu stores one value for each predictor variable, including the dummy variables. However, MATLAB does not standardize the columns that contain categorical variables.
If you set 'Standardize',false when you train the SVM classifier using fitcsvm, then Mu is an empty vector ([]).
Data Types: single | double
PredictorNames — Predictor variable names
cell array of character vectors
This property is read-only.
Predictor variable names, specified as a cell array of character vectors. The order of the elements in PredictorNames corresponds to the order in which the predictor names appear in the training data.
Data Types: cell
Prior — Prior probabilities
numeric vector
This property is read-only.
Prior probabilities for each class, specified as a numeric vector.
For two-class learning, the software normalizes the prior probabilities specified by the Prior name-value argument of the fitting function so that the probabilities sum to 1. The Prior property stores the normalized prior probabilities. The order of the elements of Prior corresponds to the elements of Mdl.ClassNames. The stored values do not reflect the penalties described in a specified cost matrix; the fitting function incorporates those penalties only during training.
For one-class learning, Prior = 1.
Data Types: single | double
ScoreTransform — Score transformation
character vector | function handle
Score transformation, specified as a character vector or function handle. ScoreTransform represents a built-in transformation function or a function handle for transforming predicted classification scores.
To change the score transformation function to function, for example, use dot notation.
For a built-in function, enter a character vector.
Mdl.ScoreTransform = 'function';
This table describes the available built-in functions.
Value | Description |
---|---|
'doublelogit' | 1/(1 + e^(–2x)) |
'invlogit' | log(x / (1 – x)) |
'ismax' | Sets the score for the class with the largest score to 1, and sets the scores for all other classes to 0 |
'logit' | 1/(1 + e^(–x)) |
'none' or 'identity' | x (no transformation) |
'sign' | –1 for x < 0; 0 for x = 0; 1 for x > 0 |
'symmetric' | 2x – 1 |
'symmetricismax' | Sets the score for the class with the largest score to 1, and sets the scores for all other classes to –1 |
'symmetriclogit' | 2/(1 + e^(–x)) – 1 |
For a MATLAB function or a function that you define, enter its function handle.
Mdl.ScoreTransform = @function;
function must accept a matrix (the original scores) and return a matrix of the same size (the transformed scores).
Data Types: char | function_handle
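For example, a short sketch of switching transforms on a trained model Mdl:

```matlab
% Built-in transform, then a custom function handle, then back to raw scores.
Mdl.ScoreTransform = 'logit';                    % built-in 1/(1 + e^(-x))
Mdl.ScoreTransform = @(s) 2./(1 + exp(-s)) - 1;  % custom symmetric logit
Mdl.ScoreTransform = 'none';                     % restore untransformed scores
```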
Sigma — Predictor standard deviations
[] (default) | numeric vector
This property is read-only.
Predictor standard deviations, specified as a numeric vector.
If you specify 'Standardize',true when you train the SVM classifier using fitcsvm, the length of Sigma is equal to the number of predictor variables.
MATLAB expands categorical variables in the predictor data using dummy variables. Sigma stores one value for each predictor variable, including the dummy variables. However, MATLAB does not standardize the columns that contain categorical variables.
If you set 'Standardize',false when you train the SVM classifier using fitcsvm, then Sigma is an empty vector ([]).
Data Types: single | double
Object Functions
compareHoldout | Compare accuracies of two classification models using new data |
discardSupportVectors | Discard support vectors for linear support vector machine (SVM) classifier |
edge | Find classification edge for support vector machine (SVM) classifier |
fitPosterior | Fit posterior probabilities for compact support vector machine (SVM) classifier |
gather | Gather properties of Statistics and Machine Learning Toolbox object from GPU |
incrementalLearner | Convert binary classification support vector machine (SVM) model to incremental learner |
lime | Local interpretable model-agnostic explanations (LIME) |
loss | Find classification error for support vector machine (SVM) classifier |
margin | Find classification margins for support vector machine (SVM) classifier |
partialDependence | Compute partial dependence |
plotPartialDependence | Create partial dependence plot (PDP) and individual conditional expectation (ICE) plots |
predict | Classify observations using support vector machine (SVM) classifier |
shapley | Shapley values |
update | Update model parameters for code generation |
Examples
Reduce Size of SVM Classifier
Reduce the size of a full support vector machine (SVM) classifier by removing the training data. Full SVM classifiers (that is, ClassificationSVM classifiers) hold the training data. To improve efficiency, use a smaller classifier.
Load the ionosphere
data set.
load ionosphere
Train an SVM classifier. Standardize the predictor data and specify the order of the classes.
SVMModel = fitcsvm(X,Y,'Standardize',true,...
    'ClassNames',{'b','g'})
SVMModel = 
  ClassificationSVM
             ResponseName: 'Y'
    CategoricalPredictors: []
               ClassNames: {'b'  'g'}
           ScoreTransform: 'none'
          NumObservations: 351
                    Alpha: [90x1 double]
                     Bias: -0.1342
         KernelParameters: [1x1 struct]
                       Mu: [0.8917 0 0.6413 0.0444 0.6011 0.1159 0.5501 0.1194 0.5118 0.1813 0.4762 0.1550 0.4008 0.0934 0.3442 0.0711 0.3819 -0.0036 0.3594 -0.0240 0.3367 0.0083 0.3625 -0.0574 0.3961 -0.0712 0.5416 -0.0695 0.3784 ... ] (1x34 double)
                    Sigma: [0.3112 0 0.4977 0.4414 0.5199 0.4608 0.4927 0.5207 0.5071 0.4839 0.5635 0.4948 0.6222 0.4949 0.6528 0.4584 0.6180 0.4968 0.6263 0.5191 0.6098 0.5182 0.6038 0.5275 0.5785 0.5085 0.5162 0.5500 0.5759 0.5080 ... ] (1x34 double)
           BoxConstraints: [351x1 double]
          ConvergenceInfo: [1x1 struct]
          IsSupportVector: [351x1 logical]
                   Solver: 'SMO'
SVMModel is a ClassificationSVM classifier.
Reduce the size of the SVM classifier.
CompactSVMModel = compact(SVMModel)
CompactSVMModel = 
  CompactClassificationSVM
             ResponseName: 'Y'
    CategoricalPredictors: []
               ClassNames: {'b'  'g'}
           ScoreTransform: 'none'
                    Alpha: [90x1 double]
                     Bias: -0.1342
         KernelParameters: [1x1 struct]
                       Mu: [0.8917 0 0.6413 0.0444 0.6011 0.1159 0.5501 0.1194 0.5118 0.1813 0.4762 0.1550 0.4008 0.0934 0.3442 0.0711 0.3819 -0.0036 0.3594 -0.0240 0.3367 0.0083 0.3625 -0.0574 0.3961 -0.0712 0.5416 -0.0695 0.3784 ... ] (1x34 double)
                    Sigma: [0.3112 0 0.4977 0.4414 0.5199 0.4608 0.4927 0.5207 0.5071 0.4839 0.5635 0.4948 0.6222 0.4949 0.6528 0.4584 0.6180 0.4968 0.6263 0.5191 0.6098 0.5182 0.6038 0.5275 0.5785 0.5085 0.5162 0.5500 0.5759 0.5080 ... ] (1x34 double)
           SupportVectors: [90x34 double]
      SupportVectorLabels: [90x1 double]
CompactSVMModel is a CompactClassificationSVM classifier.
Display the amount of memory used by each classifier.
whos('SVMModel','CompactSVMModel')
  Name                 Size            Bytes  Class                                                 Attributes

  CompactSVMModel      1x1             30749  classreg.learning.classif.CompactClassificationSVM
  SVMModel             1x1            140279  ClassificationSVM
The full SVM classifier (SVMModel) is more than four times larger than the compact SVM classifier (CompactSVMModel).
To label new observations efficiently, you can remove SVMModel from the MATLAB® Workspace, and then pass CompactSVMModel and new predictor values to predict.
To further reduce the size of the compact SVM classifier, use the discardSupportVectors function to discard support vectors.
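A hedged sketch of that extra step, valid only for linear-kernel models (the fitcsvm default for two-class learning, as in this example):

```matlab
% Discarding support vectors leaves Beta and Bias, which predict uses
% directly for linear-kernel models.
CompactSVMModel = discardSupportVectors(CompactSVMModel);
whos('CompactSVMModel')   % the Bytes column shrinks further
```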
Train and Cross-Validate SVM Classifier
Load the ionosphere data set.
load ionosphere
Train and cross-validate an SVM classifier. Standardize the predictor data and specify the order of the classes.
rng(1); % For reproducibility
CVSVMModel = fitcsvm(X,Y,'Standardize',true,...
    'ClassNames',{'b','g'},'CrossVal','on')
CVSVMModel = 
  ClassificationPartitionedModel
    CrossValidatedModel: 'SVM'
         PredictorNames: {'x1' 'x2' 'x3' 'x4' 'x5' 'x6' 'x7' 'x8' 'x9' 'x10' 'x11' 'x12' 'x13' 'x14' 'x15' 'x16' 'x17' 'x18' 'x19' 'x20' 'x21' 'x22' 'x23' 'x24' 'x25' 'x26' 'x27' 'x28' 'x29' 'x30' 'x31' 'x32' 'x33' 'x34'}
           ResponseName: 'Y'
        NumObservations: 351
                  KFold: 10
              Partition: [1x1 cvpartition]
             ClassNames: {'b' 'g'}
         ScoreTransform: 'none'
CVSVMModel is a ClassificationPartitionedModel cross-validated SVM classifier. By default, the software implements 10-fold cross-validation.
Alternatively, you can cross-validate a trained ClassificationSVM classifier by passing it to crossval.
Inspect one of the trained folds using dot notation.
CVSVMModel.Trained{1}
ans = 
  CompactClassificationSVM
             ResponseName: 'Y'
    CategoricalPredictors: []
               ClassNames: {'b' 'g'}
           ScoreTransform: 'none'
                    Alpha: [78x1 double]
                     Bias: -0.2210
         KernelParameters: [1x1 struct]
                       Mu: [0.8888 0 0.6320 0.0406 0.5931 0.1205 0.5361 0.1286 0.5083 0.1879 0.4779 0.1567 0.3924 0.0875 0.3360 0.0789 0.3839 9.6066e-05 0.3562 -0.0308 0.3398 -0.0073 0.3590 -0.0628 0.4064 -0.0664 0.5535 -0.0749 0.3835 ... ] (1x34 double)
                    Sigma: [0.3149 0 0.5033 0.4441 0.5255 0.4663 0.4987 0.5205 0.5040 0.4780 0.5649 0.4896 0.6293 0.4924 0.6606 0.4535 0.6133 0.4878 0.6250 0.5140 0.6075 0.5150 0.6068 0.5222 0.5729 0.5103 0.5061 0.5478 0.5712 0.5032 ... ] (1x34 double)
           SupportVectors: [78x34 double]
      SupportVectorLabels: [78x1 double]
Each fold is a CompactClassificationSVM classifier trained on 90% of the data.
Estimate the generalization error.
genError = kfoldLoss(CVSVMModel)
genError = 0.1168
On average, the generalization error is approximately 12%.
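As a sketch, you can also inspect out-of-fold predictions and the loss of each individual fold:

```matlab
% Out-of-fold labels/scores for every observation, and 10 per-fold losses.
[label,score] = kfoldPredict(CVSVMModel);
foldLosses    = kfoldLoss(CVSVMModel,'Mode','individual');   % 10-by-1 vector
```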
References
[1] Hastie, T., R. Tibshirani, and J. Friedman. The Elements of Statistical Learning, Second Edition. NY: Springer, 2008.
[2] Schölkopf, B., J. C. Platt, J. Shawe-Taylor, A. J. Smola, and R. C. Williamson. “Estimating the Support of a High-Dimensional Distribution.” Neural Computation. Vol. 13, Number 7, 2001, pp. 1443–1471.
[3] Cristianini, N., and J. Shawe-Taylor. An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods. Cambridge, UK: Cambridge University Press, 2000.
[4] Schölkopf, B., and A. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization and Beyond, Adaptive Computation and Machine Learning. Cambridge, MA: The MIT Press, 2002.
Extended Capabilities
C/C++ Code Generation
Generate C and C++ code using MATLAB® Coder™.
Usage notes and limitations:
- To integrate the prediction of an SVM classification model into Simulink®, you can use the ClassificationSVM Predict block in the Statistics and Machine Learning Toolbox™ library or a MATLAB Function block with the predict function.
- When you train an SVM model by using fitcsvm, the following restrictions apply.
  - The value of the 'ScoreTransform' name-value pair argument cannot be an anonymous function. For generating code that predicts posterior probabilities given new observations, pass a trained SVM model to fitPosterior or fitSVMPosterior. The ScoreTransform property of the returned model contains an anonymous function that represents the score-to-posterior-probability function and is configured for code generation.
  - For fixed-point code generation, the value of the 'ScoreTransform' name-value pair argument cannot be 'invlogit'. Also, the value of the 'KernelFunction' name-value pair argument must be 'gaussian', 'linear', or 'polynomial'.
  - For fixed-point code generation and code generation with a coder configurer, the following additional restrictions apply.
    - Categorical predictors (logical, categorical, char, string, or cell) are not supported. You cannot use the CategoricalPredictors name-value argument. To include categorical predictors in a model, preprocess them by using dummyvar before fitting the model.
    - Class labels with the categorical data type are not supported. Both the class label value in the training data (Tbl or Y) and the value of the ClassNames name-value argument cannot be an array with the categorical data type.
For more information, see Introduction to Code Generation.
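A hedged sketch of the typical code generation workflow: save the compact model to a MAT-file with saveLearnerForCoder, then reload it inside an entry-point function with loadLearnerForCoder. File and function names below are illustrative.

```matlab
% Save the compact model for code generation.
saveLearnerForCoder(CompactSVMModel,'SVMModelFile');

% Entry-point function suitable for codegen (illustrative name).
function label = predictLabels(X) %#codegen
Mdl = loadLearnerForCoder('SVMModelFile');
label = predict(Mdl,X);
end
```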
GPU Arrays
Accelerate code by running on a graphics processing unit (GPU) using Parallel Computing Toolbox™.
Usage notes and limitations:
The following object functions fully support GPU arrays:
The following object functions offer limited support for GPU arrays:
For more information, see Run MATLAB Functions on a GPU (Parallel Computing Toolbox).
Version History
Introduced in R2014a

R2022a: Cost property stores the user-specified cost matrix
Starting in R2022a, the Cost property stores the user-specified cost matrix, so that you can compute the observed misclassification cost using the specified cost value. The software stores normalized prior probabilities (Prior) that do not reflect the penalties described in the cost matrix. To compute the observed misclassification cost, specify the LossFun name-value argument as "classifcost" when you call the loss function.
Note that model training has not changed and, therefore, the decision boundaries between classes have not changed.
For training, the fitting function updates the specified prior probabilities by incorporating the penalties described in the specified cost matrix, and then normalizes the prior probabilities and observation weights. This behavior has not changed. In previous releases, the software stored the default cost matrix in the Cost property and stored the prior probabilities used for training in the Prior property. Starting in R2022a, the software stores the user-specified cost matrix without modification, and stores normalized prior probabilities that do not reflect the cost penalties. For more details, see Misclassification Cost Matrix, Prior Probabilities, and Observation Weights.
Some object functions use the Cost and Prior properties:
- The loss function uses the cost matrix stored in the Cost property if you specify the LossFun name-value argument as "classifcost" or "mincost".
- The loss and edge functions use the prior probabilities stored in the Prior property to normalize the observation weights of the input data.
If you specify a nondefault cost matrix when you train a classification model, the object functions return a different value compared to previous releases.
If you specify a nondefault cost matrix when you train a classification model, the object functions return a different value compared to previous releases.
If you want the software to handle the cost matrix, prior probabilities, and observation weights in the same way as in previous releases, adjust the prior probabilities and observation weights for the nondefault cost matrix, as described in Adjust Prior Probabilities and Observation Weights for Misclassification Cost Matrix. Then, when you train a classification model, specify the adjusted prior probabilities and observation weights by using the Prior and Weights name-value arguments, respectively, and use the default cost matrix.