designecoc
Coding matrix for reducing error-correcting output code to binary
Description
M = designecoc(K,name) returns the coding matrix M that reduces the error-correcting output code (ECOC) design specified by name and K classes to a binary problem. M has K rows and L columns, with each row corresponding to a class and each column corresponding to a binary learner. name and K determine the value of L.
You can view or customize M, and then specify it as the coding matrix for training an ECOC multiclass classifier using fitcecoc.
M = designecoc(K,name,Name,Value) returns the coding matrix with additional options specified by one or more Name,Value pair arguments. For example, you can specify the number of trials when generating a dense or sparse random coding matrix.
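For instance, here is a minimal usage sketch of both syntaxes; the class count of 5 is just an illustrative value.

M1 = designecoc(5,'onevsone');    % one-vs-one design: 5*(5-1)/2 = 10 binary learners
size(M1)                          % 5 rows (classes) by 10 columns (learners)
M2 = designecoc(5,'denserandom','NumTrials',5e3);   % search 5,000 candidates instead of the default 10,000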
Examples
Train ECOC Classifiers Using a Custom Coding Design
Consider the arrhythmia
data set. There are 16 classes in the study, 13 of which are represented in the data. The first class indicates that the subject did not have arrhythmia, and the last class indicates that the subject's arrhythmia state was not recorded. Suppose that the other classes are ordinal levels indicating the severity of arrhythmia. Train an ECOC classifier using a custom coding design specified by the description of the classes.
Load the arrhythmia
data set.
load arrhythmia
K = 13; % Number of distinct classes
Construct a coding matrix that describes the nature of the classes.
OrdMat = designecoc(11,'ordinal');
nOM = size(OrdMat);
class1VSOrd = [1; -ones(11,1); 0];
class1VSClass16 = [1; zeros(11,1); -1];
OrdVSClass16 = [0; ones(11,1); -1];
Coding = [class1VSOrd class1VSClass16 OrdVSClass16,...
    [zeros(1,nOM(2)); OrdMat; zeros(1,nOM(2))]]
Coding = 13×13
1 1 0 0 0 0 0 0 0 0 0 0 0
-1 0 1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1
-1 0 1 1 -1 -1 -1 -1 -1 -1 -1 -1 -1
-1 0 1 1 1 -1 -1 -1 -1 -1 -1 -1 -1
-1 0 1 1 1 1 -1 -1 -1 -1 -1 -1 -1
-1 0 1 1 1 1 1 -1 -1 -1 -1 -1 -1
-1 0 1 1 1 1 1 1 -1 -1 -1 -1 -1
-1 0 1 1 1 1 1 1 1 -1 -1 -1 -1
-1 0 1 1 1 1 1 1 1 1 -1 -1 -1
-1 0 1 1 1 1 1 1 1 1 1 -1 -1
⋮
Train an ECOC classifier using the custom coding design Coding
and specify that the binary learners are decision trees.
Mdl = fitcecoc(X,Y,'Coding',Coding,'Learners','tree');
Estimate the in-sample classification error.
genErr = resubLoss(Mdl)
genErr = 0.1460
Choose Among Several Random Coding Designs
If you request a random coding matrix by specifying sparserandom
or denserandom
, then, by default, designecoc
generates 10,000 random matrices and chooses the one with the largest minimal pairwise row distance based on the Hamming measure. You can generate more matrices to increase the chance of obtaining a better one, or you can generate several coding matrices and then see which performs best.
Load the arrhythmia
data set. Reserve the observations classified into class 16 (i.e., those that do not have an arrhythmia classification) as new data.
load arrhythmia
oosIdx = Y == 16;
isIdx = ~oosIdx;
Y = categorical(Y(isIdx));
tabulate(Y)
  Value    Count   Percent
      1      245    56.98%
      2       44    10.23%
      3       15     3.49%
      4       15     3.49%
      5       13     3.02%
      6       25     5.81%
      7        3     0.70%
      8        2     0.47%
      9        9     2.09%
     10       50    11.63%
     14        4     0.93%
     15        5     1.16%
K = numel(unique(Y));
Generate four random coding design matrices such that the first two are dense and the second two are sparse. For each design, specify to choose the best matrix out of 20,000 randomly generated candidates.
rng(1); % For reproducibility
Coding = cell(4,1); % Preallocate for coding matrices
CodingTypes = {'denserandom','denserandom','sparserandom','sparserandom'};
for j = 1:4
    Coding{j} = designecoc(K,CodingTypes{j},'NumTrials',2e4);
end
Coding
is a 4-by-1 cell array, where each cell is a coding design matrix. The matrices have K
rows, but the number of columns (i.e., binary learners) might vary.
Train and cross-validate ECOC classifiers using 15-fold cross-validation. Specify that each ECOC classifier be trained using a classification tree and the corresponding random coding matrix stored in Coding.
Mdl = cell(4,1); % Preallocate for the ECOC classifiers
for j = 1:4
    Mdl{j} = fitcecoc(X(isIdx,:),Y,'Learners','tree',...
        'Coding',Coding{j},'KFold',15);
end
Warning: One or more of the unique class values in GROUP is not present in one or more folds. For classification problems, either remove this class from the data or use N instead of GROUP to obtain nonstratified partitions. For regression problems with continuous response, use N.
Warning: One or more of the unique class values in GROUP is not present in one or more folds. For classification problems, either remove this class from the data or use N instead of GROUP to obtain nonstratified partitions. For regression problems with continuous response, use N.
Warning: One or more of the unique class values in GROUP is not present in one or more folds. For classification problems, either remove this class from the data or use N instead of GROUP to obtain nonstratified partitions. For regression problems with continuous response, use N.
Warning: One or more of the unique class values in GROUP is not present in one or more folds. For classification problems, either remove this class from the data or use N instead of GROUP to obtain nonstratified partitions. For regression problems with continuous response, use N.
Mdl
is a 4-by-1 cell array of ClassificationPartitionedECOC
models. Several classes have low relative frequency in the data, and so there is a chance that, during cross validation, some in-sample folds will not train using observations from those classes.
Estimate the 15-fold classification error for each classifier.
genErr = nan(4,1);
for j = 1:4
    genErr(j) = kfoldLoss(Mdl{j});
end
genErr
genErr = 4×1
0.2256
0.2116
0.2116
0.2209
Though the generalization error is still high, the best-performing models, based solely on the out-of-sample classification error, are the ones that used the coding designs Coding{2} and Coding{3}.
You can try to improve the generalization error by tuning some parameters of the binary learners. For example, you can specify to use the twoing rule or deviance for the split criterion, rather than the default Gini's diversity index. You might also specify to use surrogate splits since there are missing values in the data.
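For example, here is a hedged sketch of such tuning with a tree template; the specific settings (twoing split criterion, surrogate splits) and the choice of Coding{3} are illustrative, not recommendations from this example.

t = templateTree('SplitCriterion','twoing','Surrogate','on'); % surrogate splits handle the missing predictor values
MdlTuned = fitcecoc(X(isIdx,:),Y,'Learners',t,...
    'Coding',Coding{3},'KFold',15);   % reuse one of the random coding designs from above
genErrTuned = kfoldLoss(MdlTuned)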
Input Arguments
K
— Number of classes
positive integer
Number of classes, specified as a positive integer.
K specifies the number of rows of the coding matrix M.
Data Types: single | double
name
— Coding design name
'binarycomplete' | 'denserandom' | 'onevsall' | 'onevsone' | 'sparserandom' | ...
Coding design name, specified as a value in the following table. The table summarizes the coding schemes.
Value | Number of Binary Learners | Description |
---|---|---|
'allpairs' and 'onevsone' | K(K – 1)/2 | For each binary learner, one class is positive, another is negative, and the software ignores the rest. This design exhausts all combinations of class pair assignments. |
'binarycomplete' | 2^(K – 1) – 1 | This design partitions the classes into all binary combinations, and does not ignore any classes. For each binary learner, all class assignments are –1 and 1 with at least one positive class and one negative class in the assignment. |
'denserandom' | Random, but approximately 10 log2K | For each binary learner, the software randomly assigns classes into positive or negative classes, with at least one of each type. For more details, see Random Coding Design Matrices. |
'onevsall' | K | For each binary learner, one class is positive and the rest are negative. This design exhausts all combinations of positive class assignments. |
'ordinal' | K – 1 | For the first binary learner, the first class is negative and the rest are positive. For the second binary learner, the first two classes are negative and the rest are positive, and so on. |
'sparserandom' | Random, but approximately 15 log2K | For each binary learner, the software randomly assigns classes as positive or negative with probability 0.25 for each, and ignores classes with probability 0.5. For more details, see Random Coding Design Matrices. |
'ternarycomplete' | (3^K – 2^(K + 1) + 1)/2 | This design partitions the classes into all ternary combinations. All class assignments are 0, –1, and 1 with at least one positive class and one negative class in each assignment. |
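As a quick, illustrative check of the learner counts in this table, you can query the number of columns that designecoc returns for a small class count; K = 4 here is an arbitrary choice.

K = 4;
designs = {'onevsone','binarycomplete','denserandom','onevsall',...
    'ordinal','sparserandom','ternarycomplete'};
for j = 1:numel(designs)
    L = size(designecoc(K,designs{j}),2);   % number of binary learners
    fprintf('%-16s %d binary learners\n',designs{j},L)
end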
Name-Value Arguments
Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.
Before R2021a, use commas to separate each name and value, and enclose Name in quotes.
Example: 'NumTrials',1000 specifies to generate 1000 random matrices.
NumTrials
— Number of random coding matrices to generate
10000 (default) | positive integer
Number of random coding matrices to generate, specified as the comma-separated pair consisting of 'NumTrials' and a positive integer.
The software:
- Generates NumTrials matrices, and selects the one with the maximal, minimal, pairwise row distance.
- Ignores NumTrials for all values of name except 'denserandom' and 'sparserandom'.
Example: 'NumTrials',1000
Data Types: single | double
Output Arguments
M
— Coding matrix
numeric matrix
Coding matrix that reduces an ECOC scheme to binary, returned as a numeric matrix. M has K rows and L columns, where L is the number of binary learners. Each row corresponds to a class and each column corresponds to a binary learner.
The elements of M are -1, 0, or 1, and the value corresponds to a dichotomous class assignment. This table describes the meaning of M(i,j), that is, the class that learner j assigns to observations in class i.
Value | Dichotomous Class Assignment |
---|---|
–1 | Learner j assigns observations in class i to a negative class. |
0 | Before training, learner j removes observations in class i from the data set. |
1 | Learner j assigns observations in class i to a positive class. |
The binary learners for the denserandom, binarycomplete, and onevsall designs do not assign 0 to observations in any class.
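For instance, here is a small sketch of reading the entries of M for a one-vs-one design; K = 3 is an illustrative value.

M = designecoc(3,'onevsone')
% Each column pairs two classes: one entry is 1 (the positive class),
% one entry is -1 (the negative class), and the remaining entry is 0,
% so that learner's training data excludes observations from that class.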
Tips
- The number of binary learners grows with the number of classes. For a problem with many classes, the binarycomplete and ternarycomplete coding designs are not efficient. However:
  - If K ≤ 4, then use the ternarycomplete coding design rather than sparserandom.
  - If K ≤ 5, then use the binarycomplete coding design rather than denserandom.
- You can display the coding design matrix of a trained ECOC classifier by entering Mdl.CodingMatrix into the Command Window.
- You should form a coding matrix using intimate knowledge of the application, and taking into account computational constraints. If you have sufficient computational power and time, then try several coding matrices and choose the one with the best performance (e.g., check the confusion matrices for each model using confusionchart, as sketched after this list).
- Leave-one-out cross-validation (Leaveout) is inefficient for data sets with many observations. Instead, use k-fold cross-validation (KFold).
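For example, here is a brief sketch of the inspection steps mentioned in these tips, assuming a model trained as in the first example (X, Y, and Coding come from that example):

Mdl = fitcecoc(X,Y,'Coding',Coding,'Learners','tree');
Mdl.CodingMatrix                      % display the coding design the classifier uses
confusionchart(Y,resubPredict(Mdl))   % in-sample confusion matrix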
Algorithms
Custom Coding Design Matrices
Custom coding matrices must have a certain form. The software validates a custom coding matrix by ensuring:
- Every element is –1, 0, or 1.
- Every column contains at least one –1 and one 1.
- For all distinct column vectors u and v, u ≠ v and u ≠ –v.
- All row vectors are unique.
- The matrix can separate any two classes. That is, you can move from any row to any other row following these rules:
  - Move vertically from 1 to –1 or –1 to 1.
  - Move horizontally from a nonzero element to another nonzero element.
  - Use a column of the matrix for a vertical move only once.
If it is not possible to move from row i to row j using these rules, then classes i and j cannot be separated by the design. For example, in the coding design

     1  0
    -1  0
     0  1
     0 -1

classes 1 and 2 cannot be separated from classes 3 and 4 (that is, you cannot move horizontally from the –1 in row 2 to column 2 because that position contains a 0). Therefore, the software rejects this coding design.
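The following is a hedged sketch of checking the first four conditions on a candidate matrix; the helper name checkCodingMatrix is hypothetical, and it does not attempt the separability check.

function tf = checkCodingMatrix(M)
% Hypothetical helper: verifies the first four conditions on a custom
% coding matrix (it does not test that every pair of classes is separable).
tf = all(ismember(M(:),[-1 0 1]));                          % elements are -1, 0, or 1
tf = tf && all(any(M == 1,1)) && all(any(M == -1,1));       % each column has a 1 and a -1
tf = tf && size(unique([M'; -M'],'rows'),1) == 2*size(M,2); % no column equals another column or its negation
tf = tf && size(unique(M,'rows'),1) == size(M,1);           % all rows are unique
end

For example, you could run checkCodingMatrix(Coding) on the custom matrix from the first example before passing it to fitcecoc.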
Random Coding Design Matrices
For a given number of classes K, the software generates random coding design matrices as follows.
The software generates one of these matrices:
Dense random — The software assigns 1 or –1 with equal probability to each element of the K-by-Ld coding design matrix, where Ld = ⌈10 log2K⌉.
Sparse random — The software assigns 1 to each element of the K-by-Ls coding design matrix with probability 0.25, –1 with probability 0.25, and 0 with probability 0.5, where Ls = ⌈15 log2K⌉.
If a column does not contain at least one 1 and one –1, then the software removes that column.
For distinct columns u and v, if u = v or u = –v, then the software removes v from the coding design matrix.
The software randomly generates 10,000 matrices by default, and retains the matrix with the largest, minimal, pairwise row distance based on the Hamming measure ([2]) given by

Δ(k1,k2) = 0.5 * Σ_{l=1}^{L} |m_{k1jl}| |m_{k2jl}| |m_{k1jl} – m_{k2jl}|

where m_{kjl} is an element of coding design matrix j.
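The following is a hedged sketch of this generate-and-select procedure for a dense random design; the variable names (bestM, bestDist) and loop structure are illustrative, and the built-in implementation may differ in its details.

K = 5;                           % illustrative number of classes
Ld = ceil(10*log2(K));           % approximate number of dense random columns
numTrials = 10000;
bestM = []; bestDist = -Inf;
for t = 1:numTrials
    M = 2*(rand(K,Ld) > 0.5) - 1;               % each element is 1 or -1 with equal probability
    M = M(:, any(M == 1,1) & any(M == -1,1));   % drop columns without both a 1 and a -1
    keep = true(1,size(M,2));                   % drop a column that duplicates an earlier column (up to sign)
    for c = 2:size(M,2)
        prev = M(:,1:c-1);
        keep(c) = ~any(all(prev == M(:,c),1) | all(prev == -M(:,c),1));
    end
    M = M(:,keep);
    minDist = Inf;                              % minimal pairwise row distance under the Hamming measure
    for i = 1:K-1
        for j = i+1:K
            d = 0.5*sum(abs(M(i,:)).*abs(M(j,:)).*abs(M(i,:) - M(j,:)));
            minDist = min(minDist,d);
        end
    end
    if minDist > bestDist
        bestDist = minDist; bestM = M;
    end
end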
References
[1] Fürnkranz, Johannes. “Round Robin Classification.” J. Mach. Learn. Res., Vol. 2, 2002, pp. 721–747.
[2] Escalera, S., O. Pujol, and P. Radeva. “Separability of ternary codes for sparse designs of error-correcting output codes.” Pattern Recog. Lett. Vol. 30, Issue 3, 2009, pp. 285–297.
Version History
Introduced in R2014b
See Also
fitcecoc