How do I find the most important features among the many MFCC features of a speech sample?
I am doing speech classification to distinguish four words: go, stop, left, and right. I am using MFCC for feature extraction and a neural network as the classifier.
The problem I am facing is that the MFCC feature matrix of a single sample is large, around 124x13, where 124 is the number of frames and 13 is the number of MFC coefficients.
If I reshape it into a column vector it becomes 1612x1, which is huge.
So how can I reduce this matrix by keeping only the most important features?
0 Comments
Answers (2)
Ilya
2012-9-20
Edited: Ilya
2012-9-20
I described the feature ranking/selection tools available in the Statistics Toolbox here: http://www.mathworks.com/matlabcentral/answers/33808-select-machine-learning-features-in-matlab
With the exception of sequentialfs, all these techniques are based on specific classification or regression algorithms. If you select features using ensembles of decision trees, for instance, there is no guarantee the selected set will also be optimal for your neural net. sequentialfs, on the other hand, is going to be quite slow for that many features.
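For reference, a hypothetical sketch of sequentialfs on this problem (assuming the flattened features are stored row-wise in an N-by-1612 matrix X with a numeric label vector y; the criterion counts misclassifications of a diagonal-covariance discriminant fit with classify):
critfun = @(XT, yT, Xt, yt) sum(yt ~= classify(Xt, XT, yT, 'diaglinear')); % misclassification count on the test fold
opts = statset('Display', 'iter');
[inmodel, history] = sequentialfs(critfun, X, y, 'options', opts); % forward selection, 10-fold CV by default
selected = find(inmodel) % indices of the selected features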
Regularized discriminant analysis with thresholding (released in 12a) is a fast method suitable for data with thousands of predictors. Here is an example showing how it can be used for feature selection: http://www.mathworks.com/help/stats/discriminant-analysis.html#btaf5dv
I am no expert on speech classification, but generally, if you have 1612 features, you can often get good classification from a simple linear method (which is what discriminant analysis provides).
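For reference, a hypothetical sketch of that workflow, assuming the same N-by-1612 matrix X and label vector y (fitcdiscr is the newer name for ClassificationDiscriminant.fit):
obj = fitcdiscr(X, y); % linear discriminant analysis
[err, gamma, delta, numpred] = cvshrink(obj, 'NumGamma', 9, 'NumDelta', 9); % cross-validate a grid of regularization values
% Pick a (gamma, delta) pair with low cross-validated error and few predictors, e.g.:
obj.Gamma = gamma(5);
obj.Delta = delta(5, 5);
selected = find(obj.DeltaPredictor > obj.Delta) % features that survive the coefficient threshold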
Greg Heath
2012-9-21
For a c-class classifier, use a target matrix with columns of eye(c), for example as sketched next.
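% (Hypothetical sketch of the eye(c) target coding, assuming a 1-by-N vector
% "labels" of class indices in 1..4 for go, stop, left, right:)
c = 4;
Ic = eye(c);
t = Ic(:, labels); % each column of t is the eye(c) column for that sample's class
Then the input and target matrices will have the sizes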
[ I N ] = size(x) % I = 13, N =124
[ O N ] = size(t) % O = c = 4
Neq = prod(size(t)) % Number of equations
[z, meanx, stdx] = zscore(x, 0, 2); % Standardize each feature (row) across the N samples
[zout, iout] = find(abs(z) > tol); % Outlier check: row/column indices where |z| exceeds tol (e.g. tol = 3)
% Decide what to do with outliers (keep, delete or trim). For convenience I will keep the same notation.
MSE00 = var(t,1,2) % Biased mean-squared-error reference
MSE00a = var(t,0,2) % Unbiased MSE reference; "a" = adjusted for DOF lost when training and testing with the same data
% To get a preliminary feel for the data, you can obtain a linear classifier using backslash and look at the size of the weights.
W = t/[ones(1,N) ; z];
Nw0 = numel(W) % = (I + 1)*O = Number of estimated weights
y0 = W*[ones(1,N) ; z]; % real valued output
e0 = t - y0; % error
MSE0 = sse(e0)/Neq % Biased mean square error
MSE0a = sse(e0)/(Neq-Nw0) % Unbiased MSE
R20 = 1 - MSE0/MSE00 % R-squared statistic
R2a = 1 - MSE0a/MSE00a % Adjusted R-squared
Now that you have a good feel, you can quickly use STEPWISEFIT (sketched below) to select input-variable subsets for models that are linear in the coefficients.
This should help until you get comfortable with the more complicated SEQUENTIALFS.
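A hypothetical sketch of the STEPWISEFIT step (it handles one response at a time and expects observations in rows, so transpose z and loop over the classes):
Xobs = z'; % N-by-I predictor matrix, observations in rows
inAny = false(1, size(Xobs, 2)); % union of selected features over the c classes
for k = 1:size(t, 1)
    [b, se, pval, inmodel] = stepwisefit(Xobs, t(k, :)', 'Display', 'off');
    inAny = inAny | inmodel; % keep a feature if any class selects it
end
selected = find(inAny) % indices of the selected input variables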
Hope this helps.
Thank you for officially accepting my answer.
Greg
2 Comments
Greg Heath
2012-9-23
Whoops! I did misread the problem size. It looks like you may have to use a lowpass filter to reduce the number of features before selecting a variable subset; one way to do that is sketched below.
Sorry.
Greg
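For reference, a hypothetical sketch of that idea: lowpass-filter each MFCC trajectory over time with a moving average and keep only every M-th frame, shrinking the 124x13 matrix before any feature selection.
mfccMat = randn(124, 13); % stand-in for one sample's 124x13 MFCC matrix (frames x coefficients)
M = 4; % moving-average length = decimation factor
smoothed = filter(ones(M, 1)/M, 1, mfccMat); % lowpass each coefficient trajectory (down the columns)
reduced = smoothed(M:M:end, :); % keep every M-th frame -> 31x13
featVec = reduced(:); % 31*13 = 403 features instead of 1612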