How can I use LDA (Linear or Fisher Discriminant Analysis) with a hardwritten digits dataset (like MNIST or USPS)?
I mean that LDA creates a projection of two or more classes in order to show their separability (http://courses.ee.sun.ac.za/Pattern_Recognition_813/lectures/lecture01/img35.png). In MNIST, for example, I have 60,000 classes of 28x28 matrices that represent the hardwritten digits (the training set) and 10,000 28x28 matrices that represent the test set. I can use LDA to compare each class in the test set with a class in the training set, but how can I say, after I have applied LDA, whether the test class is similar to the train class?
Thanks in advance.
4 Comments
Ilya
2012-10-3
You have not provided any new info. You just repeated what you said before. The pictures linked in your original post and in your comment do not explain what you want to do.
Accepted Answer
Ilya
2012-10-4
I am not an expert in image analysis, but it seems you misunderstand what you need to do. LDA uses matrix X in which rows are observations (i.i.d. sampled from a vector random variable) and columns are predictors (elements of this random variable). Your 28x28 matrix is pixels. Its rows are not i.i.d. observations of a vector random variable. The entire 28x28 matrix is predictors. Sometimes in image analysis they just flatten out the matrix of pixels. Then you would have a 60k-by-784 matrix of training data and 10k-by-784 matrix of test data.
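For instance, a minimal sketch of that flattening, assuming the training images are stored in a 28-by-28-by-60000 array called trainImages (the variable name and layout are assumptions, not from the thread):
% trainImages: 28-by-28-by-60000 array, trainLabels: 60000-by-1 vector of digits 0:9
nTrain = size(trainImages, 3);
Xtrain = double(reshape(trainImages, 28*28, nTrain)');   % 60000-by-784, one image per row
The test set would be flattened the same way into a 10000-by-784 matrix Xtest.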
You could use the new ClassificationDiscriminant class or the old classify function from Statistics Toolbox to perform LDA. Or you could write your own code in the spirit of what you've shown above. ClassificationDiscriminant.fit and classify find K*(K-1)/2 linear decision boundaries between K classes. Your code suggests that you want to find K-1 directions for the optimal variable transformation by LDA for K classes.
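A minimal usage sketch of those two interfaces, assuming Xtrain/Xtest are the flattened matrices and ytrain/ytest are label vectors (my names, not from the thread):
% Object-oriented interface:
mdl  = ClassificationDiscriminant.fit(Xtrain, ytrain);   % linear discriminant by default
yhat = predict(mdl, Xtest);
% Older functional interface:
yhat2 = classify(Xtest, Xtrain, ytrain, 'linear');
testError = mean(yhat ~= ytest);                         % fraction of misclassified digits
% Note: with raw MNIST pixels the pooled covariance can be singular (constant border pixels),
% so reducing the dimensionality first, as suggested in the other answers, may be necessary.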
For your specific code, do not use inv. Instead use eig(SB,Sw,'chol') to find V and D. There is only one usable eigenvector in V because the rank of SB is 1.
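A sketch of that eigen-decomposition, assuming SB and Sw (between- and within-class scatter matrices) have already been computed from the flattened training data and Sw is positive definite (add a small ridge to Sw if it is not):
% Generalized eigenproblem SB*v = lambda*Sw*v, solved without forming inv(Sw).
[V, D] = eig(SB, Sw, 'chol');
[~, order] = sort(diag(D), 'descend');    % largest generalized eigenvalues first
K = 10;                                   % number of digit classes (assumption)
W = V(:, order(1:K-1));                   % at most K-1 informative directions for K classes
Z = Xtrain * W;                           % rows of Xtrain projected onto those directions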
8 Comments
Ilya
2012-10-6
Edited: Ilya
2012-10-6
Quoting what I wrote earlier:
ClassificationDiscriminant.fit finds K*(K-1)/2 linear decision boundaries between K classes. Your code suggests that you want to find K-1 directions for the optimal variable transformation by LDA for K classes.
There is a difference between LDA for classification and LDA for supervised variable transformation.
After you have found Sw and the mean vector of every class, you can compute the squared Mahalanobis distance from an observation x to the mean of every class. For example, if mu1 is the column vector holding the mean of class 1 and x is a column vector of predictors, then
(x-mu1)'*pinv(Sw)*(x-mu1)
is the Mahalanobis distance squared between x and mu1. The smaller the distance, the more likely this observation is from this class.
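A sketch of that nearest-class-mean rule applied to every test row at once, assuming mu is a 784-by-K matrix whose columns are the per-class mean vectors and Xtest is the flattened test matrix (these names are assumptions):
SwInv = pinv(Sw);                              % pseudo-inverse guards against rank deficiency
nTest = size(Xtest, 1);
K     = size(mu, 2);
d2    = zeros(nTest, K);
for k = 1:K
    diff    = bsxfun(@minus, Xtest', mu(:,k)); % 784-by-nTest
    d2(:,k) = sum(diff .* (SwInv * diff), 1)'; % (x-mu)'*pinv(Sw)*(x-mu) for every x
end
[~, predictedClass] = min(d2, [], 2);          % index of the nearest class mean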
You will have to figure out the rest by yourself. Read a book on LDA.
More Answers (2)
Greg Heath
2012-10-6
Edited: Greg Heath
2012-10-6
It may help to forget LDA for a while and directly create a linear classifier using the slash operator. For example, since your images are of the 10 digits 0:9, your target matrix should contain columns of the 10-dimensional unit matrix eye(10), where the row index of the 1 indicates the correct class index.
I doubt if you need all of the pixels in a 28x28 matrix. Therefore, I suggest averaging pixels to get a much smaller number I = nrows*ncolumns < 28*28.
Next, use the colon operator (:) to convert the matrices to column vectors. For each of the 10 classes choose a number of noisy training samples with Ntrni >> I for i = 1:10.
Form the input and target matrices with dimensions
[ I N ] = size(input)
[ O N ] = size(target)
where O = 10 and N = sum(Ni) >> 10*I (e.g., N ~ 100*I).
The linear model is
y = W * [ ones(1,N) ; input ];
where the row of ones yields the bias weights. The weight matrix is obtained from the slash LMSE (least-mean-square-error) solution
W = target/[ ones(1,N) ; input ];
Class assignments are obtained from
class = vec2ind(y);
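Putting those steps together in one hedged sketch (the 2-by-2 block averaging, the variable names images/labels, and the use of ind2vec from the same toolbox as vec2ind are my illustrative assumptions):
% images: 28-by-28-by-N array of digits, labels: N-by-1 vector of digits 0:9 (assumed names).
N = size(images, 3);
I = 14*14;                                   % reduced image dimension
input = zeros(I, N);
for n = 1:N
    img   = double(images(:,:,n));
    small = (img(1:2:end,1:2:end) + img(2:2:end,1:2:end) + ...
             img(1:2:end,2:2:end) + img(2:2:end,2:2:end)) / 4;   % 14-by-14 block average
    input(:, n) = small(:);                  % colon operator unfolds the image
end
% Target matrix: columns of eye(10); the row index of the 1 marks the class.
target = full(ind2vec(labels(:)' + 1));      % digits 0:9 -> class indices 1:10
% Slash (least-squares) solution for the linear model with a bias row.
W = target / [ones(1, N); input];
y = W * [ones(1, N); input];
class = vec2ind(y);                          % predicted class index 1:10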
I have always found this to be superior to LDA.
However, if for some reason you must use LDA, this provides an excellent model for comparisons.
0 Comments
Greg Heath
2012-10-6
Edited: Greg Heath
2012-10-6
You use the term HARDwritten. Do you mean HANDwritten?
There are only 10 digits 0:9. Therefore, there are only 10 classes.
The numbers 60,000 and 10,000 are the total numbers of samples; each sample belongs to one of the 10 classes. You don't need anywhere near that many to train and test a good classifier.
As I mentioned in my previous answer, I don't believe you need 28*28 = 784 dimensions to discriminate between c = 10 classes. Use averaging or a low-pass filter to reduce the image sizes to I = nrows*ncolumns. Then use the colon operator (:) to unfold each image into an I-dimensional column vector.
With LDA you project the I dimensional vectors into a c-1 = 9 dimensional space defined by the dominant eigenvectors of (Sw\Sb). It's been ~ 30 years since I've done this and I don't remember the details. However, once you get these 9 dimensional projections you can imagine the 10 class mean projections in 9-space and check the references on how to make the classifications.
Since I don't remember the details and if I was in a hurry, I would just assign the vector to the closest class mean projection.
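A hedged sketch of that closest-mean-projection rule, reusing the I-by-9 eigenvector matrix W of Sw\Sb from the accepted answer and the row-per-observation layout Xtrain/Xtest with label vector ytrain (all assumed names):
Ztrain = Xtrain * W;                  % project training vectors into 9-space
Ztest  = Xtest  * W;                  % project test vectors the same way
classes = unique(ytrain);
muZ = zeros(numel(classes), size(W, 2));
for k = 1:numel(classes)
    muZ(k, :) = mean(Ztrain(ytrain == classes(k), :), 1);   % class mean projections
end
nTest = size(Ztest, 1);
pred  = zeros(nTest, 1);
for n = 1:nTest
    [~, k]  = min(sum(bsxfun(@minus, muZ, Ztest(n, :)).^2, 2));  % closest mean (Euclidean)
    pred(n) = classes(k);
end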
Hope this helps.
Greg