PCA and Fisher Linear Discriminant

3 views (last 30 days)
fa
fa 2011-4-8
Answered: TED MOSBY 2025-5-14
I am a new learner of the PCA and Fisher Linear Discriminant algorithms. If anybody knows about these two algorithms, please email me; I have some questions.
My email is: fa_bnc@yahoo.com. Thanks a lot.

Answer (1)

TED MOSBY
TED MOSBY 2025-5-14
PCA:
Suppose you have n samples, each with d features, and you want a smaller set of k features (k < d) that preserves as much total variance as possible without using class labels.
Algorithm:
  1. Subtract the average of each feature so all features start from a common origin.
  2. Build a “covariance matrix” that tells you, for every pair of features, whether they rise and fall together or move in opposite ways.
  3. Run an eigen‑decomposition on that matrix. It returns a ranked list of directions (called principal components) ordered by how much overall variation they capture.
  4. Keep only the top k directions you care about.
  5. Project the centered data onto those top directions. The result is a compact version of your data with far fewer columns that still holds most of its original information.
Bottom-line: PCA is an unsupervised compression tool. It ignores any class labels you might have and focuses solely on where the data varies the most.
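The five steps above can be sketched in a few lines of NumPy. This is a minimal illustration, not production code; the function name `pca` and the toy data are my own choices, not from the original answer.

```python
import numpy as np

def pca(X, k):
    """Project n-by-d data X onto its top-k principal components."""
    # 1. Center: subtract each feature's mean so all features share an origin.
    Xc = X - X.mean(axis=0)
    # 2. Covariance matrix (d x d): how each pair of features co-varies.
    C = np.cov(Xc, rowvar=False)
    # 3. Eigen-decomposition; eigh returns eigenvalues in ascending order,
    #    so sort descending to rank directions by captured variance.
    vals, vecs = np.linalg.eigh(C)
    order = np.argsort(vals)[::-1]
    # 4. Keep only the top-k directions (principal components).
    W = vecs[:, order[:k]]
    # 5. Project the centered data onto those directions.
    return Xc @ W

# Toy usage: compress 5 features down to 2.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
Z = pca(X, 2)
print(Z.shape)  # (100, 2)
```

Note that no labels appear anywhere in the function, which is exactly the "unsupervised" point made above.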
Fisher Linear Discriminant / Linear Discriminant Analysis:
LDA is supervised. It purposely uses the labels to find axes that best separate the classes, not the axes that merely carry the most variance.
Algorithm:
  1. Group your data by class and compute a mean for each group.
  2. For every class, look at how spread‑out its points are around its own mean. Combine these into an overall within‑class scatter measure.
  3. Compare each class mean to the grand mean of all data. Combine these into a between‑class scatter measure.
  4. Solve a generalised eigenproblem that balances the two scatter measures. The best directions are the ones where the ratio “between‑class / within‑class” is maximised.
  5. Choose up to C − 1 of those directions (where C is the number of classes).
  6. In that space, points from different classes are as far apart as possible relative to their own cluster size, which is ideal for a simple linear classifier or for visual inspection.
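The steps above can likewise be sketched with NumPy. Again a minimal illustration under my own naming (`fisher_lda`); the generalized eigenproblem in step 4 is solved here via eig(pinv(Sw) @ Sb), one common textbook route.

```python
import numpy as np

def fisher_lda(X, y, k):
    """Project n-by-d data X onto k discriminant directions (k <= C - 1)."""
    classes = np.unique(y)
    d = X.shape[1]
    grand_mean = X.mean(axis=0)
    Sw = np.zeros((d, d))  # within-class scatter: spread around each class mean
    Sb = np.zeros((d, d))  # between-class scatter: class means vs. grand mean
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - grand_mean).reshape(-1, 1)
        Sb += Xc.shape[0] * (diff @ diff.T)
    # Solve Sb w = lambda Sw w by eigen-decomposing pinv(Sw) @ Sb;
    # largest eigenvalues give the best between/within ratio.
    vals, vecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(vals.real)[::-1]
    W = vecs[:, order[:k]].real
    return X @ W

# Toy usage: two well-separated 3-D Gaussian classes, projected to one axis.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (50, 3)),
               rng.normal(5.0, 1.0, (50, 3))])
y = np.array([0] * 50 + [1] * 50)
Z = fisher_lda(X, y, 1)
```

With C = 2 classes, Sb has rank 1, so only a single useful direction exists, matching the "up to C − 1" rule in step 5.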
Hope this helps!
