|Reduce dimensionality using Principal Component Analysis (PCA) in Live Editor|
|Univariate feature ranking for classification using chi-square tests|
|Rank features for classification using minimum redundancy maximum relevance (MRMR) algorithm|
|Feature selection using neighborhood component analysis for classification|
|Univariate feature ranking for regression using F-tests|
|Rank features for regression using minimum redundancy maximum relevance (MRMR) algorithm|
|Feature selection using neighborhood component analysis for regression|
|Rank features for unsupervised learning using Laplacian scores|
|Compute partial dependence|
|Create partial dependence plot (PDP) and individual conditional expectation (ICE) plots|
|Predictor importance estimates by permutation of out-of-bag predictor observations for random forest of classification trees|
|Predictor importance estimates by permutation of out-of-bag predictor observations for random forest of regression trees|
|Estimates of predictor importance for classification tree|
|Estimates of predictor importance for classification ensemble of decision trees|
|Estimates of predictor importance for regression tree|
|Estimates of predictor importance for regression ensemble|
|Rank importance of predictors using ReliefF or RReliefF algorithm|
|Sequential feature selection using custom criterion|
|Perform stepwise regression|
|Create generalized linear regression model by stepwise regression|
|Feature extraction by using reconstruction ICA|
|Feature extraction by using sparse filtering|
|Transform predictors into extracted features|
|Classical multidimensional scaling|
|Mahalanobis distance to reference samples|
|Nonclassical multidimensional scaling|
|Format distance matrix|
|Feature selection for classification using neighborhood component analysis (NCA)|
|Feature selection for regression using neighborhood component analysis (NCA)|
|Feature extraction by reconstruction ICA|
|Feature extraction by sparse filtering|
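The univariate chi-square ranking listed in the table above can be sketched outside MATLAB as well. The following is an illustrative Python analogue using scipy (the helper name `chi2_feature_ranking` and the binning scheme are assumptions for this sketch, not the toolbox's algorithm):

```python
import numpy as np
from scipy.stats import chi2_contingency

def chi2_feature_ranking(X, y, n_bins=3):
    """Rank features for classification by chi-square test p-values.

    Each continuous feature is discretized into equal-frequency bins,
    then tested for independence against the class labels; a smaller
    p-value means a more relevant feature.
    """
    pvals = []
    for j in range(X.shape[1]):
        # Equal-frequency bin edges for feature j.
        edges = np.quantile(X[:, j], np.linspace(0, 1, n_bins + 1))
        binned = np.clip(np.searchsorted(edges, X[:, j], side="right") - 1,
                         0, n_bins - 1)
        # Contingency table: bins x classes.
        classes = np.unique(y)
        table = np.array([[np.sum((binned == b) & (y == c)) for c in classes]
                          for b in range(n_bins)])
        table = table[table.sum(axis=1) > 0]  # drop empty bins
        _, p, _, _ = chi2_contingency(table)
        pvals.append(p)
    return np.argsort(pvals)  # feature indices, most relevant first

# Toy data: feature 0 separates the classes, feature 1 is pure noise.
rng = np.random.default_rng(0)
y = np.repeat([0, 1], 50)
X = np.column_stack([y + 0.1 * rng.standard_normal(100),
                     rng.standard_normal(100)])
print(chi2_feature_ranking(X, y))  # ranks feature 0 first
```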
- Introduction to Feature Selection
Learn about feature selection algorithms and explore the functions available for feature selection.
- Sequential Feature Selection
This topic introduces sequential feature selection and provides an example that selects features sequentially using a custom criterion and the sequentialfs function.
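The idea of sequential selection with a user-supplied criterion can be sketched as a short greedy loop. This is an illustrative Python sketch, not the toolbox function; the names `forward_select` and `lstsq_mse` are assumptions:

```python
import numpy as np

def forward_select(X, y, criterion, n_features):
    """Greedy sequential forward selection.

    criterion(X_subset, y) returns a loss; at each step the feature
    whose addition lowers the loss most is appended to the selection.
    """
    selected = []
    remaining = list(range(X.shape[1]))
    while len(selected) < n_features:
        losses = [(criterion(X[:, selected + [j]], y), j) for j in remaining]
        _, best_j = min(losses)
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

def lstsq_mse(Xs, y):
    """Custom criterion: mean squared error of a least-squares fit."""
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    return np.mean((y - Xs @ beta) ** 2)

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
y = 2.0 * X[:, 3] - 1.0 * X[:, 1] + 0.1 * rng.standard_normal(200)
print(forward_select(X, y, lstsq_mse, 2))  # picks the informative columns
```

Any callable with the same `(X_subset, y) -> loss` shape can be swapped in as the criterion, which is what makes the sequential approach flexible.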
- Neighborhood Component Analysis (NCA) Feature Selection
Neighborhood component analysis (NCA) is a non-parametric method for selecting features with the goal of maximizing prediction accuracy of regression and classification algorithms.
- Regularize Discriminant Analysis Classifier
Make a more robust and simpler model by removing predictors without compromising the predictive power of the model.
- Select Predictors for Random Forests
Select split-predictors for random forests using interaction test algorithm.
- Feature Extraction
Feature extraction is a set of methods to extract high-level features from data.
- Feature Extraction Workflow
This example shows a complete workflow for feature extraction from image data.
- Extract Mixed Signals
This example shows how to use rica to disentangle mixed audio signals.
- t-SNE
t-SNE is a method for visualizing high-dimensional data by nonlinear reduction to two or three dimensions, while preserving some features of the original data.
- Visualize High-Dimensional Data Using t-SNE
This example shows how t-SNE creates a useful low-dimensional embedding of high-dimensional data.
- tsne Settings
This example shows the effects of various tsne settings.
- t-SNE Output Function
Output function description and example for t-SNE.
- Principal Component Analysis (PCA)
- Analyze Quality of Life in U.S. Cities Using PCA
Perform a weighted principal components analysis and interpret the results.
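The core of (unweighted) PCA can be written in a few lines of numpy via the SVD of the centered data. A minimal sketch, assuming the helper name `pca`; the toolbox's weighted variant adds observation weights on top of this:

```python
import numpy as np

def pca(X, n_components):
    """Principal component analysis via SVD of the centered data.

    Returns (scores, components, explained_variance).
    """
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = s**2 / (len(X) - 1)  # variance explained by each component
    return Xc @ Vt[:n_components].T, Vt[:n_components], var[:n_components]

rng = np.random.default_rng(0)
# Correlated 3-D data that is essentially 2-dimensional.
Z = rng.standard_normal((300, 2))
X = Z @ np.array([[1.0, 0.5, 0.0],
                  [0.0, 1.0, 2.0]])
scores, comps, var = pca(X, 2)
print(scores.shape, var)  # two components carry nearly all the variance
```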
- Factor Analysis
Factor analysis is a way to fit a model to multivariate data to estimate interdependence of measured variables on a smaller number of unobserved (latent) factors.
- Analyze Stock Prices Using Factor Analysis
Use factor analysis to investigate whether companies within the same sector experience similar week-to-week changes in stock prices.
- Perform Factor Analysis on Exam Grades
This example shows how to perform factor analysis using Statistics and Machine Learning Toolbox™.
- Nonnegative Matrix Factorization
Nonnegative matrix factorization (NMF) is a dimension-reduction technique based on a low-rank approximation of the feature space.
- Perform Nonnegative Matrix Factorization
Perform nonnegative matrix factorization using the multiplicative and alternating least-squares algorithms.
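The multiplicative-update algorithm mentioned above is short enough to sketch directly. An illustrative numpy version of the Lee–Seung updates (the function name `nmf` and the iteration count are assumptions of this sketch):

```python
import numpy as np

def nmf(V, k, n_iter=500, seed=0):
    """Nonnegative matrix factorization V ~= W @ H via Lee-Seung
    multiplicative updates, which keep W and H nonnegative throughout."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 1e-3
    H = rng.random((k, m)) + 1e-3
    eps = 1e-12  # guards against division by zero
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# A rank-2 nonnegative matrix is recovered almost exactly with k = 2.
rng = np.random.default_rng(1)
V = rng.random((20, 2)) @ rng.random((2, 30))
W, H = nmf(V, 2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(f"relative error: {err:.4f}")
```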
- Multidimensional Scaling
Multidimensional scaling allows you to visualize how near points are to each other for many kinds of distance or dissimilarity metrics and can produce a representation of data in a small number of dimensions.
- Classical Multidimensional Scaling
Use cmdscale to perform classical (metric) multidimensional scaling, also known as principal coordinates analysis.
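Classical MDS reduces to an eigendecomposition of the double-centered squared-distance matrix. An illustrative numpy sketch (the helper name `cmds` is an assumption; it mirrors the technique, not the toolbox code):

```python
import numpy as np

def cmds(D, k):
    """Classical (metric) multidimensional scaling.

    D is an n x n matrix of pairwise Euclidean distances; returns an
    n x k configuration whose distances reproduce D (up to rotation).
    """
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D**2) @ J             # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    order = np.argsort(w)[::-1]           # largest eigenvalues first
    w, V = w[order[:k]], V[:, order[:k]]
    return V * np.sqrt(np.maximum(w, 0))

# Distances between points in the plane are reproduced exactly.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
Y = cmds(D, 2)
D2 = np.linalg.norm(Y[:, None] - Y[None, :], axis=2)
print(np.allclose(D, D2))  # True
```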
- Classical Multidimensional Scaling Applied to Nonspatial Distances
This example shows how to perform classical multidimensional scaling using the cmdscale function in Statistics and Machine Learning Toolbox™.
- Nonclassical Multidimensional Scaling
This example shows how to visualize dissimilarity data using nonclassical forms of multidimensional scaling (MDS).
- Nonclassical and Nonmetric Multidimensional Scaling
Perform nonclassical multidimensional scaling using mdscale.
- Compare Handwritten Shapes Using Procrustes Analysis
Use Procrustes analysis to compare two handwritten numerals.
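Procrustes analysis of two shapes can be demonstrated with scipy's `scipy.spatial.procrustes`, which centers and scales both configurations and then finds the best orthogonal alignment. The square-with-a-peak "shapes" below are toy stand-ins for handwritten numerals:

```python
import numpy as np
from scipy.spatial import procrustes

# Two shapes: the second is a rotated, scaled, shifted copy of the
# first, so Procrustes alignment makes them coincide.
shape1 = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0],
                   [0.0, 1.0], [0.5, 1.5]])
theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
shape2 = 2.5 * shape1 @ R.T + np.array([3.0, -1.0])

mtx1, mtx2, disparity = procrustes(shape1, shape2)
print(f"disparity after alignment: {disparity:.6f}")  # ~0 for a perfect match
```

A disparity near zero indicates the two shapes differ only by translation, scaling, and rotation; genuinely different numerals would leave a larger residual.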