Video length is 4:44

Feature Extraction Using Diagnostic Feature Designer App

Melda Ulusoy, MathWorks

Use the Diagnostic Feature Designer app to extract time-domain and spectral features from your data to design predictive maintenance algorithms.

In this example, measurements have been collected from a triplex pump under different fault conditions. The app lets you import this data and interactively visualize it. You can group the measurements by different fault conditions. After the time-domain and spectral features are extracted from the data, you can evaluate the effectiveness of the extracted features using histograms. You can also rank them to determine numerically which features are likely to best discriminate between healthy and faulty behavior. Finally, the most effective features are exported to the Classification Learner app for further evaluation of feature effectiveness and for training machine learning models.

Published: 26 Feb 2019

Feature Extraction Using the Diagnostic Feature Designer App

In this video, we’re going to demonstrate how you can use the Diagnostic Feature Designer app to extract features for developing a predictive maintenance algorithm.
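
(As a side note, the app ships with the Predictive Maintenance Toolbox and can be opened from the MATLAB command line; everything that follows in this transcript is done interactively inside the app.)

    diagnosticFeatureDesigner   % opens the Diagnostic Feature Designer app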

We start by importing our data set into the app. The data has been collected from a triplex pump under different fault conditions. It is stored in an ensemble datastore, which is a specialized datastore for developing predictive maintenance algorithms. The ensemble datastore contains 1.2-second-long measurements of flow and pressure, along with the fault codes for each of these measurements. After we import the data set, it shows up in the data browser. To visualize the flow signal, we select it and click Signal Trace. This plots all the measurements with different fault conditions. We can now group the measurements by the fault codes by selecting this option. If we zoom in using the panner strip below, we can better see how the measurements are highlighted in different colors based on their fault types.

Next, we’ll extract time-domain and spectral features from this data. We go back to the Feature Designer tab, and under this menu, we select Signal Features to generate statistical features. We’ll first use the flow data and later extract features from the pressure signal. Here, we have commonly used time-domain features such as the mean, standard deviation, kurtosis, and skewness.

Now that we’ve computed the time-domain features, we’ll continue with extracting spectral features. The app can use the time-domain data to estimate the spectra of these signals, which can then be used to extract spectral features. We select Spectral Estimation and click Power Spectrum. Here, you can try out nonparametric or parametric methods to compute the spectrum and compare their results. We choose the autoregressive model with a model order of 20. Next, to compute spectral features, we click here. We select the frequency band so that it includes the first four peaks. The reason is that, due to noisy data at higher frequencies, it’s harder to distinguish the spectral peaks. Therefore, any features extracted from higher frequencies won’t contribute to the performance of machine learning models.
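
For readers who prefer the command line, a minimal sketch of computing comparable features for a single measurement is shown below. It assumes flow is a vector holding one 1.2-second measurement and fs is its sample rate (both placeholder names), and it uses base MATLAB, Statistics and Machine Learning Toolbox, and Signal Processing Toolbox functions rather than the app itself; the 250 Hz band edge is illustrative and not taken from the video.

    % Time-domain (signal) features for one measurement
    featMean     = mean(flow);
    featStd      = std(flow);
    featKurtosis = kurtosis(flow);    % Statistics and Machine Learning Toolbox
    featSkewness = skewness(flow);

    % Power spectrum from an autoregressive (Burg) model of order 20,
    % analogous to the AR spectral estimate chosen in the app
    order = 20;
    [pxx, f] = pburg(flow, order, [], fs);   % Signal Processing Toolbox

    % Spectral features from a low-frequency band covering the first few peaks
    band = f <= 250;                          % placeholder band edge in Hz
    [pkHeights, pkFreqs] = findpeaks(pxx(band), f(band), ...
        'SortStr', 'descend', 'NPeaks', 4);  % amplitudes and locations of the top peaks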

So far, we have extracted time-domain and spectral features from the flow data. You can repeat the same process with the pressure data and extract some additional features. Now, all the extracted features from the flow and pressure data are stored in FeatureTable1. After selecting this table, we can click on the Feature Table View, which shows all the computed feature values in tabular form. Different features are displayed in different columns. We can also use histograms, which show the distributions of the computed features. On these plots, different colors indicate different faults. Due to the overlapping distributions of different fault types and the high number of features, it’s hard to decide which features are the most separable and distinctive.

The app lets us rank all the features to identify the ones that effectively separate different types of faults. On the Feature Designer tab, when we click Rank Features, the app uses one-way ANOVA to calculate ranking scores for all the features. The results of the ANOVA test are displayed on the right-hand side, whereas the bars on the left show the normalized scores for the different features. We can view the feature names by hovering over the bars. Features with higher scores are good candidates for training a machine learning model. For further evaluation of the extracted features, we can now export them to the Classification Learner app, where we can train machine learning models for fault classification.
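
A comparable one-way ANOVA ranking can also be sketched outside the app. The sketch below assumes featureTable is a MATLAB table with one row per measurement and one extracted feature per column, and faultCode is the corresponding vector of fault labels (both placeholder names); the -log10(p) score is only an illustrative transform, and the app computes and normalizes its own scores internally, so the numbers may differ.

    % Rank each feature by a one-way ANOVA test against the fault labels
    nFeatures = width(featureTable);
    scores = zeros(1, nFeatures);
    for k = 1:nFeatures
        p = anova1(featureTable{:, k}, faultCode, 'off');  % p-value, no figure
        scores(k) = -log10(p);                             % smaller p -> larger score
    end

    % List the top-ranked features
    [~, idx] = sort(scores, 'descend');
    disp(featureTable.Properties.VariableNames(idx(1:min(5, nFeatures))))

    % The selected features and labels can then be explored further in the
    % Classification Learner app:
    % classificationLearner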