CREPE deep pitch estimation neural network
Audio Toolbox / Deep Learning
The CREPE block uses a pretrained convolutional neural network (CNN) to estimate pitch from an audio signal. This block requires Deep Learning Toolbox™.
Estimate Pitch Using CREPE Blocks
This example shows how to use the CREPE blocks to perform preprocessing, network inference, and postprocessing to obtain pitch estimates from an audio signal. See Estimate Pitch Using Deep Pitch Estimator Block for an example that uses the Deep Pitch Estimator block to perform the same task.
Adjust the parameters of the blocks to speed up computation and see the pitch estimates in real time as the audio plays.
Set the Overlap percentage (%) parameter of the CREPE Preprocess block to 50. With a lower overlap percentage, the system processes frames less frequently.
Set the Number of output frames parameter of the CREPE Preprocess block to 5. This causes the CREPE Preprocess block to buffer audio frames and pass them to the CREPE block in batches. Passing batches to the CREPE block improves computational efficiency by allowing it to process multiple frames in parallel. However, it also increases latency because the system outputs pitch estimates in batches instead of one at a time.
Set the Model capacity parameter of the CREPE block to Large. This model has fewer parameters than the full-size model, leading to faster computation at the cost of slightly lower accuracy.
Run the model to listen to a singing voice and view the estimated pitch in real time.
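The block-based workflow above can also be sketched at the MATLAB command line using the Audio Toolbox CREPE functions. The file name below is hypothetical, and the exact name-value defaults are assumptions based on the shipping crepePreprocess and pitchnn functions:

```matlab
% Command-line sketch of the same pipeline (assumes Audio Toolbox with
% crepePreprocess and pitchnn available, R2021a or later syntax).
[audioIn,fs] = audioread("singingVoice.wav");   % hypothetical audio file

% Preprocess: resample, buffer into overlapping frames, and normalize,
% mirroring the CREPE Preprocess block with Overlap percentage = 50.
frames = crepePreprocess(audioIn,fs,OverlapPercentage=50);

% Estimate pitch. pitchnn wraps preprocessing, network inference, and
% postprocessing; ModelCapacity trades accuracy for speed, as with the
% block's Model capacity parameter.
f0 = pitchnn(audioIn,fs,ModelCapacity="large");
```

This sketch trades the real-time, sample-by-sample behavior of the Simulink model for a one-shot batch computation over the whole signal.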
Port_1 — Preprocessed audio input
vector | 4-D array
Preprocessed input to the network, specified as a 1024-by-1-by-1-by-N array, where N is the number of audio frames. If the input has only one frame, the block accepts a vector.
The CREPE Preprocess block takes in an audio signal and outputs the preprocessed frames.
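As a sketch of how that array is produced outside Simulink, the crepePreprocess function yields frames in the same 1024-by-1-by-1-by-N layout. The file name is hypothetical, and the default overlap value is an assumption:

```matlab
% Sketch: generate the preprocessed input the CREPE block expects.
[audioIn,fs] = audioread("speech.wav");   % hypothetical audio file

% crepePreprocess resamples and buffers the signal into overlapping
% 1024-sample frames (default overlap percentage assumed here).
frames = crepePreprocess(audioIn,fs);

size(frames)   % 1024-by-1-by-1-by-N, where N is the number of frames
```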
Model capacity — Size of trained neural network
Full (default) | Large | Medium | Small | Tiny
Model capacity, specified as Full, Large, Medium, Small, or Tiny. The smaller capacities correspond to fewer parameters in the model, leading to faster computation but lower accuracy.
Mini-batch size — Size of mini-batches
128 (default) | positive integer
Size of mini-batches to use for prediction, specified as a positive integer. Larger mini-batch sizes require more memory but can lead to faster predictions.
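As an illustration of the memory-versus-speed trade-off (not the block's internal code), mini-batching partitions the N preprocessed frames into groups that are predicted together; the frame count below is hypothetical:

```matlab
% Sketch: how a mini-batch size of 128 partitions N frames for prediction.
N = 300;                              % hypothetical number of audio frames
miniBatchSize = 128;
numBatches = ceil(N/miniBatchSize);   % 3 batches: 128, 128, and 44 frames
for k = 1:numBatches
    idx = (k-1)*miniBatchSize+1 : min(k*miniBatchSize,N);
    % batch = frames(:,:,:,idx);      % one mini-batch passed to the network
end
```

Larger mini-batches mean fewer network calls (faster overall) but each call must hold more frames in memory at once.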
 Kim, Jong Wook, Justin Salamon, Peter Li, and Juan Pablo Bello. “Crepe: A Convolutional Representation for Pitch Estimation.” In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 161–65. Calgary, AB: IEEE, 2018. https://doi.org/10.1109/ICASSP.2018.8461329.
C/C++ Code Generation
Generate C and C++ code using Simulink® Coder™.
Usage notes and limitations:
To generate generic C code that does not depend on third-party libraries, in the Configuration Parameters > Code Generation general category, set the Language parameter to C.
To generate C++ code, in the Configuration Parameters > Code Generation general category, set the Language parameter to C++. To specify the target library for code generation, in the Code Generation > Interface category, set the Target Library parameter. Setting this parameter to None generates generic C++ code that does not depend on third-party libraries.
For ERT-based targets, the Support: variable-size signals parameter in the Code Generation > Interface pane must be enabled.
For a list of networks and layers supported for code generation, see Networks and Layers Supported for Code Generation (MATLAB Coder).
Introduced in R2023a