- cwtLayer (Continuous Wavelet Transform layer) has no learnable parameters. It acts as a fixed, non-trainable feature-extraction layer: it computes the CWT of the input signal using the settings you specify (e.g., wavelet type, frequency limits). Those settings are hyperparameters, not trainable weights. The layer transforms a time-domain signal into a time-frequency (scalogram) representation, which is useful as a preprocessing step so that subsequent network layers can learn from time-frequency features.
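Because the layer is just a fixed transform, its behavior can be sketched in a few lines of NumPy. This is an illustrative stand-in, not MATLAB's implementation: `morlet_cwt`, the Morlet wavelet choice, and all parameter values below are assumptions made for the sketch.

```python
import numpy as np

def morlet_cwt(x, scales, w0=6.0):
    """Fixed (non-trainable) CWT: correlate x with scaled Morlet wavelets.

    `scales` and `w0` play the role of cwtLayer's wavelet/frequency
    hyperparameters; nothing here is a learnable weight.
    """
    coeffs = np.empty((len(scales), len(x)), dtype=complex)
    for i, a in enumerate(scales):
        t = np.arange(-4 * a, 4 * a + 1)        # ~8 standard deviations wide
        psi = np.exp(1j * w0 * t / a - 0.5 * (t / a) ** 2) / np.sqrt(a)
        # cross-correlation = convolution with the time-reversed conjugate
        coeffs[i] = np.convolve(x, np.conj(psi[::-1]), mode="same")
    return coeffs

fs = 200                          # hypothetical sampling rate, Hz
t = np.arange(fs) / fs            # 1 s of samples
x = np.sin(2 * np.pi * 10 * t)    # 10 Hz test tone
scales = np.arange(1, 25)         # keep every wavelet shorter than the signal
scalogram = np.abs(morlet_cwt(x, scales))
print(scalogram.shape)            # → (24, 200): scales x time
```

A trainable network would then consume `scalogram` the same way it consumes an image: the CWT output changes only if you change the hyperparameters, never during training.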
- cwtLayer: Converts the time-domain input to a time-frequency representation.
- Neural network layers (e.g., convolution, fully connected): Learn patterns in the time-frequency data.
- icwtLayer: Can be used at the end to reconstruct the time-domain signal from the modified time-frequency representation (e.g., for denoising or signal synthesis).
- Typical workflow: Input (time series) → cwtLayer → [Deep Network] → icwtLayer → Output (time series).
- This is especially useful for sequence-to-sequence tasks, such as denoising or forecasting.
- The output of cwtLayer (the scalogram, carried as "SCBT"-formatted data: spatial/scale, channel, batch, time) is not itself trainable. Learnable parameters come from subsequent layers (e.g., convolutional layers) that operate on the CWT output; the cwtLayer itself is a fixed transformation, not a learnable mapping.
- The icwtLayer takes the time-frequency representation (scalogram) and applies the inverse CWT to reconstruct the time-domain signal. In a network, place the icwtLayer after all layers that process the time-frequency data. Its output is the reconstructed time-domain signal, which you can use for further analysis or as the network's final prediction.
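The full cwtLayer → network → icwtLayer round trip can be sketched in plain NumPy under the same stated assumptions (Morlet wavelet, made-up parameter values). The threshold mask below is a hypothetical stand-in for whatever a trained network would do to the coefficients, and the inverse uses the approximate single-integral reconstruction, which is correct only up to a wavelet-dependent constant.

```python
import numpy as np

def cwt(x, scales, w0=6.0):
    # forward Morlet CWT (fixed transform, like cwtLayer)
    out = np.empty((len(scales), len(x)), dtype=complex)
    for i, a in enumerate(scales):
        t = np.arange(-4 * a, 4 * a + 1)
        psi = np.exp(1j * w0 * t / a - 0.5 * (t / a) ** 2) / np.sqrt(a)
        out[i] = np.convolve(x, np.conj(psi[::-1]), mode="same")
    return out

def icwt(W, scales):
    # approximate inverse: sum Re(W)/sqrt(a) over scales, up to a
    # wavelet-dependent admissibility constant
    return np.sum(W.real / np.sqrt(scales)[:, None], axis=0)

fs = 200
t = np.arange(fs) / fs
clean = np.sin(2 * np.pi * 10 * t)              # 10 Hz tone
rng = np.random.default_rng(0)
noisy = clean + 0.5 * rng.standard_normal(fs)   # add noise to denoise away

scales = np.arange(1, 25)
W = cwt(noisy, scales)                     # cwtLayer: time -> time-frequency
mask = np.abs(W) > 0.5 * np.abs(W).max()   # stand-in for the deep network:
W_den = W * mask                           # keep only strong coefficients
xhat = icwt(W_den, scales)                 # icwtLayer: back to time domain
xhat *= clean.std() / xhat.std()           # rescale (the true constant comes
                                           # from the wavelet's admissibility;
                                           # using clean here is a shortcut)
```

Away from the signal edges, `xhat` tracks the clean 10 Hz tone closely, illustrating the denoising use case from the workflow above.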