trainAutoencoder, how does ScaleData work?
I'm building an anomaly detection model using an autoencoder. I first use the "trainAutoencoder" function with my normal training data, and then the "predict" function on a test set containing both normal and anomalous data. Then I plot the reconstruction error.
When I set the 'ScaleData' parameter to true in the "trainAutoencoder" call, the anomaly separation is fine. However, when I scale my data myself before calling "trainAutoencoder" and set 'ScaleData' to false, the anomaly separation is worse. I use the "mapminmax" function to scale the data to the range [0,1], which is the same range "trainAutoencoder" uses.
What else does "ScaleData" do to the data?
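For reference, a minimal sketch of the workflow described above (variable names such as XTrain, XTest, and hiddenSize are illustrative, not from the original post; each column is one observation):

% Built-in scaling: let trainAutoencoder handle it
hiddenSize = 10;                                   % assumed network size
autoenc = trainAutoencoder(XTrain, hiddenSize, ...
    'ScaleData', true);
XRec = predict(autoenc, XTest);                    % reconstructions
reconErr = sum((XTest - XRec).^2, 1);              % per-sample reconstruction error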
Answers (1)
Avadhoot
2024-4-10
I understand that you are trying to replicate the functionality of the 'ScaleData' argument of "trainAutoencoder" using the "mapminmax" function. The difference in the results arises because "mapminmax" maps the input data to a fixed range (by default [-1,1]), whereas the 'ScaleData' option in "trainAutoencoder" rescales the data to the range of the decoder's output transfer function. That same scaling is then applied consistently to both the training and the test data by "predict". Hence, the performance is better with 'ScaleData' than with "mapminmax" alone.
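If you do scale the data yourself with 'ScaleData' set to false, the key point is to reuse the mapping fitted on the training data for the test data. A minimal, hypothetical sketch (assuming the default decoder transfer function, for which a [0,1] range is appropriate; XTrain, XTest, and hiddenSize are placeholders):

[XTrainScaled, PS] = mapminmax(XTrain, 0, 1);       % fit the mapping on training data only
XTestScaled = mapminmax('apply', XTest, PS);        % reuse the same mapping on the test data
autoenc = trainAutoencoder(XTrainScaled, hiddenSize, ...
    'ScaleData', false);
XRec = predict(autoenc, XTestScaled);               % output stays in the scaled space
reconErr = sum((XTestScaled - XRec).^2, 1);

Re-fitting "mapminmax" separately on the test data would give it a different mapping than the training data, which can degrade the anomaly separation.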
For more information on 'ScaleData', refer to the documentation below:
https://www.mathworks.com/help/deeplearning/ref/trainautoencoder.html?s_tid=doc_ta#buxdjrt-ScaleData
I hope it helps.