How can I train only the replacement layers in transfer learning rather than re-training the whole network?

This page gives an example of how to do transfer learning: https://uk.mathworks.com/help/deeplearning/examples/transfer-learning-using-alexnet.html. The problem is that it re-trains the whole network, which takes a lot of time. If I could train only the replacement layers and keep the transferred layers fixed, it would cost much less.

layers = [
    layersTransfer
    fullyConnectedLayer(numClasses,'WeightLearnRateFactor',20,'BiasLearnRateFactor',20)
    softmaxLayer
    classificationLayer];

Here layersTransfer contains the transferred layers, whose parameters have already been trained. If I could keep those parameters fixed and train only the 3 new layers, it would save a lot of time. How can I do that? I also know that the activations function can extract features from any layer of a CNN, so perhaps I could use the features extracted from layersTransfer to train the 3 new layers instead. How can I realize either approach?
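Here is a rough sketch of the two ideas I have in mind. I am not sure it is correct; it assumes alexnet, numClasses, and the datastores augimdsTrain / augimdsTest / imdsTrain / imdsTest from the example are already defined, and the second idea also needs fitcecoc from the Statistics and Machine Learning Toolbox.

% Idea 1: freeze the transferred layers by setting their learn rate factors
% to zero, so that trainNetwork only updates the 3 new layers.
net = alexnet;
layersTransfer = net.Layers(1:end-3);
for i = 1:numel(layersTransfer)
    if isprop(layersTransfer(i),'WeightLearnRateFactor')
        layersTransfer(i).WeightLearnRateFactor = 0;
    end
    if isprop(layersTransfer(i),'BiasLearnRateFactor')
        layersTransfer(i).BiasLearnRateFactor = 0;
    end
end
layers = [
    layersTransfer
    fullyConnectedLayer(numClasses,'WeightLearnRateFactor',20,'BiasLearnRateFactor',20)
    softmaxLayer
    classificationLayer];
% then call trainNetwork with these layers, exactly as in the example

% Idea 2: do not re-train any CNN layer at all; use activations to extract
% features from a deep transferred layer and train a simple classifier on them.
featureLayer = 'fc7';   % assumed choice; any layer in layersTransfer could be used
trainingFeatures = activations(net, augimdsTrain, featureLayer, 'OutputAs','rows');
trainingLabels   = imdsTrain.Labels;
classifier = fitcecoc(trainingFeatures, trainingLabels);   % multiclass SVM on the features

testFeatures    = activations(net, augimdsTest, featureLayer, 'OutputAs','rows');
predictedLabels = predict(classifier, testFeatures);
accuracy = mean(predictedLabels == imdsTest.Labels)

Would either of these work, and which one is the recommended way?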
Thank you for your answer!

Answers (0)

