Pocket Guide

Transfer Learning with Pretrained Networks

Why Transfer Learning?

Instead of designing and training an architecture from scratch, you can perform transfer learning. That is, you can modify and retrain a pretrained network. Retraining the network is much faster and requires much less data than starting fresh.

How Does Transfer Learning Work?

The early layers of a network detect primitive features such as edges and textures. In the later layers, the network combines these into more complex features, and ultimately into final patterns that can be labeled. With transfer learning, you can take advantage of a pretrained network's ability to recognize primitive features and replace only the last few layers that perform the classification.
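You can see this early-to-late progression by listing a pretrained network's layers. A minimal sketch, assuming the Deep Learning Toolbox Model for GoogLeNet Network support package is installed:

```matlab
% Inspect a pretrained network's layers.
% Assumes the GoogLeNet support package is installed.
net = googlenet;

disp(net.Layers(1:4))        % early layers: input and low-level convolutions
disp(net.Layers(end-2:end))  % final layers that perform the classification
```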


Interactive Transfer Learning

You can replace the final layers of a pretrained network interactively in the Deep Network Designer app. For example, only two layers at the end of GoogLeNet (the final fully connected layer and the classification output layer) need to be replaced for transfer learning.
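The same replacement can also be done programmatically. A sketch, assuming the GoogLeNet support package is installed; `'loss3-classifier'` and `'output'` are the names of GoogLeNet's final fully connected and classification layers, and `numClasses` is a placeholder for your task:

```matlab
% Replace GoogLeNet's final layers for a new classification task.
% Assumes the Deep Learning Toolbox Model for GoogLeNet Network
% support package is installed; numClasses is task-specific.
net = googlenet;
lgraph = layerGraph(net);

numClasses = 5;  % placeholder: number of new classes

% New learnable layer sized to the task; boost its learning rate
% so the fresh weights adapt faster than the pretrained ones.
newFC = fullyConnectedLayer(numClasses, ...
    'Name','new_fc', ...
    'WeightLearnRateFactor',10, ...
    'BiasLearnRateFactor',10);
lgraph = replaceLayer(lgraph,'loss3-classifier',newFC);

% New output layer; class labels are inferred during training.
newOutput = classificationLayer('Name','new_output');
lgraph = replaceLayer(lgraph,'output',newOutput);
```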

Deep Network Designer App

Example: Classifying Hand Motions

In this example, GoogLeNet is retrained to recognize high-five motions in three-axis accelerometer data. The MATLAB Support Package for Arduino Hardware reads accelerometer data from the MPU-9250 through the Arduino. Each signal is converted to a color image using a scalogram representation, the final GoogLeNet layers are replaced, and the network is retrained on the new data.
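The scalogram and retraining steps can be sketched as follows. This is a minimal sketch, not the example's exact code: it assumes Wavelet Toolbox for the continuous wavelet transform, and `sig`, `fs`, and `imdsTrain` are placeholders for a signal vector, its sample rate, and an `imageDatastore` of labeled scalogram images; `lgraph` is the modified GoogLeNet layer graph with its final layers replaced.

```matlab
% Convert one accelerometer channel to a scalogram image sized
% for GoogLeNet's 224x224x3 input. Assumes Wavelet Toolbox;
% sig and fs are placeholders for the signal and sample rate.
fb  = cwtfilterbank('SignalLength',numel(sig), ...
                    'SamplingFrequency',fs);
cfs = abs(wt(fb,sig));                           % wavelet coefficients
im  = ind2rgb(im2uint8(rescale(cfs)),jet(128));  % map magnitudes to RGB
im  = imresize(im,[224 224]);

% Retrain the modified network on the scalogram images.
% imdsTrain is a placeholder datastore of labeled scalograms;
% lgraph is the layer graph with its final layers replaced.
opts = trainingOptions('sgdm', ...
    'InitialLearnRate',1e-4, ...
    'MaxEpochs',6, ...
    'MiniBatchSize',16);
trainedNet = trainNetwork(imdsTrain,lgraph,opts);
```

A low initial learning rate keeps the pretrained weights largely intact while the replaced layers adapt to the new classes.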