GPU vs. CPU training time

I am training a CNN composed of four convolution/max-pooling layers with about 100 kernels in each convolutional layer. The training data are .mat files, each containing a 22x1000 matrix. I created an imageDatastore as a container for the training data, with a custom ReadFcn to read the .mat files.
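A minimal sketch of that setup, assuming the files live in a folder named eegTrials and each .mat file stores its 22x1000 matrix in a variable named trial (both names are placeholders):

% Datastore over .mat files, each holding one 22x1000 EEG trial.
imds = imageDatastore('eegTrials', ...
    'FileExtensions', '.mat', ...
    'ReadFcn', @readEEGTrial);

function data = readEEGTrial(filename)
% Assumes each file stores the matrix in a variable named 'trial' (placeholder).
s = load(filename);
data = s.trial;   % 22x1000, one observation
end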
GPU: GTX 1660 Super
CPU: i7-9700
MATLAB version: R2019b
Hard drive: NVMe M.2 SSD
Mini-batch size: 400
One epoch trains only about twice as fast on the GPU as on the CPU. I expected the GPU to be much faster than the CPU, but that is not what I am seeing. Any suggestions?
My raw data are EEG signals. Each EEG trial is a 22x1000 matrix, where 22 is the number of channels and 1000 is the number of time points per channel. Any suggestions for storing the data differently, or for creating a different datastore, to achieve better GPU utilization?

3 Comments

Have you tried using the DispatchInBackground training option to load your data in the background?
Awesome! This really reduced the training time on the GPU; the speedup is much better now.
I am now using imageInputLayer([22 1000 1]) as the input layer of my CNN. Is this correct for my case?
If the 22 time signals are different channels of each observation, then your input data should be 1000-by-1-by-22 for a temporal convolutional network using 2-D convolution.
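For reference, a sketch of that layout, assuming the ReadFcn permutes each 22x1000 trial into 1000-by-1-by-22 and using a placeholder filter width of 5 time points:

% Inside the datastore's ReadFcn: move channels to the third dimension,
% turning 22x1000 (channels x time) into 1000x1x22 (time x 1 x channels).
data = permute(s.trial, [2 3 1]);

% Matching input layer and a temporal 2-D convolution over the time axis.
layers = [
    imageInputLayer([1000 1 22])
    convolution2dLayer([5 1], 100, 'Padding', 'same')  % 100 kernels, width 5 (placeholder)
    reluLayer
    maxPooling2dLayer([2 1], 'Stride', [2 1])
    ];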


Accepted Answer

Joss Knight 2020-12-11

Use the DispatchInBackground training option to improve throughput when your data access and preprocessing are costly.
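A minimal sketch of how that option slots into trainingOptions; the solver 'adam' is an assumption, the other values are carried over from the question, and DispatchInBackground requires Parallel Computing Toolbox:

% Prefetch and preprocess mini-batches on background workers
% while the GPU trains on the current batch.
options = trainingOptions('adam', ...
    'MiniBatchSize', 400, ...
    'ExecutionEnvironment', 'gpu', ...
    'DispatchInBackground', true);

% imds and layers as sketched above; imds must carry labels for classification.
net = trainNetwork(imds, layers, options);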
