Performance of GPU & CPU during deep learning

Hello,
I need to know the roles of the GPU and the CPU during deep learning, i.e., what each of them does during training.
Also, is it possible to measure the overhead time required for transferring data between memory and the GPU?
Any help is appreciated!

Accepted Answer

David Willingham 2020-10-22
Edited: David Willingham 2020-10-22
For training, you can use either the CPU or the GPU. For certain deep learning problems that can take a long time to train on a CPU, such as image classification, training on the GPU is orders of magnitude faster. In general, if you have access to a reasonably powerful GPU, use it for training. The CPU can help when you need to scale up and run parallel jobs to utilize multiple cores on a machine that doesn't have access to multiple GPUs. An example is running an experiment to tune hyperparameters by training multiple networks at once in parallel using the Experiment Manager.
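To give a flavor of that, here is a minimal sketch of a CPU-parallel hyperparameter sweep (assuming Parallel Computing Toolbox is available; layers, XTrain, and YTrain are hypothetical placeholders for your own network and data):
% Train several networks in parallel on CPU workers to compare learning rates.
learnRates = [1e-2 1e-3 1e-4];
nets = cell(size(learnRates));
parfor k = 1:numel(learnRates)
    opts = trainingOptions('sgdm', ...
        'InitialLearnRate', learnRates(k), ...
        'ExecutionEnvironment', 'cpu', ...
        'Verbose', false);
    nets{k} = trainNetwork(XTrain, YTrain, layers, opts);
end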
For inference (calling a trained model), the CPU is generally sufficient. As an example, for image classification you can get ~30 predictions/second on a CPU. However, if you require very high frame rates or need to run a batch job, using a GPU will help you achieve faster results.
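For instance, a minimal sketch of running the same prediction on each device (net and imds are hypothetical, standing in for your trained network and an image datastore):
% Run inference on the CPU, then on the GPU, with the same trained network.
YPredCPU = classify(net, imds, 'ExecutionEnvironment', 'cpu');
YPredGPU = classify(net, imds, 'ExecutionEnvironment', 'gpu');  % needs a supported GPU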
On the overhead time of transferring data, here is a post which discusses it. It shows a few ways you can use various timing functions to measure execution time on a GPU. However, I don't see data transfer time being a deciding factor between CPU and GPU.
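As one sketch, you can time the transfers themselves with gputimeit from Parallel Computing Toolbox (the 4096x4096 single array is just an example size; match it to your own data):
X = rand(4096, 'single');                 % example data in host (CPU) memory
sendTime  = gputimeit(@() gpuArray(X));   % host -> GPU copy time
G = gpuArray(X);
fetchTime = gputimeit(@() gather(G));     % GPU -> host copy time
fprintf('Host-to-GPU: %.4f s, GPU-to-host: %.4f s\n', sendTime, fetchTime);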
My recommendation:
Training a single model
If you have access to a GPU, either locally or through a cloud platform, create a test to check whether your training converges more quickly. The good news is this is easy to do. Simply:
  1. Set the 'ExecutionEnvironment' option to 'cpu' in your trainingOptions.
  2. Start training your model and monitor the training progress plot. Make note of how long each epoch takes to train.
  3. Stop the training.
  4. Set the 'ExecutionEnvironment' option to 'gpu' in your trainingOptions.
  5. Start training your model and monitor the training progress plot. Make note of how long each epoch takes to train.
  6. Stop the training.
In most cases you'll observe that the GPU is faster. In code, the two runs might look like the sketch below.
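A minimal sketch of this comparison (layers, XTrain, and YTrain are placeholders for your own network and data; stop each run manually once you've noted the epoch times):
optsCPU = trainingOptions('sgdm', ...
    'ExecutionEnvironment', 'cpu', ...
    'Plots', 'training-progress');
netCPU = trainNetwork(XTrain, YTrain, layers, optsCPU);  % note time per epoch

optsGPU = trainingOptions('sgdm', ...
    'ExecutionEnvironment', 'gpu', ...   % requires a supported GPU
    'Plots', 'training-progress');
netGPU = trainNetwork(XTrain, YTrain, layers, optsGPU);  % compare time per epoch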
Hopefully this helps answer your question.
Regards,
  5 Comments
David Willingham 2020-10-22
Hi Ali,
It doesn't seem unreasonable at all. My suggestion is to look at which workflows are best served on each, i.e., CPU and GPU. CPUs are better suited for parallel workflows that surround the training, GPUs for training an individual network. MATLAB has out-of-the-box support for both of these when using common networks.
However, if you really want to tinker around and combine them both, I suggest you build your own custom training loop. Here's an example of using custom training loops for parallel computing that you could get started with.
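To give a flavor of it, here is a minimal sketch of one step of a custom training loop using the dlarray/dlfeval API (net is assumed to be a dlnetwork; X and T are one mini-batch of inputs and targets; vel holds the SGDM velocity):
dlX = dlarray(single(X), 'SSCB');   % spatial, spatial, channel, batch
dlT = dlarray(single(T), 'CB');
if canUseGPU                        % move the mini-batch to the GPU if one is available
    dlX = gpuArray(dlX);
    dlT = gpuArray(dlT);
end
[loss, gradients] = dlfeval(@modelLoss, net, dlX, dlT);
[net, vel] = sgdmupdate(net, gradients, vel, 0.01);   % learning rate 0.01

function [loss, gradients] = modelLoss(net, dlX, dlT)
    dlY = forward(net, dlX);                        % forward pass
    loss = crossentropy(dlY, dlT);                  % classification loss
    gradients = dlgradient(loss, net.Learnables);   % autodiff backward pass
end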
David
pb 2023-11-25
@David Willingham, just reviving this post for a quick question. I know that in many such scenarios a new post is preferred, but this is such a quick and direct follow-up that it felt appropriate to ask here.
Can a classifier trained on a CPU be used on a GPU? For instance, if a classifier is trained on a CPU, and I later load it in a different script with
load classifier;
for later use with the predict function as follows:
class = predict(classifier,imageFeatures,'ObservationsIn','columns');
Would it be valid if, after loading the classifier, I wrote:
classifier = gpuArray(classifier);
And similarly, for the image (img) whose features I am extracting into the imageFeatures variable, wrote:
img = gpuArray(img);
Would this be a valid implementation, or do I need to go back and retrain the classifier on a GPU? Further, can the variable class, above, be used directly in the rest of my code if it runs on a CPU, or do I need to do:
class = gather(class);
I realize these might be very basic GPU computing questions, but I don't fully understand how it works yet.
