Running Code on GPU Seems much Slower than Doing so on CPU

39 views (last 30 days)
Hi there,
I am using a Thinkpad W550, and my GPU is a Quadro K620M. When I ran the following simple code, the profiler showed that running on the GPU was much slower.
function Test_GPU()
a = [10^8, 18^8];
h = a;
c = conv2(h, a, 'full');
% Running in double precision got a similar result
aa = single(gpuArray([10^8, 18^8]));
hh = aa;
cc = conv2(hh, aa, 'full');
end
So I ran the official gpuBench().
The result is astonishing: running on the GPU IS slower, much, much slower.
The first picture shows the result from the GPU, and the second from the CPU.
I would be very grateful if anyone could tell me why. Many thanks.
  2 Comments
Theron FARRELL on 2019-5-27
A follow-up question: after gpuBench finished running, no HTML report was produced.
Could this be related to a browser setting, etc.?
Jan on 2019-5-27
a = [10^8, 18^8] is a [1x2] vector. For a speed comparison, this job is too tiny.


Accepted Answer

Andrea Picciau on 2019-5-29 (edited 2019-5-29)
You don't need to disable JIT acceleration. Rather, you need to measure using timeit and gputimeit like so:
% CPU data
a = ones([100, 100], 'single');
h = a;
% GPU data
aa = gpuArray(a);
hh = gpuArray(h);
% Measuring CONV2 with one output
cpuTime = timeit(@() conv2(h, a, 'full'), 1);
gpuTime = gputimeit(@() conv2(hh, aa, 'full'), 1);
Why you might want to do this:
  • MATLAB uses lazy evaluation to schedule the operations on your GPU, which introduces some asynchronicity in the GPU's behaviour. The same mechanism is not used on the CPU.
  • gputimeit takes lazy evaluation into consideration and also repeats the measurement several times, accounting for caching effects, overheads, and first-time costs.
  • timeit also repeats the measurement several times, but it doesn't take lazy evaluation into consideration.
  • tic/toc neither repeats the measurement nor takes lazy evaluation into consideration (see the sketch after this list).
  • the profiler is somewhat similar to tic/toc, but it also introduces overhead into the measurement because it has to trace the whole call stack (which is why it is useful for investigating rather than for extracting rigorous measurements).
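If you do want to stick with tic/toc for GPU code, a minimal sketch (my own addition, assuming the same 100x100 single-precision data as above) is to synchronize the device with wait(gpuDevice) before stopping the timer, so that the asynchronously queued work is actually included in the measurement:
% Sketch only: making tic/toc account for the GPU's asynchronous execution
aa = gpuArray(ones([100, 100], 'single'));
d = gpuDevice();                 % handle to the current GPU device
tic
cc = conv2(aa, aa, 'full');
wait(d);                         % block until all queued GPU work has finished
tGPU = toc;                      % elapsed time now includes the conv2 itself
Even with the explicit wait, a single tic/toc run still includes first-call overheads, which is one more reason to prefer gputimeit's repeated measurements.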
What results do you get? Let us know.
Given your setup, it wouldn't be strange if gpuTime > cpuTime. Laptop GPUs are usually not optimized for computing, and it might be the case that yours is driving the graphics too.
  5 Comments
Walter Roberson on 2019-5-30
profile is probably best suited for determining how many times a line was invoked, and for determining which general sections of code are the most time consuming.
However, the actual amount of computation required might be substantially less if only the Just-In-Time compiler had not been turned off by profiling -- and that in turn means that in some cases the most expensive section of code is not what it appears to be at first. Once you have identified a general section of code as expensive, then unless it makes obvious sense that one particular line will certainly be the most expensive part, it might be better to rewrite the section into smaller functions to narrow down the costs, so that you can restrict your optimization efforts to functions that are disproportionately expensive.
To narrow down which of several different implementations is most efficient for a purpose, create a test function for each variant and time them with timeit(). Repeat the timeit() call a number of times, as the timings can vary a fair bit. Be careful when interpreting the results: if you call timeit() on two different routines and the first one seems to be more expensive, sometimes the real issue is something to do with the JIT. This leads me to doing tests such as:
N = 50;
timesA1 = zeros(1,N); timesA2 = zeros(1,N); timesB1 = zeros(1,N);
FA = @FirstVariation; FB = @SecondVariation;
for K = 1 : N; timesA1(K) = timeit(FA, 0); end
for K = 1 : N; timesB1(K) = timeit(FB, 0); end
for K = 1 : N; timesA2(K) = timeit(FA, 0); end
It is common for timesA1 to trend much higher than timesB1 and then for timesA2 to show times much closer to timesB1 -- that is, re-testing exactly the same thing can sometimes end up much faster for reasons that are not at all clear.
For timing tests on variations involving GPU use, always use gputimeit().
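As a rough illustration (my own sketch; FirstVariationGPU and SecondVariationGPU are hypothetical gpuArray-based counterparts of the functions above), the same loop structure carries over with gputimeit:
N = 50;
gpuTimesA = zeros(1,N); gpuTimesB = zeros(1,N);
GA = @FirstVariationGPU; GB = @SecondVariationGPU;   % placeholder names
for K = 1 : N; gpuTimesA(K) = gputimeit(GA, 0); end
for K = 1 : N; gpuTimesB(K) = gputimeit(GB, 0); end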


More Answers (2)

Walter Roberson on 2019-5-27
The Quadro K620M is a Maxwell-architecture GM108 chip. That architecture does double precision at 1/32 the rate of single precision.
MTIMES operations are delegated by MATLAB to optimized BLAS/LAPACK libraries for sufficiently large arrays, and those libraries automatically use all available CPU cores.
My CPU measures as faster than my GTX 780M for double-precision MTIMES and backslash, but the GPU is much faster for single precision, and it is also faster than my CPU for double-precision FFT.
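To check the single- versus double-precision gap on your own GPU, a simple sketch (mine, not part of gpuBench) is to time the same MTIMES in both precisions with gputimeit:
% Sketch: compare single- and double-precision matrix multiply on the GPU.
% On a Maxwell GM108 part, expect the double-precision time to be far more
% than 2x the single-precision time.
n = 2000;
As = gpuArray(rand(n, 'single'));
Ad = gpuArray(rand(n, 'double'));
tSingle = gputimeit(@() As * As);
tDouble = gputimeit(@() Ad * Ad);
fprintf('single: %.4f s, double: %.4f s, ratio: %.1f\n', ...
    tSingle, tDouble, tDouble/tSingle);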
  8 Comments
Andrea Picciau on 2019-5-29 (edited by Walter Roberson 2019-5-29)
@Jan: Sorry, I meant to say "Theron". I changed my previous comment to fix that.
Jan on 2019-5-29
@Theron: I do not understand why you expect arrayfun to have a positive effect on the processing speed. The opposite is to be expected.
Starting the profiler disables the JIT acceleration automatically, because the JIT can re-order the commands if that improves the speed, but then there is no longer a relation between the timings and the code lines. This means that running the profiler can affect the run time massively, especially for loops. Of course this sounds counter-productive for the job of a profiler - and in fact it is. Therefore the profiler and tic/toc should both be used, because they have different advantages and disadvantages. For measuring the speed of single commands or elementary loops, the profiler is not a good choice.
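To make the point concrete (a sketch of my own, not Jan's code): timing an elementary loop with timeit and then under the profiler shows how much the instrumentation can distort the measurement. simpleLoop is a hypothetical helper saved as simpleLoop.m:
% simpleLoop.m - hypothetical helper: an elementary loop to be timed
function s = simpleLoop(n)
s = 0;
for k = 1:n
    s = s + sqrt(k);
end
end

% Timing it both ways:
f = @() simpleLoop(1e6);
tPlain = timeit(f, 0);       % repeated measurement, JIT acceleration active
profile on
f();
profile off
p = profile('info');         % p.FunctionTable shows the (typically inflated) time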



Miguel on 2024-10-27 15:43
I am running a vehicle simulation on GPU vs CPU, and it takes a huge amount of time, even though I have a gaming PC. Why?
