GTX-1080ti Shows 9 GB Memory Available

11 views (last 30 days)
When I run gpuDevice on my GTX-1080ti (with 11 GB total memory), it gives the following results. Does anyone know why it shows only about 9.0 GB available (using the byte-to-GB conversion 1 GB = 1,073,741,824 bytes)? That is roughly 2 GB less than the Total Memory.
Update 1: I wrote some MATLAB code using gpuArray that fills GPU memory with int64 values until the program crashes with an "out of memory" error, and indeed I can only get to just over 9 GB before it crashes. Here's my output:
Device GeForce GTX 1080 Ti has Total Installed Memory 11.00 GB
Device GeForce GTX 1080 Ti has Available Memory 9.03 GB
Array has 34809 x 34809 8-byte (int64) elements
Array consumes: 9.028 GB
Device GeForce GTX 1080 Ti has Available Memory 0.4031 MB after array fills GPU
Total GPU Memory minus Array Memory = 1.97 GB
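Here is a rough sketch of the kind of probe involved (not my exact script; the square array shape and the back-off step size are arbitrary):
% Rough sketch: grow an int64 gpuArray until allocation fails, then report
% how much of the card was actually usable (step size is arbitrary).
g = gpuDevice;
fprintf('Total memory:     %.2f GB\n', g.TotalMemory/2^30);
fprintf('Available memory: %.2f GB\n', g.AvailableMemory/2^30);

n = floor(sqrt(g.AvailableMemory/8));        % largest square int64 array that should fit
while n > 0
    try
        A = gpuArray.zeros(n, n, 'int64');   % 8 bytes per element
        break                                % allocation succeeded
    catch
        n = n - 100;                         % out of memory on device: back off and retry
    end
end
fprintf('Array: %d x %d int64 (%.3f GB)\n', n, n, 8*n^2/2^30);
g = gpuDevice;                               % re-query the device for current free memory
fprintf('Available after fill: %.4f MB\n', g.AvailableMemory/2^20);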
Update 2: When I go into Visual Studio and run cudaMemGetInfo on the GPU to get total, free, and used memory right after filling the memory with gpuArray as above, it reports 10.7452 GB used (total minus free), 0.254 GB free, and 11 GB total. At the same time, Windows 10 Task Manager shows 9.6 GB out of 11 GB as dedicated/used.
So I'm guessing the "available/free" numbers don't account for other apps using or requesting memory? Though why isn't Task Manager showing 10.7 GB used instead of 9.6 GB? Confusing...
Thanks
Name: 'GeForce GTX 1080 Ti'
Index: 1
ComputeCapability: '6.1'
SupportsDouble: 1
DriverVersion: 9.1000
ToolkitVersion: 8
MaxThreadsPerBlock: 1024
MaxShmemPerBlock: 49152
MaxThreadBlockSize: [1024 1024 64]
MaxGridSize: [2.1475e+09 65535 65535]
SIMDWidth: 32
TotalMemory: 1.1811e+10
AvailableMemory: 9.6938e+09
MultiprocessorCount: 28
ClockRateKHz: 1657500
ComputeMode: 'Default'
GPUOverlapsTransfers: 1
KernelExecutionTimeout: 1
CanMapHostMemory: 1
DeviceSupported: 1
DeviceSelected: 1
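For clarity, the memory fields above are reported in bytes; dividing by 2^30 reproduces the GB figures quoted in the updates:
% Convert the gpuDevice byte counts above to GB (1 GB = 2^30 bytes)
totalGB     = 1.1811e10 / 2^30        % TotalMemory     -> ~11.00 GB
availableGB = 9.6938e9  / 2^30        % AvailableMemory -> ~9.03 GB
missingGB   = totalGB - availableGB   % ~1.97 GB not reported as available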
  4 comments
J. McCabe 2018-9-8
By the way, I did the same thing on a GTX-1070, which isn't driving any monitors, and prior to filling up the 8 GB of memory only 0.2 GB was in use according to Task Manager. When I filled the memory with the biggest array that would fit (6.56 GB), the total came to 6.56 + 0.2, or about 6.8 GB out of the 8 GB total. So again, about 1.2 GB is unavailable.
J. McCabe 2018-9-8
The other strange thing is that with both cards I'm filling the memory up to the point just below where it crashes with an out of memory error, so it's not like this 1.2-1.4 GB is being reserved for something and might be made available upon request. Rather it actually runs out of memory.


Answers (3)

Joss Knight 2018-9-8
I answered this question to the best of my ability when it was first asked, in an earlier question by Pavel Sinha.
He indicates that this is a generally known issue. As far as I can tell, this is just the way the driver behaves with this card; it has nothing to do with MATLAB or any other particular CUDA application. I am still waiting for a response from NVIDIA on what is going on here.
  2 comments
J. McCabe 2018-9-8
Excellent !! Thanks much for checking with NVIDIA.
Matt J 2018-9-8
I'll be very interested to know, too. I just recently ordered a 1080 Ti.



Joss Knight 2018-9-12
NVIDIA have responded to confirm that this is expected behaviour. In summary:
  • WDDM2 releases 90% of available memory to CUDA.
  • A single application is only allowed to allocate 90% of that 90%, i.e. 81% of total memory.
  • NVIDIA are working with Microsoft to tighten this bound, and/or remove it on non-display cards.
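As a rough check, the gpuDevice figures reported in the question are consistent with that bound:
% Sanity check against the gpuDevice figures from the question (in bytes)
total     = 1.1811e10;          % TotalMemory
available = 9.6938e9;           % AvailableMemory
bound     = 0.90 * 0.90         % = 0.81, the per-application WDDM2 limit
observed  = available / total   % ~0.82, in line with the ~81% bound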

J. McCabe 2018-9-9
Edited: J. McCabe 2018-9-9
After some testing, I believe I was able to answer this. Apparently Microsoft claims that it holds back some GPU VRAM so that one application can't grab it all, but if another app comes along and requests it, Windows 10 makes it available.
I did some testing that seems to confirm that. I started with no significant apps running, and my 1080ti GPU showed 0.6 GB "dedicated" VRAM in Task Manager. I then started and ran my MATLAB code that fills the GPU VRAM (using gpuArray) to the maximum it saw available, which was 9 GB. Task Manager then (correctly) showed that 9.6 GB of VRAM was in use ("dedicated"). If I increased the gpuArray size even slightly, it would crash with "out of memory". So it appeared that Windows was blocking 1.4 GB from being used.
However, I then opened a 3D rendering application which used around 2 GB of GPU memory, and it loaded fine. In fact, Task Manager showed 10.8 GB out of 11 GB of GPU memory "dedicated". So it appears that what Microsoft says actually IS true. Maybe any one individual process can't access more than 9 GB, but if another process comes along and asks for VRAM, it will get it. When I look in Task Manager under the Details tab, where you can see the dedicated and shared GPU memory for each process, it shows MATLAB taking about 8.6 GB and the 3D rendering process taking 2.1 GB, which indeed totals 10.7 GB. BTW, both are also taking some "shared" GPU memory, which I believe is system RAM. There are also some other system processes taking GPU memory at the same time, amounting to maybe 0.2 GB or less.
So it seems clear that any one process can't access more than 9 GB on a 1080ti, but another process can grab whatever is remaining, so that the total utilization is just under 11 GB. So maybe the challenge is breaking the problem into multiple separate processes if you want to use the entire GPU VRAM.
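A minimal sketch of that multi-process idea, assuming Parallel Computing Toolbox local workers (each worker is a separate OS process with its own allocation limit; the array sizes are purely illustrative, and whether each process really gets the remaining memory is up to the driver):
% Hypothetical sketch: two worker processes each claim part of the same GPU,
% so together they may exceed the single-process cap (sizes are illustrative).
pool = parpool('local', 2);              % two separate MATLAB worker processes
spmd
    gpuDevice(1);                        % both workers select GPU 1
    n = 25000;                           % 25000^2 * 8 bytes ~ 4.7 GB per worker
    A = gpuArray.zeros(n, n, 'int64');   % allocate ~4.7 GB in this worker's context
    g = gpuDevice;                       % re-query free memory after the allocation
    fprintf('Worker %d: %.2f GB still reported available\n', ...
            labindex, g.AvailableMemory/2^30);
end
delete(pool);
With the sizes above, the two workers together would hold about 9.3 GB, slightly more than the ~9 GB a single process could allocate on its own, while staying under the 11 GB total.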
  1 comment
Matt J 2018-9-9
Do you have a link to the Microsoft page about this? I wonder if the memory is still withheld if you have additional display cards.
