GPU Array lazy evaluation?
I found this behavior of the MATLAB gpuArray class by chance, as a consequence of several unexpected out-of-memory errors in some code that is not important here. Can anybody explain it? An array of 40000*5000 elements in single precision should take 0.8 GB. If I run the following operations in sequence
H=zeros([40000*5000 1],'single','gpuArray');
H=H+randn([40000*5000 1],'single','gpuArray');
H=H+randn([40000*5000 1],'single','gpuArray');
H=zeros([40000*5000 1],'single','gpuArray');
the GPU RAM shows 3297 MB occupied, of which about 200 MB are taken by default when the GPU is selected in MATLAB. However, if I clear the variable H the memory is freed completely, so this is not a leak. It seems that, for certain operations, MATLAB reserves extra memory or even duplicates the target matrix. Furthermore, if I check the available memory (not the free memory), it shows only the expected matrix size being taken. However, the "locked" memory is effectively unavailable: I have been monitoring my computations, and the loop crashes as soon as the maximum memory size is reached.
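For reference, this is a minimal sketch of how the free vs. available distinction above can be inspected, using the gpuDevice object from the Parallel Computing Toolbox (property names FreeMemory and AvailableMemory are assumed to exist in this release; both are reported in bytes):

```matlab
% Sketch: query device memory around the allocations above.
g = gpuDevice;   % GPUDevice object for the selected GPU
fprintf('Total: %.0f MB, Free: %.0f MB, Available: %.0f MB\n', ...
    g.TotalMemory/2^20, g.FreeMemory/2^20, g.AvailableMemory/2^20);

H = zeros([40000*5000 1], 'single', 'gpuArray');      % ~0.8 GB
H = H + randn([40000*5000 1], 'single', 'gpuArray');
fprintf('Free after ops:   %.0f MB\n', g.FreeMemory/2^20);

clear H
fprintf('Free after clear: %.0f MB\n', g.FreeMemory/2^20);
```

The gap between the drop in FreeMemory and the drop in AvailableMemory is the memory held by MATLAB's allocator rather than by live arrays.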
Can anybody please explain this behavior and say whether a workaround exists, while still using gpuArray?
I am running an NVIDIA GTX 1080 Ti with a Xeon E5-2860 v2 on Ubuntu 16.04, with MATLAB R2018a.
Answers (1)
Joss Knight
2018-5-19
MATLAB doesn't take 200MB on device selection, the CUDA driver does.
MATLAB pools up to a quarter of the GPU memory by default. This considerably improves performance by reducing the number of synchronous allocations.
You can turn off pooling using
feature('GpuAllocPoolSizeKb', 0)
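A minimal sketch of how one might verify the effect of that setting (note: feature is an undocumented interface, so the flag name and behavior may change between releases; reset(gpuDevice) is the documented way to release all device memory, and it destroys every existing gpuArray):

```matlab
% Disable the GPU memory pool (undocumented flag from the answer above),
% then repeat the allocation and watch the reported free memory.
feature('GpuAllocPoolSizeKb', 0)
g = gpuDevice;                                    % re-query the device
H = zeros([40000*5000 1], 'single', 'gpuArray');  % ~0.8 GB
fprintf('Free with pooling off: %.0f MB\n', g.FreeMemory/2^20);
clear H

% If memory still looks stuck, a full reset returns the device to a
% clean state (all gpuArray variables on it become invalid):
reset(gpuDevice);
```

With pooling disabled, FreeMemory should track the live arrays more closely, at the cost of slower repeated allocations.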