Direct GPU-to-GPU Communication with Parallel Computing Toolbox / SPMD
I am using spmd to enable parallel computing with multiple GPUs on one workstation. Basically, the GPUs do some calculation, broadcast their results, update their parameters, and iterate. The problem is that using labSend (actually, gplus in my case) to aggregate and broadcast the results is pretty slow: it first pulls the results off the GPU into system memory, sends them to the other workers, and then uploads them to the other GPUs.
I understand that CUDA now has peer-to-peer memory access capability, so multiple GPUs can directly access each other's memory: http://www.nvidia.com/docs/IO/116711/sc11-multi-gpu.pdf This is accomplished with a function like cudaMemcpyPeerAsync().
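To be concrete, this is a rough sketch of the kind of single-process CUDA copy I have in mind; the device indices, pointers, and sizes are just placeholders, not code I am actually running:

/* Sketch: copy a buffer from GPU 0 to GPU 1 directly, without staging
 * through host memory. Error checking is omitted. */
#include <cuda_runtime.h>

void copyPeerToPeer(void *dstOnGpu1, const void *srcOnGpu0, size_t nBytes,
                    cudaStream_t stream)
{
    int canAccess = 0;
    cudaDeviceCanAccessPeer(&canAccess, 0, 1);  /* can device 0 reach device 1? */
    if (canAccess) {
        cudaSetDevice(0);
        cudaDeviceEnablePeerAccess(1, 0);       /* second argument must be 0 */
    }
    /* Asynchronous device-0-to-device-1 copy over the peer path. */
    cudaMemcpyPeerAsync(dstOnGpu1, 1, srcOnGpu0, 0, nBytes, stream);
}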
Thus, I would like to have a gplus() or labSend() that copies a gpuArray directly to the memory of another GPU on another worker.
Is this possible today? If not, is it something you are working on?
Thanks, Jon
Answers (1)
Edric Ellis
2015-4-27
Edited: Edric Ellis
2015-4-27
Unfortunately, as you observe, Parallel Computing Toolbox currently has no way to achieve this. However, I believe peer-to-peer memory copying does work across multiple processes within a single node, so you could use the GPU MEX interface to copy the data yourself.
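Purely as an illustration of that idea, below is a rough sketch of what the exporting side of such a MEX file might look like. The function itself is hypothetical, error checking is omitted, and whether cudaIpcGetMemHandle accepts memory allocated by MATLAB's GPU memory manager is something you would need to verify on your setup.

/* Sketch (exporting side): take a gpuArray, get its device pointer, and
 * package it as a CUDA IPC handle that another worker process on the same
 * node could open and copy from. */
#include <string.h>
#include "mex.h"
#include "gpu/mxGPUArray.h"
#include <cuda_runtime.h>

void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
    mxInitGPU();  /* initialize the MathWorks GPU API */

    /* Wrap the incoming gpuArray and get its raw device pointer. */
    const mxGPUArray *src = mxGPUCreateFromMxArray(prhs[0]);
    const void *devPtr = mxGPUGetDataReadOnly(src);

    /* Create an inter-process handle for that device allocation. */
    cudaIpcMemHandle_t handle;
    cudaIpcGetMemHandle(&handle, (void *) devPtr);

    /* Return the handle as a uint8 vector so MATLAB can labSend it to the
     * worker that owns the destination GPU. */
    plhs[0] = mxCreateNumericMatrix(1, sizeof(handle), mxUINT8_CLASS, mxREAL);
    memcpy(mxGetData(plhs[0]), &handle, sizeof(handle));

    mxGPUDestroyGPUArray(src);  /* destroys the wrapper, not the gpuArray data */
}

On the receiving worker, a second MEX file would copy the bytes back into a cudaIpcMemHandle_t, open it with cudaIpcOpenMemHandle, copy into its own gpuArray's buffer (obtained via mxGPUGetData) with cudaMemcpy or cudaMemcpyPeer, and finally release it with cudaIpcCloseMemHandle. Both files would need to be built as GPU MEX files linked against the CUDA runtime.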