I have 4 cores + a CUDA-supported graphics card. Is this equivalent to 5 cores?
Hello
I want to maximize my computer's resources for parallel computing, probably using spmd. I have 4 cores and a CUDA-supported graphics card that I can access through gpuArray. Does this mean I can use 5 cores, or does the GPU also require a CPU core from the start?
If it is equivalent to 5 cores, how can I use them?
Thank you
Accepted Answer
Matt J
2013-1-1
Edited: Matt J
2013-1-2
No, the kinds of computations that a GPU can do are different from those a CPU can do, so it cannot function like an additional CPU core. The GPU actually contains many hundreds of cores of its own, but these cores are specialized and capable of only very simple operations. You can only use the graphics card in conjunction with gpuArray.
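For example, a minimal sketch of offloading an element-wise computation through gpuArray (assuming you have the Parallel Computing Toolbox and a supported CUDA device; the matrix here is just for illustration):
% Create data on the host, move it to GPU memory, compute there, gather back.
A = gpuArray(rand(4000));   % copy a random matrix onto the GPU
B = A .* A + sin(A);        % element-wise operations execute on the GPU
C = gather(B);              % bring the result back to host memory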
3 Comments
Walter Roberson
2013-1-3
As far as I understand, if you were to start a GPU calculation and then start spmd, the GPU and the spmd workers could potentially run in parallel, with you gathering the GPU results after the spmd block finishes. But if you are not using a .cu file to supply a kernel that can run for a fair while by itself, then the GPU would run out of things to do, as there is no "master session" running beside the spmd sessions and keeping the GPU fed.
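Roughly what I have in mind, as a sketch only (I don't have the toolbox to test this, and matlabpool/labindex are the names from the releases of that era):
matlabpool open 4              % open the worker pool (parpool in newer releases)
G = gpuArray(rand(5000));
H = G * G;                     % GPU work is launched asynchronously from the client
spmd
    local = labindex * ones(1000);   % CPU-side work on each lab in the meantime
    partial = sum(local(:));
end
result = gather(H);            % blocks until the GPU computation has finished
matlabpool close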
If I recall, it is possible for the individual spmd labs to connect to the GPU, at least in the more recent versions. I do not recall the restrictions now; what I recall is that it used to be described as requiring one GPU per spmd lab, but that there is now a way to share.
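If the labs can each talk to a GPU, I imagine it would look something like the following (again untested, and assuming either one GPU per lab or a release that allows sharing a device):
spmd
    % pick a device for this lab; with a single GPU every lab would ask for device 1
    gpuDevice(mod(labindex - 1, gpuDeviceCount) + 1);
    x = gpuArray(rand(2000));
    y = gather(x .^ 2);        % each lab does its own GPU work and gathers locally
end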
What I have no idea about is whether, if you start a gpuArray computation going and then start spmd sessions, the task of managing the GPU would get any CPU time. I am also not aware of whether your card is Tesla-based, and without Tesla the GPU remains limited to 30 ms kernels (because the graphics subsystem needs to use the card too).
It would not surprise me in the least if I got some of the details wrong in this; I do not have the toolbox to play with, so I've just been following along as people say interesting things. But perhaps something in what I wrote might trigger you to ask your question a different way.
Matt J
2013-1-3
As far as I understand, if you were to start a GPU calculation and then start spmd, the GPU and the spmd workers could potentially run in parallel, with you gathering the GPU results after the spmd block finishes. But if you are not using a .cu file to supply a kernel that can run for a fair while by itself, then the GPU would run out of things to do, as there is no "master session" running beside the spmd sessions and keeping the GPU fed.
That seems very strange to me, if you can indeed do that. Wouldn't you need some kind of M-code version of __syncthreads() that you could call from your M-file to make sure that both the spmd and GPU operations have finished before proceeding?
More Answers (0)