When using a GPU with a neural net, I run out of shared memory per block; is there a way to handle this?
I want to train a neural net with several hundred images (75x75 pixels, or 5625 elements each). This works in native MATLAB, but when I try to train using 'useGPU' I get the error "The shared memory size for a kernel must be a positive integer, and must not exceed the device's limit on the amount of shared memory per block (49152 bytes)", coming from nnGPU.codeHints. The code:
net1 = feedforwardnet(10);                        % 10-neuron hidden layer
xg = nndata2gpu(inputMatrix);                     % move inputs to the GPU
tg = nndata2gpu(targetMatrix);                    % move targets to the GPU
net2 = configure(net1,inputMatrix,targetMatrix);
net2 = train(net2,xg,tg);                         % fails with the shared memory error
Is there a way to tell the neural net training system to process the training in smaller chunks? Or some other smarter way to do this?
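To make the ask concrete, here is a rough sketch of the kind of chunking I have in mind (the chunk size is arbitrary, and I realize each call to train restarts the optimization):
% Hypothetical chunked training, assuming one sample per column
chunkSize = 100;
nSamples = size(inputMatrix,2);
for k = 1:chunkSize:nSamples
    idx = k:min(k+chunkSize-1,nSamples);
    net2 = train(net2,inputMatrix(:,idx),targetMatrix(:,idx));
end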
0 comments
Answers (1)
Mark Hudson Beale
2013-6-19
Edited: Mark Hudson Beale, 2013-7-5
I was able to reproduce your error. In MATLAB R2013a the nndata2gpu transformation is no longer required, and if you use gpuArray instead of nndata2gpu, the amount of shared memory required is reduced. You can check your device's per-block limit (the 49152 bytes in your error message) with:
d = gpuDevice                 % query the currently selected GPU
d.MaxShmemPerBlock            % shared memory limit per block, in bytes
Using R2013a and gpuArray, I was able to train the following random problem on a mobile GPU (NVIDIA GeForce GT 650M, 1024 MB):
x = rand(5625,500);           % 75x75 = 5625 inputs, 500 random samples
t = rand(1,500);              % one random target per sample
X = gpuArray(x);              % plain gpuArray instead of nndata2gpu
T = gpuArray(t);
net = feedforwardnet(10);
net = configure(net,x,t);     % configure with the CPU copies
net.trainFcn = 'trainscg';
net = train(net,X,T);         % train directly on the gpuArray data
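Alternatively, since you mentioned 'useGPU': you can leave the data on the CPU and let train do the transfer itself. A minimal sketch, using the same x and t as above:
net = feedforwardnet(10);
net = configure(net,x,t);
net = train(net,x,t,'useGPU','yes');   % train moves x and t to the GPU for you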
I hope that helps!