How to speed up code using the GPU?

1 view (last 30 days)
khan 2015-4-10
Commented: Greg Heath 2015-4-20
Hi all, I have a general question. I have a neural network whose input is 80x60x13x2000.
In the current setup I process one sample (80x60x13) at a time to produce the final output. In the first hidden layer it becomes 76x56x11x3, in the second 38x28x9x3, and in the third 34x24x7x3.
Can anybody tell me how to use the GPU at the first and third layers so that processing becomes faster? I previously converted all the data to gpuArray, but performance got worse.
Can anybody guide me on how to utilize it better?
With best regards,
khan
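
A likely reason the gpuArray version became slower is that converting one 80x60x13 sample at a time pays a host-to-device transfer per sample, and that transfer overhead can easily exceed the compute saved. Below is a minimal sketch, assuming the hidden layers are 'valid' 2-D convolutions; the 5x5 kernel K and all variable names are hypothetical stand-ins, not the actual network. The data moves to the GPU once, all arithmetic stays on the device, and the result is gathered once at the end.

X = rand(80, 60, 13, 2000, 'single');    % stand-in for the real input
K = rand(5, 5, 'single');                % hypothetical 5x5 kernel: 80x60 -> 76x56
Xg = gpuArray(X);                        % one host-to-device transfer
Kg = gpuArray(K);
Y = zeros(76, 56, 13, 2000, 'single', 'gpuArray');    % preallocate on the GPU
for n = 1:2000
    for c = 1:13
        Y(:,:,c,n) = conv2(Xg(:,:,c,n), Kg, 'valid'); % runs on the device
    end
end
out = gather(Y);                         % one device-to-host transfer

The same idea applies at the third layer: keep intermediate results as gpuArray between layers rather than gathering after each one.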
  1 Comment
Greg Heath 2015-4-20
Inputs, targets and outputs are 2-dimensional. I have no idea how your description relates to 2-D matrix signals and a hidden-layer net topology.
Typically,
[ I N ] = size(input)
[ O N ] = size(target)
[ O N ] = size(output)
The corresponding node topology is
I-H-O for a single hidden layer
I-H1-H2-O for a double hidden layer
Please try to explain your problem in these terms.
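
For illustration, here is a minimal runnable sketch of that convention, using the simplefit_dataset toy data and fitnet that ship with the Neural Network Toolbox (the hidden layer size of 10 is arbitrary):

[x, t] = simplefit_dataset;   % x: 1x94 input matrix, t: 1x94 target matrix
[I, N] = size(x)              % I inputs per sample, N samples
[O, N] = size(t)              % O outputs per sample, same N
net = fitnet(10);             % I-H-O topology with H = 10 hidden nodes
net = train(net, x, t);
y = net(x);
[O, N] = size(y)              % the output is 2-D, same shape as the target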


Answers (0)
