fsolve and GPU Computation

12 views (last 30 days)
Sven
Sven 2017-9-16
Edited: Matt J 2017-9-17
Can fsolve be used with GPU computation, or can it internally profit from GPU computation?
If not, is it known whether fsolve will be made to support GPU computation?
Does it automatically use the GPU under certain conditions?
Or is there a way I could modify fsolve to be able to use it on a GPU?
I am asking because I have multidimensional equations on substantial grids that should be highly parallelizable. I could try to parallelize within the function to be solved, but I suppose it would be more efficient if GPU usage could be established at the fsolve level.
Thank you in advance for any advice.

Answers (1)

Matt J
Matt J 2017-9-16
Edited: Matt J 2017-9-16
I could try to parallelize within the function which is to be solved, but I suppose it should be more efficient if GPU usage can be established at the fsolve level.
No. The greatest benefit will come from GPU-optimizing your objective function and Jacobian calculations. The heavy internal computations done by FSOLVE are mainly linear equation solving and other matrix algebra operations. You can best speed up those computations by computing your Jacobian in sparse form, if applicable.
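A minimal sketch of the sparse-Jacobian idea, using a hypothetical banded system (the function names and equations here are illustrative, not from the question): the objective returns the Jacobian as a sparse matrix so fsolve's internal linear algebra stays cheap.

```matlab
function gpu_fsolve_sparse_demo
    % Hypothetical n-unknown banded system; the sparse Jacobian is the point.
    n  = 1000;
    x0 = ones(n,1);
    opts = optimoptions('fsolve', ...
        'SpecifyObjectiveGradient', true, ...  % we supply the Jacobian ourselves
        'Algorithm', 'trust-region');
    x = fsolve(@fun, x0, opts);
end

function [F, J] = fun(x)
    n = numel(x);
    % F(i) = x(i)^2 + x(i) - x(i-1)  (x(0) taken as 0) -- a made-up example
    F = x.^2 + x - [0; x(1:end-1)];
    if nargout > 1
        % Bidiagonal sparse Jacobian:
        %   dF(i)/dx(i)   = 2*x(i) + 1
        %   dF(i)/dx(i-1) = -1       for i >= 2
        J = sparse(1:n, 1:n, 2*x + 1, n, n) + sparse(2:n, 1:n-1, -1, n, n);
    end
end
```

Because only the two nonzero diagonals are stored, the trust-region linear solves scale with the number of nonzeros rather than with n^2.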
If you are using the trust region algorithm, then you can also use the 'JacobianMultiplyFcn' option appropriately. You can implement that with your own gpuArray operations, but I think sparsity, where it can be applied, will have more of an impact.
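A sketch of the 'JacobianMultiplyFcn' route with gpuArray arithmetic, assuming a hypothetical system whose Jacobian is diagonal (so a vector of diagonal entries is all the multiply function needs). The signature `W = jmfun(Jinfo, Y, flag)`, with flag > 0 for J*Y, flag < 0 for J'*Y, and flag == 0 for J'*(J*Y), follows the documented fsolve option; the specific equations and names are illustrative.

```matlab
function x = gpu_jacmult_demo
    % Hypothetical large system F = x.^3 - 1, whose Jacobian J = diag(3*x.^2).
    n  = 1e5;
    x0 = ones(n,1);
    opts = optimoptions('fsolve', ...
        'Algorithm', 'trust-region', ...
        'SpecifyObjectiveGradient', true, ...
        'JacobianMultiplyFcn', @jmfun);
    x = fsolve(@fun, x0, opts);
end

function [F, Jinfo] = fun(x)
    F = x.^3 - 1;
    Jinfo = gpuArray(3*x.^2);   % keep the Jacobian data on the GPU
end

function W = jmfun(Jinfo, Y, flag)
    % Requires Parallel Computing Toolbox. Jinfo holds diag(J) as a gpuArray.
    Yg = gpuArray(Y);
    if flag > 0                     % W = J*Y
        W = Jinfo .* Yg;
    elseif flag < 0                 % W = J'*Y  (J is diagonal, so identical)
        W = Jinfo .* Yg;
    else                            % W = J'*(J*Y)
        W = Jinfo .* (Jinfo .* Yg);
    end
    W = gather(W);                  % fsolve expects CPU arrays back
end
```

The Jacobian never needs to be formed or transferred as a full matrix; only the multiply results cross the CPU-GPU boundary.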
  3 Comments
Sven
Sven 2017-9-17
Thank you for this advice; I had given sparsity too little thought. Given the high dimensionality, I think any improvement is valuable, so I will see where I can apply GPU usage as well.
I will also have a look at the trust-region algorithm and check whether I can implement JacobianMultiplyFcn accordingly.
I just thought that if the GPU could be used at the fsolve level, the overhead of passing the gpuArray back and forth would be reduced. I have heard this overhead can be an issue in general.
Matt J
Matt J 2017-9-17
Edited: Matt J 2017-9-17
It would be better if fsolve allowed you to return gpuArrays from the objective function code. That way there would be no need to do any CPU-GPU transfers. On the other hand, you would have to have a huge number of equations for the transfer of the objective function vector to significantly slow you down.
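In the meantime, the transfer cost can be confined to one trip in each direction per objective evaluation: move the iterate to the GPU once, do the heavy elementwise work there, and gather only the residual vector. A sketch, with a made-up stencil computation standing in for the real equations:

```matlab
function F = gpuObjective(x)
    % Hypothetical heavy objective evaluated on the GPU.
    xg = gpuArray(x);                     % one host-to-device transfer
    Fg = exp(xg) - circshift(xg, 1) - 1;  % elementwise/stencil work stays on GPU
    F  = gather(Fg);                      % one device-to-host transfer for fsolve
end
```

As noted above, the residual vector itself is usually small relative to the work done per evaluation, so these two transfers rarely dominate unless the number of equations is huge.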
