Can I use my GPU to speed up my multiobjective optimization using gamultiobj?

4 views (last 30 days)
I am running a multiobjective optimization program using gamultiobj with parallel computing (CPU cores). Can I use a GPU to speed up this process?

Accepted Answer

Walter Roberson 2024-9-30
To use a GPU inside parallel computations, you would need one distinct GPU for each parallel worker; parallel workers cannot share a single GPU.
Probably your best bet is to set UseVectorized and not set UseParallel. Then have your evaluation routine transfer the block of input data to the GPU, work with it there, and gather() the result back.
It is not possible to keep the population on the GPU; you must transfer it to the GPU, work with it, and transfer the result back each generation.
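A minimal sketch of this pattern, with placeholder objectives (the quadratic functions, problem size, and bounds here are assumptions, not from the original question): the vectorized fitness function receives the whole population as one matrix, moves it to the GPU, evaluates both objectives there, and gathers the result.

```matlab
% Vectorized fitness: x is popSize-by-nVars when UseVectorized is true.
function f = myFitnessVec(x)
    xg = gpuArray(x);              % transfer the block of inputs to the GPU
    f1 = sum(xg.^2, 2);            % placeholder objective 1 (assumption)
    f2 = sum((xg - 2).^2, 2);      % placeholder objective 2 (assumption)
    f  = gather([f1 f2]);          % bring the popSize-by-2 result back to the CPU
end

nVars = 5;                         % assumed problem size
opts = optimoptions('gamultiobj', ...
    'UseVectorized', true, ...     % pass the whole population at once
    'UseParallel', false);         % do not combine with UseParallel
[x, fval] = gamultiobj(@myFitnessVec, nVars, [], [], [], [], ...
    zeros(1, nVars), ones(1, nVars), opts);
```

The transfer cost is paid once per generation rather than once per individual, which is why vectorization tends to suit the GPU better than per-member parallel evaluation.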
2 Comments
Shiv 2024-10-1
Thanks Walter.
I can't use vectorized input anyway, due to a complex objective function definition that uses gpuArray for only a minor part of the calculation; hence not much benefit. I had read that a single GPU can't be shared across parallel workers, but I was still hoping to be wrong and to find some other way to use the GPU to speed up my optimization.
I am using parallel programming with gpuArray calculations where possible in my code. However, the set of functions that work with gpuArray is very limited, so the analysis time is still too high. Can an ODE solver like ode45 (or a modified version of it) be used on gpuArrays?
Walter Roberson 2024-10-4
ode45() has the same limitation: its inputs and outputs must be ordinary (non-GPU) arrays.
If you have auxiliary variables, they could be passed into your parameterized ODE function as gpuArrays.
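A sketch of that idea, under assumed problem data (the coefficient matrix A and system size are hypothetical): tspan and y0 stay as ordinary arrays, but the auxiliary matrix lives on the GPU; the ODE function does its heavy computation there and gathers before returning, so ode45 only ever sees normal arrays.

```matlab
% Hypothetical linear system dy/dt = A*y, with A kept on the GPU.
n = 100;                                   % assumed system size
A = gpuArray(rand(n));                     % auxiliary data resident on the GPU
odefun = @(t, y) gather(A * gpuArray(y));  % compute on GPU, return a normal array
[t, y] = ode45(odefun, [0 1], ones(n, 1)); % tspan and y0 are ordinary arrays
```

Note the per-step transfer of y to and from the GPU; this only pays off when the work inside the ODE function (here the matrix-vector product) is large enough to outweigh that transfer cost.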


More Answers (0)

Products

Release

R2023b

