Arrayfun/gpuArray CUDA kernel needs to be able to remember previous steps

Background
  1. The problem can be separated into a large number of independent sub-problems.
  2. All sub-problems share the same matrix parameters.
  3. Each sub-problem needs to remember which indices it has visited so far.
  4. The goal is to process the sub-problems in parallel on the GPU.
Array indexing and memory allocation are not supported in this context. Is this possible to achieve?

Answers (1)

This is a bit too vague to answer. Without indexing, how can each sub-problem retrieve its subset of the data? If you just mean that indexed assignment is not allowed, then sure: you could perhaps write an arrayfun call that solves some independent problem for a subset of an array, as long as all the operations are scalar and the output is scalar. Not if the sub-problems are completely different algorithms, though.
Anyway, sorry, but there is not enough information here to help.
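To illustrate the pattern the answer describes, here is a minimal, hypothetical sketch. It assumes each sub-problem is identified by a scalar index `k`, that shared parameters are captured from the enclosing function's workspace (GPU `arrayfun` allows scalar reads of such outer variables from a nested function), and that the per-subproblem "memory" can be carried as scalar state instead of a growing array, since indexed assignment is not supported inside the kernel. The matrix `A`, the chain length, and the index-update rule are all invented for illustration.

```matlab
function result = solveAllSubproblems()
    % Shared matrix parameters (hypothetical example data).
    A = gpuArray(rand(100));
    n = 100;                      % number of independent sub-problems

    function s = solveOne(k)
        % Each sub-problem walks a chain of indices starting at k.
        % Instead of storing the visited indices in an array (indexed
        % assignment is not allowed here), carry only the scalar state
        % needed for the final result.
        idx = k;
        s = 0;
        for step = 1:10
            s = s + A(idx, k);    % scalar read of an outer variable
            idx = mod(idx, n) + 1; % next index depends on the current one
        end
    end

    % One GPU thread per sub-problem; scalar in, scalar out.
    result = arrayfun(@solveOne, gpuArray(1:n));
end
```

If the visited indices themselves are needed as output (not just a scalar summary), this pattern does not apply, and a hand-written CUDA kernel invoked via `mexcuda` or `parallel.gpu.CUDAKernel` would be the usual alternative.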

Release

R2023b

Asked:

2024-3-25
