Deep Learning - Distributed GPU Memory

1 view (last 30 days)
Hello,
I have many very large input matrices (detector values) that feed into a fully connected layer, and the output is a regression layer that reconstructs an image from them (only one image is processed at a time). Because the data have no local correlation, a fully connected layer is necessary and a CNN cannot be used. However, this exceeds the available VRAM.
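As a rough illustration of the scale (the sizes below are placeholders, not my actual dimensions), the weight matrix of a single fully connected layer mapping the flattened detector values to the output image grows as numIn * numOut:

% Back-of-the-envelope estimate of the fully connected weights alone,
% assuming (hypothetically) 1e6 detector values in and a 512x512 image out.
numIn  = 1e6;                % flattened detector values (placeholder)
numOut = 512*512;            % reconstructed image pixels (placeholder)
bytesPerValue = 4;           % single precision, as used on the GPU

weightBytes = numIn * numOut * bytesPerValue;
fprintf('FC weights alone: %.0f GB\n', weightBytes/2^30)   % ~977 GB for these sizes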
  1. Hence the question: can MATLAB distribute the weights of a fully connected layer across multiple GPUs?
I have the choice of buying 2x RTX 8000 (2x 48 GB) or 4x Titan RTX (4x 24 GB). An RTX 8000 costs 2.5x as much as a Titan RTX and offers only the same performance, but with twice the memory.
  2. Can NVLink be used to pool or distribute GPU memory?
Thanks

Answers (1)

Joss Knight on 28 Mar 2020
No, there is no built-in functionality for what you are after, i.e. distributing the weights of a fully connected layer across multiple GPUs. You could implement it yourself using parallel language constructs, but I assume this is not what you're after.
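For anyone who does want to try the do-it-yourself route, here is a minimal sketch (not MathWorks-supplied code) of one way to do it with spmd: split the weight matrix by output rows, keep one slice per GPU, and compute the forward pass in parallel. All sizes are placeholders, and backpropagation/weight updates are not shown.

% Minimal sketch: row-wise split of a fully connected layer across GPUs.
% Assumes numOut divides evenly by the number of workers/GPUs.
numIn  = 1e5;                               % flattened detector values (placeholder)
numOut = 4e4;                               % output pixels (placeholder)

pool = parpool('local', gpuDeviceCount);    % one worker per GPU

spmd
    gpuDevice(labindex);                    % bind each worker to its own GPU
    rowsHere = numOut / numlabs;            % this worker's share of the output rows
    W = gpuArray.randn(rowsHere, numIn, 'single');   % slice of the weight matrix
    b = gpuArray.zeros(rowsHere, 1, 'single');       % slice of the bias
end

x = randn(numIn, 1, 'single');              % one input sample on the client

spmd
    ySlice = gather(W * gpuArray(x) + b);   % each GPU computes its slice of the output
end

y = cat(1, ySlice{:});                      % stack the slices into the full output
delete(pool)

Note that this only distributes storage and the forward multiply; gradients and weight updates would need the same hand-written partitioning, which is why it quickly becomes a substantial project.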
