GPU Coder vs. ONNX Runtime: is there a difference in inference speed?
Since I can export from MATLAB to ONNX format, why can't I just import my model into TensorRT etc.? Will I get a significant speed increase, or is the benefit of GPU Coder more about being able to compile all my other MATLAB code into optimized CUDA?
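For reference, the export path I have in mind is something like this (just a sketch; the network variable and file name are placeholders, and it needs the Deep Learning Toolbox Converter for ONNX Model Format support package):

% Export a trained network from MATLAB to an ONNX file.
exportONNXNetwork(net, "myModel.onnx");
% The .onnx file would then be handed to TensorRT outside of MATLAB,
% e.g. via trtexec or the TensorRT parsers.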
Thanks in advance.
Answers (1)
Joss Knight
2021-4-2
You can compile your network for TensorRT using GPU Coder if that's your intended target; there's no need to go through ONNX.
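As a rough sketch of what that workflow looks like (the function name, network file, and input size here are placeholders), you write an entry-point function, say myPredict.m:

function out = myPredict(in)
% Entry-point function for code generation: load the network once
% and reuse it across calls.
persistent net;
if isempty(net)
    net = coder.loadDeepLearningNetwork('mynet.mat');
end
out = predict(net, in);
end

and then generate CUDA code against the TensorRT libraries:

cfg = coder.gpuConfig('mex');                                   % build a CUDA MEX target
cfg.DeepLearningConfig = coder.DeepLearningConfig('tensorrt');  % use TensorRT for the network layers
codegen -config cfg myPredict -args {ones(224,224,3,'single')} -report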
I don't believe MathWorks have any published benchmarks against ONNX Runtime specifically. GPU Coder on the whole outperforms other frameworks, although it does depend on the network.
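If you want numbers for your own network, you can time the generated MEX directly, something like this (assuming the generated function is called myPredict_mex):

x = ones(224,224,3,'single');        % example input matching the -args spec
myPredict_mex(x);                    % warm-up call (engine build, GPU initialisation)
t = timeit(@() myPredict_mex(x));    % median time per inference
fprintf('GPU Coder + TensorRT: %.2f ms per inference\n', 1e3*t);

and compare that against whatever you measure with ONNX Runtime on the same input.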
2 Comments
Matti Kaupenjohann
2022-1-7
Could you show or link the benchmark that compares the performance of GPU Coder against other frameworks (and which frameworks)?
Joss Knight
2022-1-7
We don't publish the competitive benchmarks; you'll have to make a request through your sales agent. We can provide some numbers for MATLAB.