Effects of Custom Deep Learning Processor Parameters on Performance and Resource Utilization

Analyze how deep learning processor parameters affect deep learning network performance and bitstream resource utilization. Identify parameters that help improve performance and reduce resource utilization.

This table lists the deep learning processor parameters and their effects on performance and resource utilization.

| Deep Learning Processor Parameter | Deep Learning Processor Module | Parameter Action | Effect on Performance | Effect on Resource Utilization |
| --- | --- | --- | --- | --- |
| TargetFrequency | Base module | Increase target frequency. | Improves performance. | Marginal increase in lookup table (LUT) utilization. |
| ConvThreadNumber | conv | Increase thread number. | Improves performance. | Increases resource utilization. |
| InputMemorySize | conv | Increase input memory size. | Improves performance. | Increases block RAM (BRAM) utilization. |
| OutputMemorySize | conv | Increase output memory size. | Improves performance. | Increases block RAM (BRAM) utilization. |
| FeatureSizeLimit | conv | Increase feature size limit. | Improves performance on networks with layers that have a large number of features. | Increases block RAM (BRAM) utilization. |
| FCThreadNumber | fc | Increase thread number. | Improves performance. | Increases resource utilization. |
| InputMemorySize | fc | Increase input memory size. | Improves performance. | Increases block RAM (BRAM) utilization. |
| OutputMemorySize | fc | Increase output memory size. | Improves performance. | Increases block RAM (BRAM) utilization. |
| InputMemorySize | custom | Increase input memory size. | Improves performance for DAG networks only. | Increases resource utilization for DAG networks only. |
| OutputMemorySize | custom | Increase output memory size. | Improves performance for DAG networks only. | Increases resource utilization for DAG networks only. |
| ProcessorDataType | Top Level | Change data type to int8. | Improves performance. There could be a drop in accuracy. | Reduces resource utilization. |
