Paper Title
On the Impact of Partial Sums on Interconnect Bandwidth and Memory Accesses in a DNN Accelerator
Paper Authors
Paper Abstract
Dedicated accelerators are being designed to address the huge resource requirements of deep neural network (DNN) applications. Power, performance, and area (PPA) constraints limit the number of MACs available in these accelerators. Convolution layers, which require a huge number of MACs, are often partitioned into multiple iterative sub-tasks. This puts great pressure on available system resources such as interconnect and memory bandwidth. Optimal partitioning of the feature maps for these sub-tasks can reduce the bandwidth requirement substantially. Some accelerators avoid off-chip or interconnect transfers by implementing local memories; however, the memory accesses are still performed, and a reduced bandwidth can help save power in such architectures. In this paper, we propose a first-order analytical method to partition the feature maps for optimal bandwidth and evaluate the impact of such partitioning on the bandwidth. This bandwidth can be saved by designing an active memory controller that can perform basic arithmetic operations. It is shown that optimal partitioning and an active memory controller can achieve up to 40% bandwidth reduction.
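As a rough illustration of the kind of first-order analysis described in the abstract, the Python sketch below models the off-chip traffic of a tiled convolution layer, including the partial-sum spills incurred when input channels are processed in groups, and exhaustively picks the tile sizes with the lowest modelled traffic. All names here (conv_traffic_bytes, best_partition, the tile parameters th/tw/tk/tc, the buffer limit) and the cost model itself are illustrative assumptions for this sketch, not the paper's notation or exact model.

from itertools import product

def conv_traffic_bytes(H, W, K, C, R, S, th, tw, tk, tc,
                       act_bytes=1, wgt_bytes=1, psum_bytes=4):
    """First-order estimate of memory traffic (bytes) for one tiled conv layer.

    Partial sums are accumulated over input-channel tiles; every extra
    input-channel tile forces the running partial sums to be written out
    and read back once, which is the traffic component this model exposes.
    Halo overlap of input tiles is ignored for simplicity.
    """
    n_h, n_w = -(-H // th), -(-W // tw)   # ceiling divisions: tile counts
    n_k, n_c = -(-K // tk), -(-C // tc)
    # Input activations: each input tile is re-read once per output-channel tile.
    ifmap = H * W * C * act_bytes * n_k
    # Weights: each weight tile is re-read once per spatial tile.
    weights = R * S * C * K * wgt_bytes * n_h * n_w
    # Partial sums: one write and one read back per extra input-channel tile,
    # plus the final output write. A memory controller that accumulates
    # partial sums in place would remove the read-back half of this term.
    psum = H * W * K * psum_bytes * (2 * (n_c - 1) + 1)
    return ifmap + weights + psum

def best_partition(H, W, K, C, R, S, buffer_bytes, candidates):
    """Pick the candidate tiling with the lowest modelled traffic that fits
    in the on-chip buffer (4-byte partial-sum accumulators assumed)."""
    best = None
    for th, tw, tk, tc in product(*candidates):
        footprint = th * tw * tc + R * S * tc * tk + th * tw * tk * 4
        if footprint > buffer_bytes:
            continue
        traffic = conv_traffic_bytes(H, W, K, C, R, S, th, tw, tk, tc)
        if best is None or traffic < best[0]:
            best = (traffic, (th, tw, tk, tc))
    return best

# Example: 56x56x256 output, 256 input channels, 3x3 kernel, 512 KiB buffer.
print(best_partition(56, 56, 256, 256, 3, 3, 512 * 1024,
                     ([7, 14, 28, 56], [7, 14, 28, 56],
                      [16, 32, 64], [16, 32, 64])))

Under this toy model, comparing the traffic of the best tiling against a naive one shows how partition choice shifts the balance between activation, weight, and partial-sum traffic; the partial-sum read-back term is the part an arithmetic-capable memory controller could eliminate.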