Paper Title
Fully Dynamic Inference with Deep Neural Networks
Paper Authors
Paper Abstract
Modern deep neural networks are powerful and widely applicable models that extract task-relevant information through multi-level abstraction. Their cross-domain success, however, often comes at the cost of high computational demands, high memory bandwidth, and long inference latency, which hinders their deployment in resource-constrained and time-sensitive scenarios such as edge-side inference and self-driving cars. While recently developed methods for creating efficient deep neural networks make real-world deployment more feasible by reducing model size, they do not fully exploit input properties on a per-instance basis to maximize computational efficiency and task accuracy. In particular, most existing methods use a one-size-fits-all approach that processes all inputs identically. Motivated by the fact that different images require different feature embeddings to be accurately classified, we propose a fully dynamic paradigm that endows deep convolutional neural networks with hierarchical inference dynamics at the level of layers and individual convolutional filters/channels. Two compact networks, called Layer-Net (L-Net) and Channel-Net (C-Net), predict on a per-instance basis which layers or filters/channels are redundant and should therefore be skipped. L-Net and C-Net also learn how to scale the retained computation outputs to maximize task accuracy. By integrating L-Net and C-Net into a joint design framework, called LC-Net, we consistently outperform state-of-the-art dynamic frameworks in both efficiency and classification accuracy. On the CIFAR-10 dataset, LC-Net uses up to 11.9$\times$ fewer floating-point operations (FLOPs) and achieves up to 3.3% higher accuracy than other dynamic inference methods. On the ImageNet dataset, LC-Net achieves up to 1.4$\times$ fewer FLOPs and up to 4.6% higher Top-1 accuracy than the other methods.
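The abstract does not specify the internal architecture of L-Net or C-Net, but the core idea, a compact gating network that makes per-instance keep/skip decisions for each channel and scales the kept ones, can be sketched in PyTorch. The sketch below is a hypothetical illustration, not the paper's design: the module name `ChannelGate`, the pooled-summary input, and the hidden-layer sizing are all assumptions. L-Net would apply the same mechanism at the coarser granularity of whole layers.

```python
import torch
import torch.nn as nn


class ChannelGate(nn.Module):
    """Minimal sketch of a C-Net-style per-instance channel gate.

    Hypothetical illustration only. For each input image, the gate
    predicts which channels of a feature map to keep and how to scale
    the kept ones; skipped channels are zeroed out here (in a real
    deployment, their convolutions would simply not be computed).
    """

    def __init__(self, in_channels: int, reduction: int = 4):
        super().__init__()
        hidden = max(in_channels // reduction, 4)
        self.pool = nn.AdaptiveAvgPool2d(1)   # per-channel summary statistic
        self.keep_logits = nn.Sequential(     # keep/skip decision head
            nn.Linear(in_channels, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, in_channels),
        )
        self.scale_head = nn.Sequential(      # scaling-factor head
            nn.Linear(in_channels, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, in_channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        summary = self.pool(x).view(b, c)
        # Hard binary keep/skip decision. Training such a gate end to end
        # would require a straight-through estimator or Gumbel-softmax
        # trick; this sketch shows inference-time behavior only.
        keep = (self.keep_logits(summary) > 0).float()
        scale = self.scale_head(summary)
        mask = (keep * scale).view(b, c, 1, 1)
        return x * mask


# Usage: gate a batch of 64-channel feature maps on a per-instance basis.
gate = ChannelGate(in_channels=64)
features = torch.randn(8, 64, 32, 32)
gated = gate(features)  # redundant channels zeroed, kept ones rescaled
```

Note that zeroing channels is only a functional stand-in: the FLOP reductions reported in the abstract come from actually skipping the corresponding computation, which requires runtime support for dynamically sparse channel execution.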