Paper Title

Going Deeper With Directly-Trained Larger Spiking Neural Networks

Paper Authors

Hanle Zheng, Yujie Wu, Lei Deng, Yifan Hu, Guoqi Li

Paper Abstract

Spiking neural networks (SNNs) are promising for bio-plausible coding of spatio-temporal information and event-driven signal processing, which is well suited to energy-efficient implementation in neuromorphic hardware. However, the unique working mode of SNNs makes them more difficult to train than traditional networks. Currently, there are two main routes to training deep SNNs with high performance. The first is to convert a pre-trained ANN model to its SNN version, which usually requires a long coding window for convergence and cannot exploit spatio-temporal features during training to solve temporal tasks. The other is to directly train SNNs in the spatio-temporal domain. However, due to the binary spike activity of the firing function and the problem of gradient vanishing or explosion, current methods are restricted to shallow architectures and therefore struggle to harness large-scale datasets (e.g., ImageNet). To this end, we propose a threshold-dependent batch normalization (tdBN) method based on the emerging spatio-temporal backpropagation, termed "STBP-tdBN", enabling direct training of very deep SNNs and efficient implementation of their inference on neuromorphic hardware. With the proposed method and an elaborated shortcut connection, we significantly extend directly-trained SNNs from shallow structures (<10 layers) to very deep structures (50 layers). Furthermore, we theoretically analyze the effectiveness of our method based on the "Block Dynamical Isometry" theory. Finally, we report superior accuracy results, including 93.15% on CIFAR-10, 67.8% on DVS-CIFAR10, and 67.05% on ImageNet with very few timesteps. To the best of our knowledge, this is the first time directly-trained deep SNNs have achieved high performance on ImageNet.
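The abstract does not spell out the tdBN formulation, but its name and motivation suggest a batch normalization whose statistics are computed jointly over the batch and time dimensions of the SNN's pre-activations, with the normalized output rescaled by the firing threshold so that inputs stay at the scale at which neurons fire. Below is a minimal PyTorch sketch under that assumption; the class name `TDBatchNorm`, the hyperparameters `v_th` and `alpha`, and the `(T, B, C, H, W)` tensor layout are illustrative choices, not the authors' reference implementation.

```python
import torch
import torch.nn as nn

class TDBatchNorm(nn.Module):
    """Hypothetical sketch of threshold-dependent batch normalization (tdBN).

    Assumption: per-channel statistics span the time, batch, and spatial
    dimensions, and the normalized pre-activation is rescaled by the firing
    threshold v_th (optionally damped by alpha).
    """

    def __init__(self, num_features, v_th=1.0, alpha=1.0, eps=1e-5):
        super().__init__()
        self.v_th, self.alpha, self.eps = v_th, alpha, eps
        # Learnable affine parameters, one pair per channel.
        self.gamma = nn.Parameter(torch.ones(num_features))
        self.beta = nn.Parameter(torch.zeros(num_features))

    def forward(self, x):
        # x: (T, B, C, H, W) -- timestep, batch, channel, spatial dims.
        # Normalize over every dimension except the channel one,
        # i.e., jointly over time, batch, and space.
        dims = (0, 1, 3, 4)
        mean = x.mean(dim=dims, keepdim=True)
        var = x.var(dim=dims, keepdim=True, unbiased=False)
        x_hat = self.alpha * self.v_th * (x - mean) / torch.sqrt(var + self.eps)
        # Per-channel affine transform, broadcast over the other dims.
        g = self.gamma.view(1, 1, -1, 1, 1)
        b = self.beta.view(1, 1, -1, 1, 1)
        return g * x_hat + b
```

The design intuition, as hinted by the abstract's gradient-vanishing/explosion discussion, is that anchoring pre-activations to the threshold scale keeps neurons from either saturating or falling silent as depth grows, which is what makes 50-layer directly-trained SNNs feasible.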
