Paper Title

Training Convolutional Neural Networks With Hebbian Principal Component Analysis

Paper Authors

Gabriele Lagani, Giuseppe Amato, Fabrizio Falchi, Claudio Gennaro

Paper Abstract

Recent work has shown that biologically plausible Hebbian learning can be integrated with backpropagation learning (backprop), when training deep convolutional neural networks. In particular, it has been shown that Hebbian learning can be used for training the lower or the higher layers of a neural network. For instance, Hebbian learning is effective for re-training the higher layers of a pre-trained deep neural network, achieving comparable accuracy w.r.t. SGD, while requiring fewer training epochs, suggesting potential applications for transfer learning. In this paper we build on these results and we further improve Hebbian learning in these settings, by using a nonlinear Hebbian Principal Component Analysis (HPCA) learning rule, in place of the Hebbian Winner Takes All (HWTA) strategy used in previous work. We test this approach in the context of computer vision. In particular, the HPCA rule is used to train Convolutional Neural Networks in order to extract relevant features from the CIFAR-10 image dataset. The HPCA variant that we explore further improves the previous results, motivating further interest towards biologically plausible learning algorithms.
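As a rough illustration of the kind of learning rule the abstract refers to, the following is a minimal NumPy sketch of a nonlinear Hebbian PCA update, assuming it takes the form of Sanger's generalized Hebbian algorithm with a nonlinearity applied to the neuron outputs. The function name, the ReLU choice, and the patch-based usage example are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def hpca_update(W, x, lr=0.01):
    """One nonlinear Hebbian PCA update for a single input vector.

    W  : (num_neurons, input_dim) weight matrix
    x  : (input_dim,) input patch (e.g. a flattened convolutional receptive field)
    lr : learning rate

    Sketch only: assumes a Sanger-style rule with a nonlinearity (ReLU here)
    on the outputs; the actual HPCA rule in the paper may differ in details.
    """
    y = np.maximum(W @ x, 0.0)            # nonlinear outputs f(w_i . x)
    dW = np.zeros_like(W)
    for i in range(W.shape[0]):
        # Each neuron reconstructs the input from itself and all "earlier"
        # neurons, then moves its weights toward the residual (deflation).
        recon = y[: i + 1] @ W[: i + 1]
        dW[i] = lr * y[i] * (x - recon)
    return W + dW

# Usage example: learn features from random 5x5x3 patches (CIFAR-10-like sizes).
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(8, 75))   # 8 neurons, 5*5*3 = 75 inputs
for _ in range(1000):
    patch = rng.normal(size=75)
    W = hpca_update(W, patch)
```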
