Paper Title
Consequences of Slow Neural Dynamics for Incremental Learning
Paper Authors
Abstract
In the human brain, internal states are often correlated over time (due to local recurrence and other intrinsic circuit properties), punctuated by abrupt transitions. At first glance, temporal smoothness of internal states presents a problem for learning input-output mappings (e.g., category labels for images), because the internal representation of the input will contain a mixture of the current input and prior inputs. However, when training with naturalistic data (e.g., movies), there is also temporal autocorrelation in the input. How does the temporal "smoothness" of internal states affect the efficiency of learning when the training data are also temporally smooth? How does it affect the kinds of representations that are learned? We found that, when trained with temporally smooth data, "slow" neural networks (equipped with linear recurrence and gating mechanisms) learned to categorize more efficiently than feedforward networks. Furthermore, networks with linear recurrence and multi-timescale gating could learn internal representations that "un-mixed" quickly varying and slowly varying data sources. Together, these findings demonstrate how a fundamental property of cortical dynamics (their temporal autocorrelation) can serve as an inductive bias, leading to more efficient category learning and to the representational separation of fast and slow sources in the environment.
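To make the abstract's central ingredient concrete, here is a minimal PyTorch sketch of a linearly recurrent layer with per-unit gating. It assumes a leaky-integration update h_t = alpha * h_{t-1} + (1 - alpha) * W x_t, where a learnable gate alpha sets each unit's timescale; the class name, update rule, and hyperparameters are illustrative assumptions, not the paper's exact model.

```python
import torch
import torch.nn as nn

class LeakyLinearRecurrentLayer(nn.Module):
    """Hypothetical sketch: linear recurrence with learnable per-unit timescales.

    Each unit leakily integrates its input with a gate alpha in (0, 1):
        h_t = alpha * h_{t-1} + (1 - alpha) * W x_t
    Alpha near 0 gives fast, nearly feedforward units; alpha near 1 gives
    slow units, so one layer can span multiple timescales.
    """

    def __init__(self, in_features: int, hidden_features: int):
        super().__init__()
        self.proj = nn.Linear(in_features, hidden_features)
        # One gate parameter per unit; a sigmoid keeps alpha in (0, 1).
        self.alpha_logit = nn.Parameter(torch.randn(hidden_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (time, batch, in_features)
        alpha = torch.sigmoid(self.alpha_logit)
        h = torch.zeros(x.shape[1], self.proj.out_features, device=x.device)
        outputs = []
        for x_t in x:  # iterate over time steps
            h = alpha * h + (1.0 - alpha) * self.proj(x_t)
            outputs.append(h)
        return torch.stack(outputs)  # (time, batch, hidden_features)

# Usage sketch: classify each frame of a temporally smooth sequence.
if __name__ == "__main__":
    layer = LeakyLinearRecurrentLayer(in_features=64, hidden_features=128)
    readout = nn.Linear(128, 10)
    frames = torch.randn(20, 8, 64)   # (time, batch, features), dummy data
    logits = readout(layer(frames))   # (time, batch, classes)
    print(logits.shape)               # torch.Size([20, 8, 10])
```

Because each unit learns its own alpha, slowly varying sources of the input can come to dominate the high-alpha units while quickly varying sources dominate the low-alpha units, which is one plausible mechanism for the "un-mixing" of fast and slow sources described above.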