Paper Title

Rosenblatt's first theorem and frugality of deep learning

Paper Authors

Kirdin, A. N., Sidorov, S. V., Zolotykh, N. Y.

Paper Abstract

Rosenblatt's first theorem about the omnipotence of shallow networks states that elementary perceptrons can solve any classification problem if there are no discrepancies in the training set. Minsky and Papert considered elementary perceptrons with restrictions on the neural inputs: a bounded number of connections, or a relatively small diameter of the receptive field, for each neuron in the hidden layer. They proved that under these constraints an elementary perceptron cannot solve some problems, such as the connectivity of input images or the parity of pixels in them. In this note, we demonstrated Rosenblatt's first theorem at work, showed how an elementary perceptron can solve a version of the travel maze problem, and analysed the complexity of that solution. We also constructed a deep network algorithm for the same problem, which is much more efficient: the shallow network uses an exponentially large number of neurons in the hidden layer (Rosenblatt's $A$-elements), whereas for the deep network second-order polynomial complexity is sufficient. We demonstrated that for the same complex problem the deep network can be much smaller, and we revealed a heuristic behind this effect.
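As a hedged illustration of this depth-versus-width contrast (not the paper's travel maze construction, for which the details are not given here), consider the parity predicate mentioned in the abstract. The "memorizing" shallow construction in the spirit of Rosenblatt's first theorem allocates one hidden threshold $A$-element per input pattern, so its size is $2^n$, while a deep network computes parity as a chain of small XOR blocks of total size $O(n)$. A minimal NumPy sketch, with all function names ours:

```python
import numpy as np

def shallow_parity(x):
    """Shallow 'memorizing' construction: one hidden threshold
    A-element per binary input pattern (2^n units), and an output
    R-element summing the A-elements assigned to odd-parity patterns."""
    x = np.asarray(x)
    n = len(x)
    patterns = [np.array(p) for p in np.ndindex(*([2] * n))]
    # The A-element for pattern p has weights 2p-1 and threshold
    # |p| - 0.5, so it fires exactly when x == p.
    activations = [
        float((2 * p - 1) @ x - (p.sum() - 0.5) > 0) for p in patterns
    ]
    # Output weights: 1 for odd-parity patterns, 0 otherwise.
    weights = [p.sum() % 2 for p in patterns]
    return int(np.dot(weights, activations) > 0.5)

def deep_parity(x):
    """Deep construction: a chain of XOR blocks, each built from
    three threshold units, so the total size grows linearly in n."""
    def xor(a, b):
        h1 = float(a + b - 0.5 > 0)   # OR-like threshold unit
        h2 = float(a + b - 1.5 > 0)   # AND-like threshold unit
        return float(h1 - h2 > 0.5)   # fires iff exactly one input is 1
    acc = 0.0
    for xi in x:
        acc = xor(acc, float(xi))
    return int(acc)
```

Both constructions agree with the parity of the input, e.g. `shallow_parity([1, 0, 1, 1]) == deep_parity([1, 0, 1, 1]) == 1`; the difference is that the first uses $2^n$ hidden units while the second uses a number of units linear in $n$, mirroring the exponential-versus-polynomial gap reported in the abstract.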
