Paper Title

Learned Greedy Method (LGM): A Novel Neural Architecture for Sparse Coding and Beyond

Paper Authors

Rajaei Khatib, Dror Simon, Michael Elad

Paper Abstract

The fields of signal and image processing have been deeply influenced by the introduction of deep neural networks. These are successfully deployed in a wide range of real-world applications, obtaining state of the art results and surpassing well-known and well-established classical methods. Despite their impressive success, the architectures used in many of these neural networks come with no clear justification. As such, these are usually treated as "black box" machines that lack any kind of interpretability. A constructive remedy to this drawback is a systematic design of such networks by unfolding well-understood iterative algorithms. A popular representative of this approach is the Iterative Shrinkage-Thresholding Algorithm (ISTA) and its learned version -- LISTA, aiming for the sparse representations of the processed signals. In this paper we revisit this sparse coding task and propose an unfolded version of a greedy pursuit algorithm for the same goal. More specifically, we concentrate on the well-known Orthogonal-Matching-Pursuit (OMP) algorithm, and introduce its unfolded and learned version. Key features of our Learned Greedy Method (LGM) are the ability to accommodate a dynamic number of unfolded layers, and a stopping mechanism based on representation error, both adapted to the input. We develop several variants of the proposed LGM architecture and test some of them in various experiments, demonstrating their flexibility and efficiency.
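For context on the greedy pursuit that LGM unfolds, the sketch below is a minimal NumPy implementation of classical Orthogonal Matching Pursuit with a residual-error stopping rule, mirroring the input-adaptive stopping mechanism mentioned in the abstract. This is not the authors' LGM architecture; the function name `omp` and the parameters `err_tol` and `max_atoms` are illustrative choices, not taken from the paper.

```python
import numpy as np

def omp(D, y, err_tol=1e-6, max_atoms=None):
    """Classical Orthogonal Matching Pursuit (OMP).

    Greedily selects atoms (columns of D, assumed L2-normalized) to
    approximate the signal y, stopping once the residual norm drops
    below err_tol or max_atoms atoms have been selected.
    """
    n, m = D.shape
    if max_atoms is None:
        max_atoms = n
    residual = y.copy()
    support = []                 # indices of the chosen atoms
    x = np.zeros(m)              # sparse representation of y
    while np.linalg.norm(residual) > err_tol and len(support) < max_atoms:
        # Greedy step: pick the atom most correlated with the residual.
        correlations = D.T @ residual
        k = int(np.argmax(np.abs(correlations)))
        if k in support:         # no new atom improves the fit
            break
        support.append(k)
        # Orthogonal projection: least-squares fit on the current support.
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coeffs
        residual = y - D[:, support] @ coeffs
    return x, support

# Example: recover a 3-sparse vector from a random dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)   # normalize the atoms
x_true = np.zeros(256)
x_true[[3, 50, 200]] = [1.0, -0.5, 2.0]
y = D @ x_true
x_hat, support = omp(D, y, err_tol=1e-8)
```

Note how the number of greedy iterations here is not fixed in advance but is set by the residual-error threshold; this is the behavior that LGM's dynamic number of unfolded layers and error-based stopping mechanism are designed to retain in a learned, unrolled architecture.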
