Paper Title

Graphs for deep learning representations

Paper Author

Lassance, Carlos

Abstract

In recent years, Deep Learning methods have achieved state-of-the-art performance in a vast range of machine learning tasks, including image classification and multilingual automatic text translation. These architectures are trained to solve machine learning tasks in an end-to-end fashion. In order to reach top-tier performance, they often require a very large number of trainable parameters, which has multiple undesirable consequences. To tackle these issues, it is desirable to be able to open the black boxes of deep learning architectures. Doing so, however, is difficult due to the high dimensionality of the representations and the stochasticity of the training process. In this thesis, we investigate these architectures by introducing a graph formalism based on recent advances in Graph Signal Processing (GSP). Namely, we use graphs to represent the latent spaces of deep neural networks. We show that this graph formalism allows us to answer various questions, including: ensuring generalization abilities, reducing the amount of arbitrary choices in the design of the learning process, improving robustness to small perturbations added to the inputs, and reducing computational complexity.
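The abstract's central idea, representing the latent space of a deep neural network as a graph and reasoning about it with GSP tools, can be illustrated with a minimal sketch: build a k-nearest-neighbor similarity graph over latent feature vectors, then measure the graph-signal smoothness of the class-label signal via the Laplacian quadratic form (a low value means same-class samples are close in the latent space, a quantity related to the generalization measures this line of work studies). All function names and parameters below are illustrative assumptions, not code from the thesis.

```python
import numpy as np

def knn_graph(features, k=2):
    """Symmetric k-nearest-neighbor adjacency over latent features (cosine similarity).

    Illustrative sketch: each row of `features` is one sample's latent vector.
    """
    X = features / np.linalg.norm(features, axis=1, keepdims=True)
    S = X @ X.T                       # pairwise cosine similarities
    np.fill_diagonal(S, -np.inf)      # exclude self-loops
    n = len(S)
    A = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(S[i])[-k:]  # indices of the k most similar samples
        A[i, nbrs] = S[i, nbrs]
    return np.maximum(A, A.T)         # symmetrize

def label_smoothness(A, labels):
    """Laplacian quadratic form of the one-hot label signal: tr(S^T L S).

    Lower values mean the label signal varies little across graph edges,
    i.e. neighbors in the latent space tend to share a class.
    """
    L = np.diag(A.sum(axis=1)) - A    # combinatorial graph Laplacian L = D - A
    onehot = np.eye(labels.max() + 1)[labels]
    return float(np.trace(onehot.T @ L @ onehot))
```

With two well-separated latent clusters and labels aligned to them, the smoothness is (near) zero because every edge connects same-class samples; shuffling the labels raises it, which is the kind of signal such measures exploit.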
