Paper Title


GIU-GANs: Global Information Utilization for Generative Adversarial Networks

Authors

Yongqi Tian, Xueyuan Gong, Jialin Tang, Binghua Su, Xiaoxiang Liu, Xinyuan Zhang

Abstract


In recent years, with the rapid development of artificial intelligence, image generation based on deep learning has advanced dramatically. Image generation based on Generative Adversarial Networks (GANs) is a promising line of research. However, since convolutions are limited by being spatial-agnostic and channel-specific, the features extracted by traditional convolution-based GANs are constrained; as a result, such GANs cannot capture finer details of each image. On the other hand, straightforward stacking of convolutions leads to too many parameters and layers in GANs, which carries a high risk of overfitting. To overcome the aforementioned limitations, in this paper we propose a new GAN called Involution Generative Adversarial Networks (GIU-GANs). GIU-GANs leverage a brand-new module called the Global Information Utilization (GIU) module, which integrates Squeeze-and-Excitation Networks (SENet) and involution to focus on global information via a channel attention mechanism, leading to higher-quality generated images. Meanwhile, Batch Normalization (BN) inevitably ignores the representation differences among the noise samples drawn by the generator, which degrades the generated image quality. We therefore introduce Representative Batch Normalization (RBN) into the GAN architecture to address this issue. The CIFAR-10 and CelebA datasets are employed to demonstrate the effectiveness of our proposed model. Extensive experiments show that our model achieves competitive, state-of-the-art performance.
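The abstract only names the two ingredients of the GIU module. As a rough illustrative sketch (plain NumPy, not the authors' implementation), channel attention in the SENet style and an involution operator might look like the following; the small kernel-generation network that involution normally uses to produce its position-specific kernels is omitted and the kernel is passed in directly:

```python
import numpy as np

def squeeze_excite(x, w1, w2):
    """SENet-style channel attention: squeeze by global average pooling,
    excite through two fully connected layers, then rescale each channel.
    x: feature map of shape (C, H, W); w1: (C//r, C); w2: (C, C//r)."""
    z = x.mean(axis=(1, 2))                   # squeeze: per-channel mean, (C,)
    s = np.maximum(w1 @ z, 0.0)               # excitation FC1 + ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))       # excitation FC2 + sigmoid gate
    return x * s[:, None, None]               # per-channel rescaling

def involution(x, kernel):
    """Involution: the kernel varies across spatial positions but is shared
    across the channels of each group -- the inverse of convolution's
    spatial-agnostic, channel-specific design criticized in the abstract.
    x: (C, H, W); kernel: (G, K, K, H, W), in practice generated from x
    by a small kernel-generation network (omitted in this sketch)."""
    C, H, W = x.shape
    G, K = kernel.shape[0], kernel.shape[1]
    pad = K // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    out = np.zeros_like(x)
    for c in range(C):
        g = c * G // C                        # group this channel belongs to
        for i in range(H):
            for j in range(W):
                patch = xp[c, i:i + K, j:j + K]
                out[c, i, j] = (kernel[g, :, :, i, j] * patch).sum()
    return out
```

A GIU-like block would then apply involution to extract position-aware features and reweight the result with the squeeze-and-excitation gate; the exact composition used in the paper is not specified in the abstract.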
