Paper Title

Learned Lossless Image Compression Through Interpolation With Low Complexity

Authors

Kamisli, Fatih

Abstract

With the increasing popularity of deep learning in image processing, many learned lossless image compression methods have been proposed recently. One group of algorithms that has shown good performance is based on learned pixel-based auto-regressive models; however, their sequential nature prevents easily parallelized computation and leads to long decoding times. Another popular group of algorithms is based on scale-based auto-regressive models and can provide competitive compression performance while also enabling simple parallelization and much shorter decoding times. However, their major drawback is the large neural networks they use and their high computational complexity. This paper presents an interpolation-based learned lossless image compression method that falls in the scale-based auto-regressive model group. The method achieves compression performance better than or on par with recent scale-based auto-regressive models, yet requires more than 10x fewer neural network parameters and less encoding/decoding computational complexity. These achievements are due to contributions/findings in the overall system and neural network architecture design, such as sharing interpolator neural networks across different scales, using separate neural networks for different parameters of the probability distribution model, and performing the processing in the YCoCg-R color space instead of the RGB color space.
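The YCoCg-R color space mentioned in the abstract is a standard integer lifting transform that is exactly invertible, which is what makes it usable for lossless coding. A minimal per-pixel sketch is below; the lifting formulas are the standard YCoCg-R ones, while the function names are illustrative and not taken from the paper.

```python
# Lossless YCoCg-R color transform via integer lifting.
# Note: Python's >> on negative integers is a floor (arithmetic) shift,
# which matches the YCoCg-R definition.

def rgb_to_ycocg_r(r, g, b):
    """Forward YCoCg-R transform for one integer RGB pixel."""
    co = r - b
    t = b + (co >> 1)
    cg = g - t
    y = t + (cg >> 1)
    return y, co, cg

def ycocg_r_to_rgb(y, co, cg):
    """Inverse transform; recovers the original RGB exactly."""
    t = y - (cg >> 1)
    g = cg + t
    b = t - (co >> 1)
    r = b + co
    return r, g, b
```

Because every lifting step is undone exactly by its counterpart, the round trip `ycocg_r_to_rgb(*rgb_to_ycocg_r(r, g, b))` returns the original pixel for any integer inputs, so coding in YCoCg-R costs nothing in fidelity while decorrelating the color channels.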
