Category Archives: Compression

Article accepted – MMSP 2018

Our article on quality assessment of compressed images with deep learning techniques has been accepted at the 2018 IEEE Workshop on Multimedia Signal Processing (MMSP) [1].

In this article we use two of the most recent DL-based compression methods, developed respectively by Ballé et al. [2] and by Toderici et al. [3]. The images compressed with these methods were evaluated by a panel of around twenty people. We also considered images compressed with “classical” techniques (HEVC-Intra (BPG) and JPEG2000). We found that the subjective quality of the DL-based methods is often better than that of JPEG2000, and in any case very close to it. On the other hand, BPG still gives better results on average, even though on certain images the method of Toderici et al. [3] is the best one.
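
As an illustration of how such a subjective study is typically scored (this is standard practice, not the paper's actual processing pipeline), each stimulus's raw ratings are summarized by a mean opinion score (MOS) with a 95% confidence interval. A minimal sketch, with hypothetical ratings on a 1-5 scale:

```python
import math
import statistics

def mos_with_ci(ratings):
    """Mean opinion score and 95% confidence interval
    (normal approximation) for one stimulus's raw ratings."""
    n = len(ratings)
    mos = statistics.mean(ratings)
    sd = statistics.stdev(ratings)       # sample standard deviation
    ci = 1.96 * sd / math.sqrt(n)        # half-width of the 95% CI
    return mos, ci

# Hypothetical ratings from a ~20-observer panel (1 = bad, 5 = excellent)
ratings = [4, 5, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 3, 4, 4, 5, 4, 4, 3, 4]
mos, ci = mos_with_ci(ratings)
print(f"MOS = {mos:.2f} ± {ci:.2f}")
```

Methods are then compared by checking whether their confidence intervals overlap at each bit rate.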

See [1] for more details! (The PDF of this article will be available soon.)

[1] G. Valenzise, A. Purica, V. Hulusic, M. Cagnazzo, “Quality Assessment of Deep-Learning-Based Image Compression,” to appear in IEEE Workshop on Multimedia Signal Processing (MMSP), 2018.

[2] J. Ballé, V. Laparra, and E. P. Simoncelli, “End-to-end optimized image compression,” in Int. Conf. on Learning Representations (ICLR), Toulon, France, Apr. 2017.

[3] G. Toderici, D. Vincent, N. Johnston, S. J. Hwang, D. Minnen, J. Shor, and M. Covell, “Full resolution image compression with recurrent neural networks,” in IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Honolulu, Hawaii, USA, Jul. 2017, pp. 5435-5443.

Article on AVC-to-HEVC transcoding accepted

Our article on AVC-to-HEVC transcoding has been accepted in the Springer journal Multimedia Tools and Applications.

Together with Elie Mora and Frédéric Dufaux, we propose in this paper a method to reduce the complexity of transcoding from the AVC format to the HEVC format. With respect to the reference “full decode-full encode” (FD-FE) technique, we limit the depth of the HEVC quad-tree coding structure based on the motion information retrieved from the AVC stream. In this way, we do not need to explore the full HEVC quad-tree, which yields a considerable complexity reduction. The underlying assumption is that our criterion makes the same choices that a full tree exploration would.
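
The actual decision criterion is described in the paper; purely as an illustration of the idea, one could bound the split depth of each 64x64 CTU by how dispersed the motion vectors of the co-located AVC blocks are (the thresholds below are made up):

```python
import statistics

def max_cu_depth(avc_mvs, thresholds=(1.0, 4.0, 16.0)):
    """Illustrative (not the paper's actual) criterion: bound the HEVC
    quad-tree depth of a 64x64 CTU by the dispersion of the motion
    vectors decoded from the co-located AVC blocks.

    avc_mvs: list of (mvx, mvy) pairs from the co-located AVC area.
    Returns a maximum split depth in {0, 1, 2, 3}.
    """
    # Homogeneous motion -> large partitions suffice (shallow tree);
    # heterogeneous motion -> allow the encoder to split deeper.
    disp = statistics.pvariance([x for x, _ in avc_mvs]) + \
           statistics.pvariance([y for _, y in avc_mvs])
    for depth, t in enumerate(thresholds):
        if disp < t:
            return depth
    return 3

# Uniform motion: the whole CTU can stay at depth 0
print(max_cu_depth([(2, 0)] * 16))
# Mixed motion: allow deeper splitting
print(max_cu_depth([(0, 0)] * 8 + [(8, 8)] * 8))
```

The saving comes from the encoder skipping the rate-distortion search for all partitions below the returned depth.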

Experiments confirm that the proposed technique is almost three times faster than the reference FD-FE (2.72x in the RA configuration, computed on all the sequences of the MPEG data set), with a very small rate increase (1.4% on average) for the same quality. These results improve on the state of the art.
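
The "rate increase for the same quality" figure is the kind of number produced by the Bjøntegaard delta-rate (BD-rate) metric, which compares two rate-distortion curves via cubic fits of log-rate versus PSNR. A minimal sketch, on hypothetical R-D points (not the paper's data):

```python
import numpy as np

def bd_rate(rate_ref, psnr_ref, rate_test, psnr_test):
    """Bjøntegaard delta rate: average rate difference (%) between two
    R-D curves at equal quality, via cubic fits of log10(rate) vs PSNR."""
    p_ref = np.polyfit(psnr_ref, np.log10(rate_ref), 3)
    p_test = np.polyfit(psnr_test, np.log10(rate_test), 3)
    # Integrate both fits over the overlapping PSNR interval
    lo = max(min(psnr_ref), min(psnr_test))
    hi = min(max(psnr_ref), max(psnr_test))
    int_ref = np.polyval(np.polyint(p_ref), hi) - np.polyval(np.polyint(p_ref), lo)
    int_test = np.polyval(np.polyint(p_test), hi) - np.polyval(np.polyint(p_test), lo)
    avg_diff = (int_test - int_ref) / (hi - lo)
    return (10 ** avg_diff - 1) * 100

# Hypothetical R-D points (kbps, dB): the test curve spends 1.4% more
# rate than the reference at every quality level.
r_ref = np.array([500.0, 1000.0, 2000.0, 4000.0])
q = np.array([34.0, 37.0, 40.0, 43.0])
print(f"BD-rate: {bd_rate(r_ref, q, 1.014 * r_ref, q):.2f}%")
```

A positive BD-rate means the test codec needs more bits than the reference for the same PSNR.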

A patent is pending on this technology.

The article is available on the journal web site:
http://www.springer.com/-/2/AVQmUlOE2brxj7RS2ZBN


PhD Thesis: compression of avionics screen content

Airplane screens display very specific video content, in which text and graphics are superimposed on images or on a uniform background.

Compressing this kind of data requires adapted techniques, since the most important information (text, graphics) is usually degraded by traditional, transform-based video compression techniques.

We want to investigate the use of classification, segmentation and inpainting to recognize the most relevant information and encode it with appropriate methods.

The PhD student will work at both Telecom-ParisTech and Zodiac Aerospace.

APPLY HERE:

http://www.adum.fr/as/ed/voirproposition.pl?site=PSaclay&matricule_prop=9954

Decoded sequences for our ICIP’15 submission

The decoded video sequences for our submission to ICIP’15 are available here. Each file is about 300MB.

Reference method      | Proposed method
Four People           | Four People
Johnny                | Johnny
Kirsten and Sarah     | Kirsten and Sarah

The use case is the following. The three HEVC class-E sequences (Four_People, Johnny, Kirsten_and_Sarah) have been encoded with the proposed method (our ICIP’15 submission) and with the standard HEVC encoder (HM13). Then we simulated transmission over a lossy channel, using a Gilbert-Elliott model. Finally, we decoded the received packets, employing a simple error concealment technique. These videos show the superiority of the proposed scheme with respect to the reference.
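
The Gilbert-Elliott model is a two-state Markov chain (Good/Bad) that produces bursty packet losses. A minimal sketch of the simulation step, with hypothetical transition probabilities (not the ones used in our experiments):

```python
import random

def gilbert_elliott_losses(n_packets, p_gb=0.05, p_bg=0.5,
                           loss_good=0.0, loss_bad=1.0, seed=0):
    """Simulate packet losses over a two-state Gilbert-Elliott channel.

    p_gb: probability of moving from the Good to the Bad state;
    p_bg: probability of moving back from Bad to Good;
    loss_good / loss_bad: loss probability within each state.
    Returns a list of booleans (True = packet lost).
    """
    rng = random.Random(seed)
    bad = False          # start in the Good state
    lost = []
    for _ in range(n_packets):
        lost.append(rng.random() < (loss_bad if bad else loss_good))
        # State transition for the next packet
        if bad:
            bad = rng.random() >= p_bg   # stay Bad with prob 1 - p_bg
        else:
            bad = rng.random() < p_gb    # move to Bad with prob p_gb
    return lost

losses = gilbert_elliott_losses(100_000)
print(f"loss rate: {sum(losses) / len(losses):.3f}")
```

With these parameters the long-run loss rate tends to the stationary Bad-state probability p_gb / (p_gb + p_bg) ≈ 9%, but losses arrive in bursts of average length 1 / p_bg, which is what makes the model harsher than independent losses at the same rate.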