Several publications from the team have recently been accepted:
 N. Hobloss, L. Ge, M. Cagnazzo, “A Multi-View Stereoscopic Video Database With Green Screen (MTF) For Video Transition Quality-of-Experience Assessment”, accepted in QoMEx 2021
 M. Milovanovic, M. Cagnazzo, F. Henry, J. Jung, “Patch Decoder-Side Depth Estimation in MPEG Immersive Video”, accepted in IEEE ICASSP’21
 T. Karagkioules et al., “Online Learning for Adaptive Video Streaming in Mobile Networks”, accepted in ACM Transactions on Multimedia
Congrats to Nour, Marta, and Theo, and to the whole advising team!
Our work on photographic device quality has been accepted at the ICPR conference. The article is entitled “DR2S: Deep Regression with Region Selection for Camera Quality Evaluation”.
Congrats to our first author, Marcelin Tworski, and all the team!
Goluck Konuko joins the team for his internship. He will work with Stéphane Lathuilière and Giuseppe Valenzise on deep video compression by face animation.
The project “MILES – MachIne Learning for Efficient Streaming” has been funded by the Institut Polytechnique de Paris. Within our Multimedia team, it will develop online learning methods for improved video streaming on mobile devices.
Congratulations to Attilio Fiandrotti, in charge of the project!
Our article about “Progressive hologram transmission using a view-dependent scalable compression scheme” has been accepted for publication in Springer Annals of Telecommunications!
Congrats to the first author Anas El Rhammad. More information can be found here: https://link.springer.com/article/10.1007/s12243-019-00741-7
Our article “Cockpit video coding with temporal prediction” by I. Mitrica (MM/IDS and Safran), A. Fiandrotti (MM/IDS), M. Cagnazzo (MM/IDS), C. Ruellan, and E. Mercier has received the Best Paper Award at the European Workshop on Visual Information Processing (Rome, Italy, 31/10/2019).
Congrats, Iulia!
Download the Python scripts here.
Our article “Channel Impulsive Noise Mitigation for Linear Video Coding Schemes” has been accepted for publication in IEEE TCSVT!
Congrats to the first author, Shuo Zheng (now a postdoc in Lille)!
Three exciting new seminars will be given on June 7th, 2019. The program is as follows:
Place: Telecom-ParisTech, 46 rue Barrault, Paris. Room B567.
9h45 Welcome coffee
10h00-11h15 “Deep learning for Super Resolution and Tracking”, by Gianni Franchi, Univ. Paris Sud
11h15-12h30 “Deformation models for image and video generation”, by Stéphane Lathuilière, Univ. Trento (Italy)
12h30-14h Lunch break
14h-15h15 “The Next Big Thing: From Systems to Deep Systems”, by Francesco Banterle, CNR Pisa (Italy)
15h15-16h Coffee, free discussion
“Deep learning for Super Resolution and Tracking”
This talk will cover two projects:
Project 1 aims to combine deep learning and geostatistical techniques for image super-resolution. The goal is to obtain the strong results of deep learning while also estimating the uncertainty of the estimator, thanks to geostatistics.
Project 2 aims to track people in videos of extremely dense crowds. Since no annotated database is available for this task, we propose a self-supervised learning technique in which the neural network learns on its own.
“Deformation models for image and video generation”
Generating realistic images and videos has countless applications in different areas, ranging from photography technologies to e-commerce business.
Recently, deep generative approaches have emerged as effective techniques for generation tasks. In this talk, we will first present the problem of pose-guided person image generation. Specifically, given an image of a person and a target pose, a new image of that person in the target pose is synthesized. We will show that important body-pose changes affect generation quality and that specific feature map deformations lead to better images.
Then, we will present our recent framework for video generation. More precisely, our approach generates videos where an object in a source image is animated according to the motion of a driving video. In this task, we employ a motion representation based on keypoints that are learned in a self-supervised fashion. Therefore, our approach can animate any arbitrary object without using annotation or prior information about the specific object to animate.
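The keypoint-driven animation described above can be illustrated with a toy sketch. The snippet below is purely illustrative and is not the authors' implementation: it assumes NumPy, hypothetical helper names (`dense_motion`, `warp`), and replaces the learned motion network with a simple Gaussian-weighted interpolation of sparse keypoint displacements, followed by a nearest-neighbour backward warp of the source image.

```python
import numpy as np

def dense_motion(shape, kp_src, kp_drv, sigma=5.0):
    """Interpolate sparse keypoint displacements into a dense backward
    flow field using Gaussian weights (a crude stand-in for the learned
    motion network; keypoints are (y, x) arrays of shape (K, 2))."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    grid = np.stack([ys, xs], axis=-1).astype(float)      # (h, w, 2)
    disp = (kp_src - kp_drv).astype(float)                # backward flow per keypoint
    num = np.zeros((h, w, 2))
    den = np.zeros((h, w))
    for kp, d in zip(kp_drv, disp):
        # Gaussian weight of each pixel w.r.t. this driving keypoint
        wgt = np.exp(-np.sum((grid - kp) ** 2, axis=-1) / (2 * sigma ** 2))
        num += wgt[..., None] * d
        den += wgt
    return num / (den[..., None] + 1e-8)                  # (h, w, 2) flow

def warp(image, flow):
    """Backward-warp `image` with the dense flow, nearest-neighbour
    sampling, clipping coordinates at the image border."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(ys + flow[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + flow[..., 1]).astype(int), 0, w - 1)
    return image[src_y, src_x]

# Moving a single keypoint from (4, 4) to (6, 6) translates the source content.
img = np.zeros((16, 16))
img[4, 4] = 1.0
flow = dense_motion((16, 16), np.array([[4.0, 4.0]]), np.array([[6.0, 6.0]]))
animated = warp(img, flow)
```

In the actual approach, both the keypoint detector and the dense motion predictor are neural networks trained end to end in a self-supervised fashion; this sketch only mimics the geometric warping step.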
“The Next Big Thing: From Systems to Deep Systems”
The main communities in Computer Science are all shifting from traditional algorithms towards deep-based algorithms, where deep learning is extensively used to solve everyday problems. Although this is very attractive in terms of quality and speed, the days of end-to-end encoding are numbered, because more than a single network is needed to achieve a full task. This talk will show a traditional system for 3D reconstruction, how to make it deep, and the construction from scratch of a deep system in which deep learning was in the loop from start to finish.
Our article on scalable hologram representation has been accepted at the IEEE ICIP’19 conference.
Anas El Rhammad, Patrick Gioia, Antonin Gilles, Marco Cagnazzo, “Scalable Coding Framework for a View-Dependent Streaming of Digital Holograms”