
Publications

The team has had several publications accepted recently:

[1] N. Hobloss, L. Ge, M. Cagnazzo, "A Multi-View Stereoscopic Video Database With Green Screen (MTF) for Video Transition Quality-of-Experience Assessment", accepted at QoMEx 2021

[2] M. Milovanovic, M. Cagnazzo, F. Henry, J. Jung, "Patch Decoder-Side Depth Estimation in MPEG Immersive Video", accepted at IEEE ICASSP 2021

[3] T. Karagkioules et al., "Online Learning for Adaptive Video Streaming in Mobile Networks", accepted in ACM Transactions on Multimedia

Congrats to Nour, Marta, and Theo, and to the whole advising team!

Article in ICPR

Our work on photographic device quality has been accepted at the ICPR conference. The article is entitled "DR2S: Deep Regression with Region Selection for Camera Quality Evaluation".

Congrats to our first author, Marcelin Tworski, and to the whole team!

MILES project approved

The project "MILES – MachIne Learning for Efficient Streaming" has been funded by the Institut Polytechnique de Paris. Within our Multimedia team, it will develop online learning methods for improved video streaming on mobile devices.

Congratulations to Attilio Fiandrotti, who is in charge of the project!
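The announcement above does not detail the project's methods; purely as a toy illustration of what online learning for bitrate adaptation can look like, here is a hypothetical epsilon-greedy bandit that picks a bitrate from a ladder and updates its value estimates from an invented QoE reward. Every name and number below is an assumption, not the project's actual design.

```python
import random

# Hypothetical bitrate ladder (kbps); not from the MILES project itself.
BITRATES = [500, 1200, 2500, 5000]

class EpsilonGreedyABR:
    """Toy online learner: treats each bitrate as a bandit arm and keeps
    a running mean of the QoE reward observed after each video chunk."""

    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = [0] * len(BITRATES)
        self.values = [0.0] * len(BITRATES)

    def choose(self):
        # Explore with probability epsilon, otherwise exploit the best arm.
        if random.random() < self.epsilon:
            return random.randrange(len(BITRATES))
        return max(range(len(BITRATES)), key=lambda a: self.values[a])

    def update(self, arm, reward):
        # Incremental running-mean update of the chosen arm's value.
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

def qoe_reward(bitrate_kbps, rebuffer_s):
    # Invented QoE proxy: quality term minus a rebuffering penalty.
    return bitrate_kbps / 1000.0 - 4.0 * rebuffer_s

# Usage: one choose/update cycle per downloaded chunk.
abr = EpsilonGreedyABR()
for _ in range(100):
    arm = abr.choose()
    # Crude simulation: higher bitrates rebuffer more often.
    rebuffer = max(0.0, random.gauss(BITRATES[arm] / 5000.0 - 0.5, 0.2))
    abr.update(arm, qoe_reward(BITRATES[arm], rebuffer))
```

A real system would condition on buffer level and throughput history and use a principled QoE model; the choose/update loop above is only the skeleton of the online-learning formulation.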

Best paper award


Our article "Cockpit video coding with temporal prediction", by I. Mitrica (MM/IDS and Safran), A. Fiandrotti (MM/IDS), M. Cagnazzo (MM/IDS), C. Ruellan and E. Mercier, has received the Best Paper Award at the European Workshop on Visual Information Processing (Rome, Italy, 31/10/2019).

Congrats, Iulia!

Seminars

Three exciting new seminars will be given on June 7th, 2019. The program is as follows:

Place: Telecom-ParisTech, 46 rue Barrault, Paris. Room B567.

9h45 Welcome coffee
10h00-11h15 “Deep learning for Super Resolution and Tracking”, by Gianni Franchi, Univ. Paris Sud
11h15-12h30 “Deformation models for image and video generation”, by Stéphane Lathuilière, Univ. Trento (Italy)
12h30-14h Lunch break
14h-15h15 “The Next Big Thing: From Systems to Deep Systems”, by Francesco Banterle, CNR Pisa (Italy)
15h15-16h Coffee, free discussion

Abstracts:

“Deep learning for Super Resolution and Tracking”
This presentation will cover two projects:
Project 1 aims to combine deep learning and geostatistics to perform image super-resolution. The objective is to obtain the strong results of deep learning together with an uncertainty estimate for the predictor, provided by geostatistics.
Project 2 aims to track people in extremely dense crowd videos. Since no annotated database is available for this project, we will propose a technique in which the neural network learns on its own (self-supervised learning).
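The abstract does not specify how deep learning and geostatistics are combined; as a loose sketch of the general goal of Project 1 (a super-resolved image together with a per-pixel uncertainty map), here is a toy two-head PyTorch network trained with a heteroscedastic Gaussian likelihood. Note that this substitutes a learned variance head for the geostatistical machinery, and every architectural choice below is invented.

```python
import torch
import torch.nn as nn

class TwoHeadSR(nn.Module):
    """Toy x2 super-resolution net with a mean head (the SR estimate)
    and a log-variance head (a per-pixel uncertainty map)."""

    def __init__(self, channels=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        # PixelShuffle(2) turns 4*C channels into C channels at 2x size.
        self.mean_head = nn.Sequential(
            nn.Conv2d(channels, 3 * 4, 3, padding=1), nn.PixelShuffle(2))
        self.logvar_head = nn.Sequential(
            nn.Conv2d(channels, 1 * 4, 3, padding=1), nn.PixelShuffle(2))

    def forward(self, x):
        h = self.body(x)
        return self.mean_head(h), self.logvar_head(h)

def gaussian_nll(mean, logvar, target):
    # Heteroscedastic Gaussian NLL: pixels the model is unsure about can
    # claim a large variance instead of paying a large squared error.
    return (0.5 * (logvar + (target - mean) ** 2 / logvar.exp())).mean()

# Usage on dummy data: low-res input, high-res target.
model = TwoHeadSR()
lr, hr = torch.rand(1, 3, 32, 32), torch.rand(1, 3, 64, 64)
mean, logvar = model(lr)
loss = gaussian_nll(mean, logvar.expand_as(mean), hr)
loss.backward()
```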

“Deformation models for image and video generation”
Generating realistic images and videos has countless applications in different areas, ranging from photography technologies to e-commerce business.
Recently, deep generative approaches have emerged as effective techniques for generation tasks. In this talk, we will first present the problem of pose-guided person image generation: given an image of a person and a target pose, a new image of that person in the target pose is synthesized. We will show that large body-pose changes degrade generation quality and that suitable feature-map deformations lead to better images.
Then, we will present our recent framework for video generation. More precisely, our approach generates videos in which an object from a source image is animated according to the motion of a driving video. For this task, we employ a motion representation based on keypoints that are learned in a self-supervised fashion. Our approach can therefore animate arbitrary objects without annotations or prior information about the specific object to animate.
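The abstract does not spell out the keypoint learner; as a small sketch of the building block such self-supervised keypoint representations typically rely on (heatmaps converted into 2D coordinates by a differentiable soft-argmax, so a downstream reconstruction loss can train the detector without keypoint labels), one might write:

```python
import torch
import torch.nn as nn

class KeypointDetector(nn.Module):
    """Toy detector: predicts K heatmaps, then turns each one into a 2D
    keypoint via a differentiable soft-argmax (spatial expectation)."""

    def __init__(self, num_kp=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, num_kp, 3, padding=1),
        )

    def forward(self, img):
        h = self.net(img)                        # (B, K, H, W) raw scores
        b, k, hh, ww = h.shape
        probs = h.view(b, k, -1).softmax(-1).view(b, k, hh, ww)
        ys = torch.linspace(-1, 1, hh, device=img.device)
        xs = torch.linspace(-1, 1, ww, device=img.device)
        # Spatial expectation of each normalized heatmap.
        y = (probs.sum(dim=3) * ys).sum(dim=2)   # (B, K)
        x = (probs.sum(dim=2) * xs).sum(dim=2)   # (B, K)
        return torch.stack([x, y], dim=-1)       # (B, K, 2) in [-1, 1]

# Usage: keypoints extracted from a source and a driving frame can
# parameterize the motion used to warp the source image.
det = KeypointDetector()
kp = det(torch.rand(2, 3, 64, 64))   # -> shape (2, 10, 2)
```

Because the soft-argmax is differentiable, a loss on the reconstructed frame propagates back into the detector, which is what makes the keypoints self-supervised rather than annotated.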

“The Next Big Thing: From Systems to Deep Systems”
The main communities in Computer Science are all shifting from traditional algorithms towards deep-based algorithms, where deep learning is extensively used to solve everyday problems. Although this is very attractive in terms of quality and speed, the days of end-to-end encoding are numbered, because more than a single network is needed to achieve a full task. This talk will show a traditional system for 3D reconstruction, how to make it deep, and the making of a from-scratch deep system in which deep learning was in the loop from start to finish.