Download the python scripts here.
Three exciting new seminars will be given on June 7th, 2019. The program is as follows:
Place: Telecom-ParisTech, 46 rue Barrault, Paris. Room B567.
9h45 Welcome coffee
10h00-11h15 “Deep learning for Super Resolution and Tracking”, by Gianni Franchi, Univ. Paris Sud
11h15-12h30 “Deformation models for image and video generation”, by Stéphane Lathuilière, Univ. Trento (Italy)
12h30-14h Lunch break
14h-15h15 “The Next Big Thing: From Systems to Deep Systems”, by Francesco Banterle, CNR Pisa (Italy)
15h15-16h Coffee, free discussion
“Deep learning for Super Resolution and Tracking”
This talk will cover two projects:
Project 1 aims to combine deep learning and geostatistical techniques to perform image super-resolution. The goal is to obtain the strong results of deep learning while also quantifying the uncertainty of the estimator thanks to geostatistics.
Project 2 aims to track people in videos of extremely dense crowds. Since no annotated dataset is available for this project, we will propose a technique in which the neural network learns on its own (self-supervised learning).
“Deformation models for image and video generation”
Generating realistic images and videos has countless applications in different areas, ranging from photography technologies to e-commerce business.
Recently, deep generative approaches have emerged as effective techniques for generation tasks. In this talk, we will first present the problem of pose-guided person image generation. Specifically, given an image of a person and a target pose, a new image of that person in the target pose is synthesized. We will show that important body-pose changes affect generation quality and that specific feature map deformations lead to better images.
Then, we will present our recent framework for video generation. More precisely, our approach generates videos where an object in a source image is animated according to the motion of a driving video. In this task, we employ a motion representation based on keypoints that are learned in a self-supervised fashion. Therefore, our approach can animate any arbitrary object without using annotation or prior information about the specific object to animate.
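As a rough illustration of the keypoint-driven animation idea (not the speaker's actual model, which learns keypoints and dense motion fields with deep networks), one can fit a global affine transform from source to driving keypoints and warp the source image accordingly. The function names below are hypothetical:

```python
import numpy as np

def estimate_affine(src_kp, drv_kp):
    """Least-squares fit of an affine map drv ~= A @ src + t from keypoint pairs."""
    n = src_kp.shape[0]
    X = np.hstack([src_kp, np.ones((n, 1))])          # (n, 3) design matrix
    P, *_ = np.linalg.lstsq(X, drv_kp, rcond=None)    # (3, 2) affine parameters
    return P[:2].T, P[2]                              # A is (2, 2), t is (2,)

def warp_image(img, A, t):
    """Backward-warp a grayscale image with nearest-neighbour sampling."""
    h, w = img.shape
    out = np.zeros_like(img)
    Ai = np.linalg.inv(A)
    ys, xs = np.mgrid[0:h, 0:w]
    dst = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    src = (dst - t) @ Ai.T                            # pull back each destination pixel
    sx = np.round(src[:, 0]).astype(int)
    sy = np.round(src[:, 1]).astype(int)
    ok = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)  # keep in-bounds samples only
    out[ys.ravel()[ok], xs.ravel()[ok]] = img[sy[ok], sx[ok]]
    return out

# Fit the motion between source and driving keypoints, then animate the source image.
src_kp = np.array([[10.0, 10.0], [30.0, 10.0], [10.0, 30.0], [30.0, 30.0]])
drv_kp = src_kp @ np.array([[1.0, 0.1], [0.0, 1.0]]).T + np.array([2.0, 0.0])
A, t = estimate_affine(src_kp, drv_kp)
frame = warp_image(np.eye(40), A, t)
```

In the actual self-supervised setting, the keypoints themselves are predicted by a network rather than given, and the deformation is local rather than one global affine map; this sketch only shows the warping step that turns keypoint motion into a new frame.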
“The Next Big Thing: From Systems to Deep Systems”
The main communities in Computer Science are all shifting from traditional algorithms towards deep-based algorithms, where deep learning is extensively used to solve everyday problems. Although this is very attractive in terms of quality and speed, the days of end-to-end encoding are numbered, because more than a single network is needed to achieve a full task. This talk will show a traditional system for 3D reconstruction, how to make it deep, and how to build a deep system from scratch in which deep learning is in the loop from start to finish.
Our article on scalable hologram representation has been accepted at the IEEE ICIP'19 conference.
A. El Rhammad, P. Gioia, A. Gilles, M. Cagnazzo, "Scalable Coding Framework for a View-Dependent Streaming of Digital Holograms"
Three articles have been accepted at IEEE ICASSP:
1) S. Zheng, M. Cagnazzo, M. Kieffer, "Channel Impulsive Noise Mitigation for Linear Video Coding Schemes"
2) L. Wang, A. Fiandrotti, A. Purica, G. Valenzise, M. Cagnazzo, "Enhancing HEVC Spatial Prediction by Context-Based Learning"
3) P. Nikitin, M. Cagnazzo, J. Jung, "Compression Improvement via Reference Organization for 2D-Multiview Content"
Congrats to Shuo, Li and Pavel.
Our article entitled “Very Low Bitrate Semantic Compression of Airplane Cockpit Screen Content” has been accepted for publication in IEEE Transactions on Multimedia.
Congratulations to Iulia, our first author.
Our paper "View-dependent compression of digital hologram based on matching pursuit", by A. El Rhammad, P. Gioia, A. Gilles, M. Cagnazzo, B. Pesquet-Popescu, published in SPIE Photonics Europe, vol. 10679 (April 2018, Strasbourg, France), has received the Best Student Paper Award. Congratulations to Anas!