Learning 3D Digital Humans from Images, Videos and Scans

  Speaker: Gerard Pons-Moll, Max Planck Institute for Informatics (MPII), Saarbrücken, Germany

  Time: July 11, 10:30 a.m. to 12:00 noon

  Venue: Conference Room 446

  Abstract:

  The research community has made significant progress in modelling people's faces, hands and bodies from data. The standard approach is to capture data with 3D/4D scanners and learn models from it. Such an approach provides a very useful first step, but it does not scale to the real world. If we want to learn rich models that include clothing, interactions between people, and interactions with the environment geometry, we require new approaches that learn from ubiquitous data such as plain RGB images and video. In this talk, I will describe some of our work on capturing and learning models of human pose, shape, and clothing from 3D scans as well as from plain video.

  Topics: Computer Vision, Computer Graphics, Machine Learning, Human Digitization

  Speaker Bio:

  Gerard Pons-Moll is the head of the Emmy Noether research group "Real Virtual Humans" at the Max Planck Institute for Informatics (MPII) in Saarbrücken, Germany. His research lies at the intersection of computer vision, computer graphics, and machine learning, with a special focus on analyzing people in videos and creating virtual human models by "looking" at real ones. His research has produced some of the most advanced statistical human body models of pose, shape, soft tissue, and clothing (currently used in a number of applications in industry and research), as well as algorithms to track and reconstruct 3D people models from images, video, depth, and IMUs. His work has received several awards, including an Emmy Noether Starting Grant (2018), a Google Faculty Research Award (2019), and Best Paper Awards at BMVC'13, Eurographics'17, and 3DV'18. His work has been published in top venues and journals, including CVPR, ICCV, SIGGRAPH, Eurographics, IJCV, and PAMI. Group website: http://virtualhumans.mpi-inf.mpg.de