Pushing the envelope in volumetric video
In the Vision and Imaging Technologies (VIT) department at the Fraunhofer Heinrich Hertz Institute HHI, a new state of the art is being defined for the production and modification (i.e., animation and alteration) of volumetric video content. This work is reflected in a new publication and in the launch of the EU-funded project “Innovative Volumetric Capture and Editing Tools for Ubiquitous Storytelling” (INVICTUS).
Volumetric video is attractive for Augmented and Virtual Reality applications because it offers high-quality free-viewpoint rendering of dynamic scenes. Typically, however, this high-quality immersiveness is limited to pre-recorded situations, which precludes direct interaction with virtual characters. To enable interaction, classic computer graphics models are applied instead, at the cost of realism.
In the 2020 Special Issue on Computer Vision for the Creative Industry, Dr. Anna Hilsmann (head of the Computer Vision & Graphics research group in VIT) and colleagues present a pipeline for creating high-quality animatable and alterable volumetric video content, which “exploit[s] the captured high-quality real-world data as much as possible [because] it contains all-natural deformations and characteristics.” The key features of the pipeline are the enrichment of captured data with semantics and animation properties, and the combination of geometry- and video-based animation methods that permit direct animation. In addition, the pipeline offers a reliable approach to handling the movement of bodies and faces. Faces, in particular, have traditionally been a challenge for the industry. Hilsmann et al. (2020) propose a three-tiered solution: modeling low-resolution features (e.g., coarse movements) geometrically, overlaying video-based textures to capture subtle movements and fine details, and synthesizing traditionally neglected features (e.g., eyes) with an autoencoder-based approach.
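To make the three tiers concrete, the sketch below illustrates the general idea in plain Python. It is not the authors' implementation: the function names, the blendshape-style geometry update, the alpha-blended texture overlay, and the linear stand-in for the autoencoder decoder are all illustrative assumptions, reduced to a few lines per tier.

```python
# Conceptual sketch (NOT the HHI code) of the three-tiered face-animation
# idea described by Hilsmann et al. (2020). All names and models here are
# illustrative assumptions.

def apply_coarse_geometry(neutral_pose, expression_offsets):
    """Tier 1: drive coarse movements with a simple geometric model
    (here: adding per-parameter expression offsets to a neutral pose)."""
    return [p + d for p, d in zip(neutral_pose, expression_offsets)]

def overlay_video_texture(rendered_texture, captured_texture, alpha=0.8):
    """Tier 2: overlay captured video-based texture on the rendered base
    to recover subtle movements and fine detail (alpha blend)."""
    return [(1.0 - alpha) * r + alpha * c
            for r, c in zip(rendered_texture, captured_texture)]

def synthesize_eyes(latent_code, decoder_weights):
    """Tier 3: stand-in for an autoencoder decoder that synthesizes the
    eye region from a low-dimensional latent code (here: a linear map)."""
    return [sum(w * z for w, z in zip(row, latent_code))
            for row in decoder_weights]

# Composing one output frame from the three tiers:
pose = apply_coarse_geometry([0.0, 0.0], [0.1, 0.2])       # [0.1, 0.2]
face = overlay_video_texture([0.0, 0.0], [1.0, 1.0], 0.5)  # [0.5, 0.5]
eyes = synthesize_eyes([1.0, 2.0], [[1.0, 0.0], [0.0, 1.0]])  # [1.0, 2.0]
```

The division of labor is the point: geometry handles what is easy to parameterize, captured video supplies realism that geometry cannot, and learned synthesis fills in regions (such as eyes) that neither captures well.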
Building on the theme of the publication, the new EU-funded project INVICTUS (accepted in early March 2020, with a two-year duration) will bring together state-of-the-art volumetric motion capture technologies that capture both the appearance and the motion of actors, in order to produce volumetric avatars and facilitate the creation of narratives. The deliverables of the project include three innovative authoring tools:
1. for high-resolution volumetric capture of both the appearance and motion of actors, usable in high-end offline (film) productions and in real-time rendering productions;
2. for editing high-fidelity volumetric appearance and motion; and
3. for story authoring (e.g., editing decors, layouts, and animated characters).
The project involves two research groups in VIT (Computer Vision & Graphics and Immersive Media & Communication) and the partners Ubisoft Motion Pictures, Volograms Limited, the University of Rennes, and InterDigital R&D France.
To learn more about these developments, visit the department website. Also, mark your calendar: VIT, in collaboration with colleagues in the Video Coding and Analytics department, is developing a demo of interactive volumetric video streaming and rendering, to be shown at the virtual International Broadcasting Convention (IBC) 2020.