SIGGRAPH 2016: The Best Emerging Interactive Technologies

Real-time facial reenactment, VR that lets you feel without gloves, the next stage for augmented reality, and a novel system for materials design: Take a look at some of the most exciting research and emerging innovations featured at SIGGRAPH 2016.

Chris Wiltz

August 2, 2016

Virtual reality (VR) and augmented reality (AR) have only just begun to penetrate the consumer market, but researchers are already asking, "What's next?" and looking for new applications and ways to make these technologies more immersive, more interactive, and better able to enrich other existing technologies.

SIGGRAPH, the annual conference for computer graphics and interactive techniques, featured a number of emerging technologies that aim to "blur the boundaries between art and science, and transform social assumptions," according to the conference website. Across disciplines and around the world, researchers are exploring new ways to make technology, art, and science a more interactive, human-friendly experience.


The Best of Show award in the SIGGRAPH 2016 Emerging Technologies showcase went to Face2Face, a project from researchers at the University of Erlangen-Nuremberg, the Max Planck Institute for Informatics, and Stanford University that takes a novel approach to facial motion capture. You've probably seen behind-the-scenes footage of motion capture artists recording someone's facial expressions and rendering them onto an animated character. Face2Face is unique in that it works from RGB data alone: a plain camera feed of the source actor and an ordinary target video. The result is a system that can capture an actor's facial expressions and superimpose them onto someone in a target video in real time.

The team demonstrated the technology at SIGGRAPH 2016, as shown in the video below, where a user could speak and have his or her facial expressions and gestures imposed onto a YouTube video. Face2Face achieves this effect in several steps. First, it captures the actor's performance, then that of the subject in the YouTube video. From there, the software warps and reconstructs the actor's data to match the video subject, resulting in a photo-realistic reenactment.

The research team hopes the technology will have applications in improving teleconferencing and in other VR/AR uses, such as on-the-fly dubbing of video for translation. Finally, a way to get the President to say what you want him to say.
(Image source: Stanford University)
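For a sense of how such a pipeline fits together, here is a minimal sketch in Python of the per-frame reenactment loop described above. It is an illustration, not the team's actual code: the helper functions (fit_face_model, transfer_expression, rerender_face) and the parameter dimensions are hypothetical stand-ins for the real face tracker, expression transfer, and re-rendering stages.

import numpy as np

def fit_face_model(rgb_frame):
    """Stand-in for the dense face tracker. A real system would optimize
    model parameters so that a rendered face matches the RGB frame."""
    rng = np.random.default_rng(0)  # deterministic dummy values
    return {
        "identity": rng.standard_normal(80),    # who the person is (fixed over time)
        "expression": rng.standard_normal(76),  # per-frame expression weights
        "pose": rng.standard_normal(6),         # head rotation and translation
    }

def transfer_expression(source_params, target_params):
    """Keep the target's identity and pose; drive only the expression
    with the source actor's per-frame weights."""
    reenacted = dict(target_params)
    reenacted["expression"] = source_params["expression"].copy()
    return reenacted

def rerender_face(target_frame, reenacted_params):
    """Stand-in for re-rendering: a real system would rasterize the modified
    face model and composite it back into the original frame."""
    return target_frame

def reenact(source_frames, target_frames):
    """Per-frame loop: track both faces, swap the expression, re-render."""
    for src_frame, tgt_frame in zip(source_frames, target_frames):
        src = fit_face_model(src_frame)
        tgt = fit_face_model(tgt_frame)
        yield rerender_face(tgt_frame, transfer_expression(src, tgt))

# Example: feed dummy webcam and video frames through the loop.
webcam = (np.zeros((480, 640, 3), dtype=np.uint8) for _ in range(3))
video = (np.zeros((480, 640, 3), dtype=np.uint8) for _ in range(3))
for frame in reenact(webcam, video):
    pass  # a real application would display or encode each reenacted frame

In the actual system, tracking and re-rendering run fast enough for live video; this sketch is only meant to convey the data flow from tracking to expression transfer to re-rendering.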

Virtual Reality's New Game. Come hear Chuck Carter, who helped create Myst and 26 other video games, talk about "Playing a New Game: VR Challenges and Opportunities" in his keynote at the Embedded Systems Conference, Sept. 21-22, 2016, in Minneapolis. Register here for the event, hosted by Design News' parent company UBM.

Chris Wiltz is the Managing Editor of Design News.

