AI helps make a beet-red-faced, “vibranium-synthezoid” named Vision look human in an unexpected way.

John Blyler

August 31, 2021

3 Min Read
[Image: Marvel Studios]

Have you ever wondered how Vision’s beet-red head was created for the Marvel TV show, WandaVision? The answer is rather incredible, but first, here’s a little context.

WandaVision is an American television miniseries created by Jac Schaeffer for the streaming service Disney+, based on the Marvel Comics characters Wanda Maximoff / Scarlet Witch and Vision. The series takes place after the events of the film Avengers: Endgame.

The show’s timeline starts three weeks after the events of Avengers: Endgame. Wanda Maximoff and Vision are living an idyllic suburban life in the town of Westview, NJ. Both are trying to conceal their true natures from their neighbors. But as the series evolves, the couple suspects that their surroundings are not so idyllic or even real.

Vision is a vibranium-synthezoid whose body was created by the villain Ultron in Avengers: Age of Ultron. He becomes a sentient being thanks to the Mind Stone and the work of Tony Stark and Bruce Banner, who upload the core software of Stark's AI, J.A.R.V.I.S., into the vibranium body. Vision goes on to become a member of the Avengers and develops a romantic relationship with Wanda Maximoff.

How It’s Done

Vision is portrayed by actor Paul Bettany with a bit of help from modern artificial intelligence (AI) technology. The Motion Picture Association (MPA) recently talked to Ryan Freer, Creative Director and VFX Supervisor at Monsters Aliens Robots Zombies (MARZ), a Toronto, ON, visual-effects company. Freer and the MARZ team crafted Bettany's head frame by frame in VFX, using a mix of AI and top talent to produce their striking visuals.

Apparently, Vision’s beet-red head is computer-generated (CG), with only the actor Bettany’s eyes, nose, and mouth being used to reveal the actual human-generated expressions. CG has been used in the motion picture industry for a long time. So, where does AI contribute?

During the MPA interview, AI's role in changing the actor into the superhero Vision became clear. Typically, the conversion from a real actor to a digital rendering requires tracking markers all over the person’s face and body. For Vision, the majority of the tracking markers seemed to be on his beet-red, earless head.


According to Freer, his team received footage of actor Bettany wearing a bald cap, ears sticking out, with tracking markers all over his face and neck. The critical step: those markers were removed with an in-house removal system driven by AI. Previously, removing such tracking markers required far more labor-intensive and costly manual cleanup.
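MARZ has not published how its in-house AI system works, so the following is only an illustrative sketch of the general marker-removal idea: flag the marker pixels with a mask, then fill the hole in from its surroundings. For simplicity it uses classical diffusion-based inpainting in NumPy rather than a learned model, and the image, mask, and function names are all hypothetical.

```python
import numpy as np

def remove_markers(frame, mask, iterations=20):
    """Naive diffusion inpainting: repeatedly replace masked pixels with the
    average of their four neighbors, so surrounding texture bleeds into the
    hole. A production AI system would use a learned model instead."""
    out = frame.astype(np.float64).copy()
    m = mask.astype(bool)
    for _ in range(iterations):
        # Average of the four axis-aligned neighbors. np.roll wraps at the
        # borders, which is harmless here because the hole is interior.
        up    = np.roll(out, -1, axis=0)
        down  = np.roll(out,  1, axis=0)
        left  = np.roll(out, -1, axis=1)
        right = np.roll(out,  1, axis=1)
        avg = (up + down + left + right) / 4.0
        out[m] = avg[m]          # only the masked (marker) pixels change
    return out.astype(frame.dtype)

# Tiny synthetic demo: a flat "skin" patch (value 180) with a dark 3x3 marker.
frame = np.full((16, 16), 180, dtype=np.uint8)
frame[7:10, 7:10] = 30                 # fake tracking marker
mask = np.zeros((16, 16), dtype=bool)
mask[7:10, 7:10] = True                # mask covering the marker
clean = remove_markers(frame, mask)    # marker region diffuses back toward 180
```

The demo converges because the hole is small and the surroundings are uniform; real footage has texture and lighting gradients, which is exactly why a learned model earns its keep.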


On a side note, new technology is emerging that allows AI to be used more directly to capture a 3D representation of an actor, in a process known as “markerless” motion capture. With this approach, actors no longer need to be encased in Lycra suits with lots of white balls attached to them.
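In a typical markerless pipeline, a neural network detects 2D body keypoints in each camera view, and classic multi-view geometry then lifts those detections into 3D. The detection model is out of scope here, but the geometry half can be sketched as linear (DLT) triangulation; the camera matrices and point below are made up purely for the demo.

```python
import numpy as np

def triangulate(P1, P2, pt1, pt2):
    """Linear (DLT) triangulation: recover the 3D point observed at pixel
    pt1 by camera P1 and pixel pt2 by camera P2 (P1, P2 are 3x4 matrices).
    Each observation contributes two linear constraints on the homogeneous
    3D point X; the SVD finds the X that best satisfies all four."""
    x1, y1 = pt1
    x2, y2 = pt2
    A = np.array([
        x1 * P1[2] - P1[0],
        y1 * P1[2] - P1[1],
        x2 * P2[2] - P2[0],
        y2 * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                      # null-space vector = homogeneous 3D point
    return X[:3] / X[3]             # dehomogenize

def project(P, X):
    """Project 3D point X through camera P to pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Demo: two cameras view a known point; project it, then recover it.
X_true = np.array([0.5, -0.2, 4.0])
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                   # camera at origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])   # offset camera
pt1, pt2 = project(P1, X_true), project(P2, X_true)
X_rec = triangulate(P1, P2, pt1, pt2)   # matches X_true for noise-free input
```

With noisy real-world detections the SVD returns a least-squares compromise rather than an exact answer, which is why production systems use many cameras per keypoint.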

According to Matt Panousis, J.D., Chief Operating Officer at MARZ (from the same MPA interview), AI saves their clients “hundreds of thousands of dollars and tons of time, about a day of savings per shot. Multiply that across 400 shots that we did for the show, and it adds up to about 400 artist days that are effectively gone.”


About the Author(s)

John Blyler

John Blyler is a former Design News senior editor, covering the electronics and advanced manufacturing spaces. With a BS in Engineering Physics and an MS in Electrical Engineering, he has years of hardware-software-network systems experience as an engineer and editor within the advanced manufacturing, IoT, and semiconductor industries. John has co-authored books related to RF design, system engineering, and electronics for IEEE, Wiley, and Elsevier. He currently serves as a standards editor for Accellera-IEEE, and has been an affiliate professor at Portland State University and a lecturer at UC Irvine.
