Fans of scripted storytelling will be especially interested to hear that we recently wrapped production on VR’s first scripted drama series, Invisible. Created by 30 Ninjas and director Doug Liman (The Bourne Identity), the series tells the story of a powerful family with the ability to become invisible. We had the pleasure of providing the titles and VFX for the series, supported by Jaunt, Samsung, Lexus, and Condé Nast.
The trailer for the series has been available since early October, but now you can watch the series in full on any VR-capable device. At roughly five minutes each, the initial episodes together take about as long to watch as a standard 30-minute TV show.
At The Molecule, we’ve been developing our own methods for making VR compositing more efficient and intuitive. As studios all over the world are developing their own VR tools, we want to share what we’ve learned about creating precise VFX for this new frontier.
PRODUCTION
Before you get to post-production, it’s important to know what footage you’ll be working with. Check out all of the cameras we used to shoot this series! We had a few VR cameras from Jaunt, rigs with several GoPros attached, and a Sony A7S.
When choosing a camera, you should always think ahead to the post-production phase. The Jaunt VR camera and the GoPro rigs have different implications for stitching in post. The Jaunt cameras we used have the advantage of genlock (which keeps the sensors’ timing synchronized) and exposure lock, and Jaunt provides its own cloud-based auto-stitcher. The camera is fairly large, however, and like most auto-stitchers, Jaunt’s is not quite perfect yet.
GoPro rigs have the major advantage of being small and lightweight, and you don’t need very many cameras to record 360-degree video. However, GoPros use auto-exposure and auto white balance, have no genlock feature, and require you to manually stitch the footage from each camera in post.
VFX PHASE
In our studio, we developed a set of steps that makes compositing in VR space much easier for VFX artists. We created a node in Nuke that transforms the warped lat-long (equirectangular) image into the undistorted, rectilinear view a viewer would normally see in a VR headset. This makes a huge difference, because the artist can now work from a more traditional perspective.
Working directly in the stretched-out lat-long image makes it difficult to create precise effects because of the distortion. Michael Clarke, the artist behind the development of this method, explains that because of this shift in perspective, “any visual effects artist can get in there and start working with it.”
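The Nuke node itself is proprietary, but the underlying projection math is standard. Here is a minimal NumPy sketch of the lat-long-to-headset-view step: for each pixel of a virtual pinhole camera, it computes a ray direction, converts that ray to longitude and latitude, and samples the equirectangular frame. The function name, parameters, and nearest-neighbour sampling are illustrative assumptions, not The Molecule’s implementation.

```python
import numpy as np

def latlong_to_rectilinear(equi, yaw, pitch, fov_deg, out_w, out_h):
    """Sample a pinhole-camera view out of an equirectangular (lat-long) frame.

    equi: H x W x C image array; yaw/pitch (radians) pick the view direction;
    fov_deg is the horizontal field of view of the virtual camera.
    Nearest-neighbour sampling keeps the sketch short.
    """
    H, W = equi.shape[:2]
    f = (out_w / 2) / np.tan(np.radians(fov_deg) / 2)  # focal length in pixels

    # Pixel grid of the virtual camera, centred on the optical axis.
    xs = np.arange(out_w) - out_w / 2
    ys = np.arange(out_h) - out_h / 2
    px, py = np.meshgrid(xs, ys)

    # Ray direction for each pixel (camera looks down +z, image y points up).
    dirs = np.stack([px, -py, np.full_like(px, f)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)

    # Rotate the rays by pitch (about x), then yaw (about y).
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    dirs = dirs @ (Ry @ Rx).T

    # Convert each ray to longitude/latitude, then to lat-long pixel coords.
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])           # [-pi, pi]
    lat = np.arcsin(np.clip(dirs[..., 1], -1, 1))          # [-pi/2, pi/2]
    u = ((lon / (2 * np.pi) + 0.5) * (W - 1)).astype(int)
    v = ((0.5 - lat / np.pi) * (H - 1)).astype(int)
    return equi[v, u]
```

Because every output pixel gets its own ray, the same sampling logic also produces the over-the-shoulder “headset” previews an artist can comp against.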
Say, for instance, that you wanted to paint out the camera rig at the bottom of the following image:
If you switch the view to the headset perspective, you can isolate the rig and paint it out like you normally would.
Then, you could isolate that shape and convert it back to the lat-long view.
And then… ta-da!
This method can be used for all kinds of common visual-effects tasks, including rotoscoping, painting, and tracking.
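The reconvert-back step in the workflow above can be sketched the same way, just in reverse: for every lat-long pixel, project its ray onto the virtual camera’s image plane and, where it lands inside the painted patch, copy the patch pixel over. Everything here (the function name, the camera model, the returned coverage mask) is an illustrative assumption, not the studio’s actual tool.

```python
import numpy as np

def rectilinear_to_latlong(patch, yaw, pitch, fov_deg, H, W):
    """Scatter a painted pinhole-view patch back into a blank lat-long frame.

    Returns the H x W lat-long image plus a boolean mask of the pixels the
    patch covers, so the result can be merged over the original plate.
    """
    out_h, out_w = patch.shape[:2]
    f = (out_w / 2) / np.tan(np.radians(fov_deg) / 2)

    # Longitude/latitude of every lat-long pixel.
    uu, vv = np.meshgrid(np.arange(W), np.arange(H))
    lon = (uu / (W - 1) - 0.5) * 2 * np.pi
    lat = (0.5 - vv / (H - 1)) * np.pi

    # World-space ray per pixel, rotated back into the camera frame.
    dirs = np.stack([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)], axis=-1)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    cam = dirs @ (Ry @ Rx)  # inverse rotation (transpose), applied to rows

    # Project rays in front of the camera onto the patch's image plane.
    z = cam[..., 2]
    front = z > 1e-6
    safe_z = np.where(front, z, 1.0)
    j = np.where(front, f * cam[..., 0] / safe_z + out_w / 2, -1.0)
    i = np.where(front, out_h / 2 - f * cam[..., 1] / safe_z, -1.0)
    mask = front & (j >= 0) & (j < out_w) & (i >= 0) & (i < out_h)

    result = np.zeros((H, W) + patch.shape[2:], dtype=patch.dtype)
    result[mask] = patch[i[mask].astype(int), j[mask].astype(int)]
    return result, mask
```

The mask is the useful part for compositing: merge `result` over the original lat-long plate wherever `mask` is true, and the cleaned-up patch drops back into the 360-degree frame.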
“IT’S JUST A COMP”
For many artists, working in this new perspective can feel intimidating and frustrating. It definitely doesn’t have to be that way, though. As Clarke reminds his artists, “It’s just a normal comp when you work in this view.”
Using the process outlined above, what seems like an intimidating effect to newcomers becomes a straightforward task for most VFX artists.
Check out the full fight scene here, and make sure to head over to Jaunt’s website to see the entire series!