Virtual Production • Machine Learning Tooling

The Hollywood Reporter:

Kelly Marie Tran Editorial


“Kelly Marie Tran’s first recording session for Raya and the Last Dragon involved a leap of faith. As the titular warrior princess, she stood in a booth in Disney Animation's Burbank offices and performed the dialogue, an incantation meant to awaken a mythical creature that could help save the world.

The taping went smoothly, but right before the scene wrapped, Tran spoke up: "Hey, actually … would you mind if I tried something?"”

For its latest cover story, featuring Kelly Marie Tran, The Hollywood Reporter collaborated with Dyan Jong and Pyramid3 to produce a virtual cover shoot. The shoot coincided with the release of Disney’s “Raya and the Last Dragon”.

Released in print, on digital, Apple News and social.

The Concept

A key element of the production was to highlight Southeast Asian heritage and work with a team that could bring a hint of their own heritage into the shoot. With Cong Tri providing outfits, we set out to build a virtual space that blended past and future. Kelly shared references from her own trips to Vietnam, which made it into the set as abstracted architectural shapes and a terraced landscape. We used a photo-scan of the My Son Sanctuary in Vietnam - the dramatic ruins that were the religious and political capital of the Champa Kingdom - to allude to both an epic past and Disney's upcoming “Raya and the Last Dragon”.

The result is a blend of Asian futurism and fairy tale, with a nod to the major sci-fi and fantasy IPs Kelly has been a part of.

Above: My Son Sanctuary ruins rendered in Unreal Engine



The Photoshoot

Kelly came to THR’s office studio, where a green screen with tracking dots awaited her; all footage of her was shot in the studio. We chose this approach over volumetric capture or 3D modeling because of the project's short timeline - it allowed us to get all the footage we needed in just a few hours.


Machine Learning

Our timeline was short, and our goal was to iterate through as many options and outputs as possible. To that end, we used machine learning throughout the pipeline to process images and videos. We began by using segmentation tools to separate foreground from background, outputting a mask for every video within a few minutes.
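
As an illustration of this kind of pass, the sketch below pulls a person matte from green-screen footage with an off-the-shelf segmentation model. It is not the tooling used on this project: the pretrained DeepLabV3 model from torchvision, the clip name and the output directory are all assumptions standing in for whatever segmentation setup the pipeline actually ran.

```python
# Minimal sketch of a per-frame segmentation matte pass (assumed tooling, not
# the production pipeline). DeepLabV3 from torchvision stands in for whatever
# model was actually used; file names are placeholders.
import os

import cv2
import numpy as np
import torch
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet101

PERSON = 15  # "person" index in the Pascal VOC label set this model predicts

device = "cuda" if torch.cuda.is_available() else "cpu"
model = deeplabv3_resnet101(weights="DEFAULT").to(device).eval()

# Standard ImageNet normalization expected by the pretrained backbone.
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

clip_path = "greenscreen_take_01.mp4"   # placeholder input clip
mask_dir = "mattes/take_01"             # placeholder output folder
os.makedirs(mask_dir, exist_ok=True)

cap = cv2.VideoCapture(clip_path)
frame_idx = 0
with torch.no_grad():
    while True:
        ok, frame_bgr = cap.read()
        if not ok:
            break
        rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
        batch = preprocess(rgb).unsqueeze(0).to(device)
        labels = model(batch)["out"][0].argmax(0).cpu().numpy()
        # Binary matte: white where the model sees a person, black elsewhere.
        mask = np.where(labels == PERSON, 255, 0).astype(np.uint8)
        cv2.imwrite(os.path.join(mask_dir, f"matte_{frame_idx:05d}.png"), mask)
        frame_idx += 1
cap.release()
```

In practice a dedicated matting model and some edge refinement would sit on top of a raw mask like this before compositing, but the shape of the step - read a frame, run the model, write out a matte - stays the same.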