How WePlay Studios hosted the five-hour VTuber Awards, combining a fully virtual broadcast with physical production equipment and in-depth virtual production engineering and design.
Hosted by VTuber Filian in partnership with talent management company Mythic Talent, the five-hour event celebrated the most popular online virtual creators. Pulling it off required combining physical production equipment and services with extensive virtual production engineering and design.
“Storytelling and technological innovation are the driving force behind every show we do, and we pride ourselves on creating iconic content that leaves a lasting impression on the viewer; the VTuber Awards are no exception,” said Aleksii Gutyantov, Head of Virtual Production. “While we had already incorporated AR into live esports productions, this show marked our first foray into a fully virtual event; it’s the most ambitious production effort we’ve ever undertaken.”
To make the event a success, Gutyantov coordinated the Los Angeles production remotely from Europe, communicating via intercom with more than 16 crew members and orchestrating eight days of uninterrupted pre-production to deliver the broadcast. His team first used motion capture (mocap) technology to create a real-time representation of a fully virtual Filian for integration into the live production. They leveraged 20 witness cameras to capture full-body movement, added precise finger tracking, and combined the result with additional technology to stream facial capture data.
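The article doesn’t detail how the body, finger, and facial streams were merged into one live character, but the general pattern is to combine the freshest frame from each stream while discarding stale data. A minimal sketch in Python, with hypothetical names and structures rather than WePlay’s actual pipeline:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: merging separate mocap streams (body, fingers, face)
# into one per-frame pose for a real-time character. Names and the latency
# rule are assumptions for illustration only.

@dataclass
class PoseFrame:
    timestamp: float                             # capture time in seconds
    body: dict = field(default_factory=dict)     # joint name -> rotation
    fingers: dict = field(default_factory=dict)  # finger joint -> rotation
    face: dict = field(default_factory=dict)     # blendshape name -> weight

def merge_streams(body_frame: PoseFrame, finger_frame: PoseFrame,
                  face_frame: PoseFrame, latency_budget: float = 0.05) -> PoseFrame:
    """Combine the latest frame from each stream, dropping stale data."""
    newest = max(body_frame.timestamp, finger_frame.timestamp, face_frame.timestamp)
    merged = PoseFrame(timestamp=newest)
    # Only merge a stream if it is within the latency budget of the newest
    # frame; a real system would hold the previous value instead.
    if newest - body_frame.timestamp <= latency_budget:
        merged.body = body_frame.body
    if newest - finger_frame.timestamp <= latency_budget:
        merged.fingers = finger_frame.fingers
    if newest - face_frame.timestamp <= latency_budget:
        merged.face = face_frame.face
    return merged
```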
The event livestream featured a large virtual arena, but Filian’s character performed on a smaller stage, surrounded by a digitally reconstructed version of WePlay’s physical arena in Los Angeles. To ensure that every physical pan, tilt, and focus pull translated directly into the virtual rendering environment, WePlay Studios’ camera operators controlled three cameras synchronized with virtual counterparts. The operators on the physical set could also move iPads linked to virtual cameras between different positions in the virtual arena, creating the illusion of a dozen cameras instead of three.
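The core idea behind that synchronization is a one-to-one mapping: every tracked parameter of a physical camera drives the matching parameter of a virtual camera each frame. A simplified sketch, assuming hypothetical tracking data and ignoring the actual protocol and engine API used on the production:

```python
from dataclasses import dataclass

# Hypothetical sketch: applying tracked physical-camera parameters to a
# virtual camera each frame, so pans, tilts, and focus pulls on set drive
# the render. Illustrative only; not the production's actual integration.

@dataclass
class CameraState:
    position: tuple[float, float, float]  # metres, stage coordinates
    pan: float       # degrees
    tilt: float      # degrees
    focal_mm: float  # lens focal length
    focus_m: float   # focus distance

class VirtualCamera:
    def __init__(self, name: str):
        self.name = name
        self.state: CameraState | None = None

    def apply(self, tracked: CameraState) -> None:
        """Copy the physical camera's tracked state 1:1 into the virtual scene."""
        self.state = tracked

# One virtual camera per physical camera; the iPad-driven cameras would be
# additional VirtualCamera instances repositioned between shots.
virtual_cams = {f"cam{i}": VirtualCamera(f"cam{i}") for i in range(1, 4)}
virtual_cams["cam1"].apply(CameraState((0.0, 1.6, -4.0), pan=12.5, tilt=-3.0,
                                       focal_mm=35.0, focus_m=4.2))
```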
To make the production feel more authentic, WePlay Studios linked the lighting fixtures on the physical stage to corresponding virtual fixtures, allowing the team to drive the virtual arena’s lighting environment from a real-world lighting console. Live graphics software connected to the virtual venue fed the show graphics displayed on the virtual stage screens. AJA KONA 5 video I/O cards played a critical role in the 12G-SDI signal chain, and the final SDI output was fed into an AJA KUMO 3232-12G video router for distribution across the entire broadcast pipeline.
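Lighting consoles typically output DMX levels (one byte, 0 to 255, per channel), so mirroring the physical rig usually comes down to mapping console channels onto named virtual fixtures. A minimal sketch of that mapping, with assumed channel assignments rather than WePlay’s actual configuration:

```python
# Hypothetical sketch: mirroring lighting-console output into virtual
# fixtures. The channel map and fixture names below are assumptions
# for illustration, not the show's real lighting plot.

DMX_TO_VIRTUAL = {
    1: "arena_key_light",    # console channel -> virtual fixture name
    2: "arena_fill_light",
    3: "stage_back_light",
}

def dmx_to_intensity(value: int) -> float:
    """Convert an 8-bit DMX level to a normalized 0.0-1.0 intensity."""
    return max(0, min(value, 255)) / 255.0

def apply_dmx_frame(dmx_frame: dict[int, int], virtual_scene: dict[str, float]) -> None:
    """Push each mapped console channel into the corresponding virtual fixture."""
    for channel, value in dmx_frame.items():
        fixture = DMX_TO_VIRTUAL.get(channel)
        if fixture is not None:
            virtual_scene[fixture] = dmx_to_intensity(value)

scene: dict[str, float] = {}
apply_dmx_frame({1: 255, 2: 128, 3: 0}, scene)
print(scene)  # key light full, fill light at roughly half, back light off
```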
“Our KONA 5 cards have been instrumental in enabling us to take in 12G-SDI signals, integrate them into an Unreal Engine 5 environment, and deliver the final SDI output; it’s the best product on the market for this,” Gutyantov explained. “Meanwhile, our KUMO routers allow us to build an infrastructure for massive remote and on-site productions like this and manage everything from a single, convenient web interface, thousands of miles away. We also like being able to save pre-programmed routing configurations for SDI signals, and we never have to worry about them dropping; I’ve been working with them since 2017 on other projects and they’ve never let me down.”
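A saved routing configuration is essentially a crosspoint map from router destinations to sources that can be persisted and recalled before a show. The sketch below illustrates that idea generically; it does not reproduce the actual AJA KUMO control interface:

```python
import json
from pathlib import Path

# Hypothetical sketch of the "saved routing configuration" idea: a router
# crosspoint map (destination -> source) persisted to disk and recalled
# before a show. Generic illustration only; not the KUMO API.

def save_preset(path: Path, crosspoints: dict[int, int]) -> None:
    """Persist a destination->source crosspoint map as JSON."""
    path.write_text(json.dumps(crosspoints, indent=2))

def load_preset(path: Path) -> dict[int, int]:
    """Recall a previously saved crosspoint map."""
    return {int(dst): src for dst, src in json.loads(path.read_text()).items()}

# Route program (source 1) to destinations 1-4 and a clean feed (source 2) to 5.
show_preset = {1: 1, 2: 1, 3: 1, 4: 1, 5: 2}
save_preset(Path("vtuber_awards_routing.json"), show_preset)
assert load_preset(Path("vtuber_awards_routing.json")) == show_preset
```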
KONA 5 enabled the team at WePlay Studios to harness the power of Unreal Engine and build a complete virtual production hub capable of handling 12G-SDI workflows. This allowed them to fully exploit the potential of AR technology, from camera tracking to motion capture and data-driven graphics, while ensuring flawless live virtual production streams without compositing issues. It also allowed them to produce picture-in-picture and UltraHD fill signals from a single card in all common formats, using Pixotope for 4K compositing, with familiar failover to FHD workflows.
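The failover behaviour described amounts to a simple policy: run the UHD pipeline by default and drop to the FHD pipeline when the primary path reports trouble. A sketch of that rule, with health checks and thresholds assumed for illustration:

```python
from enum import Enum

# Hypothetical sketch of a 4K-to-FHD failover rule. The actual health
# checks and thresholds on the production are not documented here.

class Pipeline(Enum):
    UHD_4K = ("3840x2160", 50)  # resolution, frame rate
    FHD = ("1920x1080", 50)

def select_pipeline(primary_healthy: bool, dropped_frames_per_min: int,
                    max_drops: int = 5) -> Pipeline:
    """Stay on 4K while the primary path is healthy; otherwise fail over to FHD."""
    if primary_healthy and dropped_frames_per_min <= max_drops:
        return Pipeline.UHD_4K
    return Pipeline.FHD

assert select_pipeline(True, 0) is Pipeline.UHD_4K
assert select_pipeline(False, 0) is Pipeline.FHD
assert select_pipeline(True, 12) is Pipeline.FHD
```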
“The KONA 5 user interface is simple enough to understand and control, even under the pressures of live production, and we like being able to preview last-minute adjustments in real time. It also offers up to four reconfigurable I/O channels, from SD to 4K, plus AES/EBU, LTC, and RS-422/GPI, which is a must for converting video from interlaced to progressive formats when we broadcast in regions like Saudi Arabia or China,” adds Gutyantov. “KONA 5 really helps speed up operations on projects like this, which demand a lot of computing power for motion-adaptive deinterlacing. On top of that, the card’s multi-channel hardware processing accelerates compute-intensive operations so that we can mix multiple video assets into a single output in Unreal Engine 5, up/down/cross-convert, and mix and composite across all resolutions. These processes are essential for handling video content of any resolution and ensuring the end result meets streaming quality standards.”
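Motion-adaptive deinterlacing, the operation the card accelerates in hardware, works per pixel: where the scene is static it keeps the woven field value for full vertical detail, and where motion is detected it interpolates between scanlines to avoid combing artifacts. A toy single-pixel sketch of that decision; real implementations are far more sophisticated:

```python
# Hypothetical sketch of motion-adaptive deinterlacing at the level of a
# single pixel: weave (sharp) for static content, bob (safe) for motion.

def deinterlace_pixel(prev_frame: float, field_above: float, field_below: float,
                      woven: float, motion_threshold: float = 0.1) -> float:
    """Choose weave for static pixels, line interpolation for moving ones."""
    motion = abs(woven - prev_frame)        # crude temporal motion measure
    if motion < motion_threshold:
        return woven                         # static: keep full vertical detail
    return (field_above + field_below) / 2   # moving: interpolate to avoid combing

# A static pixel keeps the woven value; a moving pixel gets line interpolation.
assert deinterlace_pixel(0.50, 0.40, 0.60, 0.52) == 0.52
assert deinterlace_pixel(0.10, 0.40, 0.60, 0.90) == 0.50
```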
Given the unique setup of WePlay Studios’ Los Angeles facility, the team designed a signal infrastructure that included a series of Mini-Converters to downconvert the 12G-SDI signal and feed 3G-SDI signals to their AJA KUMO video router. Using AJA HD5DA SDI distribution amplifiers, the team distributed preview signals to all the monitors in the arena for easier monitoring. The setup, which also relied on reliable routing configurations for SDI signals regardless of the data source, gave clients, talent, camera operators, the motion capture crew, and the entire production team an accurate view of the production at any given time. AJA ROI-DP DisplayPort-to-SDI Mini-Converters proved to be a key component of the design, allowing the team to mirror PC monitors into the streaming pipeline and handle region-of-interest scaling.
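Region-of-interest (ROI) scan conversion, the job those converters perform, crops a window of a computer display and scales it to fill a broadcast raster. A geometry-only sketch of that calculation, with example numbers assumed for illustration:

```python
# Hypothetical sketch of region-of-interest (ROI) scaling: map a cropped
# window of a desktop onto a broadcast raster. Geometry only; no real
# video I/O is involved here.

def roi_to_raster(roi_x: int, roi_y: int, roi_w: int, roi_h: int,
                  out_w: int = 1920, out_h: int = 1080) -> tuple[float, float]:
    """Return per-axis scale factors mapping an ROI crop onto the output raster."""
    if roi_w <= 0 or roi_h <= 0:
        raise ValueError("ROI must have positive size")
    return out_w / roi_w, out_h / roi_h

# Crop a 1280x720 window from a desktop and upscale it to fill 1080p.
sx, sy = roi_to_raster(320, 180, 1280, 720)
print(f"scale x{sx:.2f}, x{sy:.2f}")  # scale x1.50, x1.50
```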
Virtual production is an engaging, ever-expanding field. WePlay Studios plans to open a new virtual production studio in Los Angeles this year, featuring more than 2,500 square feet of LED screen space with a 1.8mm pixel pitch and the first Pantone-certified LED color pipeline, built on advanced LED chip technology. The studio will also let the team apply its expertise to new areas, such as film and entertainment projects beyond gaming and esports.
According to Gutyantov, this level of interactivity opens up exciting new possibilities for live entertainment, blurring the lines between the audience and the virtual worlds the team creates. “WePlay doesn’t just stay within the confines of the gaming industry; we’re branching out into music and broader forms of entertainment. We’re currently in the early stages of planning and discussing projects that explore those new frontiers.”
*Taken from Rice Digital’s official guide to VTuber jargon.
Literally, kusa means grass, but it’s used to express laughter. There’s a rather convoluted linguistic path to get to this point, which goes like this: in Japan, many Internet users don’t write “lol” as they do in English-speaking territories; they use the letter “w,” which stands for “warai” (笑い, literally “laughter”). Intense laughter is expressed through a long string of letters – “wwwwwwwww” – which resembles a row of grass. Hence, intense laughter is abbreviated as “kusa” (草, “grass”).
Written by RedShark News Team
© 2020 RedShark. All rights reserved.