How Artificial Intelligence is Reinventing Visual Effects
Artificial intelligence (AI) is a wide-ranging tool that improves the way we collect information, analyze data, and use the resulting insights to make better decisions. Applications of AI include speech recognition, expert systems, and machine vision. In casino gaming, AI is used to improve the overall gaming experience; in healthcare, it helps uncover links between genetic codes, powers surgical robots, and more.
AI is having a huge impact on computer graphics research with the potential to transform VFX production.
AI has automated many repetitive tasks
In Marvel’s Avengers: Endgame, Josh Brolin’s performance was flawlessly rendered into the 9-ft Thanos by a team of animators at Digital Domain, who used AI and ML tools to automate parts of the process. This demonstrates that AI/ML can not only transform VFX creation for movies but also make sophisticated VFX techniques more accessible.
In recent years, 3D animations and simulations have reached a fidelity, in terms of art direction, that appears near-perfect to audiences. Today there are very few effects that AI cannot help create, though challenges remain, such as crossing the uncanny valley for photorealistic faces.
Over the past few years, the VFX industry has placed a major emphasis on creating more effective, efficient, and flexible pipelines in order to meet the requirements of VFX film production.
For a while, most repetitive and arduous tasks like compositing, rotoscoping, and animation were outsourced to foreign studios. But with recent advancements in AI, many of these tasks can now be fully automated and performed far faster.
Manual to automatic
Matchmoving, for example, is a technique that allows computer graphics to be inserted into live-action footage with the correct position, orientation, scale, and motion relative to the photographed objects in the shot. It can be a frustrating process: tracking camera placement within a scene is typically done manually and can consume more than 5% of the total time spent on the entire pipeline.
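To make the idea concrete, here is a minimal sketch (not Foundry's method, and with a purely synthetic scene and illustrative names) of the two-view geometry at the heart of camera tracking: given 2D points tracked across two frames, the classic eight-point algorithm recovers the relative camera rotation and translation direction.

```python
# Sketch of camera-motion recovery from tracked 2D points, the core
# geometric problem in matchmoving. Synthetic, noise-free data.
import numpy as np

rng = np.random.default_rng(1)

# --- Synthetic scene: 3D points and a known second-camera pose ----------
X = rng.uniform([-1, -1, 4], [1, 1, 8], size=(40, 3))
a = 0.1                                 # small rotation about the Y axis
R_true = np.array([[np.cos(a), 0, np.sin(a)],
                   [0, 1, 0],
                   [-np.sin(a), 0, np.cos(a)]])
t_true = np.array([0.5, 0.1, 0.0])

def project(P, R, t):
    """Project world points into a camera with pose (R, t); returns (u, v, 1)."""
    Xc = P @ R.T + t
    return Xc / Xc[:, 2:3]

x1 = project(X, np.eye(3), np.zeros(3))   # frame 1: camera at the origin
x2 = project(X, R_true, t_true)           # frame 2: unknown pose to recover

# --- Eight-point algorithm: solve x2^T E x1 = 0 for the essential matrix E
A = np.stack([np.kron(p2, p1) for p1, p2 in zip(x1, x2)])
_, _, Vt = np.linalg.svd(A)
E = Vt[-1].reshape(3, 3)
U, _, Vt = np.linalg.svd(E)
E = U @ np.diag([1.0, 1.0, 0.0]) @ Vt     # enforce rank-2 structure

# --- Decompose E into the four candidate (R, t) pairs -------------------
W = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
cands = []
for R in (U @ W @ Vt, U @ W.T @ Vt):
    R = R if np.linalg.det(R) > 0 else -R
    for t in (U[:, 2], -U[:, 2]):
        cands.append((R, t))

def depths(R, t, p1, p2):
    """Triangulate one correspondence; return its depth in both cameras."""
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([R, t.reshape(3, 1)])
    M = np.stack([p1[0] * P1[2] - P1[0], p1[1] * P1[2] - P1[1],
                  p2[0] * P2[2] - P2[0], p2[1] * P2[2] - P2[1]])
    _, _, V = np.linalg.svd(M)
    Xh = V[-1] / V[-1, 3]
    return Xh[2], (R @ Xh[:3] + t)[2]

# Keep the candidate that places the scene in front of both cameras.
R_est, t_est = next((R, t) for R, t in cands
                    if min(depths(R, t, x1[0], x2[0])) > 0)

print(np.allclose(R_est, R_true, atol=1e-6))                            # True
print(np.allclose(t_est, t_true / np.linalg.norm(t_true), atol=1e-6))   # True
```

With real footage the tracked points come from a feature tracker and carry noise, so production tools layer robust estimation (e.g. RANSAC) and bundle adjustment on top of this core geometry.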
Recently, software developer Foundry devised a new method that uses algorithms and camera metadata to track camera placement accurately. This has improved the matchmoving process by 20%, and the company proved the concept by training the algorithm on data from DNEG.
Rotoscoping, another labour-intensive task, is being tackled by Kognat’s Rotobot. Using its AI, the company claims a frame can be processed in 5–20 seconds. The accuracy of the work is limited by the quality of the deep learning model behind Rotobot, but it should improve dramatically as the model is fed new data.
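Once a roto tool has produced a matte, it feeds standard compositing downstream. A minimal sketch of the Porter–Duff "over" operation that blends a foreground element onto a background plate using such a matte (the tiny 2x2 arrays are illustrative stand-ins for real frames):

```python
# Sketch of alpha compositing: a rotoscoped matte drives a per-pixel
# blend of a foreground element over a background plate.
import numpy as np

def composite_over(fg, bg, alpha):
    """Blend fg over bg using a per-pixel matte with values in [0, 1]."""
    alpha = alpha[..., None]           # broadcast the matte over RGB channels
    return fg * alpha + bg * (1.0 - alpha)

fg = np.full((2, 2, 3), 1.0)           # white foreground element
bg = np.zeros((2, 2, 3))               # black background plate
matte = np.array([[1.0, 0.5],
                  [0.0, 0.25]])        # e.g. the output of an AI roto tool

out = composite_over(fg, bg, matte)
print(out[0, 0])   # fully foreground pixel -> [1. 1. 1.]
print(out[1, 0])   # fully background pixel -> [0. 0. 0.]
```

Because the foreground here is pure white and the background pure black, each output channel simply equals the matte, which makes the blend easy to inspect.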
AI is also transforming motion capture, another traditionally expensive exercise requiring specialized hardware, suits, trackers, controlled environments, and a team of experts to make it all work.
RADiCAL is building an AI-driven motion capture solution with no physical markers or suits at all. It aims to make the process as easy as recording a video, even on a mobile device, and uploading it to the cloud, where the firm’s AI sends back motion capture animation of the recorded movements.
Digital Domain – Realistic Digital Human for Virtual Production
Using artificial intelligence, deep learning, and Unreal Engine, Digital Domain is on a quest to achieve real-time digital facial performance for virtual production.
Its senior director of software R&D, Doug Roble, has demonstrated “DigiDoug”: a real-time, 3D digital rendering of his likeness that is accurate down to the scale of pores and wrinkles. Powered by an inertial motion capture suit, deep neural networks, and enormous amounts of data, DigiDoug renders the real Doug’s emotions (and even how his blood flows and his eyelashes move) in striking detail, with applications in movies, virtual assistants, and beyond.
How Ziva Dynamics uses AI to enhance CGI visual effects (VFX)