Adobe has launched a new project called “Sweet Talk” that brings portraits to life. Project Sweet Talk analyzes a voice-over and then uses its AI smarts to realistically animate the character’s mouth and head to match it.
The team, led by Adobe researcher Dingzeyu Li, together with Yang Zhou (University of Massachusetts, Amherst) and Jose Echevarria and Eli Shechtman (Adobe Research), trained their model on thousands of hours of YouTube video of real people talking to the camera.
Surprisingly, the model transferred really well to drawings and paintings, even though the faces the team worked with, including pretty basic drawings of animal faces, don’t really look like human faces.
Moreover, the team smartly warps the faces automatically to make them look more realistic, all starting from a basic JPG image.
Project Sweet Talk doesn’t work all that well on photos; the results just wouldn’t look right. That also means there’s no need to worry about anybody abusing the project for deepfakes.