Researchers at the Samsung AI Center in Moscow have turned the Mona Lisa and other famous subjects of photos and paintings into realistic talking heads. Their "Few-Shot Adversarial Learning" technique is first trained on thousands of talking-head videos, learning how facial appearance relates to motion. When a new image is presented, the system locates the matching facial landmarks and uses them to drive the animation. Other examples include Albert Einstein and Salvador Dalí. The results are spooky, to say the least.
We present a system for learning full-body neural avatars, i.e., deep networks that produce full-body renderings of a person under varying body pose and camera position. Our system takes a middle path between the classical graphics pipeline and recent deep learning approaches that generate images of humans using image-to-image translation.
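The image-to-image translation idea underlying both systems can be sketched in a few lines: a rasterized landmark or pose map goes in, an RGB rendering comes out. The tiny PyTorch encoder-decoder below is purely illustrative (the class name, layer sizes, and 64x64 resolution are assumptions, not the authors' architecture); the actual papers use much deeper networks, adversarial training, and, for the full-body avatar, explicit texture warping.

```python
import torch
import torch.nn as nn

class Pose2ImageGenerator(nn.Module):
    """Minimal pix2pix-style encoder-decoder mapping a pose/landmark
    map to an RGB image. Illustrative sketch only, not the published
    architecture."""
    def __init__(self, in_ch=3, feat=16):
        super().__init__()
        self.net = nn.Sequential(
            # Encoder: halve spatial resolution twice
            nn.Conv2d(in_ch, feat, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat * 2, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            # Decoder: upsample back to the input resolution
            nn.ConvTranspose2d(feat * 2, feat, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(feat, 3, 4, stride=2, padding=1),
            nn.Tanh(),  # RGB values in [-1, 1]
        )

    def forward(self, pose_map):
        return self.net(pose_map)

gen = Pose2ImageGenerator()
pose = torch.randn(1, 3, 64, 64)  # stand-in for a rendered landmark map
rgb = gen(pose)
print(tuple(rgb.shape))  # (1, 3, 64, 64)
```

In the real systems, the conditioning input is a rendered stick figure or landmark raster rather than noise, and a discriminator network pushes the generated frames toward photorealism.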