Neural network learns to transfer one person's facial expressions onto another's face

Research groups often experiment with video content using neural networks. Take NVIDIA, which in late 2017 trained a neural network to change the weather and time of day in video. Another project of this kind comes from researchers at Carnegie Mellon University, who built a neural network that imposes the facial expressions of one person onto the face of another.

The project builds on the DeepFakes face-swapping technology, which relies on generative adversarial learning. In this framework, a generative model tries to deceive a discriminative model and vice versa; through this contest the system learns how content can be transformed into a different style.
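The adversarial setup described above can be sketched with the standard minimax objective. This is a minimal illustration of the two competing losses, not the project's actual implementation; the scores below are made-up discriminator outputs.

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    # The discriminator wants real samples scored near 1
    # and generated (fake) samples scored near 0.
    return -np.mean(np.log(d_real) + np.log(1.0 - d_fake))

def generator_loss(d_fake):
    # The generator wants the discriminator to score its
    # fakes as real, i.e. push d_fake toward 1.
    return -np.mean(np.log(d_fake))

# Toy scores: the discriminator is confident on real data,
# undecided on the generator's fakes.
d_real = np.array([0.9, 0.8])
d_fake = np.array([0.4, 0.5])
print(discriminator_loss(d_real, d_fake))
print(generator_loss(d_fake))
```

Training alternates between the two: the discriminator's loss falls as it tells real from fake, while the generator's loss falls as its fakes fool the discriminator, which is the "deception" described above.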

The Cycle-GAN algorithm for transferring properties to another object is not ideal and leaves artifacts in the image. To improve quality, the researchers used an improved version, Recycle-GAN. It takes into account not only the position of different parts of the face but also the speed of their movement.
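Recycle-GAN's temporal term can be sketched as follows: a frame is mapped into the target domain, a predictor advances it one step in time there, and the result is mapped back and compared with the true next frame. The function names (`G_xy`, `G_yx`, `P_y`) are hypothetical stand-ins for the learned networks, and identity functions are used below only to show the plumbing.

```python
import numpy as np

def recycle_loss(x_t, x_next, G_xy, G_yx, P_y):
    """Temporal "recycle" consistency, sketched.

    G_xy, G_yx: hypothetical cross-domain mappings (X->Y, Y->X).
    P_y: hypothetical temporal predictor of the next frame in Y.
    """
    y_t = G_xy(x_t)              # map frame into domain Y
    y_next_pred = P_y(y_t)       # predict the next Y-frame
    x_next_rec = G_yx(y_next_pred)  # map the prediction back to X
    # penalize deviation from the true next frame in X
    return np.mean((x_next - x_next_rec) ** 2)

# Identity stand-ins for the networks, just to run the sketch.
ident = lambda a: a
x_t = np.ones(4)
x_next = np.ones(4)
print(recycle_loss(x_t, x_next, ident, ident, ident))
```

Because the loss spans two time steps, minimizing it forces the mappings to respect not just where facial features are but how fast they move, which is the improvement over plain Cycle-GAN noted above.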

The neural network successfully transferred the facial expressions of TV host Stephen Colbert onto the face of comedian John Oliver. It also transferred the blooming process of a daffodil to a hibiscus.

The researchers believe the technology could be used in filmmaking, speeding up production and reducing costs. The ability of neural networks to change the weather in video could also simplify training self-driving cars to operate in different weather conditions.
