AI passes an “acoustic” Turing test

Artificial intelligence has learned to synthesize noises that people cannot distinguish from natural ones. Scientists from the artificial intelligence lab at MIT have developed an algorithm that can add sound effects to silent videos. To train the AI, the researchers showed it around 1,000 videos containing 46,000 different noises, all produced by striking objects with a drumstick.

“To add sound to a clip, the algorithm analyzes the audio characteristics of the original fragment and compares them with samples stored in a database. When a matching noise is found, the system inserts it into the video’s audio track, carefully ‘stitching’ it together with the neighboring sounds,” explained graduate student Andrew Owens, one of the authors.
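
The matching-and-stitching step Owens describes can be sketched roughly as follows. This is a minimal illustration with made-up feature vectors and toy waveforms; the function names `match_sound` and `crossfade` are hypothetical, and the actual MIT system predicts sound features with a neural network rather than comparing raw vectors:

```python
import numpy as np

def match_sound(query_features, database):
    """Return the index of the database sample whose feature vector
    is closest (Euclidean distance) to the query features."""
    dists = [np.linalg.norm(query_features - feats) for feats, _ in database]
    return int(np.argmin(dists))

def crossfade(a, b, overlap):
    """'Stitch' two waveforms together by linearly crossfading
    `overlap` samples at the seam."""
    fade = np.linspace(0.0, 1.0, overlap)
    mixed = a[-overlap:] * (1.0 - fade) + b[:overlap] * fade
    return np.concatenate([a[:-overlap], mixed, b[overlap:]])

# Toy database of (feature vector, waveform) pairs.
database = [
    (np.array([1.0, 0.0]), np.ones(100) * 0.5),   # e.g. a "thump"
    (np.array([0.0, 1.0]), np.ones(100) * -0.5),  # e.g. a "creak"
]

# Retrieve the closest-sounding sample, then stitch it to a neighbor.
idx = match_sound(np.array([0.9, 0.1]), database)
track = crossfade(database[idx][1], database[1][1], overlap=20)
```

The crossfade is what keeps the inserted noise from clicking audibly at the boundary with the surrounding audio.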

When the videos dubbed by the AI were shown to volunteers, in most cases they could not spot the fake.

The artificial intelligence overwrites the original noises in a video, replacing them with the thumps and creaks of drumsticks. According to Owens and his colleagues, the algorithm could be used to create sound effects for films. The research, however, has a more fundamental significance: the scientists believe they have developed a technique that will allow robots to engage more effectively with the outside world.

“When you tap a finger on a glass, the sound lets us understand how much liquid it contains. An AI that learns to reproduce sounds at the same time gains an understanding of the shape and material properties of objects,” said Owens.
