Plenty of devices can record events on camera, but what if it were possible to record on video what has yet to happen? Researchers at MIT CSAIL have created a deep learning algorithm capable of generating video that shows what it expects to happen next. After extensive training (on two million videos), the AI system began generating frames by pitting two neural networks against each other. One creates a scene, determining which objects in a given frame are moving at that moment. The other acts as a quality check: it judges whether a video is real or simulated, and an artificial video counts as a success when the checking AI is fooled into believing the frames are authentic.
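The adversarial setup described above (a generator trying to fool a discriminator) can be sketched as a toy generative adversarial network. Everything here is an illustrative assumption, not the researchers' actual video model: the "data" is a 1-D Gaussian instead of video frames, the generator is a linear function, and the discriminator is a simple logistic classifier. The training loop still shows the core idea — the discriminator learns to tell real from fake, and the generator is updated to make the discriminator's job harder.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_toy_gan(steps=4000, batch=64, lr=0.05, seed=0):
    """Toy 1-D GAN: 'real' samples come from N(4, 1); the generator
    G(z) = a*z + b must learn to produce samples the discriminator
    D(x) = sigmoid(w*x + w0) cannot distinguish from real ones.
    All parameter names and values are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    a, b = 1.0, 0.0    # generator parameters
    w, w0 = 0.0, 0.0   # discriminator parameters

    for _ in range(steps):
        z = rng.standard_normal(batch)
        real = 4.0 + rng.standard_normal(batch)
        fake = a * z + b

        # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
        d_real = sigmoid(w * real + w0)
        d_fake = sigmoid(w * fake + w0)
        grad_w = np.mean((d_real - 1.0) * real + d_fake * fake)
        grad_w0 = np.mean((d_real - 1.0) + d_fake)
        w -= lr * grad_w
        w0 -= lr * grad_w0

        # Generator step (non-saturating loss): push D(fake) toward 1,
        # i.e. try to fool the freshly updated discriminator.
        d_fake = sigmoid(w * (a * z + b) + w0)
        grad_a = np.mean((d_fake - 1.0) * w * z)
        grad_b = np.mean((d_fake - 1.0) * w)
        a -= lr * grad_a
        b -= lr * grad_b

    return a, b
```

After training, the generator's offset `b` should drift toward the real data's mean of 4, because that is the only way to keep fooling a discriminator that scores samples by their value. The paper's system applies the same adversarial principle with deep convolutional networks over video frames rather than scalars.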
However, the technology has limitations. It cannot predict video more than about one and a half seconds into the future, and the results are not strikingly realistic. Still, it is already good enough to predict, for example, the motion of waves on a beach or of people walking across grass.
But if the researchers manage to make the forecasts more realistic and longer, the technology could have far-reaching consequences. Self-driving cars could anticipate where and how other vehicles and pedestrians will move, and surveillance cameras could flag footage that deviates from what they expect to see. The technology could also help with everyday tasks such as animating still images or compressing video (since not every frame would need to be stored). And regardless of the application, predicting the future can help an AI understand what is happening right now, which would be useful wherever machine image recognition matters.