"Any TV these days is capable of 3D. There's just no content. So we see that the production of high-quality content is the main thing that should happen," says Wojciech Matusik, associate professor of electrical engineering and computer science at MIT.

Today's video games generally store very detailed 3D maps of the virtual environment that the player is navigating. When the player initiates a move, the game adjusts the map accordingly and, on the fly, generates a 2D projection of the 3D scene that corresponds to a particular viewing angle.
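The core rendering step the article describes, turning a 3D scene into a 2D image for a chosen viewing angle, is a standard perspective projection. The sketch below is illustrative only, not the game's actual renderer: it rotates a 3D point by a camera yaw angle, then applies a simple pinhole projection. The function name and parameters (`cam_z`, `focal`) are assumptions for the example.

```python
import math

def project_point(p, yaw_deg, cam_z=10.0, focal=500.0):
    """Project a 3D point (x, y, z) to 2D screen coordinates for a
    camera rotated yaw_deg about the vertical axis.

    Illustrative pinhole-camera sketch, not the game engine's code.
    """
    x, y, z = p
    a = math.radians(yaw_deg)
    # Rotate about the vertical (y) axis to simulate a change of viewing angle.
    xr = x * math.cos(a) + z * math.sin(a)
    zr = -x * math.sin(a) + z * math.cos(a)
    # Push the scene cam_z units in front of the camera, then divide by depth:
    # farther points land closer to the image center (perspective foreshortening).
    depth = zr + cam_z
    return (focal * xr / depth, focal * y / depth)
```

Running this over every vertex in the scene, for whatever angle the player chooses, is essentially what the game does on the fly for each frame.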

The MIT and QCRI researchers essentially ran this process in reverse. They set the very realistic soccer game 'FIFA13' to play over and over again and used Microsoft's video-game analysis tool PIX to continuously capture screenshots of the action.

For each screenshot, they also extracted the corresponding 3D map. Using a standard algorithm, they ruled out most of the screenshots, keeping just those that best captured the range of possible viewing angles and player configurations. Then they stored each screenshot and the associated 3D map in a database.
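The article does not name the subset-selection algorithm, but greedy farthest-point sampling is one standard way to keep the frames that best cover a space of viewing angles and player configurations. The sketch below assumes each screenshot has already been reduced to a numeric feature vector; the function name and that representation are assumptions for illustration.

```python
def select_representatives(features, k):
    """Greedy farthest-point sampling over per-frame feature vectors.

    Keeps k frames that spread out over the feature space, discarding
    near-duplicates. A standard coverage heuristic, shown here as one
    plausible choice; the researchers' exact algorithm is not specified.
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    chosen = [0]  # seed with the first frame
    while len(chosen) < k:
        # Add the frame farthest from everything already chosen.
        best = max(
            (i for i in range(len(features)) if i not in chosen),
            key=lambda i: min(dist(features[i], features[j]) for j in chosen),
        )
        chosen.append(best)
    return chosen
```

At conversion time, an incoming broadcast frame can then be matched against this pruned database (for example, by nearest-neighbor search over the same features) to retrieve a 3D map for a visually similar game frame.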

"We are developing a conversion pipeline for a specific sport. We would like to do it at broadcast quality, and we would like to do it in real-time. What we have noticed is that we can leverage video games," he explained.

The researchers presented the new system at the Association for Computing Machinery's Multimedia conference in Brisbane, Australia, last week.