The systems can also identify the various components of a road scene in real time using a regular camera or smartphone, performing the same job as sensors costing millions. Although the systems cannot currently control a driverless car, the ability to make a machine 'see' and accurately identify where it is and what it is looking at is a vital part of developing autonomous vehicles and robotics.

"Vision is our most powerful sense and driverless cars will also need to see but teaching a machine to see is far more difficult than it sounds," said professor Roberto Cipolla from University of Cambridge who led the research.

The first system, called SegNet, can take an image of a street scene it hasn't seen before and classify it, sorting objects into 12 different categories - such as roads, street signs, pedestrians, buildings and cyclists - in real time.

It can deal with light, shadow and night-time environments, and currently labels more than 90 percent of pixels correctly.

"Users can visit the SegNet website and upload an image or search for any city or town in the world, and the system will label all the components of the road scene. The system has been successfully tested on both city roads and motorways," the authors noted.

"It is remarkably good at recognising things in an image because it has had so much practice," added Alex Kendall, a PhD student involved in the work.
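To make the idea of pixel-wise classification concrete, the sketch below models only the final labelling step: each pixel carries one score per category, and the pixel is assigned the category with the highest score. This is an illustrative toy, not SegNet's actual architecture (which is a trained deep neural network); the class names and scores here are hypothetical stand-ins.

```python
# Hypothetical label set modelled on the 12 categories mentioned above.
CLASSES = ["road", "building", "street sign", "pedestrian", "cyclist",
           "pavement", "tree", "sky", "car", "fence", "pole", "other"]

def segment(score_map):
    """Assign each pixel the index of its highest-scoring class.

    `score_map` is a height x width grid where each cell holds one
    score per class -- in a real system these scores would come from
    a trained network; only the final labelling step is modelled here.
    """
    return [[max(range(len(scores)), key=scores.__getitem__)
             for scores in row]
            for row in score_map]

def pixel_accuracy(pred, truth):
    """Fraction of pixels labelled correctly (the >90% figure above)."""
    total = sum(len(row) for row in pred)
    correct = sum(p == t
                  for prow, trow in zip(pred, truth)
                  for p, t in zip(prow, trow))
    return correct / total

# Toy 1x3 "image": each pixel has 12 class scores, one clear winner each.
scores = [[[0.0] * 12, [0.0] * 12, [0.0] * 12]]
scores[0][0][0] = 1.0   # strongest score at class 0 ("road")
scores[0][1][3] = 1.0   # class 3 ("pedestrian")
scores[0][2][1] = 1.0   # class 1 ("building")
labels = segment(scores)
print([CLASSES[i] for i in labels[0]])  # ['road', 'pedestrian', 'building']
```

Running the full system over every pixel of a street image produces a dense label map of exactly this form, which is what the SegNet website visualises.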