Based on preliminary work on assistive technologies carried out at the Lincoln Centre for Autonomous Systems, the team plans to use the colour and depth sensors built into newer smartphones and tablets to enable 3D mapping, localisation, navigation and object recognition.
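
The article does not detail the team's pipeline, but a standard first step in building 3D maps from colour-and-depth data is back-projecting each depth frame into a point cloud with the pinhole camera model. The sketch below is a generic illustration only; the function name and the intrinsics in the example are assumptions, not values from the project.

```python
import numpy as np

def depth_to_points(depth: np.ndarray, fx: float, fy: float,
                    cx: float, cy: float) -> np.ndarray:
    """Back-project a depth image (in metres) into an (N, 3) point cloud
    using the standard pinhole camera model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # per-pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# Example with made-up intrinsics typical of consumer depth sensors:
depth = np.full((480, 640), 2.0)  # synthetic frame: a flat surface 2 m away
cloud = depth_to_points(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
```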
    
The team will then develop the best interface to relay that information to users, whether through vibrations, sounds or the spoken word.
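
One way to keep that choice of modality open, shown here purely as a hypothetical sketch rather than the project's design, is to hide each output channel behind a common interface so the recognition logic never depends on how feedback is delivered. All names below are illustrative.

```python
from abc import ABC, abstractmethod

class FeedbackChannel(ABC):
    """Hypothetical common interface for interchangeable output modalities."""

    @abstractmethod
    def announce(self, message: str) -> None:
        ...

class SpeechChannel(FeedbackChannel):
    def announce(self, message: str) -> None:
        print(f"[speech] {message}")  # stand-in for a text-to-speech call

class VibrationChannel(FeedbackChannel):
    def announce(self, message: str) -> None:
        print(f"[vibrate] pattern for: {message}")  # stand-in for a haptics API

def notify(channel: FeedbackChannel, room: str) -> None:
    """Recognition logic stays modality-agnostic: it only sees the interface."""
    channel.announce(f"You appear to be entering the {room}.")
```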
    
The research team, which includes Dr Oscar Martinez Mozos, a specialist in machine learning and quality-of-life technologies, and Dr Grzegorz Cielniak, who works in mobile robotics and machine perception, aims to develop a system that will recognise visual cues in the environment.
    
These cues would be detected through the device's camera and used to identify the type of room as the user moves through the space.
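
As a rough illustration of that idea (the team's actual model is not described here), room recognition is commonly framed as classifying features extracted from each colour-and-depth frame. The feature extractor, label set and nearest-centroid model below are all assumptions made for the sketch.

```python
import numpy as np

ROOM_TYPES = ["kitchen", "bathroom", "bedroom", "corridor"]  # assumed label set

def extract_features(rgb: np.ndarray, depth: np.ndarray) -> np.ndarray:
    """Hypothetical feature extractor: a coarse colour histogram plus
    simple depth statistics for one frame."""
    hist, _ = np.histogram(rgb, bins=8, range=(0, 255), density=True)
    depth_stats = np.array([depth.mean(), depth.std(), np.median(depth)])
    return np.concatenate([hist, depth_stats])

class NearestCentroidRoomClassifier:
    """Toy stand-in for a learned model: label a frame by the nearest
    class centroid in feature space."""

    def __init__(self):
        self.centroids = {}  # room type -> mean feature vector

    def fit(self, features, labels):
        for room in set(labels):
            rows = [f for f, lab in zip(features, labels) if lab == room]
            self.centroids[room] = np.mean(rows, axis=0)

    def predict(self, feature):
        return min(self.centroids,
                   key=lambda room: np.linalg.norm(feature - self.centroids[room]))
```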
    
A key aspect of the system will be its capacity to adapt to individual users' experiences, modifying the guidance it provides as the machine 'learns' from its surroundings and from human interaction.
    
So, the more accustomed the user becomes to the technology, the quicker and easier it would be for the system to identify the environment.
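
One simple reading of that adaptive behaviour, sketched below under assumptions that are not drawn from the article, is a per-user prior over room types that sharpens as predictions are confirmed, so familiar rooms are identified from less fresh evidence.

```python
import numpy as np

class UserAdaptedRecogniser:
    """Illustrative per-user adaptation: fuse per-frame classifier scores
    with a prior learned from that user's confirmed observations."""

    def __init__(self, room_types, threshold=0.8):
        self.room_types = room_types
        self.counts = np.ones(len(room_types))  # smoothed per-room visit counts
        self.threshold = threshold              # assumed decision threshold

    def identify(self, classifier_probs):
        """Combine the frame's class probabilities with the user prior and
        commit only once the posterior is confident enough."""
        prior = self.counts / self.counts.sum()
        posterior = classifier_probs * prior
        posterior /= posterior.sum()
        best = int(np.argmax(posterior))
        if posterior[best] >= self.threshold:
            return self.room_types[best], float(posterior[best])
        return None, float(posterior[best])  # keep observing more frames

    def confirm(self, room):
        """A confirmed identification sharpens this user's prior."""
        self.counts[self.room_types.index(room)] += 1
```

As confirmations accumulate for the rooms a particular user frequents, the prior sharpens and the decision threshold is crossed on weaker per-frame evidence, which is one way the 'quicker and easier' behaviour described above could emerge.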