SLAM with Reinforcement Learning for highly dynamic scenes

Project: Research council

Project Details

Description

Unmanned systems are growing fast, and there is an urgent need to improve their robustness and efficiency. Quadrotors are a prime example and can be used across a variety of domains, including infrastructure inspection, disaster management, search and rescue, precision agriculture, and package delivery. The government has shown strong interest in autonomous vehicles: the release of the Future of Transport: rural strategy highlights the opportunities for drones to make deliveries to rural or isolated towns and to help reduce pollution. Furthermore, reports have estimated the self-driving vehicle industry to be worth nearly £42 billion by 2035.

Autonomous vehicles rely on highly accurate localization and mapping, which is very difficult in cluttered and dynamic scenes. Dead-reckoning methods, which build on previous state estimates, work in these scenarios but fall victim to propagated error, leading to inaccuracies in the long run. This has motivated research into loop closure, which uses previously seen landmarks to re-localize the vehicle.
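
The propagated-error problem can be illustrated with a minimal sketch (illustrative only, not part of the project): a 1D robot integrates noisy odometry increments with no correction, so its position error accumulates and tends to grow with trajectory length.

```python
import random

random.seed(0)

def dead_reckon_error(steps, noise_std=0.01):
    """Integrate unit forward motions corrupted by Gaussian noise.

    The true final position is `steps`; the return value is the absolute
    error of the dead-reckoned estimate, which is never corrected.
    """
    x = 0.0
    for _ in range(steps):
        x += 1.0 + random.gauss(0.0, noise_std)
    return abs(x - steps)

# Average error over many trials: longer trajectories drift more because
# each noisy increment is accumulated and never corrected by observations.
short_err = sum(dead_reckon_error(10) for _ in range(200)) / 200
long_err = sum(dead_reckon_error(1000) for _ in range(200)) / 200
print(short_err, long_err)
```

Loop closure counters exactly this effect: re-observing a known landmark provides an absolute correction that resets the accumulated drift.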

The most common form of self-localization in autonomous vehicles is Simultaneous Localization and Mapping (SLAM), a technique that uses detected landmarks and control inputs to estimate the position and orientation of the vehicle within a generated map. The assumption of static landmarks, however, remains a problem in the dynamic environments mentioned above, as dynamic landmarks must be filtered out from static ones. Dynamic-SLAM methods extend the standard approach with such a filtering step but still lack robustness when dynamic objects make up the majority of the environment.
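
As a hedged illustration of the landmark-and-control-input estimation loop (a toy 1D Kalman filter with one known static landmark, not the project's implementation; all names and noise values are assumptions), each cycle predicts the pose from the control input and then corrects it with a range measurement:

```python
def kf_localize_step(x, P, u, z, landmark, q=0.01, r=0.05):
    """One predict/update cycle of a 1D Kalman filter.

    x, P     -- position estimate and its variance
    u        -- control input (commanded displacement)
    z        -- measured range to a static landmark
    landmark -- known landmark position (the static-world assumption)
    q, r     -- process and measurement noise variances
    """
    # Predict: apply the control input; uncertainty grows (dead reckoning).
    x_pred = x + u
    P_pred = P + q
    # Update: measurement model h(x) = landmark - x, with Jacobian H = -1.
    innovation = z - (landmark - x_pred)
    S = P_pred + r          # innovation variance: H * P * H + r
    K = -P_pred / S         # Kalman gain: P * H / S
    x_new = x_pred + K * innovation
    P_new = (1 - K * -1) * P_pred
    return x_new, P_new

# Drive the robot forward 1.0 per step toward a landmark at 10.0.
x, P = 0.0, 1.0
for step in range(5):
    true_pos = step + 1.0
    z = 10.0 - true_pos     # noiseless range measurement, for illustration
    x, P = kf_localize_step(x, P, 1.0, z, 10.0)
print(x, P)
```

The filter's variance shrinks with each landmark observation; if the "static" landmark were actually moving, the innovation term would inject its motion directly into the pose estimate, which is the failure mode dynamic-SLAM filtering is designed to avoid.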

We hope to tackle this problem using data-driven approaches. Reinforcement learning has been shown to be a viable solution for navigation in mapless and dynamic environments. We aim to train a reinforcement learning agent, through a series of simulation environments, to navigate dynamic and cluttered scenes using onboard camera depth sensors, building on work that could not be completed during the PhD. An experimental quadrotor has already been developed, and we hope to use it within Ryerson University's drone arena to validate the proposed hypothesis.
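
To make the training idea concrete, here is a minimal tabular Q-learning sketch on a toy corridor (an assumption-laden stand-in: the project would train a deep policy on depth images in simulation, not a lookup table):

```python
import random

random.seed(1)

# Toy 1D corridor: states 0..4, goal at state 4. Actions: 0 = left, 1 = right.
N_STATES, GOAL = 5, 4
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, eps = 0.5, 0.9, 0.1   # learning rate, discount, exploration

def env_step(s, a):
    """Move in the corridor; small step cost encourages short paths."""
    s2 = max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))
    reward = 1.0 if s2 == GOAL else -0.01
    return s2, reward, s2 == GOAL

for episode in range(300):
    s, done = 0, False
    while not done:
        # Epsilon-greedy action selection.
        if random.random() < eps:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda i: Q[s][i])
        s2, r, done = env_step(s, a)
        # One-step Q-learning update toward the bootstrapped target.
        target = r if done else r + gamma * max(Q[s2])
        Q[s][a] += alpha * (target - Q[s][a])
        s = s2

# The greedy policy should move right (toward the goal) in every non-goal state.
policy = [max((0, 1), key=lambda i: Q[s][i]) for s in range(N_STATES - 1)]
print(policy)
```

The same trial-and-error structure scales to the quadrotor setting by replacing the discrete state with depth-sensor observations and the Q-table with a neural network.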

The key outputs of this project will be reinforcement learning techniques for navigating a mapless environment to aid the mapping process in dynamic scenes. This novel approach offers an alternative to current advances in dynamic-SLAM, and we hope it will broaden the settings in which dynamic-SLAM can be used. Furthermore, such a technical solution can readily be applied to industrial applications and should, in practice, help bridge the gap between autonomous control and popular artificial intelligence techniques.

We believe the proposed research brings the strength of robotics research from our partners in Canada to significantly improve the accessibility of AI techniques in autonomous robotics, and to further strengthen the UK's role as a global leader in industrial autonomy solutions. This aligns with the current UK research roadmap, which commits at least £800 million to ensure the UK gains a competitive advantage in artificial intelligence and industrial autonomy.
Status: Finished
Effective start/end date: 1/09/22 – 31/08/23

Funding

  • Natural Environment Research Council

RCUK Research Areas

  • Electrical engineering
  • Mechanical engineering
  • Robotics and Autonomy
