Abstract

Over the years, Unmanned Aerial Vehicles (UAVs) have been used for both indoor and outdoor autonomous navigation. Indoor autonomous navigation, however, remains challenging, mainly because the Global Positioning System (GPS) signal is imprecise or unavailable indoors.
This thesis provides insight into the background of indoor autonomous navigation, including solutions that have been proposed by researchers in the past. These include the construction of 3D maps and the use of additional sensors, such as laser range finders, to localise robots indoors. These methods, however, have their downsides: one is the heavy computational cost, and another is the increased payload of the UAV resulting from the weight of the additional sensors.
The thesis goes further to propose a different approach to this problem: a system in which a quadcopter travels autonomously indoors to a target point using a vision sensor and a deep learning technique. The deep learning model used in this project is DenseNet, an advanced Convolutional Neural Network (CNN) architecture. The model helps the system learn to navigate in diverse environments while making decisions as a human pilot would.
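DenseNet's defining feature is dense connectivity: each layer receives the channel-wise concatenation of all preceding feature maps and contributes a fixed number of new channels (the growth rate). The following is a minimal NumPy sketch of that concatenation pattern only; the random projection standing in for each BN-ReLU-Conv layer, and all shapes and names, are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

def dense_layer(features, growth_rate, rng):
    # Toy stand-in for a DenseNet BN-ReLU-Conv layer: project the
    # concatenation of ALL previous feature maps down to
    # `growth_rate` new channels, then apply ReLU.
    concatenated = np.concatenate(features, axis=0)          # (C_total, H, W)
    weights = rng.standard_normal((growth_rate, concatenated.shape[0]))
    new_maps = np.tensordot(weights, concatenated, axes=([1], [0]))
    return np.maximum(new_maps, 0.0)                         # ReLU

rng = np.random.default_rng(0)
x = rng.standard_normal((3, 8, 8))   # hypothetical input: 3 channels, 8x8
features = [x]
growth_rate = 4

for _ in range(3):                   # one dense block with 3 layers
    features.append(dense_layer(features, growth_rate, rng))

# Each layer sees every earlier output, so channel counts grow
# linearly with depth: 3 -> 7 -> 11 -> 15 total channels.
total_channels = sum(f.shape[0] for f in features)
print([f.shape[0] for f in features])   # [3, 4, 4, 4]
print(total_channels)                   # 15
```

This reuse of earlier feature maps is what lets DenseNet stay parameter-efficient while remaining deep, which is attractive for a compute-constrained platform such as a quadcopter.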
Through simulations, the system is evaluated for optimal performance, and the accuracy of the model is measured using a loss function metric. The project concludes by showing that it has successfully designed a method that allows the quadcopter to travel autonomously indoors to a target point.
Date of Award: 2019
Supervisor: Uriel Martinez Hernandez (Supervisor)