This thesis investigates the problem of visual learning using a robotic platform. Given a set of objects, the robot's task is to autonomously manipulate, observe, and learn. This allows the robot to recognise objects in a novel scene and pose, or to separate them into distinct visual categories. The main focus of the work is on autonomously acquiring object models using robotic manipulation.

Autonomous learning is important for robotic systems. In the context of vision, it allows a robot to adapt to new and uncertain environments, updating its internal model of the world. It also reduces the amount of human supervision needed to build visual models. This leads to machines which can operate in environments with rich and complicated visual information, such as the home or the industrial workspace, as well as in environments which are potentially hazardous to humans.

The central hypothesis is that inducing robot motion on objects aids the learning process. It is shown that the extra information from the robot's sensors is sufficient to localise an object and distinguish it from the background, and that decisive planning allows the object to be separated and observed from a variety of different poses, giving a good foundation on which to build a robust classification model. Contributions include a new segmentation algorithm, a new classification model for object learning, and a method for allowing a robot to supervise its own learning in cluttered and dynamic environments.
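The abstract does not spell out the segmentation algorithm itself, but the idea that robot-induced motion separates an object from its background can be illustrated with the simplest possible stand-in: frame differencing between images captured before and after the robot pushes an object. Everything below (the function name, the threshold value, the synthetic frames) is an assumption for illustration only, not the method developed in the thesis.

```python
import numpy as np

def motion_segment(before, after, threshold=30):
    """Illustrative motion-based segmentation by frame differencing.

    `before` and `after` are 2-D uint8 grayscale frames taken before
    and after the robot moves an object. Returns a boolean mask of
    pixels whose intensity changed by more than `threshold` -- a crude
    proxy for "pixels belonging to the moved object".
    """
    diff = np.abs(before.astype(np.int16) - after.astype(np.int16))
    return diff > threshold

# Synthetic example: a uniform scene with one bright "object"
# that the robot pushes from one position to another.
scene = np.full((8, 8), 50, dtype=np.uint8)
before = scene.copy()
before[2:4, 2:4] = 200   # object at its original position
after = scene.copy()
after[2:4, 5:7] = 200    # object after being pushed

mask = motion_segment(before, after)
# The mask highlights both the vacated and the new object locations;
# the static background is untouched by the motion cue.
```

The point of the sketch is the underlying cue, not the method: pixels that change when the robot acts are very likely to belong to the manipulated object, which is exactly the extra information a passive observer would not have.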
|Date of Award||31 Mar 2012|
|Supervisor||Peter Hall (Supervisor) & Pejman Iravani (Supervisor)|
- machine learning
- computer vision