Abstract
Collaborative robots are a key technology driving the development of Industry 4.0. This new and evolving manufacturing paradigm sees the rise of smart factories, where cyber-physical systems are developed for the efficient manufacture of highly customisable products in small batch sizes. The goal of Human-Robot Collaboration (HRC) is to allow humans and robots to safely work together, hand-in-hand, in an adaptable and versatile way.

Providing a robot with the autonomy to decide how to act for HRC requires a series of developments beyond current industrial implementations. First, the robot must perceive its environment and understand the physical space as well as what the user is doing. Second, the robot must be able to reason about the state of the environment and the task, so that it can decide what action to perform next. Finally, the robot should remember past events and learn from them, in order to operate effectively in a changing and complex environment.
The work presented incorporates various developments towards the overall goal of HRC. It is determined that a cognitive architecture should be developed which can incorporate the modules associated with the different levels of cognition: perception, reasoning, action planning, control and memory. In order to perceive the user and understand what they are doing, Human Action Recognition (HAR) is vital. Action recognition in manufacturing presents unique challenges that have received limited research attention, with most existing methods relying on simplified or untransferable sensing or recognition techniques. A novel HAR method using Deep Learning (DL) is therefore developed for use with Inertial Measurement Unit (IMU) and skeleton tracking data.
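To illustrate this style of approach, the sketch below shows a small two-stream classifier in PyTorch that fuses IMU and skeleton-tracking inputs. It is a minimal sketch only, not the thesis model: the channel counts, window length, class count, and network layout are illustrative assumptions.

```python
# Minimal sketch (assumed shapes, not the thesis implementation) of a
# DL-based HAR classifier fusing IMU and skeleton-tracking streams.
import torch
import torch.nn as nn

class HARNet(nn.Module):
    def __init__(self, imu_channels=6, skel_channels=45, n_classes=8):
        super().__init__()
        # 1D convolutions extract short-term motion features per stream.
        self.imu_branch = nn.Sequential(
            nn.Conv1d(imu_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.skel_branch = nn.Sequential(
            nn.Conv1d(skel_channels, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        # Fused features are classified into discrete assembly actions.
        self.classifier = nn.Linear(32 + 64, n_classes)

    def forward(self, imu, skel):
        # imu:  (batch, imu_channels, time), e.g. accelerometer + gyroscope axes
        # skel: (batch, skel_channels, time), e.g. 15 joints x 3 coordinates
        f = torch.cat([self.imu_branch(imu).squeeze(-1),
                       self.skel_branch(skel).squeeze(-1)], dim=1)
        return self.classifier(f)

model = HARNet()
logits = model(torch.randn(2, 6, 100), torch.randn(2, 45, 100))
print(logits.shape)  # (2, 8): one score per candidate action class
```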
The ability to plan future actions relies on correctly perceiving the current state of the assembly and predicting the progression of user actions into the future. The sequential development of action status prediction and future planning techniques is presented, culminating in an action status predictor model based on Long Short-Term Memory (LSTM) methods. Despite the use of DL techniques, the method developed is still generalisable to new sensor inputs, actions, tasks, and environments without retraining, making it highly applicable to deployment in a wide variety of use cases.
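A minimal sketch of how an LSTM-based status predictor could be structured follows, assuming the input is a short window of recognition or sensor features and the output an estimated completion fraction for the current action; all names and sizes are hypothetical rather than the thesis design.

```python
# Minimal sketch of an LSTM-based action status predictor: given a
# window of recent observations, regress a completion status in [0, 1].
import torch
import torch.nn as nn

class ActionStatusLSTM(nn.Module):
    def __init__(self, n_features=16, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, x):
        # x: (batch, time, n_features), e.g. HAR scores or sensor features
        out, _ = self.lstm(x)
        # Use the final hidden state to estimate progress through the action.
        return self.head(out[:, -1])

status = ActionStatusLSTM()(torch.randn(4, 30, 16))
print(status.shape)  # (4, 1): estimated completion fraction per sequence
```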
Overall, the methods are demonstrated in a cognitive architecture with a modular design. This supports future adaptability and eases integration of additional modules, such as advanced path planning. An episodic memory system is developed which provides detail on the progression of a task from the robot's perspective, allowing for system-wide learning to improve efficiency. Validation of the techniques investigated is conducted in offline and online experiments, with multiple users and two different assembly tasks. Online experiments are performed in collaboration with a robot manipulator with real-time interaction. These experiments show that the methods give good results, with the robot's action selection and timing based on the user's natural progression through a task. The robot can therefore autonomously adapt its behaviour to make the best use of its time while collaborating with the user.
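To illustrate the episodic memory idea, the sketch below logs timestamped events from the robot's perspective and aggregates per-action durations that a planner could learn from. The event fields and methods are assumptions for illustration, not the thesis implementation.

```python
# Minimal sketch of an episodic memory log (assumed fields and API):
# each entry records what the user and robot were doing and the outcome.
from dataclasses import dataclass, field
import time

@dataclass
class Event:
    timestamp: float
    user_action: str      # e.g. recognised HAR label
    robot_action: str     # action the planner selected
    outcome: str          # e.g. "success", "idle", "conflict"

@dataclass
class EpisodicMemory:
    events: list = field(default_factory=list)

    def record(self, user_action, robot_action, outcome):
        self.events.append(Event(time.time(), user_action, robot_action, outcome))

    def durations_by_action(self):
        # Aggregate elapsed time per robot action to inform future planning.
        stats = {}
        for prev, curr in zip(self.events, self.events[1:]):
            stats.setdefault(prev.robot_action, 0.0)
            stats[prev.robot_action] += curr.timestamp - prev.timestamp
        return stats

memory = EpisodicMemory()
memory.record("pick_bracket", "fetch_screws", "success")
memory.record("fasten", "wait", "idle")
print(memory.durations_by_action())
```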
Date of Award | 28 Jun 2023 |
---|---|
Original language | English |
Awarding Institution | |
Supervisor | Uriel Martinez Hernandez (Supervisor) & Peter Wilson (Supervisor) |
Keywords
- Human-Robot Collaboration
- Machine Learning
- Industry 4.0
- Human Action Recognition