TY - GEN
T1 - Multimodal sensor-based human-robot collaboration in assembly tasks
AU - Male, James
AU - Al, Gorkem Anil
AU - Shabani, Arya
AU - Martinez-Hernandez, Uriel
N1 - Funding Information:
This work was supported by The Engineering and Physical Sciences Research Council (EPSRC), The Republic of Turkey Ministry of National Education and the Centre for Autonomous Robotics (CENTAUR).
PY - 2022/10/31
Y1 - 2022/10/31
N2 - This work presents a framework for Human-Robot Collaboration (HRC) in assembly tasks that uses multimodal sensors, perception and control methods. First, vision sensing is employed for user identification to determine the collaborative task to be performed. Second, assembly actions and hand gestures are recognised using wearable inertial measurement units (IMUs) and convolutional neural networks (CNNs) to identify when robot collaboration is needed and to bring the next object to the user for assembly. If collaboration is not required, then the robot performs a solo task. Third, the robot arm uses time-domain features from tactile sensors to detect when an object has been touched and grasped for handover actions in the assembly process. These multimodal sensors and computational modules are integrated in a layered control architecture for HRC assembly tasks. The proposed framework is validated in real time using a Universal Robot arm (UR3) that collaborates with humans to assemble two types of objects: 1) a box and 2) a small chair, and works on a solo task of moving a stack of Lego blocks when collaboration with the user is not needed. The experiments show that the robot is capable of sensing and perceiving the state of the surrounding environment using multimodal sensors and computational methods to act and collaborate with humans to complete assembly tasks successfully.
AB - This work presents a framework for Human-Robot Collaboration (HRC) in assembly tasks that uses multimodal sensors, perception and control methods. First, vision sensing is employed for user identification to determine the collaborative task to be performed. Second, assembly actions and hand gestures are recognised using wearable inertial measurement units (IMUs) and convolutional neural networks (CNNs) to identify when robot collaboration is needed and to bring the next object to the user for assembly. If collaboration is not required, then the robot performs a solo task. Third, the robot arm uses time-domain features from tactile sensors to detect when an object has been touched and grasped for handover actions in the assembly process. These multimodal sensors and computational modules are integrated in a layered control architecture for HRC assembly tasks. The proposed framework is validated in real time using a Universal Robot arm (UR3) that collaborates with humans to assemble two types of objects: 1) a box and 2) a small chair, and works on a solo task of moving a stack of Lego blocks when collaboration with the user is not needed. The experiments show that the robot is capable of sensing and perceiving the state of the surrounding environment using multimodal sensors and computational methods to act and collaborate with humans to complete assembly tasks successfully.
KW - assembly tasks
KW - human-robot collaboration
KW - vision and touch sensing
KW - wearable sensing
UR - http://www.scopus.com/inward/record.url?scp=85142728493&partnerID=8YFLogxK
U2 - 10.1109/SMC53654.2022.9945532
DO - 10.1109/SMC53654.2022.9945532
M3 - Chapter in a published conference proceeding
AN - SCOPUS:85142728493
T3 - Conference Proceedings - IEEE International Conference on Systems, Man and Cybernetics
SP - 1266
EP - 1271
BT - 2022 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2022 - Proceedings
PB - IEEE
CY - U.S.A.
T2 - 2022 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2022
Y2 - 9 October 2022 through 12 October 2022
ER -