TY - CONF
T1 - Collaborative architecture for human-robot assembly tasks using multimodal sensors
AU - Male, James
AU - Martinez-Hernandez, Uriel
N1 - Funding Information:
This work was supported by the Engineering and Physical Sciences Research Council (EPSRC) and the Royal Society Research Grants for the ‘Touching and feeling the immersive world’ project (RGS/R2/192346). James and Uriel are with the inte-R-action lab, the Centre for Autonomous Robotics (CENTAUR) and the Department of Electronic & Electrical Engineering, University of Bath, UK (jjm53, u.martinez)@bath.ac.uk
PY - 2022/1/5
Y1 - 2022/1/5
AB - Human-robot collaboration in manufacturing environments lacks adaptability to different tasks and environments, as well as fluency of interaction with workers. This work develops a cognitive architecture for assembly robots that predicts when collaborative actions are required. The cognitive architecture provides reliable perception and reasoning methods for increased human-robot fluency, allowing the user to focus on the required task. The system has three layers: the perception layer determines the current task state, the memory layer keeps track of task details, current predictions and past episodes, and the control layer predicts future collaborative actions and passes commands to the robot at the required time to reduce user idle time. The system uses convolutional neural network action recognition on inertial measurement unit data and vision-based recognition for environment perception. The cognitive architecture is validated with experiments in offline and real-time modes. In offline mode, two action recognition methods are compared: a single classifier for all actions achieves an accuracy of 81%, while a separate classifier for each action achieves 74%. In real-time mode, a UR3 cobot is used for a collaborative assembly task, where an increase in the proportion of time the user is active is shown.
UR - http://www.scopus.com/inward/record.url?scp=85124706746&partnerID=8YFLogxK
DO - 10.1109/ICAR53236.2021.9659382
M3 - Chapter in a published conference proceeding
AN - SCOPUS:85124706746
T3 - 2021 20th International Conference on Advanced Robotics, ICAR 2021
SP - 1024
EP - 1029
BT - 2021 20th International Conference on Advanced Robotics, ICAR 2021
PB - IEEE
CY - U.S.A.
T2 - 20th International Conference on Advanced Robotics, ICAR 2021
Y2 - 6 December 2021 through 10 December 2021
ER -