ViTac: Integrating Vision and Touch for Multimodal and Cross-Modal Perception

Shan Luo, Nathan Lepora, Uriel Martinez Hernandez, Joao Bimbo, Huaping Liu

Research output: Contribution to journal › Article › peer-review



Animals interact with the world through multimodal sensing inputs; in the case of humans, vision and touch are especially important for interacting with our physical surroundings. In contrast, artificial systems usually rely on a single sensing modality, with distinct hardware and algorithmic approaches developed for each modality. For example, computer vision and tactile robotics are usually treated as distinct disciplines, with specialist knowledge required to make progress in each research field. Future robots, as embodied agents interacting with complex environments, should make the best use of all available sensing modalities to perform their tasks.
Original language: English
Number of pages: 3
Journal: Frontiers in Robotics and AI
Publication status: Acceptance date - 23 Apr 2021
