Editorial: ViTac: Integrating Vision and Touch for Multimodal and Cross-Modal Perception

Shan Luo, Nathan Lepora, Uriel Martinez Hernandez, Joao Bimbo, Huaping Liu

Research output: Contribution to journal › Editorial › peer-review

7 Citations (SciVal)
72 Downloads (Pure)


Animals interact with the world through multimodal sensing inputs, especially vision and touch sensing in the case of humans interacting with our physical surroundings. In contrast, artificial systems usually rely on a single sensing modality, with distinct hardware and algorithmic approaches developed for each modality. For example, computer vision and tactile robotics are usually treated as distinct disciplines, with specialist knowledge required to make progress in each research field. Future robots, as embodied agents interacting with complex environments, should make best use of all available sensing modalities to perform their tasks.
Original language: English
Article number: 697601
Number of pages: 3
Journal: Frontiers in Robotics and AI
Publication status: Published - 7 May 2021


Keywords

  • editorial
  • robot learning and control
  • robot perception
  • robot sensing and perception
  • tactile sensing

ASJC Scopus subject areas

  • Computer Science Applications
  • Artificial Intelligence


