Automatic high fidelity foot contact location and timing for elite sprinting

Research output: Contribution to journal › Article › peer-review


Abstract

Making accurate measurements of human body motions using only passive, non-interfering sensors such as video is a difficult
task with a wide range of applications throughout biomechanics, health, sports and entertainment. The rise of machine
learning-based human pose estimation has allowed for impressive performance gains, but machine learning-based systems
require large datasets which might not be practical for niche applications. As such, it may be necessary to adapt systems
trained for more general-purpose goals, but this might require a sacrifice in accuracy when compared with systems specifically
developed for the application. This paper proposes two approaches to measuring a sprinter’s foot-ground contact locations and
timing (step length and step frequency), a task which requires high accuracy. The first approach is a learning-free system based
on occupancy maps. The second approach is a multi-camera 3D fusion of a state-of-the-art machine learning-based human
pose estimation model. Both systems use the same underlying multi-camera setup. The experiments show that the learning-free
computer vision algorithm provides foot-contact timing accurate to better than 1 frame at 180 fps and step length accurate to
7 mm, while the pose-estimation-based system achieves timing accurate to better than 1.5 frames at 180 fps and step length
estimates accurate to 20 mm.
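
The abstract defines the target quantities, step length and step frequency, in terms of foot-ground contact locations and timing. As a minimal sketch of how those quantities follow from detected contact events (and not as a reproduction of the paper's occupancy-map or pose-estimation pipelines), the snippet below assumes a hypothetical ContactEvent record holding a contact frame index and a track-plane position, and derives step length as the planar distance between consecutive contacts and step frequency as the inverse of the time between them.

```python
"""Illustrative sketch only: step length and step frequency from detected
foot-ground contact events. ContactEvent and the example values are
hypothetical, not part of the published system."""

from dataclasses import dataclass
from typing import List, Tuple

FPS = 180.0  # capture rate referenced in the abstract


@dataclass
class ContactEvent:
    frame: int                       # frame index of initial ground contact
    position_m: Tuple[float, float]  # foot location on the track plane (metres)


def step_metrics(contacts: List[ContactEvent]) -> List[Tuple[float, float]]:
    """Return (step_length_m, step_frequency_hz) for each consecutive pair
    of contacts."""
    metrics = []
    for prev, curr in zip(contacts, contacts[1:]):
        dx = curr.position_m[0] - prev.position_m[0]
        dy = curr.position_m[1] - prev.position_m[1]
        step_length = (dx * dx + dy * dy) ** 0.5
        step_time = (curr.frame - prev.frame) / FPS
        metrics.append((step_length, 1.0 / step_time))
    return metrics


if __name__ == "__main__":
    # Hypothetical contacts roughly 2 m and ~0.22 s apart.
    contacts = [
        ContactEvent(frame=0, position_m=(0.00, 0.30)),
        ContactEvent(frame=40, position_m=(2.05, -0.28)),
        ContactEvent(frame=79, position_m=(4.12, 0.31)),
    ]
    for length, freq in step_metrics(contacts):
        print(f"step length {length:.2f} m, step frequency {freq:.2f} Hz")
```

With this framing, the reported accuracies translate directly into measurement error: a 1-frame timing error at 180 fps corresponds to roughly 5.6 ms in step time, and a 7 mm location error bounds the step-length error per contact.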
Original language: English
Article number: 112 (2021)
Journal: Machine Vision and Applications
DOIs:
Publication status: Published - 28 Aug 2021

Funding

This research was funded by CAMERA, the RCUK Centre for the Analysis of Motion, Entertainment Research and Applications, EP/M023281/1.

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition
