AMENet: Attentive Maps Encoder Network for trajectory prediction

Hao Cheng, Wentong Liao, Michael Ying Yang, Bodo Rosenhahn, Monika Sester

Research output: Contribution to journal › Article › peer-review


Abstract

Trajectory prediction is critical for applications that plan safe future movements, and it remains challenging even for the next few seconds in urban mixed traffic. How an agent moves is affected by the various behaviors of its neighboring agents in different environments. To predict movements, we propose an end-to-end generative model named Attentive Maps Encoder Network (AMENet) that encodes the agent's motion and interaction information for accurate and realistic multi-path trajectory prediction. A conditional variational auto-encoder module is trained to learn the latent space of possible future paths, based on attentive dynamic maps for interaction modeling, and is then used to predict multiple plausible future trajectories conditioned on the observed past trajectories. The efficacy of AMENet is validated on two public trajectory prediction benchmarks, Trajnet and InD.
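The CVAE-based design described in the abstract can be illustrated with a small sketch. The following is a hypothetical, minimal PyTorch example, not the authors' implementation: the class and parameter names (`TrajectoryCVAE`, `latent_dim`, the LSTM encoders) and the tensor shapes are assumptions, and AMENet's attentive dynamic maps for interaction modeling are omitted. It only shows the general pattern of learning a latent space over future paths conditioned on an encoded past trajectory, and sampling it to obtain multiple plausible futures.

```python
# Hypothetical sketch of a conditional variational auto-encoder (CVAE) for
# multi-path trajectory prediction. Not the AMENet architecture; interaction
# modeling with attentive dynamic maps is omitted for brevity.
import torch
import torch.nn as nn


class TrajectoryCVAE(nn.Module):
    def __init__(self, pred_len=12, hidden=64, latent_dim=16):
        super().__init__()
        self.pred_len = pred_len
        self.latent_dim = latent_dim
        # Encode the observed (past) trajectory of one agent as (x, y) steps.
        self.past_enc = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        # Encode the ground-truth future trajectory (used only during training).
        self.future_enc = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        # Map the joint encoding to the parameters of the latent Gaussian.
        self.to_mu = nn.Linear(2 * hidden, latent_dim)
        self.to_logvar = nn.Linear(2 * hidden, latent_dim)
        # Decode a latent sample plus the past encoding into a future path.
        self.decoder = nn.LSTM(input_size=hidden + latent_dim,
                               hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, 2)

    def forward(self, past, future=None):
        # past: (batch, obs_len, 2); future: (batch, pred_len, 2) or None.
        _, (h_past, _) = self.past_enc(past)
        h_past = h_past[-1]                              # (batch, hidden)
        if future is not None:
            # Training: approximate posterior q(z | past, future).
            _, (h_fut, _) = self.future_enc(future)
            h = torch.cat([h_past, h_fut[-1]], dim=-1)
            mu, logvar = self.to_mu(h), self.to_logvar(h)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        else:
            # Inference: sample the latent variable from the prior N(0, I).
            mu = logvar = None
            z = torch.randn(past.size(0), self.latent_dim, device=past.device)
        # Repeat the conditioning vector for every prediction step.
        cond = torch.cat([h_past, z], dim=-1).unsqueeze(1).repeat(1, self.pred_len, 1)
        dec_out, _ = self.decoder(cond)
        return self.out(dec_out), mu, logvar             # predicted steps + KL terms


# Drawing several latent samples at inference time yields multiple plausible
# future trajectories for the same observed past, i.e. multi-path prediction.
model = TrajectoryCVAE()
past = torch.randn(4, 8, 2)                              # dummy batch: 4 agents, 8 observed steps
samples = [model(past)[0] for _ in range(5)]             # 5 candidate futures per agent
```

Repeated sampling of the latent variable at test time is what produces the multiple plausible futures the abstract refers to; the training objective would additionally combine a reconstruction loss with a KL term on (mu, logvar).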

Original language: English
Pages (from-to): 253-266
Number of pages: 14
Journal: ISPRS Journal of Photogrammetry and Remote Sensing
Volume: 172
Early online date: 14 Jan 2021
DOIs
Publication status: Published - 28 Feb 2021

Bibliographical note

Publisher Copyright:
© 2020 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS)

Funding

This work is supported by the German Research Foundation (DFG) through the Research Training Group SocialCars (GRK 1931).

Funders: Deutsche Forschungsgemeinschaft
Funder number: GRK 1931

Keywords

• Encoder
• Generative model
• Trajectory prediction

ASJC Scopus subject areas

• Atomic and Molecular Physics, and Optics
• Engineering (miscellaneous)
• Computer Science Applications
• Computers in Earth Sciences
