Machine/Deep Learning categorisation of sub-kilohertz Arctic soundscapes

Jonathan Cleverly, Philippe Blondel, Hanne Sagen, Espen Storheim, Matthew Dzieciuch

Research output: Chapter in a published conference proceeding


Abstract

Arctic soundscapes are being modified by climate change, which is greatly amplified in the region. Cryophony (sounds from sea ice processes) will become more variable as ice floes become increasingly fragmented. These changes to the sea ice will also shift the temporospatial patterns of marine mammal vocalisations and anthropogenic sounds. These markers of the state of the Arctic Ocean are monitored using passive acoustic technologies; however, there are still no standard practices for exploring soundscapes in this region. Here we investigate Machine/Deep Learning (ML/DL) approaches for categorising deep-water Arctic soundscapes. Recordings from hydrophone moorings deployed along the Nansen Basin during the “Coordinated Arctic Acoustic Thermometry Experiment” (CAATEX, 2019-2020) have been considered for this study. We utilise AVES (Animal Vocalisation Encoder based on Self-Supervision) to identify sounds within recordings using broad descriptors. Training datasets for ML/DL algorithms usually cover a broader frequency range (beyond 20 kHz), but it is not always feasible to use such high sampling rates; the robustness of algorithms therefore requires testing with lower-sample-rate data (976 Hz here), where the frequency content of sounds is not always fully recorded. To study multiple sound sources at once (e.g. whale songs, anthropogenic sounds), we consider longer context windows (‘snippets’) of 120 seconds, currently seldom considered in acoustic ML problems. These techniques will be crucial for avoiding current bottlenecks in data processing, particularly in the Arctic, enabling more in-depth studies for marine mammal conservation and industrial regulation.
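The paper's pipeline and AVES inference code are not reproduced here. As an illustration only, a minimal sketch of the windowing described in the abstract — slicing a 976 Hz recording into non-overlapping 120-second snippets before encoding — might look like the following (the function name and structure are assumptions for illustration, not taken from the paper):

```python
import numpy as np

SAMPLE_RATE = 976       # Hz, the low sampling rate of the CAATEX recordings
SNIPPET_SECONDS = 120   # the long context window considered in the study

def split_into_snippets(waveform: np.ndarray,
                        sample_rate: int = SAMPLE_RATE,
                        snippet_seconds: int = SNIPPET_SECONDS) -> np.ndarray:
    """Split a 1-D waveform into non-overlapping fixed-length snippets,
    dropping any trailing partial window."""
    snippet_len = sample_rate * snippet_seconds  # 117,120 samples per snippet
    n_snippets = len(waveform) // snippet_len
    return waveform[: n_snippets * snippet_len].reshape(n_snippets, snippet_len)

# Example: one hour of (synthetic, all-zero) recording -> 30 two-minute snippets,
# each of which could then be passed to an encoder such as AVES.
hour = np.zeros(SAMPLE_RATE * 3600)
snippets = split_into_snippets(hour)
print(snippets.shape)  # (30, 117120)
```

Non-overlapping windows keep the sketch simple; overlapping windows or different snippet lengths are an easy variation of the same reshape logic.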
Original language: English
Title of host publication: Underwater Acoustics Conference and Exhibition Series 2025
Chapter: 1
Number of pages: 8
Publication status: Published - 31 Dec 2025

Publication series

Name: Underwater Acoustics Conference and Exhibition Series 2025 (UACE2025), Halkidiki, Greece
ISSN (Electronic): 2408-0195

Funding

The CAATEX data used in this paper are used with permission from the Principal Investigators of the CAATEX projects: Dr. Hanne Sagen, NERSC (Project No. 280531), funded by the Research Council of Norway, and Dr. Matthew Dzieciuch, Scripps Institution of Oceanography, University of California (Project No. N00014-18-1-2698), funded by the Office of Naval Research. The HiAOOS project is funded by the European Union Horizon Europe Programme (Grant Agreement No. 101094621; UK participants supported by UKRI Grant No. 10071903). JC would like to acknowledge Gagan Narula (Earth Species Project) for his prompt correspondence and great advice in getting set up with AVES. This work has been completed as part of JC’s EPSRC-funded PhD studentship (EP/W524712/1) supervised by PB.

Funders: UKRI-funded
