
Using AI to Listen and Learn about Humpback Whales

Deep machine learning analyzes long-term passive acoustic data

Photo of humpback whale at sea surface near Hawaii
Photo courtesy of NOAA, National Marine Sanctuaries, Jason Moore

All species of whales use sound to communicate. Some vocalizations are identifiable to the species or even the local population level. While many whales can be hard to spot visually, their calls can travel for many miles underwater.

To monitor movement and examine changes in populations over time, scientists use underwater microphones, called hydrophones, to record whale sounds. This field of study, called passive acoustic monitoring, has advanced in recent years, allowing recordings over extremely long periods of time and across large areas of our oceans. These advancements have dramatically increased the volume of data collected, and scientists simply do not have enough hours in the day to analyze it all themselves.

A recent paper in Frontiers in Marine Science describes a study that successfully applied the artificial intelligence technique of deep machine learning to analyze a large passive acoustic dataset and identify highly variable humpback whale sounds across broad spatial and temporal scales.

Deep Machine Learning Model 

Machine learning techniques have recently begun to provide improved detection and classification methods in a wide variety of fields. Deep learning is a more recent advancement that combines feature extraction and selection, steps performed manually in conventional machine learning, with detection or classification in a single end-to-end model.
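To make the end-to-end idea concrete, the toy sketch below converts raw audio into a spectrogram and passes it through a single untrained convolutional filter with pooling and a sigmoid output. This is an illustrative stand-in under simplifying assumptions, not the study's actual model: the 3x3 kernel, random weights, and synthetic tone are all hypothetical.

```python
import numpy as np

def spectrogram(audio, frame=256, hop=128):
    """Short-time Fourier magnitude spectrogram: the 2-D 'image' a CNN sees."""
    frames = [audio[i:i + frame] * np.hanning(frame)
              for i in range(0, len(audio) - frame, hop)]
    return np.abs(np.fft.rfft(frames, axis=1))  # shape: (time, frequency)

def conv_score(spec, kernel):
    """One convolutional filter + global average pooling + sigmoid:
    a toy stand-in for a full CNN classifier."""
    t, f = spec.shape
    kt, kf = kernel.shape
    out = np.array([[np.sum(spec[i:i + kt, j:j + kf] * kernel)
                     for j in range(f - kf + 1)]
                    for i in range(t - kt + 1)])
    pooled = out.mean()                    # global average pooling
    return 1.0 / (1.0 + np.exp(-pooled))   # sigmoid -> "song present" score

rng = np.random.default_rng(0)
audio = np.sin(2 * np.pi * 400 * np.arange(8000) / 8000)  # synthetic 1 s tone
spec = spectrogram(audio)
kernel = rng.normal(scale=0.01, size=(3, 3))  # untrained weights, for shape only
score = conv_score(np.log1p(spec), kernel)
print(score)  # a probability-like value between 0 and 1
```

In a real system the kernel weights are learned from labeled examples, so the feature extraction (which spectrogram patterns matter) and the classification decision are fit jointly rather than hand-designed.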

Scientists from NOAA and Google, Inc., worked together to train a deep learning convolutional neural network (CNN) to identify humpback whale song in over 187,000 hours of acoustic data collected over a 14-year period at 13 different monitoring sites in the North Pacific.  

The Challenge of the Humpback Whale

Male humpback whales sing long, complex songs that are unique to each breeding population and change from year to year. The complexity of humpback song, and the fact that it constantly evolves, make it particularly difficult to design an automatic method of song identification. To address this challenge, the team used technology developed for YouTube image classification to build a robust deep machine learning model capable of identifying humpback song from several different populations across the dataset's 14-year timespan. They used the output from this model to evaluate changes and trends in humpback song occurrence over time.
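Turning model output into occurrence trends can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the window length, threshold, and scores are all made-up assumptions. The idea is simply to threshold per-window scores and summarize them as a daily fraction.

```python
import numpy as np

# Hypothetical per-window model scores in [0, 1] for one monitoring site,
# one score per 75-second audio window (all numbers invented for illustration).
rng = np.random.default_rng(1)
windows_per_day = 1152            # 24 h of audio / 75 s per window
days = 5
scores = rng.uniform(size=(days, windows_per_day))

threshold = 0.9                   # hypothetical decision threshold
detections = scores >= threshold

# Daily song occurrence: fraction of windows each day scored above threshold.
daily_occurrence = detections.mean(axis=1)
for day, frac in enumerate(daily_occurrence, start=1):
    print(f"day {day}: {frac:.1%} of windows contain song")
```

Aggregating this way across sites and years yields the kind of seasonal and long-term occurrence curves the study used to compare against visual survey results.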

The analysis identified a decline in humpback song occurrence in the Main Hawaiian Islands beginning in the winter of 2015–16, which supports findings from annual visual surveys of humpback whales at well-known Hawaiian aggregation sites that noted a significant decline in humpback presence beginning in winter 2014–15.

The efficient processing of the long-term dataset using deep learning provided a detailed view of the seasonal and long-term occurrence of whales, both within known breeding areas and in regions where visual surveys are not feasible. This view contributed new insights into population movements and new hypotheses about population structure, and it made possible a comprehensive analysis of the full data collection for humpback whale song that was not previously attainable.

A Public-Private Partnership

The public-private partnership brought industrial cloud computing capacity to bear on a large scientific dataset, enabling parallel inference on terabyte-scale unlabeled audio within hours, a timescale unmatchable by historical analysis methods. The study represents the largest-scale application of deep learning to recognize marine mammal bioacoustic signals to date, as measured by dataset size and variability, demonstrating the effective application of a CNN for identifying complex acoustic signals in an extremely large and heterogeneous passive acoustic dataset. Google has released the machine learning model developed for the study for public use.

An important component of the project—especially for NCEI—is open data access and open models so that the community can share and build tools together. Dr. Carrie Wall of the University of Colorado at Boulder’s Cooperative Institute for Research in Environmental Sciences leads the Passive Acoustic Data Archive efforts for NCEI. She is a co-author of the deep learning study. The raw audio files and the humpback whale song annotations used in the study are archived at NCEI and made publicly available through the archive’s web-based map viewer and Google Cloud Platform through the NOAA Big Data Program. This is the longest-duration passive acoustic dataset made publicly available through NCEI to date.

“The establishment of the NOAA Center for Artificial Intelligence co-located with NCEI-Colorado in Boulder will provide more opportunities to use machine learning techniques on our large, long-term data sets,” Wall said. “The potential for new partnerships and accelerated discoveries is very exciting.”

Reference: Allen, A. N., Harvey, M., Harrell, L., Jansen, A., Merkens, K. P., Wall, C. C., Cattiau, J., and Oleson, E. M. (2021). A Convolutional Neural Network for Automated Detection of Humpback Whale Song in a Diverse, Long-Term Passive Acoustic Dataset. Front. Mar. Sci. 8:607321. doi: 10.3389/fmars.2021.607321
