What is BirdNET?

How can computers learn to recognize birds from sounds? The Cornell Lab of Ornithology and the Chemnitz University of Technology are trying to find an answer to this question. Our research is mainly focused on the detection and classification of avian sounds using machine learning – we want to assist experts and citizen scientists in their work of monitoring and protecting our birds.

This page features some of our public demonstrations, including a live stream demo, a demo for the analysis of audio recordings, an Android app, and a visualization of the app's submissions. All demos are based on an artificial neural network we call BirdNET. We are constantly improving the features and performance of our demos – please make sure to check back with us regularly.

We are currently featuring 984 of the most common species of North America and Europe. We will add more species and more regions in the near future. Click here for the list of supported species.

How it works:

Live Stream Demo

The live stream demo processes a live audio stream from a microphone outside the Cornell Lab of Ornithology, located in the Sapsucker Woods sanctuary in Ithaca, New York. This demo features an artificial neural network trained on the 180 most common species of the Sapsucker Woods area. Our system splits the audio stream into segments, converts those segments into spectrograms (visual representations of the audio signal) and passes the spectrograms through a convolutional neural network, all in near-real-time. The web page accumulates the species probabilities of the last five seconds into one prediction. If the probability for one species reaches 15% or higher, you can see a marker indicating an estimated position of the corresponding sound in the scrolling spectrogram of the live stream. This demo is intended for large screens.
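The accumulation step described above can be sketched in a few lines. This is only an illustration: the page states a five-second window and a 15% threshold, but the class name, the assumption of one-second segments, and the use of a simple running average are ours, not details of the actual BirdNET implementation.

```python
from collections import deque

class RollingDetector:
    """Accumulate per-segment species probabilities over a short window
    and flag species whose averaged probability crosses a threshold.

    Assumes one prediction per one-second segment, so a window of five
    entries approximates the five-second accumulation described above.
    """

    def __init__(self, species, window=5, threshold=0.15):
        self.species = species
        self.recent = deque(maxlen=window)  # last N segment predictions
        self.threshold = threshold

    def update(self, probabilities):
        """probabilities: dict mapping species name -> probability for the
        newest segment. Returns (species, averaged probability) pairs that
        cross the detection threshold."""
        self.recent.append(probabilities)
        detections = []
        for sp in self.species:
            avg = sum(p.get(sp, 0.0) for p in self.recent) / len(self.recent)
            if avg >= self.threshold:
                detections.append((sp, avg))
        return detections
```

In the live demo, each detection returned by a step like `update` would correspond to one marker drawn on the scrolling spectrogram.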

Follow this link to view the demo.

Analysis of Audio Recordings

Reliable identification of bird species in recorded audio files would be a transformative tool for researchers, conservation biologists, and birders. This demo provides a web interface for the upload and analysis of audio recordings. Based on an artificial neural network featuring almost 1,000 of the most common species of North America and Europe, this demo shows the most probable species for every second of the recording. Please note: We need to transfer the audio recordings to our servers in order to process the files. This demo is intended for large screens.
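The per-second analysis can be sketched as follows. The function name, the fixed one-second chunking, and the stand-in `model` callable are assumptions for illustration; the real BirdNET network and its exact windowing are not described on this page.

```python
def top_species_per_second(samples, sample_rate, model, labels):
    """Split a mono waveform into one-second chunks and report the most
    probable species for each chunk.

    `model` is any callable returning a list of probabilities aligned
    with `labels` for one chunk (a stand-in for the real classifier).
    """
    results = []
    for start in range(0, len(samples) - sample_rate + 1, sample_rate):
        chunk = samples[start:start + sample_rate]
        probs = model(chunk)
        best = max(range(len(labels)), key=lambda i: probs[i])
        results.append((labels[best], probs[best]))
    return results
```

The demo's output table corresponds to one row per entry in `results`: a timestamp, the top species, and its probability.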

Follow this link to view the demo.

Click here to download a demo recording.

Android App

This demo lets you record a file using the internal microphone of your Android device, and an artificial neural network will tell you the most probable bird species present in your recording. We use the native sound recording feature of smartphones and tablets as well as the GPS service to make predictions based on location and date. Give it a try! Please note: We need to transfer the audio recordings to our servers in order to process the files. Recording quality may vary depending on your device; an external microphone will likely improve it.
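One plausible way to combine location and date with the acoustic prediction is to down-weight species that are not expected at that place and time of year. The sketch below is purely illustrative: the `occurrence` lookup, the grid-cell rounding, and the 0.1 down-weighting factor are all hypothetical, since the app's actual metadata model is not described here.

```python
from datetime import date

def filter_by_occurrence(predictions, lat, lon, obs_date, occurrence):
    """Down-weight species not expected at the given location and week.

    `occurrence` is a hypothetical lookup mapping
    (species, (rounded lat, rounded lon), ISO week) -> True if the
    species is expected there at that time of year.
    """
    week = obs_date.isocalendar()[1]
    cell = (round(lat), round(lon))
    adjusted = {}
    for species, prob in predictions.items():
        expected = occurrence.get((species, cell, week), False)
        # Keep expected species as-is; strongly down-weight the rest.
        adjusted[species] = prob if expected else prob * 0.1
    return adjusted
```

A filter of this kind would, for example, suppress a Snowy Owl prediction for a midsummer recording from Ithaca while leaving a Northern Cardinal prediction untouched.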

Follow this link to download the app.

Follow this link to view live submissions.

Follow our Twitter bot.

Note: We consider our app a prototype and by no means a final product. If you encounter any instabilities or have any questions regarding its functionality, please let us know. We will add new features in the near future; you will receive all updates automatically.

About us:

Cornell Lab of Ornithology

Dedicated to advancing the understanding and protection of the natural world, the Cornell Lab joins with people from all walks of life to make new scientific discoveries, share insights, and galvanize conservation action. Our Johnson Center for Birds and Biodiversity in Ithaca, New York, is a global center for the study and protection of birds and biodiversity, and the hub for millions of citizen-science observations pouring in from around the world.

Click this link to visit our website.

Chemnitz University of Technology

Chemnitz University of Technology is a public university in Chemnitz, Germany. With over 11,000 students, it is the third largest university in Saxony. It was founded in 1836 as Königliche Gewerbeschule (Royal Mercantile College) and was elevated to a Technische Hochschule, a university of technology, in 1963. With approximately 1,500 employees in science, engineering and management, TU Chemnitz counts among the most important employers in the region.

Click this link to visit our website.

Meet the team:


Stefan Kahl

I have been a research scientist and Ph.D. student at Chemnitz University of Technology since January 2014. My work includes the development of AI applications using convolutional neural networks for visual recognition, bioacoustics for environmental monitoring, and mobile human-computer interaction design.


Shyam Madhusudhana

As a postdoc within the Center for Conservation Bioacoustics at the Cornell Lab of Ornithology, my current research involves developing solutions for automatic source separation in continuous ambient audio streams and the development of acoustic deep-learning techniques for unsupervised multi-class classification in the big-data realm. I have been actively involved with IEEE’s Oceanic Engineering Society (OES) and, currently, I serve as the coordinator of Technology Committees.


Holger Klinck

I joined the Cornell Lab of Ornithology in December 2015 and took over the directorship of the Center for Conservation Bioacoustics (formerly known as Bioacoustics Research Program) in August 2016. I am also a Faculty Fellow with the Atkinson Center for a Sustainable Future at Cornell University. In addition, I hold an Adjunct Assistant Professor position at Oregon State University (OSU), where I lead the Research Collective for Applied Acoustics.

Related publications:

Kahl, S., Stöter, F. R., Goëau, H., Glotin, H., Planqué, R., Vellinga, W. P., & Joly, A. (2019). Overview of BirdCLEF 2019: Large-scale Bird Recognition in Soundscapes.
In CLEF 2019 (Working Notes). [PDF]

Joly, A., Goëau, H., Botella, C., Kahl, S., Servajean, M., Glotin, H., … & Müller, H. (2019). Overview of LifeCLEF 2019: Identification of Amazonian plants, South & North American birds, and niche prediction.
In International Conference of the Cross-Language Evaluation Forum for European Languages (pp. 387-401). Springer, Cham. [PDF]

Joly, A., Goëau, H., Botella, C., Kahl, S., Poupard, M., Servajean, M., … & Schlüter, J. (2019). LifeCLEF 2019: Biodiversity Identification and Prediction Challenges.
In European Conference on Information Retrieval (pp. 275-282). Springer, Cham. [PDF]

Kahl, S., Wilhelm-Stein, T., Klinck, H., Kowerko, D., & Eibl, M. (2018). Recognizing Birds from Sound – The 2018 BirdCLEF Baseline System.
arXiv preprint arXiv:1804.07177. [PDF]

Goëau, H., Kahl, S., Glotin, H., Planqué, R., Vellinga, W. P., & Joly, A. (2018). Overview of BirdCLEF 2018: monospecies vs. soundscape bird identification.
In CLEF 2018 (Working Notes). [PDF]

Kahl, S., Wilhelm-Stein, T., Klinck, H., Kowerko, D., & Eibl, M. (2018). A Baseline for Large-Scale Bird Species Identification in Field Recordings.
In CLEF 2018 (Working Notes). [PDF]

Kahl, S., Wilhelm-Stein, T., Hussein, H., Klinck, H., Kowerko, D., Ritter, M., & Eibl, M. (2017). Large-Scale Bird Sound Classification using Convolutional Neural Networks.
In CLEF 2017 (Working Notes). [PDF]