About BirdNET

BirdNET is a research collaboration using machine learning to monitor global biodiversity. We develop open-source tools that transform bioacoustics from a specialized field into a scalable solution for conservation.

Science-led conservation.

BirdNET aims to lower the barrier to using sound for biodiversity monitoring. By combining deep learning with open tools and citizen science, we help track bird populations and support conservation decisions at local to global scales.

Precision Models
High-quality animal sound identification for researchers.
Scalable Monitoring
Acoustic workflows for large-scale deployments.

BirdNET is a joint effort between:

  • K. Lisa Yang Center for Conservation Bioacoustics, Cornell Lab of Ornithology
  • Chair of Media Informatics, Chemnitz University of Technology

Supported by researchers, engineers, educators, and community contributors.

Acoustic Monitoring

Many bird species are more easily detected by sound than sight. Passive monitoring captures activity without human disturbance across remote habitats and seasons.

Machine Learning

Manually reviewing thousands of hours of audio is not feasible. AI automates species identification at scale, extracting features from noisy soundscapes with consistent precision.

Open Ecosystem

BirdNET provides core models for edge devices and reusable embeddings, supporting a global community of researchers and conservationists.

From soundscapes to insights

The BirdNET model is trained on thousands of hours of curated bird vocalizations. It is optimized to be robust against background noise while remaining efficient enough for real-time mobile use.

Training Data
Curated audio recordings from large public collections such as Xeno-canto and the Macaulay Library, segmented into 3-second windows, noise-filtered, and augmented for robustness.
Architecture
Deep convolutional neural networks with an EfficientNet backbone and a custom spectrogram layer, producing embeddings and per-species confidence scores from time-frequency patterns.
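The noise augmentation mentioned under Training Data can be sketched as mixing background noise into a clip at a chosen signal-to-noise ratio. This is an illustrative recipe with a hypothetical `augment_with_noise` helper, not BirdNET's actual training code, which applies several augmentations:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_with_noise(clip: np.ndarray, noise: np.ndarray,
                       snr_db: float) -> np.ndarray:
    """Mix background noise into a training clip at a target SNR.

    Scaling the noise to a chosen signal-to-noise ratio teaches the
    model to ignore wind, rain, and distant traffic. (Illustrative
    scheme; BirdNET's real pipeline may combine noise differently.)
    """
    clip_power = np.mean(clip ** 2)
    noise_power = np.mean(noise ** 2) or 1e-12
    target_noise_power = clip_power / (10 ** (snr_db / 10))
    scale = np.sqrt(target_noise_power / noise_power)
    return clip + scale * noise

# Mix one second of synthetic "noise" into a clip at 10 dB SNR.
clip = rng.standard_normal(48_000).astype(np.float32)
noise = rng.standard_normal(48_000).astype(np.float32)
noisy = augment_with_noise(clip, noise, snr_db=10.0)
```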

Design Priorities

  • Noise robustness
  • Real-time and batch optimization
  • High species coverage (6k+)
  • Continuous data-driven updates

1. Capture

Audio is captured at 48 kHz and divided into 3-second segments, optimizing the balance between model input size and the natural duration of avian vocalizations.
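The segmentation step can be sketched in a few lines; `segment_audio` is a hypothetical helper, and the zero-padding policy for the final partial window is an assumption, not a BirdNET specific:

```python
import numpy as np

SAMPLE_RATE = 48_000     # Hz, as described above
SEGMENT_SECONDS = 3.0    # window length fed to the model
SEGMENT_SAMPLES = int(SAMPLE_RATE * SEGMENT_SECONDS)

def segment_audio(samples: np.ndarray) -> list[np.ndarray]:
    """Split a mono recording into non-overlapping 3-second windows.

    The final partial window is zero-padded so every segment has the
    fixed length the model expects.
    """
    segments = []
    for start in range(0, len(samples), SEGMENT_SAMPLES):
        chunk = samples[start:start + SEGMENT_SAMPLES]
        if len(chunk) < SEGMENT_SAMPLES:
            chunk = np.pad(chunk, (0, SEGMENT_SAMPLES - len(chunk)))
        segments.append(chunk)
    return segments

# A ten-second recording yields four windows (the last one padded).
audio = np.zeros(10 * SAMPLE_RATE, dtype=np.float32)
windows = segment_audio(audio)
```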

2. Spectrogram

Signals are processed into two log-scaled Mel-spectrograms, visualizing frequency patterns from 0 to 3 kHz and 150 Hz to 15 kHz for detailed analysis.
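The core of this step, turning a waveform into a log-scaled time-frequency image, can be shown with a plain short-time Fourier transform. FFT size and hop length here are illustrative, and the Mel mapping and dual frequency ranges described above are omitted for brevity:

```python
import numpy as np

def log_spectrogram(samples: np.ndarray,
                    n_fft: int = 512, hop: int = 256) -> np.ndarray:
    """Log-magnitude STFT: each column is the spectrum of one frame.

    BirdNET additionally warps frequencies onto a Mel scale and
    computes two spectrograms over different ranges; this keeps only
    the linear-frequency version to show the principle.
    """
    window = np.hanning(n_fft)
    frames = []
    for start in range(0, len(samples) - n_fft + 1, hop):
        frame = samples[start:start + n_fft] * window
        spectrum = np.abs(np.fft.rfft(frame))
        frames.append(np.log1p(spectrum))
    return np.stack(frames, axis=1)   # shape: (n_fft // 2 + 1, n_frames)

# A 1 kHz test tone at 48 kHz concentrates energy near one frequency bin.
t = np.arange(48_000) / 48_000
spec = log_spectrogram(np.sin(2 * np.pi * 1000 * t))
```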

3. Neural Net

A Convolutional Neural Network (CNN) scans these spectrograms, applying millions of trained weights to detect species-specific time-frequency patterns.
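The basic operation such a network applies, a learned 2-D filter sliding over the spectrogram, can be shown in isolation. The filter below is a hand-picked toy edge detector, not a trained weight:

```python
import numpy as np

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid-mode 2-D cross-correlation: slide the kernel over the
    input and take a weighted sum at each position. A real CNN stacks
    many such filters with nonlinearities and pooling in between.
    """
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A horizontal difference filter responds where energy first appears,
# i.e. at the onset of a "call" in this toy spectrogram.
spec = np.zeros((6, 6))
spec[:, 3:] = 1.0                       # energy appears halfway through
edge_kernel = np.array([[-1.0, 1.0]])   # detects left-to-right onsets
response = conv2d(spec, edge_kernel)
```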

4. Result

Initial predictions are cross-referenced with local metadata (location and date) to produce high-confidence species probabilities.
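This cross-referencing step can be sketched as reweighting raw scores by a location-and-date prior. The species, prior values, and the multiply-and-renormalise rule are all illustrative; the exact combination BirdNET uses may differ:

```python
def apply_metadata_prior(scores: dict[str, float],
                         prior: dict[str, float]) -> dict[str, float]:
    """Down-weight species that are implausible at this place and time.

    `prior` maps each species to a 0..1 plausibility for the
    recording's location and date. Species absent from the prior are
    treated as impossible in this sketch.
    """
    weighted = {sp: s * prior.get(sp, 0.0) for sp, s in scores.items()}
    total = sum(weighted.values()) or 1.0
    return {sp: w / total for sp, w in weighted.items()}

# A Central American species scored in a German winter recording is
# suppressed by its tiny prior, while the locally common species rises.
raw = {"Common Blackbird": 0.6, "Resplendent Quetzal": 0.4}
prior = {"Common Blackbird": 0.9, "Resplendent Quetzal": 0.01}
filtered = apply_metadata_prior(raw, prior)
```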

Supported by a global network

Institutional Support

Work at the K. Lisa Yang Center for Conservation Bioacoustics is made possible by the generosity of K. Lisa Yang, supporting innovative conservation technologies that scale global biodiversity monitoring.

Project Funding

Development of BirdNET is supported by:
  • German Federal Ministry of Research, Technology and Space (FKZ 01IS22072)
  • German Federal Ministry for the Environment (FKZ 67KI31040E)
  • German Federal Ministry of Economic Affairs and Energy (FKZ 16KN095550)
  • Deutsche Bundesstiftung Umwelt (39263/01)
  • European Social Fund

Research Collaboration

BirdNET is a joint effort of partners from academia and industry. This multidisciplinary collaboration enables us to bridge the gap between AI research and practical field ecology.

Representative partner logos. See publications and tools for additional collaborators.

Contact

Questions about BirdNET research, tools, or collaborations:

ccb-birdnet@cornell.edu