This dataset contains neurophysiological and audio recordings from a cohort of seven adult zebra finches (four males and three females). The data were collected in the laboratory of Prof. Manfred Gahr at the Max Planck Institute for Ornithology (now the Max Planck Institute for Biological Intelligence) in Seewiesen, Germany, by Dr. Hermina Robotka, Prof. Frédéric Theunissen, and Amirmasoud Ahmadi.
Dataset Overview
The core of this dataset comprises extracellular neural recordings from the primary (Field L, CLM) and secondary (NCM, CMM) auditory pallial areas of freely behaving birds. Recordings were acquired with 16-channel electrode arrays (Microprobe) over approximately four weeks for each bird.
Birds were presented with a stimulus set consisting of multiple renditions of ten distinct songs, along with temporally and spectrally filtered versions of those songs. The dataset includes both the complete audio waveforms of the playback stimuli and the vocalizations produced by the birds during the experiments.
Data Structure and Content
The dataset is organized into 51 recording sessions. Each session contains:
- Raw Neural Data: Intan .rhd files.
- Stimulus Information: .csv files with playback timings, song IDs, and stimulus types.
- Experimental Metadata: Bird identifier and electrode-array depth annotations.
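As an illustration of how the session contents listed above might be accessed, the sketch below enumerates the .rhd and .csv files in one session directory and reads a stimulus table with pandas. The directory name, file layout, and printed column names are assumptions for illustration only; the .rhd files themselves can be read with Intan's published Python loader or a third-party package such as neo (not shown here).

```python
from pathlib import Path
import pandas as pd

# Hypothetical session directory; actual names follow the dataset's own layout.
session_dir = Path("bird01/session_01")

# Raw neural data: one or more Intan .rhd files per session.
# These can be parsed with Intan's published Python loader or a package
# such as neo; only file discovery is shown here.
rhd_files = sorted(session_dir.glob("*.rhd"))
print(f"Found {len(rhd_files)} Intan .rhd file(s)")

# Stimulus information: playback timings, song IDs, and stimulus types.
# Column names are whatever each .csv file defines; nothing is assumed here.
for csv_path in sorted(session_dir.glob("*.csv")):
    stim = pd.read_csv(csv_path)
    print(csv_path.name, stim.columns.tolist())
```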
Experimental Context
Recordings were made while the birds were housed in pairs, allowing naturalistic behavioral responses to the auditory stimuli, such as spontaneous vocal exchanges. This setup provides a rich context for studying the neural processing of communication signals in a semi-naturalistic environment.
Citation and Contact
The results and analyses derived from this dataset are described in the following manuscript:
Ahmadi, A., et al. (under review). Decoding Temporal Features of Birdsong Through Neural Activity Analysis.
For further methodological details, see the “Methods” section of the publication once available.
Questions? Please contact Amirmasoud Ahmadi.