Fig. 1: Development and evaluation of a quasi-real-time song decoder, SAIBS.

From: Goal-directed and flexible modulation of syllable sequence within birdsong

a The operational architecture. In the training phase, syllables were clustered and used to train a convolutional neural network (CNN). In the decoding phase, the trained CNN was used to decode the incoming audio. t-SNE, t-distributed stochastic neighbor embedding; DBSCAN, density-based spatial clustering of applications with noise. b, c Example of syllables automatically clustered by SAIBS (b) and their detection in a song (c). d Annotation comparison against TweetyNet. Songs from one bird were annotated by SAIBS, and the results were compared with those from TweetyNet. The matrix shows the mean match rate between the two decoders for each syllable. e The concordance rate for each syllable is shown together with the rates of insertion-type and deletion-type errors. Mean ± s.e.m. from n = 4 trials.
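As a rough illustration of the training-phase pipeline summarized in panel a (t-SNE embedding of syllable features followed by DBSCAN clustering, with the resulting labels then used to train a CNN), the minimal sketch below shows how such a clustering step could be set up. It is not the authors' implementation; the feature matrix, parameter values, and printed summary are illustrative assumptions only.

```python
# Minimal sketch (assumed, not the authors' code) of syllable clustering:
# t-SNE embedding followed by DBSCAN, as depicted in the training phase of panel a.
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)

# Placeholder syllable features: n_syllables x n_features
# (e.g., flattened spectrogram segments); synthetic data for illustration.
features = rng.normal(size=(500, 128))

# Step 1: t-SNE projects the high-dimensional syllable features into 2-D.
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(features)

# Step 2: DBSCAN groups the embedded points into syllable clusters;
# points labeled -1 are treated as noise.
labels = DBSCAN(eps=3.0, min_samples=10).fit_predict(embedding)

n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print(f"Found {n_clusters} syllable clusters "
      f"({int(np.sum(labels == -1))} samples flagged as noise)")

# Step 3 (not shown): the clustered, labeled syllables would be used to train
# a CNN that decodes incoming audio during the decoding phase.
```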