Journal articles on the topic 'Bioacoustic Recognition'

Consult the top 40 journal articles for your research on the topic 'Bioacoustic Recognition.'

1

Gong, Cihun-Siyong Alex, Chih-Hui Simon Su, Kuo-Wei Chao, Yi-Chu Chao, Chin-Kai Su, and Wei-Hang Chiu. "Exploiting deep neural network and long short-term memory methodologies in bioacoustic classification of LPC-based features." PLOS ONE 16, no. 12 (December 23, 2021): e0259140. http://dx.doi.org/10.1371/journal.pone.0259140.

Abstract:
The research describes the recognition and classification of the acoustic characteristics of amphibians using deep learning with a deep neural network (DNN) and long short-term memory (LSTM) for biological applications. First, original data are collected from 32 species of frogs and 3 species of toads commonly found in Taiwan. Second, two digital filtering algorithms, linear predictive coding (LPC) and the Mel-frequency cepstral coefficient (MFCC), are used to extract amphibian bioacoustic features and construct the datasets. In addition, the principal component analysis (PCA) algorithm is applied to achieve dimensionality reduction of the training datasets. Next, the classification of amphibian bioacoustic features is accomplished through the use of the DNN and LSTM. The PyTorch platform with a GPU (NVIDIA GeForce GTX 1050 Ti) performs the calculation and recognition of the acoustic feature classification results. Based on the two above-mentioned algorithms, the sound feature datasets are classified and summarized in several classification result tables and graphs. The results of the classification experiments on the different bioacoustic features are verified and discussed in detail. This research seeks to identify the optimal combination of recognition and classification algorithms across all experimental processes.
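
A minimal sketch of the pipeline this abstract outlines — frame-level MFCC features fed to a small LSTM classifier — assuming librosa and PyTorch; the file name, class count, and network size below are illustrative placeholders, not the authors' settings.

```python
# Hedged sketch: MFCC features for one amphibian call, classified with a small LSTM.
import librosa
import torch
import torch.nn as nn

# Load one recording and extract MFCC frames (shape: [n_frames, n_mfcc]).
y, sr = librosa.load("frog_call.wav", sr=None)           # hypothetical file
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20).T

class CallLSTM(nn.Module):
    def __init__(self, n_features=20, n_classes=35):      # e.g. 32 frogs + 3 toads
        super().__init__()
        self.lstm = nn.LSTM(n_features, 64, batch_first=True)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):                                  # x: [batch, time, features]
        _, (h, _) = self.lstm(x)
        return self.fc(h[-1])                              # logits per species

model = CallLSTM()
logits = model(torch.tensor(mfcc, dtype=torch.float32).unsqueeze(0))
print(logits.argmax(dim=1))                                # predicted class index
```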
2

Asakura, Takumi, and Shizuru Iida. "Hand gesture recognition by using bioacoustic responses." Acoustical Science and Technology 41, no. 2 (March 1, 2020): 521–24. http://dx.doi.org/10.1250/ast.41.521.

3

Chesmore, David. "Automated bioacoustic identification of species." Anais da Academia Brasileira de Ciências 76, no. 2 (June 2004): 436–40. http://dx.doi.org/10.1590/s0001-37652004000200037.

Abstract:
Research into the automated identification of animals by bioacoustics is becoming more widespread mainly due to difficulties in carrying out manual surveys. This paper describes automated recognition of insects (Orthoptera) using time domain signal coding and artificial neural networks. Results of field recordings made in the UK in 2002 are presented which show that it is possible to accurately recognize 4 British Orthoptera species in natural conditions under high levels of interference. Work is under way to increase the number of species recognized.
4

Noh, Hyung Wook, Chang-Geun Ahn, Seung-Hoon Chae, Yunseo Ku, and Joo Yong Sim. "Multichannel Acoustic Spectroscopy of the Human Body for Inviolable Biometric Authentication." Biosensors 12, no. 9 (August 31, 2022): 700. http://dx.doi.org/10.3390/bios12090700.

Abstract:
Specific features of the human body, such as fingerprint, iris, and face, are extensively used in biometric authentication. Conversely, the internal structure and material features of the body have not been explored extensively in biometrics. Bioacoustics technology is suitable for extracting information about the internal structure and biological and material characteristics of the human body. Herein, we report a biometric authentication method that enables multichannel bioacoustic signal acquisition with a systematic approach to study the effects of selectively distilled frequency features, increasing the number of sensing channels with respect to multiple fingers. The accuracy of identity recognition according to the number of sensing channels and the number of selectively chosen frequency features was evaluated using exhaustive combination searches and forward-feature selection. The technique was applied to test the accuracy of machine learning classification using 5,232 datasets from 54 subjects. By optimizing the scanning frequency and sensing channels, our method achieved an accuracy of 99.62%, which is comparable to existing biometric methods. Overall, the proposed biometric method not only provides an unbreakable, inviolable biometric but also can be applied anywhere in the body and can substantially broaden the use of biometrics by enabling continuous identity recognition on various body parts for biometric identity authentication.
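
As a rough illustration of the forward-feature-selection step mentioned above, the sketch below picks a small subset of candidate frequency features and reports cross-validated classification accuracy; the synthetic data, logistic-regression classifier, and feature counts are stand-ins, not the paper's spectroscopy features or model.

```python
# Forward feature selection over (synthetic) frequency features, then
# cross-validated identity classification. All data and model choices are placeholders.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(540, 24))        # 540 scans x 24 candidate frequency features
y = rng.integers(0, 54, size=540)     # 54 subjects (random labels here)

clf = LogisticRegression(max_iter=1000)
selector = SequentialFeatureSelector(clf, n_features_to_select=6,
                                     direction="forward", cv=3)
selector.fit(X, y)

X_sel = selector.transform(X)
print("selected features:", np.flatnonzero(selector.get_support()))
print("cross-validated accuracy:", cross_val_score(clf, X_sel, y, cv=3).mean())
```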
5

Ferguson, Elizabeth L., Peter Sugarman, Kevin R. Coffey, Jennifer Pettis Schallert, and Gabriela C. Alongi. "Development of deep neural networks for marine mammal call detection using an open-source, user friendly tool." Journal of the Acoustical Society of America 151, no. 4 (April 2022): A28. http://dx.doi.org/10.1121/10.0010547.

Abstract:
As the collection of large acoustic datasets used to monitor marine mammals increases, so too does the need for expedited and reliable detection of accurately classified bioacoustic signals. Deep learning methods of detection and classification are increasingly proposed as a means of addressing this processing need. These image recognition and classification methods include the use of neural networks that independently determine important features of bioacoustic signals from spectrograms. Recent marine mammal call detection studies report consistent performance even when used with datasets that were not included in the network training. We present here the use of DeepSqueak, a novel open-source tool originally developed to detect and classify ultrasonic vocalizations from rodents in a low-noise, laboratory setting. We have trained networks in DeepSqueak to detect marine mammal vocalizations in comparatively noisy, natural acoustic environments. DeepSqueak utilizes a regional convolutional neural network architecture within an intuitive graphical user interface that provides automated detection results independent of acoustician expertise. Using passive acoustic data from two hydrophones on the Ocean Observatories Initiative’s Coastal Endurance Array, we developed networks for humpback whales, delphinids, and fin whales. We report performance and limitations for use of this detection method for each species.
6

Crawford, John D., Aaron P. Cook, and Andrea S. Heberlein. "Bioacoustic behavior of African fishes (Mormyridae): Potential cues for species and individual recognition in Pollimyrus." Journal of the Acoustical Society of America 102, no. 2 (August 1997): 1200–1212. http://dx.doi.org/10.1121/1.419923.

7

Oba, Teruyo. "The sound environmental education aided by automated bioacoustic identification in view of soundscape recognition." Journal of the Acoustical Society of America 120, no. 5 (November 2006): 3239. http://dx.doi.org/10.1121/1.4788256.

8

Chesmore, E. D., and E. Ohya. "Automated identification of field-recorded songs of four British grasshoppers using bioacoustic signal recognition." Bulletin of Entomological Research 94, no. 4 (August 2004): 319–30. http://dx.doi.org/10.1079/ber2004306.

Abstract:
Recognition of Orthoptera species by means of their song is widely used in field work but requires expertise. It is now possible to develop computer-based systems that achieve the same task with a number of advantages, including continuous long-term unattended operation and automatic species logging. The system described here achieves automated discrimination between different species by utilizing a novel time domain signal coding technique and an artificial neural network. The system has previously been shown to recognize 25 species of British Orthoptera with 99% accuracy for good-quality sounds. This paper tests the system on field recordings of four species of grasshopper in northern England in 2002 and shows that it is capable not only of correctly recognizing the target species under a range of acoustic conditions but also of recognizing other sounds such as birds and man-made sounds. Recognition accuracies of typically 70–100% are obtained for the four species in field recordings with varying sound intensities and background signals.
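
Time domain signal coding proper encodes the waveform epochs between zero crossings into a codebook; the sketch below only conveys the flavour of that idea — a histogram of zero-crossing intervals fed to a small neural network on synthetic clips — and is not Chesmore's TDSC scheme or network. Library choices and parameters are assumptions.

```python
# Loose sketch of time-domain feature coding: histogram the durations between
# zero crossings and train a small neural network on the result.
import numpy as np
from sklearn.neural_network import MLPClassifier

def zero_crossing_histogram(signal, n_bins=16, max_len=200):
    sb = np.signbit(signal)
    crossings = np.flatnonzero(sb[:-1] != sb[1:])        # sample indices of sign changes
    intervals = np.diff(crossings)                        # epoch durations in samples
    hist, _ = np.histogram(np.clip(intervals, 0, max_len),
                           bins=n_bins, range=(0, max_len), density=True)
    return hist

# Two synthetic "species" with different carrier frequencies stand in for real songs.
rng = np.random.default_rng(1)
def synth_clips(freq, n=50, sr=8000):
    t = np.arange(0, 0.5, 1 / sr)
    return [np.sin(2 * np.pi * freq * t) + 0.3 * rng.normal(size=t.size) for _ in range(n)]

X = np.array([zero_crossing_histogram(s) for s in synth_clips(800) + synth_clips(2400)])
y = np.array([0] * 50 + [1] * 50)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```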
9

Larsen, Hanne Lyngholm, Cino Pertoldi, Niels Madsen, Ettore Randi, Astrid Vik Stronen, Holly Root-Gutteridge, and Sussie Pagh. "Bioacoustic Detection of Wolves: Identifying Subspecies and Individuals by Howls." Animals 12, no. 5 (March 2, 2022): 631. http://dx.doi.org/10.3390/ani12050631.

Abstract:
Wolves (Canis lupus) are generally monitored by visual observations, camera traps, and DNA traces. In this study, we evaluated acoustic monitoring of wolf howls as a method for monitoring wolves, which may permit detection across longer distances than camera traps allow. We analyzed acoustic data of howls collected from both wild and captive wolves, focusing on individual and subspecies recognition. Furthermore, we aimed to determine the usefulness of acoustic monitoring in the field given the limited data for Eurasian wolves. We analyzed 170 howls from 16 individual wolves from 3 subspecies: Arctic (Canis lupus arctos), Eurasian (C. l. lupus), and Northwestern wolves (C. l. occidentalis). Variables from the fundamental frequency (f0), the lowest frequency band of a sound signal, were extracted and used in discriminant analysis, a classification matrix, and pairwise post-hoc Hotelling tests. The results indicated that Arctic and Eurasian wolves had subspecies-identifiable calls, while Northwestern wolves did not, though the sample size was small. Identification at the individual level was successful for all subspecies; individuals were correctly classified with 80–100% accuracy using discriminant function analysis. Our findings suggest acoustic monitoring could be a valuable and cost-effective tool that complements camera traps by improving long-distance detection of wolves.
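
A minimal sketch of the analysis chain described above — fundamental-frequency (f0) variables extracted per howl and individuals classified by discriminant analysis — assuming librosa's pYIN tracker and scikit-learn; the paths, frequency limits, and feature set are illustrative, not the study's.

```python
# Sketch: f0 summary variables per howl, then linear discriminant analysis over individuals.
import numpy as np
import librosa
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def howl_f0_features(path):
    y, sr = librosa.load(path, sr=None)
    f0, voiced_flag, voiced_prob = librosa.pyin(y, fmin=150, fmax=1200, sr=sr)
    f0 = f0[np.isfinite(f0)]                      # keep voiced frames only
    return np.array([f0.mean(), f0.std(), f0.min(), f0.max(), np.ptp(f0)])

# Hypothetical inputs: one feature row per howl, one wolf ID per howl.
# X = np.vstack([howl_f0_features(p) for p in howl_paths])
# y = np.array(wolf_ids)
# lda = LinearDiscriminantAnalysis().fit(X, y)
# print(lda.predict(X), lda.score(X, y))   # cf. the 80-100% per-individual accuracy reported
```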
10

Cao, Tianyu, Xiaoqun Zhao, Yichen Yang, Caiyun Zhu, and Zhongwei Xu. "Adaptive Recognition of Bioacoustic Signals in Smart Aquaculture Engineering Based on r-Sigmoid and Higher-Order Cumulants." Sensors 22, no. 6 (March 15, 2022): 2277. http://dx.doi.org/10.3390/s22062277.

Abstract:
In recent years, interest in aquaculture acoustic signals has risen with the development of precision agriculture technology. Underwater acoustic signals are known to be noisy, as they are inevitably mixed with a large amount of environmental background noise, which severely interferes with the extraction of signal features and underlying patterns. Furthermore, this interference adds a considerable burden to the transmission, storage, and processing of data. A signal recognition curve (SRC) algorithm based on higher-order cumulants (HOC) and a recognition-sigmoid (r-sigmoid) function is proposed for feature extraction of target signals; the signal data of interest can then be accurately identified from the SRC. The analysis and verification of the algorithm are carried out in this study. The results show that the SRC algorithm is effective when the SNR is greater than 7 dB, and its performance improvement is greatest at an SNR of 11 dB. The SRC algorithm also shows good flexibility and robustness in application.
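
The general recipe outlined here — a higher-order statistic computed over sliding frames and mapped through a sigmoid into a recognition score — can be sketched as follows; the window length, slope, and threshold are guesses rather than the paper's r-sigmoid parameters, and scipy's excess-kurtosis estimate stands in for the full HOC computation.

```python
# Sliding-window excess kurtosis (a 4th-order cumulant ratio) squashed by a
# sigmoid into a 0-1 "recognition" score. Parameters are illustrative.
import numpy as np
from scipy.stats import kurtosis

def signal_recognition_curve(x, win=1024, hop=512, slope=1.0, thresh=2.0):
    frames = [x[i:i + win] for i in range(0, len(x) - win + 1, hop)]
    c4 = np.array([kurtosis(f, fisher=True) for f in frames])
    return 1.0 / (1.0 + np.exp(-slope * (c4 - thresh)))

# Gaussian noise keeps a near-zero excess kurtosis (low score); an impulsive
# tone burst drives the score in the affected frames toward 1.
rng = np.random.default_rng(0)
noise = rng.normal(size=48000)
pulse = noise.copy()
pulse[20000:20100] += 8 * np.sin(2 * np.pi * np.linspace(0, 10, 100))
print(signal_recognition_curve(noise).max(), signal_recognition_curve(pulse).max())
```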
11

Palacios, V., E. Font, R. Márquez, and P. Carazo. "Recognition of familiarity on the basis of howls: a playback experiment in a captive group of wolves." Behaviour 152, no. 5 (2015): 593–614. http://dx.doi.org/10.1163/1568539x-00003244.

Abstract:
Playback experiments were conducted with a pack of captive Iberian wolves. We used a habituation–discrimination paradigm to test wolves’ ability to discriminate howls based on: (1) artificial manipulation of acoustic parameters of howls and (2) the identity of howling individuals. Manipulations in fundamental frequency and frequency modulation within the natural range of intra-individual howl variation did not elicit dishabituation, while manipulation of modulation pattern did produce dishabituation. With respect to identity, across trials wolves habituated to unfamiliar howls by a familiar wolf (i.e., no direct contact, but previous exposure to howls by this wolf), but not to unfamiliar howls from unfamiliar wolves (i.e., no direct contact and no previous exposure to howls by these wolves). Modulation pattern seems to be an important bioacoustic feature for individual recognition. Overall, our results provide the first experimental evidence that wolves can discriminate individuals based on the acoustic structure of their howls.
12

Dietrich, C., G. Palm, K. Riede, and F. Schwenker. "Classification of bioacoustic time series based on the combination of global and local decisions." Pattern Recognition 37, no. 12 (December 2004): 2293–305. http://dx.doi.org/10.1016/s0031-3203(04)00161-x.

13

Rekanos, Ioannis T., and Leontios J. Hadjileontiadis. "An iterative kurtosis-based technique for the detection of nonstationary bioacoustic signals." Signal Processing 86, no. 12 (December 2006): 3787–95. http://dx.doi.org/10.1016/j.sigpro.2006.03.020.

14

Lengagne, Thierry, Jacques Lauga, and Pierre Jouventin. "A method of independent time and frequency decomposition of bioacoustic signals: inter-individual recognition in four species of penguins." Comptes Rendus de l'Académie des Sciences - Series III - Sciences de la Vie 320, no. 11 (November 1997): 885–91. http://dx.doi.org/10.1016/s0764-4469(97)80873-6.

15

Charrier, Isabelle, Laurie L. Bloomfield, and Christopher B. Sturdy. "Note types and coding in parid vocalizations. I: The chick-a-dee call of the black-capped chickadee (Poecile atricapillus)." Canadian Journal of Zoology 82, no. 5 (May 1, 2004): 769–79. http://dx.doi.org/10.1139/z04-045.

Abstract:
The chick-a-dee call of the black-capped chickadee, Poecile atricapillus (L., 1766), consists of four note types and is used in a wide variety of contexts including mild alarm, contact between mates, and for mobilizing members of winter flocks. Because note-type composition varies with context and because birds need to identify flock mates and individuals by their calls, it is important that birds are able to discriminate between note types and birds. Moreover, previous experiments have shown that black-capped chickadees are able to discriminate their four note types, but the acoustical basis of this process is still unknown. Here, we present the results of a bioacoustic analysis that suggests which acoustic features may be controlling the birds' perception of note types and of individual identity. Several acoustic features show high note type and individual specificity, but frequency and frequency modulation cues (in particular, those of the initial part of the note) appear more likely to be used in these processes. However, only future experiments testing the bird's perceptual abilities will determine which acoustic cues in particular are used in the discrimination of note types and in individual recognition.
16

Gambale, Priscilla Guedes, Luciana Signorelli, and Rogério Pereira Bastos. "Individual variation in the advertisement calls of a Neotropical treefrog (Scinax constrictus)." Amphibia-Reptilia 35, no. 3 (2014): 271–81. http://dx.doi.org/10.1163/15685381-00002949.

Abstract:
Studies of the variability of signals at different levels are important to resolve issues related to the evolution of a species’ recognition system. We analysed the variation within males, among individuals, and among three breeding seasons in the advertisement calls of Scinax constrictus, a Neotropical hylid frog endemic to the Cerrado of Brazil. We assessed the influence of different temperature ranges and different body condition ranges over a three-year period of breeding seasons on the acoustical features of the advertisement calls of 62 individuals. Air temperature had a negative relationship with call duration and note number. Body condition had a negative relationship with the dominant frequency and positive effects on pulse number and note duration. The acoustic parameters of S. constrictus were stable across breeding seasons. Temporal parameters were highly variable within individuals, whereas the dominant frequency was the most stereotyped property of advertisement calls. Individuals of S. constrictus have sufficient among-male variability, especially for temporal parameters (call duration, number of notes, and note duration), to permit discrimination between conspecific calling males at a reproductive site by statistical analysis. The results highlight the informativeness of non-invasive bioacoustic features for population-level studies and biological conservation.
17

Batistela, Marciela, and Eliara Solange Müller. "Analysis of duet vocalizations in Myiothlypis leucoblephara (Aves, Parulidae)." Neotropical Biology and Conservation 14, no. 2 (August 13, 2019): 297–311. http://dx.doi.org/10.3897/neotropical.14.e37655.

Abstract:
Bird vocalizations might be used for specific recognition, territorial defense, and reproduction. Bioacoustic studies aim to understand the production, propagation and reception of acoustic signals, and they are an important component of research on animal behavior and evolution. In this study we analyzed the sound structure of duet vocalizations in pairs of Myiothlypis leucoblephara and evaluated whether the vocal variables differ among pairs and if there are differences in temporal characteristics and frequency of duets between pairs at forest edges vs. in the forest interior. Vocalizations were recorded from 17 bird pairs in three remnants of Atlantic Forest in southern Brazil. Six of the bird pairs were situated at the edge of the forest remnant, and 11 were in the interior of the remnant. The duets of different pairs across forest areas showed descriptive differences in frequency, number of notes per call, and time between emission of calls, with the main distinguishing feature being a change in frequency of a few notes in the second part of the musical phrase. The minimum frequency of vocalization was lower in the private area than in the other two remnants (p < 0.05). The duets of birds at the forest edge and in the forest interior did not significantly differ in minimum or maximum frequency of phrases (p > 0.05), phrase duration (p > 0.05) or number of notes per phrase (p > 0.05). Myiothlypis leucoblephara did not show a specific pattern with respect to the emission of phrases in duets, but instead showed five different patterns, which varied among pairs. There was a sharp decline or alternation in frequency between notes in the second part of the musical phrase for recognition among pairs. Variation in vocalization among M. leucoblephara duets may play a role in pair recognition.
18

Trapanotto, Martino, Loris Nanni, Sheryl Brahnam, and Xiang Guo. "Convolutional Neural Networks for the Identification of African Lions from Individual Vocalizations." Journal of Imaging 8, no. 4 (April 1, 2022): 96. http://dx.doi.org/10.3390/jimaging8040096.

Abstract:
The classification of vocal individuality for passive acoustic monitoring (PAM) and census of animals is becoming an increasingly popular area of research. Nearly all studies in this field of inquiry have relied on classic audio representations and classifiers, such as Support Vector Machines (SVMs) trained on spectrograms or Mel-Frequency Cepstral Coefficients (MFCCs). In contrast, most current bioacoustic species classification exploits the power of deep learners and more cutting-edge audio representations. A significant reason for avoiding deep learning in vocal identity classification is the tiny sample size in the collections of labeled individual vocalizations. As is well known, deep learners require large datasets to avoid overfitting. One way to handle small datasets with deep learning methods is to use transfer learning. In this work, we evaluate the performance of three pretrained CNNs (VGG16, ResNet50, and AlexNet) on a small, publicly available lion roar dataset containing approximately 150 samples taken from five male lions. Each of these networks is retrained on eight representations of the samples: MFCCs, spectrogram, and Mel spectrogram, along with several new ones, such as VGGish and stockwell, and those based on the recently proposed LM spectrogram. The performance of these networks, both individually and in ensembles, is analyzed and corroborated using the Equal Error Rate and shown to surpass previous classification attempts on this dataset; the best single network achieved over 95% accuracy and the best ensembles over 98% accuracy. The contributions this study makes to the field of individual vocal classification include demonstrating that it is valuable and possible, with caution, to use transfer learning with single pretrained CNNs on the small datasets available for this problem domain. We also make a contribution to bioacoustics generally by offering a comparison of the performance of many state-of-the-art audio representations, including for the first time the LM spectrogram and stockwell representations. All source code for this study is available on GitHub.
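
A hedged sketch of the transfer-learning setup described above — a pretrained CNN retrained on spectrogram images of individual lions — assuming PyTorch and torchvision; the choice of ResNet50, the frozen backbone, and the toy training step are illustrative, not the study's exact configuration.

```python
# Transfer learning on spectrogram "images": replace the classifier head of a
# pretrained CNN and fine-tune it for five individual lions.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 5)        # five individual lions

# Freeze the backbone and train only the new classification head.
for name, p in model.named_parameters():
    p.requires_grad = name.startswith("fc")

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Placeholder batch: Mel spectrograms replicated to 3 channels, resized to 224x224.
batch = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 5, (8,))
loss = criterion(model(batch), labels)
loss.backward()
optimizer.step()
print("one fine-tuning step done, loss =", loss.item())
```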
19

Forti, Lucas Rodriguez, Roseli Maria Foratto, Rafael Márquez, Vânia Rosa Pereira, and Luís Felipe Toledo. "Current knowledge on bioacoustics of the subfamily Lophyohylinae (Hylidae, Anura) and description of Ocellated treefrog Itapotihyla langsdorffii vocalizations." PeerJ 6 (May 31, 2018): e4813. http://dx.doi.org/10.7717/peerj.4813.

Abstract:
Background: Anuran vocalizations, such as advertisement and release calls, are informative for taxonomy because species recognition can be based on those signals. Thus, a proper acoustic description of the calls may support taxonomic decisions and may contribute to knowledge about amphibian phylogeny. Methods: Here we present a perspective on advertisement call descriptions of the frog subfamily Lophyohylinae, through a literature review and a spatial analysis presenting bioacoustic coldspots (sites with high diversity of species lacking advertisement call descriptions) for this taxonomic group. Additionally, we describe the advertisement and release calls of the still poorly known treefrog, Itapotihyla langsdorffii. We analyzed recordings of six males using the software Raven Pro 1.4 and calculated the coefficient of variation for classifying static and dynamic acoustic properties. Results and Discussion: We found that more than half of the species within the subfamily do not have their vocalizations described yet. Most of these species are distributed in the western and northern Amazon, where recording sampling effort should be strengthened in order to fill these gaps. The advertisement call of I. langsdorffii is composed of 3–18 short unpulsed notes (mean of 13 ms long), presents harmonic structure, and has a peak dominant frequency of about 1.4 kHz. This call usually presents amplitude modulation, with decreasing intensity along the sequence of notes. The release call is a simple unpulsed note with an average duration of 9 ms, and peak dominant frequency around 1.8 kHz. Temporal properties presented higher variations than spectral properties at both intra- and inter-individual levels. However, only peak dominant frequency was static at the intra-individual level. High variability in temporal properties and lower variation in spectral ones is usual for anurans; the first set of variables is determined by social environment or temperature, while the second is usually related to the species-recognition process. Here we review and expand the acoustic knowledge of the subfamily Lophyohylinae, highlighting areas and species for future research.
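
The coefficient-of-variation criterion mentioned above is straightforward to compute; the sketch below shows the usual calculation, with 5% and 12% static-versus-dynamic cutoffs taken from a convention common in anuran bioacoustics rather than from this paper, and with made-up example values.

```python
# Coefficient of variation (CV = sd/mean) used to label call properties as
# static or dynamic. Cutoffs and example values are illustrative.
import numpy as np

def classify_property(values, low=0.05, high=0.12):
    values = np.asarray(values, dtype=float)
    cv = values.std(ddof=1) / values.mean()
    if cv <= low:
        return cv, "static"
    if cv >= high:
        return cv, "dynamic"
    return cv, "intermediate"

# Example: peak dominant frequency (kHz) vs. note duration (ms) within one male.
print(classify_property([1.41, 1.39, 1.40, 1.42, 1.40]))   # low CV -> static
print(classify_property([9, 13, 11, 16, 8]))               # high CV -> dynamic
```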
20

De Carvalho, Thiago Ribeiro, and Ariovaldo Antonio Giaretta. "Taxonomic circumscription of Adenomera martinezi (Bokermann, 1956) (Anura: Leptodactylidae: Leptodactylinae) with the recognition of a new cryptic taxon through a bioacoustic approach." Zootaxa 3701, no. 2 (August 19, 2013): 207. http://dx.doi.org/10.11646/zootaxa.3701.2.5.

21

Correa, Claudio, and Felipe Durán. "Taxonomy, systematics and geographic distribution of ground frogs (Alsodidae, Eupsophus): a comprehensive synthesis of the last six decades of research." ZooKeys 863 (July 11, 2019): 107–52. http://dx.doi.org/10.3897/zookeys.863.35484.

Abstract:
The genus Eupsophus (ground frogs) inhabits exclusively the temperate forests of southern South America (Chile and Argentina). The current delimitation of the genus was reached in the late 1970s, when only two species were recognized, but since then the number of described species steadily increased, reaching a maximum of 11 by 2012. Subsequent studies that applied explicit species delimitation approaches decreased the number of species to six in 2017 and raised it again to 11 the following year, including an undescribed putative species. Despite these taxonomic changes, the two species groups traditionally recognized, roseus and vertebralis, have been maintained. Another recent contribution to the taxonomy of the genus was the explicit recognition of the extremely high level of external phenotypic variation exhibited by species of the roseus group, which undermines the utility of some diagnostic characters. Here we provide a critical review of the extensive taxonomic and systematic literature on the genus over the last six decades, to examine the evidence behind the recurrent taxonomic changes and advances in its systematics. We also update and complete a 2017 review of geographic information, provide additional qualitative observations of external characters commonly used in the diagnoses of species of the roseus group, and reassess the phylogenetic position of a putative new species from Tolhuaca (Chile), which was not included in the last species delimitation study. The present review shows that: 1) there is no congruence between the patterns of phenotypic and genetic/phylogenetic differentiation among species of both groups; 2) in the roseus group, the intraspecific variation in some external characters is as high as the differences described among species; 3) there is little morphological and bioacoustic differentiation within species groups, and inconsistencies in the chromosomal evidence at the genus level; 4) under the latest taxonomic proposal (2018), species of the roseus group still lack consistent and reliable diagnoses and their distribution limits are poorly defined; and 5) the population from Tolhuaca represents an additional undescribed species under the most recent taxonomic framework. Finally, we discuss the implications of these findings for the taxonomy and biogeography of the genus, pointing out some areas that require further research to understand their patterns and processes of diversification.
22

Folly, Manuella, Lucas Coutinho Amaral, Sergio Potsch de Carvalho-e-Silva, and José P. Pombal Jr. "Rediscovery of the toadlet Brachycephalus bufonoides Miranda-Ribeiro, 1920 (Anura: Brachycephalidae) with osteological and acoustic descriptions." Zootaxa 4819, no. 2 (July 23, 2020): 265–94. http://dx.doi.org/10.11646/zootaxa.4819.2.3.

Abstract:
Brachycephalus bufonoides was described as a “variety” of B. ephippium based on two specimens and was only recognized as a full species 90 years later. Besides its brief original description, nothing else is known about this species. Herein we report the rediscovery of the pumpkin-toadlet Brachycephalus bufonoides from Nova Friburgo, State of Rio de Janeiro, the second most populous area within the Atlantic Forest in Brazil. A detailed osteological description of this species is also provided, including the skull, the hyolaryngeal skeleton and the postcranial skeleton. The laryngeal skeleton of the genus Brachycephalus is depicted for the first time. We conducted a molecular phylogenetic analysis of Brachycephalus using DNA sequences comprising two fragments of a mitochondrial gene (16S). Both analyses, with Bayesian inference and maximum parsimony, supported the recognition of B. bufonoides as an exclusive lineage, allocated within the B. ephippium species group in the B. vertebralis lineage. We improve the diagnosis and document the variation of the species, drawing on additional collected specimens, coloration in vivo and a description of the advertisement call. Compared with its congeners, B. bufonoides has skin on the head and dorsum with dermal hyperossification; a skull with hyperossification of the postorbital crests; a pair of hyperossified bulges about equidistant between the postorbital crests; a fourth presacral vertebra with a hyperossified, ornamented transverse process and hyperossified sacral diapophyses, which can be seen externally (lineage of B. vertebralis sensu Condez et al. 2020); dermal ossification present as separate bulges on each vertebra; a general background color of orange with different intensities of dark orange blotches on the dorsum, including a border around the sacral region; absence of osteoderms and presence of warts on the dorsolateral surface of the body; medium body size (SVL of adults: 12.0–14.5 mm for males and 14.7–16.3 mm for females); a rough dorsum; advertisement calls with 13 to 17 pulses; presence of pulse period modulation; and advertisement calls with notes longer than 0.2 s (0.22 to 0.31 s). Herein an important contribution to the taxonomy and systematics of this genus is provided, including a large amount of novel information for B. bufonoides from different sources (i.e., molecular, morphological variation, bioacoustic), allowing it to be included in future studies of species delimitation and relationships within Brachycephalus. The discovery of this species also reiterates the importance of Nova Friburgo for the conservation of Atlantic Forest biodiversity.
23

Zefa, Edison, Luciano de Pinho Martins, Christian Peter Demari, Riuler Corrêa Acosta, Elliott Centeno, Rodrigo Antônio Castro-Souza, Gabriel Lobregat de Oliveira, et al. "Singing crickets from Brazil (Orthoptera: Gryllidea), an illustrated checklist with access to the sounds produced." Zootaxa 5209, no. 2 (November 16, 2022): 211–37. http://dx.doi.org/10.11646/zootaxa.5209.2.4.

Abstract:
Knowledge of the bioacoustics of Neotropical crickets (Orthoptera, Gryllidea) is incipient, despite the great species diversity in the region. There are few cricket song files deposited in the major world sound libraries compared to other groups such as birds and amphibians. In order to contribute to the knowledge of the bioacoustics of Brazilian crickets, we organize, analyze and make available at the Fonoteca Neotropical Jacques Vielliard (FNJV) and the Orthoptera Species File (OSF) our bank of cricket songs. We deposited 876 cricket song files in the FNJV, belonging to 31 species and 47 sonotypes. The songs were recorded in the field or laboratory, and all individuals were collected to improve the accuracy of species/sonotype taxonomic determination. We present photos (in vivo) of most recorded crickets, as well as calling song spectrograms, to facilitate species/sonotype recognition. Samples of the songs can be found online on the FNJV website, using the codes available in this work, as well as on the OSF, linked to the species name. As a result, we advance the knowledge of cricket songs and the current state of Brazilian cricket bioacoustics. We encourage researchers to share their collections of cricket song files with the public in both the FNJV and the OSF.
24

Ghosh, Saheb, Sathis Kumar B, and Kathir Deivanai. "DETECTION OF WHALES USING DEEP LEARNING METHODS AND NEURAL NETWORKS." Asian Journal of Pharmaceutical and Clinical Research 10, no. 13 (April 1, 2017): 489. http://dx.doi.org/10.22159/ajpcr.2017.v10s1.20767.

Abstract:
Deep learning is a machine learning technique based on artificial neural networks that is widely used for pattern recognition. This project aims to identify whales from an underwater bioacoustics network using an efficient algorithm and data model, so that the location of the whales can be sent to ships travelling in the same region in order to avoid collisions with the whales or disturbance of their natural habitat as much as possible. This paper shows the application of unsupervised machine learning techniques, with the help of a deep belief network and a manual feature extraction model, for better results.
25

Teixeira, Daniella, Simon Linke, Richard Hill, Martine Maron, and Berndt J. van Rensburg. "Fledge or fail: Nest monitoring of endangered black-cockatoos using bioacoustics and open-source call recognition." Ecological Informatics 69 (July 2022): 101656. http://dx.doi.org/10.1016/j.ecoinf.2022.101656.

26

Morgan, Mallory M., and Jonas Braasch. "Open set classification strategies for long-term environmental field recordings for bird species recognition." Journal of the Acoustical Society of America 151, no. 6 (June 2022): 4028–38. http://dx.doi.org/10.1121/10.0011466.

Abstract:
Deep learning is one established tool for carrying out classification tasks on complex, multi-dimensional data. Since audio recordings contain a frequency and temporal component, long-term monitoring of bioacoustics recordings is made more feasible with these computational frameworks. Unfortunately, these neural networks are rarely designed for the task of open set classification in which examples belonging to the training classes must not only be correctly classified but also crucially separated from any spurious or unknown classes. To combat this reliance on closed set classifiers which are singularly inappropriate for monitoring applications in which many non-relevant sounds are likely to be encountered, the performance of several open set classification frameworks is compared on environmental audio datasets recorded and published within this work, containing both biological and anthropogenic sounds. The inference-based open set classification techniques include prediction score thresholding, distance-based thresholding, and OpenMax. Each open set classification technique is evaluated under multi-, single-, and cross-corpus scenarios for two different types of unknown data, configured to highlight common challenges inherent to real-world classification tasks. The performance of each method is highly dependent upon the degree of similarity between the training, testing, and unknown domain.
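
As an illustration of the simplest inference-based strategy compared here — prediction-score thresholding — the sketch below rejects clips whose maximum softmax probability falls under a cutoff; the threshold and toy logits are placeholders, and distance-based thresholding or OpenMax would replace the rejection rule.

```python
# Open-set rejection by softmax-score thresholding: inputs whose best class
# probability is below the cutoff are labelled "unknown" (-1).
import torch
import torch.nn.functional as F

def open_set_predict(logits, threshold=0.7):
    probs = F.softmax(logits, dim=1)
    conf, pred = probs.max(dim=1)
    pred[conf < threshold] = -1           # reject as not one of the training classes
    return pred, conf

logits = torch.tensor([[4.0, 0.5, 0.2],   # confidently one of the known bird classes
                       [1.1, 1.0, 0.9]])  # ambiguous clip -> rejected as unknown
pred, conf = open_set_predict(logits)
print(pred.tolist(), [round(c, 2) for c in conf.tolist()])
```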
27

Köhler, Gunther, Joseph Vargas, Ni Lar Than, Tilman Schell, Axel Janke, Steffen U. Pauls, and Panupong Thammachoti. "A taxonomic revision of the genus Phrynoglossus in Indochina with the description of a new species and comments on the classification within Occidozyginae (Amphibia, Anura, Dicroglossidae)." Vertebrate Zoology 71 (February 26, 2021): 1–26. http://dx.doi.org/10.3897/vz.71.e60312.

Abstract:
We revise the frogs of the genus Phrynoglossus from Indochina based on data of external morphology, bioacoustics and molecular genetics. The results of this integrative study provide evidence for the recognition of three distinct species, one of which we describe as new. Phrynoglossus martensii has a vast geographic distribution from central and southern Thailand across southern China to Vietnam, Laos, and Cambodia. Phrynoglossus myanhessei sp. nov. is distributed in central Myanmar whereas Phrynoglossus magnapustulosus is restricted to the Khorat Plateau, Thailand. These three species occur in allopatry and differ in their mating calls, external morphology, and in genetic distances of the 16S gene of 3.8–5.9%. Finally, we discuss and provide evolutionary evidence for the recognition of Phrynoglossus as a genus distinct from Occidozyga. Members of both genera form reciprocal monophyletic groups in our analyses of mtDNA data (16S) and are well differentiated from each other in morphology and ecology. Furthermore, they differ in the amplexus mode, with Phrynoglossus having an inguinal amplexus whereas it is axillary in Occidozyga. We further provide a de novo draft genome of the holotype based on short-read sequencing technology to a coverage of 25-fold. This resource will permanently link the genetic characterization of the species to the name-bearing type specimen.
28

Forti, Lucas Rodriguez, Célio Fernando Baptista Haddad, Felipe Leite, Leandro de Oliveira Drummond, Clodoaldo de Assis, Lucas Batista Crivellari, Caio Marinho Mello, Paulo Christiano Anchietta Garcia, Camila Zornosa-Torres, and Luís Felipe Toledo. "Notes on vocalizations of Brazilian amphibians IV: advertisement calls of 20 Atlantic Forest frog species." PeerJ 7 (September 13, 2019): e7612. http://dx.doi.org/10.7717/peerj.7612.

Abstract:
Bioacoustics is a powerful tool used for anuran species diagnoses, given that advertisement calls are signals related to species recognition and mate attraction. Thus, call descriptions can support species taxonomy. In spite of that, call descriptions are lacking for many species, delaying advances in biodiversity research. Here, we describe the advertisement calls of 20 anuran species from the Brazilian Atlantic Forest. We accessed 50 digital recordings deposited in the Fonoteca Neotropical Jacques Vielliard. Acoustic analyses were carried out in the software Raven Pro 1.5. We provide a general comparison of call structure among species within taxonomic groups and genera. The vocalizations described here belong to poorly known species, which are representatives of six families: Brachycephalidae, Bufonidae, Ceratophryidae, Cycloramphidae, Hylidae, and Phyllomedusidae. Despite this, there are still 163 species of anurans from the Atlantic Forest whose calls have not been formally described. Our work represents an important step in providing data from a taxonomic perspective and improving the knowledge of Atlantic Forest anuran diversity.
29

Köhler, Gunther, Britta Zwitzers, Ni Lar Than, Deepak Kumar Gupta, Axel Janke, Steffen U. Pauls, and Panupong Thammachoti. "Bioacoustics Reveal Hidden Diversity in Frogs: Two New Species of the Genus Limnonectes from Myanmar (Amphibia, Anura, Dicroglossidae)." Diversity 13, no. 9 (August 24, 2021): 399. http://dx.doi.org/10.3390/d13090399.

Abstract:
Striking geographic variation in male advertisement calls was observed in frogs formerly referred to as Limnonectes doriae and L. limborgi, respectively. Subsequent analyses of mtDNA and external morphological data brought supporting evidence for the recognition of these populations as distinct species. We describe two new frog species of the genus Limnonectes (i.e., L. bagoensis sp. nov. and L. bagoyoma sp. nov.) from Myanmar. Limnonectes bagoensis sp. nov. is closely related to L. doriae whereas L. bagoyoma sp. nov. is closely related to L. limborgi. Results of this integrative study provide evidence for the presence of additional undescribed species in these species complexes but due to the lack of bioacoustical data, we consider these additional diverging populations as candidate species that need further study to resolve their respective taxonomic status. Both new species are distributed in Lower Myanmar. Limnonectes doriae is restricted to southern Myanmar along the Malayan Peninsula whereas L. limborgi is known to occur in eastern Myanmar and northwestern Thailand. The remaining populations formerly referred to as either L. doriae or L. limborgi are considered representatives of various candidate species that await further study. We further provide a de novo draft genome of the respective holotypes of L. bagoensis sp. nov. and L. bagoyoma sp. nov. based on short-read sequencing technology to 25-fold coverage.
30

Caorsi, Valentina, Debora Wolff Bordignon, Rafael Márquez, and Márcio Borges-Martins. "Advertisement call of two threatened red-bellied-toads, Melanophryniscus cambaraensis and M. macrogranulosus (Anura: Bufonidae), from the Atlantic Rainforest, southern Brazil." Zootaxa 4894, no. 2 (December 9, 2020): 206–20. http://dx.doi.org/10.11646/zootaxa.4894.2.2.

Abstract:
In anuran amphibians, acoustic signals are fundamental mechanisms of mate recognition and mate choice, which makes frog calls a fundamental tool for anuran taxonomy. In this work, we describe the advertisement call of two species of the genus Melanophryniscus, M. cambaraensis and M. macrogranulosus, and use the descriptions to try to solve a taxonomic problem between them. We collected data after heavy rains at three different sample sites in Rio Grande do Sul, Brazil, between 2012 and 2013. The advertisement call of both species is composed of two segments. It always begins with part A (about 0.44–6 seconds), composed of single modulated pulses separated by long time intervals. It is followed by part B, a long train of unmodulated pulses with short time intervals, lasting from 9 to 32.2 seconds. Principal Component Analysis (PCA) indicated some variation between temporal parameters of the two species, but Multivariate Analysis of Variance showed no significant differences. The within-individual Coefficient of Variation (CV) showed only two static parameters, pulse rate and peak frequency, both in part B of the call. Despite intra-male variation in some acoustic parameters, it is not possible to differentiate between M. cambaraensis and M. macrogranulosus using bioacoustics alone.
31

Farrow, Lucy F., Ahmad Barati, and Paul G. McDonald. "Cooperative bird discriminates between individuals based purely on their aerial alarm calls." Behavioral Ecology 31, no. 2 (December 17, 2019): 440–47. http://dx.doi.org/10.1093/beheco/arz182.

Abstract:
From an evolutionary perspective, the ability to recognize individuals provides great selective advantages, such as avoiding inbreeding depression during breeding. Whilst the capacity to recognize individuals for these types of benefits is well established in social contexts, why this recognition might arise in a potentially deadly alarm-calling context following predator encounters is less obvious. For example, in most avian systems, alarm signals directed toward aerial predators represent higher predation risk and vulnerability than when individuals vocalize toward a terrestrial-based predator. Although selection should favor simple, more effective alarm calls to these dangerous aerial predators, the potential of these signals to nonetheless encode additional information such as caller identity has not received a great deal of attention. We tested for individual discrimination capacity in the aerial alarm vocalizations of the noisy miner (Manorina melanocephala), a highly social honeyeater that has been previously shown to be able to discriminate between the terrestrial alarm signals of individuals. Utilizing habituation–discrimination paradigm testing, we found conclusive evidence of individual discrimination in the aerial alarm calls of noisy miners, which was surprisingly of similar efficiency to their ability to discriminate between less urgent terrestrial alarm signals. Although the mechanism(s) driving this behavior is currently unclear, it most likely occurs as a result of selection favoring individualism among other social calls in the repertoire of this cooperative species. This raises the intriguing possibility that individualistic signatures in vocalizations of social animals might be more widespread than currently appreciated, opening new areas of bioacoustics research.
32

MacLaren, Andrew R., Shawn F. McCracken, and Michael R. J. Forstner. "Development and Validation of Automated Detection Tools for Vocalizations of Rare and Endangered Anurans." Journal of Fish and Wildlife Management 9, no. 1 (February 26, 2017): 144–54. http://dx.doi.org/10.3996/052017-jfwm-047.

Abstract:
For many rare or endangered anurans, monitoring is achieved via auditory cues alone. Human-performed audio surveys are inherently biased, and may fail to detect animals when they are present. Automated audio recognition tools offer an alternative mode of observer-free monitoring. Few commercially available platforms for developing these tools exist, and little research has investigated whether these tools are effective at detecting rare vocalization events. We generated a recognizer for detecting the vocalization of the endangered Houston toad Anaxyrus houstonensis using SongScope© bioacoustics software. We developed our recognizer using a large sample of training data that included only the highest quality of recorded audio (i.e., low noise, no interfering vocalizations) divided into small, manageable batches. To track recognizer performance, we generated an independent set of test data through randomly sampling a large population of audio known to possess Houston toad vocalizations. We analyzed training data and test data recursively, using a criterion of zero tolerance for false-negative detections. For each step, we incorporated a new batch of training data into the recognizer. Once we included all training data, we manually verified recognizer performance against one full month (March 2014) of audio taken from a known breeding locality. The recognizer successfully identified 100% of all training data and 97.2% of all test data. However, there is a trade-off between reducing false-negative and increasing false-positive detections, which limited the usefulness of some features of SongScope. Methods of automated detection represent a means by which we may test the efficacy of the manual monitoring techniques currently in use. The ability to search any collection of audio recordings for Houston toad vocalizations has the potential to challenge the paradigms presently placed on monitoring for this species of conservation concern.
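
The false-negative/false-positive trade-off noted above can be made concrete by sweeping a detector's score threshold, as in the toy sketch below; the scores and labels are invented and are not SongScope output.

```python
# Sweeping a recognizer's score threshold trades missed calls (false negatives)
# against false alarms (false positives). Synthetic scores and labels.
import numpy as np

scores = np.array([0.95, 0.91, 0.88, 0.60, 0.55, 0.40, 0.35, 0.20])   # detector scores
labels = np.array([1,    1,    1,    0,    1,    0,    0,    0])       # 1 = real toad call

for t in (0.3, 0.5, 0.9):
    detected = scores >= t
    fn = np.sum((labels == 1) & ~detected)      # calls missed at this threshold
    fp = np.sum((labels == 0) & detected)       # non-calls flagged at this threshold
    print(f"threshold={t:.1f}  false negatives={fn}  false positives={fp}")
```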
33

Dubois, Alain. "Lists of European species of amphibians and reptiles: will we soon be reaching "stability"?" Amphibia-Reptilia 19, no. 1 (1998): 1–28. http://dx.doi.org/10.1163/156853898x00304.

Abstract:
Zoologists at the end of our century are faced with a strong demand from "society" for "final and definitive" lists of taxon names: such lists are requested in particular by administrations and users of "official lists" of species. This has entailed, even among some professional taxonomists, a strong movement in favour of artificial stability of taxon names and of a replacement of the basic rule of the International Code of Zoological Nomenclature, the rule of priority, by a so-called "rule of common usage". The aim of this paper is to show, taking the example of European anuran amphibians, that this way of posing the question is wrong. The major factor of change in taxon names in zoology is taxonomic research, not nomenclatural grooming. Contrary to what is often believed, even in "well-known" regions like Europe, numerous new species have recently been discovered, in part through the use of new research techniques (electrophoresis, bioacoustics, etc.), but also as a result of better exploration of natural populations: the misleading idea that "the European fauna is well known" has acted as a brake against recognition of new taxa when these were discovered in the field. Name changes due to the mere application of nomenclatural rules are much less numerous than those due to the progress of taxonomic research, and they would be even much less common if zoologists and editors paid more attention to the international rules of nomenclature. We are still far from reaching the "holy grail" of "final lists" of animal faunae, even in Europe, and, rather than trying to comply with this request from "society", zoologists should explain why this goal will not be reached soon, and that the only way to accelerate the movement towards it would be the creation of numerous positions of professional zoologists and the increase of funds afforded to basic zoological research in Europe.
34

Hepp, Fábio, and José P. Pombal Jr. "Review of bioacoustical traits in the genus Physalaemus Fitzinger, 1826 (Anura: Leptodactylidae: Leiuperinae)." Zootaxa 4725, no. 1 (January 20, 2020): 1–106. http://dx.doi.org/10.11646/zootaxa.4725.1.1.

Abstract:
Given the importance of acoustic communication in intraspecific recognition during mating activity, acoustic traits have been widely used to clarify the taxonomy of anurans. They have been particularly useful in the study of taxa with high morphological similarity such as the Neotropical genus Physalaemus. Here, we reviewed the acoustic repertoires of the species of Physalaemus based on homology hypotheses in order to make comparisons more properly applicable for taxonomic purposes. We covered all the known clades and species groups for the genus, analyzing 45 species (94 % of the currently recognized taxa). Different call types were labeled with letters (i.e., A, B, and C) to avoid speculative functional propositions for the call types. In order to identify correctly the observed frequency bands, we propose a method to interpret them based on the predicted graphic behavior on audiospectrogram and on the mathematic relationship among bands considering each kind of band production (e.g., harmonics and sidebands). We found different acoustic traits between the major clades P. signifer and P. cuvieri. Species in the P. signifer clade have more than one call type (67 % of species in the clade). Furthermore, all species of this clade have A calls with pulses and/or low fundamental frequency (< 500 Hz). In the P. cuvieri clade, species emit only one call type and, in most species, this call is a continuous whine-like emission with relatively high fundamental frequency (> 400 Hz) and several S-shaped harmonics (except for species of P. henselii and P. olfersii groups, P. centralis, and P. cicada). Within the P. signifer clade, pulsed calls are present in P. angrensis, P. atlanticus, P. bokermanni, P. crombiei, P. irroratus, P. moreirae, P. nanus, and P. obtectus, whereas within the P. cuvieri clade this feature is restricted to a few species (10 % of the clade): P. jordanensis, P. feioi, and P. orophilus. A principal component analysis of the quantitative data indicates two clusters that substantially correspond to the composition of these two major clades with a few exceptions. Overall, the cluster composed of taxa of the P. signifer clade has lower fundamental frequency, bandwidth and dominant frequency at the end of the call and higher frequency delta and dominant frequency at the end of the call than the cluster with most taxa of the P. cuvieri clade. We also identified and described several similarities among acoustic signals of closely related species, which might correspond to synapomorphies in the evolution of the acoustic signal in the group. Species of the P. deimaticus group emit long sequences of very short A calls with low fundamental frequency (< 300 Hz) and short duration (< 0.2 s). Most species in the P. signifer group have clearly pulsed calls and emit at least two different call types. Species in the P. henselii group have calls with only high frequency bands (> 1700 Hz). Species in P. cuvieri group have continuous calls that resemble nasal-like sounds or whines, with downward frequency modulation. Species in the P. olfersii group emit long calls (> 1 s) with ascendant and periodic frequency modulation. Calls of the species in the P. biligonigerus and P. gracilis groups usually have continuous whine-like calls with call envelopes very variable within species. 
In addition, we describe traits in the genus for the first time, such as complex traits not predicted by simple and linear acoustic models (nonlinear phenomena), and discuss the application of acoustic traits to taxonomy and phylogenetics and morphological constraints of the vocal apparatus that might be related to the different acoustic properties found.
35

Bertran, Marta, Rosa Ma Alsina-Pagès, and Elena Tena. "Pipistrellus pipistrellus and Pipistrellus pygmaeus in the Iberian Peninsula: An Annotated Segmented Dataset and a Proof of Concept of a Classifier in a Real Environment." Applied Sciences 9, no. 17 (August 22, 2019): 3467. http://dx.doi.org/10.3390/app9173467.

Abstract:
Bats have an important role in the ecosystem, and therefore effective detection of their prevalence can contribute to their conservation. At present, the most commonly used methodology in the study of bats is the analysis of echolocation calls. However, many other ultrasound signals can be simultaneously recorded, and this makes species location and identification a long and difficult task. This field of research could be greatly improved through the use of bioacoustics, which provides more accurate automated detection, identification and counting of the wildlife of a particular area. We have analyzed the calls of two bat species—Pipistrellus pipistrellus and Pipistrellus pygmaeus—both of which are common bats frequently found in the Iberian Peninsula. These two cryptic species are difficult to identify by their morphological features, but are more easily identified by their echolocation calls. The real-life audio files were obtained with an Echo Meter Touch Pro 1 bat detector. Time-expanded recordings of calls were first classified manually by means of their frequency, duration and interpulse interval. In this paper, we first detail the creation of a dataset with three classes: the two bat species and the silent intervals. This dataset can be useful for work in a mixed-species environment. Afterwards, two machine learning approaches for automatic bat detection and identification are described in a laboratory environment, as the step preceding real-life use in an urban scenario. The priority in the design of these approaches is identification using short-window analysis in order to detect each bat pulse. However, given that we are concerned with the risks of automatic identification, the main aim of the project is to accelerate the manual ID process for specialists in the field. The dataset provided will help researchers develop automatic recognition systems for a more accurate identification of the bat species in a laboratory environment and, in the near future, in an urban environment, where those two bat species are common.
36

Gatto, Bernardo Bentes, Juan Gabriel Colonna, Eulanda Miranda dos Santos, Alessandro Lameiras Koerich, and Kazuhiro Fukui. "Discriminative Singular Spectrum Classifier with applications on bioacoustic signal recognition." Digital Signal Processing, November 2022, 103858. http://dx.doi.org/10.1016/j.dsp.2022.103858.

37

Bravo Sanchez, Francisco J., Md Rahat Hossain, Nathan B. English, and Steven T. Moore. "Bioacoustic classification of avian calls from raw sound waveforms with an open-source deep learning architecture." Scientific Reports 11, no. 1 (August 3, 2021). http://dx.doi.org/10.1038/s41598-021-95076-6.

Abstract:
The use of autonomous recordings of animal sounds to detect species is a popular conservation tool, constantly improving in fidelity as audio hardware and software evolves. Current classification algorithms utilise sound features extracted from the recording rather than the sound itself, with varying degrees of success. Neural networks that learn directly from the raw sound waveforms have been implemented in human speech recognition but the requirements of detailed labelled data have limited their use in bioacoustics. Here we test SincNet, an efficient neural network architecture that learns from the raw waveform using sinc-based filters. Results using an off-the-shelf implementation of SincNet on a publicly available bird sound dataset (NIPS4Bplus) show that the neural network rapidly converged reaching accuracies of over 65% with limited data. Their performance is comparable with traditional methods after hyperparameter tuning but they are more efficient. Learning directly from the raw waveform allows the algorithm to select automatically those elements of the sound that are best suited for the task, bypassing the onerous task of selecting feature extraction techniques and reducing possible biases. We use publicly released code and datasets to encourage others to replicate our results and to apply SincNet to their own datasets; and we review possible enhancements in the hope that algorithms that learn from the raw waveform will become useful bioacoustic tools.
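
A simplified sketch of the mechanism SincNet builds on — a band-pass kernel formed from the difference of two windowed sinc low-pass filters, convolved directly with the raw waveform. In SincNet the two cutoff frequencies are learned per filter; here they are fixed, so this is an illustration of the idea rather than the released implementation.

```python
# Fixed-band sinc band-pass kernel applied to a raw waveform (one "channel" of a
# SincNet-style first layer). All parameters are illustrative.
import numpy as np

def sinc_bandpass_kernel(f_low, f_high, sr, length=251):
    t = (np.arange(length) - length // 2) / sr
    lp = lambda fc: (2 * fc / sr) * np.sinc(2 * fc * t)       # windowed-sinc low-pass
    kernel = (lp(f_high) - lp(f_low)) * np.hamming(length)    # band-pass + window
    return kernel / np.abs(kernel).sum()                      # tidy, bounded gain

sr = 22050
waveform = np.random.default_rng(0).normal(size=sr)           # placeholder raw audio
kernel = sinc_bandpass_kernel(2000, 6000, sr)                 # cutoffs are learnable in SincNet
filtered = np.convolve(waveform, kernel, mode="same")
print(filtered.shape)
```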
38

Jäckel, Denise, Kim G. Mortega, Sarah Darwin, Ulrich Brockmeyer, Ulrike Sturm, Mario Lasseck, Nicola Moczek, Gerlind U. C. Lehmann, and Silke L. Voigt-Heucke. "Community engagement and data quality: best practices and lessons learned from a citizen science project on birdsong." Journal of Ornithology, October 13, 2022. http://dx.doi.org/10.1007/s10336-022-02018-8.

Abstract:
Citizen Science (CS) is a research approach that has become popular in recent years and offers innovative potential for dialect research in ornithology. As the scepticism about CS data is still widespread, we analysed the development of a 3-year CS project based on the song of the Common Nightingale (Luscinia megarhynchos) to share best practices and lessons learned. We focused on the data scope, individual engagement, spatial distribution and species misidentifications from recordings generated before (2018, 2019) and during the COVID-19 outbreak (2020) with a smartphone using the ‘Naturblick’ app. The number of nightingale song recordings and individual engagement increased steadily and peaked in the season during the pandemic. 13,991 nightingale song recordings were generated by anonymous (64%) and non-anonymous participants (36%). As the project developed, the spatial distribution of recordings expanded (from Berlin based to nationwide). The rates of species misidentifications were low, decreased in the course of the project (10–1%) and were mainly affected by vocal similarities with other bird species. This study further showed that community engagement and data quality were not directly affected by dissemination activities, but that the former was influenced by external factors and the latter benefited from the app. We conclude that CS projects using smartphone apps with an integrated pattern recognition algorithm are well suited to support bioacoustic research in ornithology. Based on our findings, we recommend setting up CS projects over the long term to build an engaged community which generates high data quality for robust scientific conclusions.
39

Sim, Joo Yong, Hyung Wook Noh, Woonhoe Goo, Namkeun Kim, Seung-Hoon Chae, and Chang-Geun Ahn. "Identity Recognition Based on Bioacoustics of Human Body." IEEE Transactions on Cybernetics, 2019, 1–12. http://dx.doi.org/10.1109/tcyb.2019.2941281.

40

Šturm, Rok, Juan José López Díez, Jernej Polajnar, Jérôme Sueur, and Meta Virant-Doberlet. "Is It Time for Ecotremology?" Frontiers in Ecology and Evolution 10 (March 2, 2022). http://dx.doi.org/10.3389/fevo.2022.828503.

Abstract:
Our awareness of air-borne sounds in natural and urban habitats has led to the recent recognition of soundscape ecology and ecoacoustics as interdisciplinary fields of research that can help us better understand ecological processes and ecosystem dynamics. Because the vibroscape (i.e., the substrate-borne vibrations occurring in a given environment) is hidden to the human senses, we have largely overlooked its ecological significance. Substrate vibrations provide information crucial to the reproduction and survival of most animals, especially arthropods, which are essential to ecosystem functioning. Thus, vibroscape is an important component of the environment perceived by the majority of animals. Nowadays, when the environment is rapidly changing due to human activities, climate change, and invasive species, this hidden vibratory world is also likely to change without our notice, with potentially crucial effects on arthropod communities. Here, we introduce ecotremology, a discipline that mainly aims at studying substrate-borne vibrations for unraveling ecological processes and biological conservation. As biotremology follows the main research concepts of bioacoustics, ecotremology is consistent with the paradigms of ecoacoustics. We argue that information extracted from substrate vibrations present in the environment can be used to comprehensively assess and reliably predict ecosystem changes. We identify key research questions and discuss the technical challenges associated with ecotremology studies.