Academic literature on the topic 'Bioacoustic Recognition'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Bioacoustic Recognition.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Bioacoustic Recognition"

1

Gong, Cihun-Siyong Alex, Chih-Hui Simon Su, Kuo-Wei Chao, Yi-Chu Chao, Chin-Kai Su, and Wei-Hang Chiu. "Exploiting deep neural network and long short-term memory methodologies in bioacoustic classification of LPC-based features." PLOS ONE 16, no. 12 (December 23, 2021): e0259140. http://dx.doi.org/10.1371/journal.pone.0259140.

Full text
Abstract:
The research describes the recognition and classification of the acoustic characteristics of amphibians using deep learning with deep neural networks (DNN) and long short-term memory (LSTM) for biological applications. First, original data is collected from 32 species of frogs and 3 species of toads commonly found in Taiwan. Second, two digital filtering algorithms, linear predictive coding (LPC) and Mel-frequency cepstral coefficients (MFCC), are respectively used to collect amphibian bioacoustic features and construct the datasets. In addition, the principal component analysis (PCA) algorithm is applied to achieve dimensional reduction of the training model datasets. Next, the classification of amphibian bioacoustic features is accomplished through the use of DNN and LSTM. The PyTorch platform with a GPU processor (NVIDIA GeForce GTX 1050 Ti) realizes the calculation and recognition of the acoustic feature classification results. Based on the two above-mentioned algorithms, the sound feature datasets are classified and effectively summarized in several classification result tables and graphs for presentation. The results of the classification experiment on the different features of bioacoustics are verified and discussed in detail. This research seeks to extract the optimal combination of the best recognition and classification algorithms across all experimental processes.
APA, Harvard, Vancouver, ISO, and other styles
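The LPC front end described in the abstract above can be illustrated with the standard autocorrelation (Levinson-Durbin) recursion. This is a minimal NumPy sketch of generic LPC analysis, not the authors' implementation; the function name and conventions are our own:

```python
import numpy as np

def lpc_coefficients(x, order):
    """LPC via the autocorrelation method (Levinson-Durbin recursion).

    Returns a with a[0] = 1 such that x[t] is predicted as
    -sum_{k=1..order} a[k] * x[t-k].
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Biased autocorrelation estimates r[0..order]
    r = np.array([x[:n - k] @ x[k:] for k in range(order + 1)]) / n
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        # Reflection coefficient for this recursion step
        acc = r[i] + a[1:i] @ r[i - 1:0:-1]
        k = -acc / err
        a_prev = a.copy()
        for j in range(1, i):
            a[j] = a_prev[j] + k * a_prev[i - j]
        a[i] = k
        err *= (1.0 - k * k)
    return a
```

For speech-like frames, orders around 8–16 are common; the order used in the paper is not specified here.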
2

Asakura, Takumi, and Shizuru Iida. "Hand gesture recognition by using bioacoustic responses." Acoustical Science and Technology 41, no. 2 (March 1, 2020): 521–24. http://dx.doi.org/10.1250/ast.41.521.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Chesmore, David. "Automated bioacoustic identification of species." Anais da Academia Brasileira de Ciências 76, no. 2 (June 2004): 436–40. http://dx.doi.org/10.1590/s0001-37652004000200037.

Full text
Abstract:
Research into the automated identification of animals by bioacoustics is becoming more widespread mainly due to difficulties in carrying out manual surveys. This paper describes automated recognition of insects (Orthoptera) using time domain signal coding and artificial neural networks. Results of field recordings made in the UK in 2002 are presented which show that it is possible to accurately recognize 4 British Orthoptera species in natural conditions under high levels of interference. Work is under way to increase the number of species recognized.
APA, Harvard, Vancouver, ISO, and other styles
4

Noh, Hyung Wook, Chang-Geun Ahn, Seung-Hoon Chae, Yunseo Ku, and Joo Yong Sim. "Multichannel Acoustic Spectroscopy of the Human Body for Inviolable Biometric Authentication." Biosensors 12, no. 9 (August 31, 2022): 700. http://dx.doi.org/10.3390/bios12090700.

Full text
Abstract:
Specific features of the human body, such as fingerprint, iris, and face, are extensively used in biometric authentication. Conversely, the internal structure and material features of the body have not been explored extensively in biometrics. Bioacoustics technology is suitable for extracting information about the internal structure and biological and material characteristics of the human body. Herein, we report a biometric authentication method that enables multichannel bioacoustic signal acquisition with a systematic approach to study the effects of selectively distilled frequency features, increasing the number of sensing channels with respect to multiple fingers. The accuracy of identity recognition according to the number of sensing channels and the number of selectively chosen frequency features was evaluated using exhaustive combination searches and forward-feature selection. The technique was applied to test the accuracy of machine learning classification using 5,232 datasets from 54 subjects. By optimizing the scanning frequency and sensing channels, our method achieved an accuracy of 99.62%, which is comparable to existing biometric methods. Overall, the proposed biometric method not only provides an unbreakable, inviolable biometric but also can be applied anywhere in the body and can substantially broaden the use of biometrics by enabling continuous identity recognition on various body parts for biometric identity authentication.
APA, Harvard, Vancouver, ISO, and other styles
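The forward feature selection mentioned in the abstract above can be sketched generically: greedily add the feature that most improves a classifier's accuracy. This toy version scores subsets with a nearest-centroid rule as a stand-in; the paper's actual classifier and scoring are not reproduced here:

```python
import numpy as np

def nearest_centroid_accuracy(X, y, feats):
    """Training accuracy of a two-class nearest-centroid rule on a feature subset."""
    Xs = X[:, feats]
    c0, c1 = Xs[y == 0].mean(axis=0), Xs[y == 1].mean(axis=0)
    pred = (np.linalg.norm(Xs - c1, axis=1) < np.linalg.norm(Xs - c0, axis=1)).astype(int)
    return float((pred == y).mean())

def forward_select(X, y, k):
    """Greedy forward selection: repeatedly add the single feature
    that yields the best score when joined to the current subset."""
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(k):
        best = max(remaining,
                   key=lambda f: nearest_centroid_accuracy(X, y, selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected
```

On synthetic data where only two of ten features carry class information, the procedure picks out exactly those two.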
5

Ferguson, Elizabeth L., Peter Sugarman, Kevin R. Coffey, Jennifer Pettis Schallert, and Gabriela C. Alongi. "Development of deep neural networks for marine mammal call detection using an open-source, user friendly tool." Journal of the Acoustical Society of America 151, no. 4 (April 2022): A28. http://dx.doi.org/10.1121/10.0010547.

Full text
Abstract:
As the collection of large acoustic datasets used to monitor marine mammals increases, so too does the need for expedited and reliable detection of accurately classified bioacoustic signals. Deep learning methods of detection and classification are increasingly proposed as a means of addressing this processing need. These image recognition and classification methods include the use of neural networks that independently determine important features of bioacoustic signals from spectrograms. Recent marine mammal call detection studies report consistent performance even when used with datasets that were not included in the network training. We present here the use of DeepSqueak, a novel open-source tool originally developed to detect and classify ultrasonic vocalizations from rodents in a low-noise, laboratory setting. We have trained networks in DeepSqueak to detect marine mammal vocalizations in comparatively noisy, natural acoustic environments. DeepSqueak utilizes a regional convolutional neural network architecture within an intuitive graphical user interface that provides automated detection results independent of acoustician expertise. Using passive acoustic data from two hydrophones on the Ocean Observatories Initiative’s Coastal Endurance Array, we developed networks for humpback whales, delphinids, and fin whales. We report performance and limitations for use of this detection method for each species.
APA, Harvard, Vancouver, ISO, and other styles
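DeepSqueak itself relies on a regional convolutional neural network, which is beyond a short sketch; the underlying call-detection task, however, can be illustrated with a much simpler short-time energy detector. A hypothetical NumPy example, unrelated to DeepSqueak's code:

```python
import numpy as np

def detect_events(x, fs, frame=512, hop=256, threshold_db=10.0):
    """Flag frames whose short-time energy exceeds the median frame energy
    by threshold_db, then merge consecutive flagged frames into
    (start_seconds, end_seconds) events."""
    window = np.hanning(frame)
    n_frames = 1 + (len(x) - frame) // hop
    energy = np.array([np.sum((x[i * hop:i * hop + frame] * window) ** 2)
                       for i in range(n_frames)])
    db = 10 * np.log10(energy + 1e-12)
    mask = db > np.median(db) + threshold_db
    events, start = [], None
    for i, flagged in enumerate(mask):
        if flagged and start is None:
            start = i
        elif not flagged and start is not None:
            events.append((start * hop / fs, (i * hop + frame) / fs))
            start = None
    if start is not None:
        events.append((start * hop / fs, ((n_frames - 1) * hop + frame) / fs))
    return events
```

A loud tone burst embedded in quiet noise is returned as a single event whose boundaries match the burst to within a frame or two.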
6

Crawford, John D., Aaron P. Cook, and Andrea S. Heberlein. "Bioacoustic behavior of African fishes (Mormyridae): Potential cues for species and individual recognition in Pollimyrus." Journal of the Acoustical Society of America 102, no. 2 (August 1997): 1200–1212. http://dx.doi.org/10.1121/1.419923.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Oba, Teruyo. "The sound environmental education aided by automated bioacoustic identification in view of soundscape recognition." Journal of the Acoustical Society of America 120, no. 5 (November 2006): 3239. http://dx.doi.org/10.1121/1.4788256.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Chesmore, E. D., and E. Ohya. "Automated identification of field-recorded songs of four British grasshoppers using bioacoustic signal recognition." Bulletin of Entomological Research 94, no. 4 (August 2004): 319–30. http://dx.doi.org/10.1079/ber2004306.

Full text
Abstract:
Recognition of Orthoptera species by means of their song is widely used in field work but requires expertise. It is now possible to develop computer-based systems to achieve the same task with a number of advantages including continuous long term unattended operation and automatic species logging. The system described here achieves automated discrimination between different species by utilizing a novel time domain signal coding technique and an artificial neural network. The system has previously been shown to recognize 25 species of British Orthoptera with 99% accuracy for good quality sounds. This paper tests the system on field recordings of four species of grasshopper in northern England in 2002 and shows that it is capable of not only correctly recognizing the target species under a range of acoustic conditions but also of recognizing other sounds such as birds and man-made sounds. Recognition accuracies for the four species of typically 70–100% are obtained for field recordings with varying sound intensities and background signals.
APA, Harvard, Vancouver, ISO, and other styles
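Time domain signal coding, as referenced above, encodes a waveform by the durations and shapes of the intervals between zero crossings. A heavily simplified sketch with our own binning scheme, not Chesmore's actual coding:

```python
import numpy as np

def tdsc_features(x, n_duration_bins=8, max_extrema=4):
    """Very simplified time-domain signal coding: a normalized 2-D histogram
    of epoch (zero-crossing interval) durations, on a log2 scale, versus the
    number of local extrema inside each epoch, flattened to a feature vector."""
    signs = np.signbit(x)
    crossings = np.nonzero(signs[1:] != signs[:-1])[0] + 1
    counts = np.zeros((n_duration_bins, max_extrema + 1))
    for start, end in zip(crossings[:-1], crossings[1:]):
        dur_bin = min(int(np.log2(end - start)), n_duration_bins - 1)
        d = np.diff(x[start:end])
        n_extrema = int(np.sum(d[1:] * d[:-1] < 0))  # sign changes of the slope
        counts[dur_bin, min(n_extrema, max_extrema)] += 1
    total = counts.sum()
    return (counts / total).ravel() if total else counts.ravel()
```

For a pure tone, all epochs share one duration and one extremum, so the histogram mass concentrates in a single cell; real insect songs spread across several cells, and the resulting vectors feed the neural network classifier.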
9

Larsen, Hanne Lyngholm, Cino Pertoldi, Niels Madsen, Ettore Randi, Astrid Vik Stronen, Holly Root-Gutteridge, and Sussie Pagh. "Bioacoustic Detection of Wolves: Identifying Subspecies and Individuals by Howls." Animals 12, no. 5 (March 2, 2022): 631. http://dx.doi.org/10.3390/ani12050631.

Full text
Abstract:
Wolves (Canis lupus) are generally monitored by visual observations, camera traps, and DNA traces. In this study, we evaluated acoustic monitoring of wolf howls as a method for monitoring wolves, which may permit detection of wolves across longer distances than that permitted by camera traps. We analyzed acoustic data of wolf howls collected from both wild and captive wolves. The analysis focused on individual and subspecies recognition. Furthermore, we aimed to determine the usefulness of acoustic monitoring in the field given the limited data for Eurasian wolves. We analyzed 170 howls from 16 individual wolves from 3 subspecies: Arctic (Canis lupus arctos), Eurasian (C. l. lupus), and Northwestern wolves (C. l. occidentalis). Variables from the fundamental frequency (f0) (the lowest frequency band of a sound signal) were extracted and used in discriminant analysis, a classification matrix, and pairwise post-hoc Hotelling tests. The results indicated that Arctic and Eurasian wolves had subspecies-identifiable calls, while Northwestern wolves did not, though this sample size was small. Identification on an individual level was successful for all subspecies. Individuals were correctly classified with 80%–100% accuracy, using discriminant function analysis. Our findings suggest acoustic monitoring could be a valuable and cost-effective tool that complements camera traps by improving long-distance detection of wolves.
APA, Harvard, Vancouver, ISO, and other styles
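Extracting the fundamental frequency (f0) of a howl, the first step in the analysis above, is commonly done with autocorrelation. A minimal sketch assuming a clean, voiced signal; this is a generic method, not the authors' pipeline:

```python
import numpy as np

def estimate_f0(x, fs, fmin=100.0, fmax=2000.0):
    """Autocorrelation pitch estimate: pick the strongest lag whose
    corresponding frequency lies within [fmin, fmax]."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    # Non-negative lags of the full autocorrelation
    ac = np.correlate(x, x, mode='full')[len(x) - 1:]
    lag_min, lag_max = int(fs / fmax), int(fs / fmin)
    lag = lag_min + int(np.argmax(ac[lag_min:lag_max + 1]))
    return fs / lag
```

The resolution is limited to integer lags; parabolic interpolation around the peak would refine it if needed.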
10

Cao, Tianyu, Xiaoqun Zhao, Yichen Yang, Caiyun Zhu, and Zhongwei Xu. "Adaptive Recognition of Bioacoustic Signals in Smart Aquaculture Engineering Based on r-Sigmoid and Higher-Order Cumulants." Sensors 22, no. 6 (March 15, 2022): 2277. http://dx.doi.org/10.3390/s22062277.

Full text
Abstract:
In recent years, interest in aquaculture acoustic signals has risen with the development of precision agriculture technology. Underwater acoustic signals are known to be noisy, especially as they are inevitably mixed with a large amount of environmental background noise, causing severe interference in the extraction of signal features and the identification of their underlying patterns. Furthermore, interference adds a considerable burden on the transmission, storage, and processing of data. A signal recognition curve (SRC) algorithm is proposed based on higher-order cumulants (HOC) and a recognition-sigmoid function for feature extraction of target signals. The signal data of interest can be accurately identified using the SRC. The analysis and verification of the algorithm are carried out in this study. The results show that when the SNR is greater than 7 dB, the SRC algorithm is effective, and the performance improvement is maximized when the SNR is 11 dB. Furthermore, the SRC algorithm has shown better flexibility and robustness in application.
APA, Harvard, Vancouver, ISO, and other styles
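Higher-order cumulants such as those behind the SRC algorithm above are useful because the fourth-order cumulant of Gaussian noise is zero, while structured signals deviate from it. A generic sketch of the normalized fourth-order statistic (excess kurtosis); the paper's r-sigmoid construction is not reproduced here:

```python
import numpy as np

def normalized_kurtosis(x):
    """Normalized fourth-order cumulant (excess kurtosis) of a zero-meaned
    signal: approximately 0 for Gaussian noise, -1.5 for a pure sinusoid."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    m2 = np.mean(x ** 2)
    return float(np.mean(x ** 4) / m2 ** 2 - 3.0)
```

Thresholding such a statistic frame by frame gives a simple detector for non-Gaussian (signal-bearing) segments in a noisy recording.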

Dissertations / Theses on the topic "Bioacoustic Recognition"

1

Mace, Michael. "Heterogeneous recognition of bioacoustic signals for human-machine interfaces." Thesis, Imperial College London, 2013. http://hdl.handle.net/10044/1/11095.

Full text
Abstract:
Human-machine interfaces (HMI) provide a communication pathway between man and machine. Not only do they augment existing pathways, they can substitute or even bypass these pathways where functional motor loss prevents the use of standard interfaces. This is especially important for individuals who rely on assistive technology in their everyday life. By utilising bioacoustic activity, it can lead to an assistive HMI concept which is unobtrusive, minimally disruptive and cosmetically appealing to the user. However, due to the complexity of the signals it remains relatively underexplored in the HMI field. This thesis investigates extracting and decoding volition from bioacoustic activity with the aim of generating real-time commands. The developed framework is a systemisation of various processing blocks enabling the mapping of continuous signals into M discrete classes. Class independent extraction efficiently detects and segments the continuous signals while class-specific extraction exemplifies each pattern set using a novel template creation process stable to permutations of the data set. These templates are utilised by a generalised single channel discrimination model, whereby each signal is template aligned prior to classification. The real-time decoding subsystem uses a multichannel heterogeneous ensemble architecture which fuses the output from a diverse set of these individual discrimination models. This enhances the classification performance by elevating both the sensitivity and specificity, with the increased specificity due to a natural rejection capacity based on a non-parametric majority vote. Such a strategy is useful when analysing signals which have diverse characteristics, false positives are prevalent and have strong consequences, and when there is limited training data available. The framework has been developed with generality in mind with wide applicability to a broad spectrum of biosignals. 
The processing system has been demonstrated on real-time decoding of tongue-movement ear pressure signals using both single and dual channel setups. This has included in-depth evaluation of these methods in both offline and online scenarios. During online evaluation, a stimulus based test methodology was devised, while representative interference was used to contaminate the decoding process in a relevant and real fashion. The results of this research provide a strong case for the utility of such techniques in real world applications of human-machine communication using impulsive bioacoustic signals and biosignals in general.
APA, Harvard, Vancouver, ISO, and other styles
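The ensemble's rejection capacity described above, where a non-parametric majority vote abstains unless enough channels agree, can be sketched in a few lines. This is a generic illustration of the voting idea, not the thesis code:

```python
from collections import Counter

def ensemble_decision(votes, min_agreement=0.5):
    """Majority vote over per-channel classifier outputs with rejection:
    return the winning label only if its share of the votes strictly
    exceeds min_agreement, otherwise None (reject the input)."""
    if not votes:
        return None
    label, count = Counter(votes).most_common(1)[0]
    return label if count / len(votes) > min_agreement else None
```

The rejection option is what raises specificity: an ambiguous input that splits the channels produces no command rather than a false positive.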
2

Bastas, Selin A. "Nocturnal Bird Call Recognition System for Wind Farm Applications." University of Toledo / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1325803309.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Hübner, Sebastian Valentin. "Wissensbasierte Modellierung von Audio-Signal-Klassifikatoren : zur Bioakustik von Tursiops truncatus. - 2., überarb. Aufl." PhD thesis, Universität Potsdam, 2007. http://opus.kobv.de/ubp/volltexte/2008/1663/.

Full text
Abstract:
Die vorliegende Arbeit befasst sich mit der wissensbasierten Modellierung von Audio-Signal-Klassifikatoren (ASK) für die Bioakustik. Sie behandelt ein interdisziplinäres Problem, das viele Facetten umfasst. Zu diesen gehören artspezifische bioakustische Fragen, mathematisch-algorithmische Details und Probleme der Repräsentation von Expertenwissen. Es wird eine universelle praktisch anwendbare Methode zur wissensbasierten Modellierung bioakustischer ASK dargestellt und evaluiert. Das Problem der Modellierung von ASK wird dabei durchgängig aus KDD-Perspektive (Knowledge Discovery in Databases) betrachtet. Der grundlegende Ansatz besteht darin, mit Hilfe von modifizierten KDD-Methoden und Data-Mining-Verfahren die Modellierung von ASK wesentlich zu erleichtern. Das etablierte KDD-Paradigma wird mit Hilfe eines detaillierten formalen Modells auf den Bereich der Modellierung von ASK übertragen. Neunzehn elementare KDD-Verfahren bilden die Grundlage eines umfassenden Systems zur wissensbasierten Modellierung von ASK. Methode und Algorithmen werden evaluiert, indem eine sehr umfangreiche Sammlung akustischer Signale des Großen Tümmlers mit ihrer Hilfe untersucht wird. Die Sammlung wurde speziell für diese Arbeit in Eilat (Israel) angefertigt. Insgesamt werden auf Grundlage dieses Audiomaterials vier empirische Einzelstudien durchgeführt: - Auf der Basis von oszillographischen und spektrographischen Darstellungen wird ein phänomenologisches Klassifikationssystem für die vielfältigen Laute des Großen Tümmlers dargestellt. - Mit Hilfe eines Korpus halbsynthetischer Audiodaten werden verschiedene grundlegende Verfahren zur Modellierung und Anwendung von ASK in Hinblick auf ihre Genauigkeit und Robustheit untersucht. - Mit einem speziell entwickelten Clustering-Verfahren werden mehrere Tausend natürliche Pfifflaute des Großen Tümmlers untersucht. Die Ergebnisse werden visualisiert und diskutiert. 
- Durch maschinelles mustererkennungsbasiertes akustisches Monitoring wird die Emissionsdynamik verschiedener Lauttypen im Verlaufe von vier Wochen untersucht. Etwa 2.5 Millionen Klicklaute werden im Anschluss auf ihre spektralen Charakteristika hin untersucht. Die beschriebene Methode und die dargestellten Algorithmen sind in vielfältiger Hinsicht erweiterbar, ohne dass an ihrer grundlegenden Architektur etwas geändert werden muss. Sie lassen sich leicht in dem gesamten Gebiet der Bioakustik einsetzen. Hiermit besitzen sie auch für angrenzende Disziplinen ein hohes Potential, denn exaktes Wissen über die akustischen Kommunikations- und Sonarsysteme der Tiere wird in der theoretischen Biologie, in den Kognitionswissenschaften, aber auch im praktischen Naturschutz, in Zukunft eine wichtige Rolle spielen.
The present thesis is dedicated to the problem of knowledge-based modeling of audio-signal-classifiers in the bioacoustics domain. It deals with an interdisciplinary problem that has many facets. To these belong questions of knowledge representation, bioacoustics and algorithmical issues. The main purpose of the work is to provide and evaluate a scientific method in which all these facets are taken into consideration. In addition, a number of algorithms, which implement all important steps of this method, are described. The problem of modeling audio-signal-classifiers is regarded from the KDD-perspective (Knowledge-Discovery in Databases). The fundamental idea is to use modified KDD- and Data-Mining-algorithms to facilitate the modeling of audio-signal-classifiers. A detailed mathematical formalism is presented and the KDD-paradigm is adopted to the problem of modeling audio-signal-classifiers. 19 new KDD-procedures form a comprehensive system for knowledge-based audio-signal-classifier design. An extensive collection of acoustic signals of the bottlenose-dolphin was recorded in Eilat (Israel). It forms the basis of four empirical studies: A phenomenological classification of acoustic phenomena, an experimental evaluation of accuracy and precision of classifiers, a cluster analysis of whistle sounds and a monitoring study to examine the nature of click sounds. Both, method and algorithms can be adopted to other branches in bioacoustics without changing their fundamental architecture.
APA, Harvard, Vancouver, ISO, and other styles
4

Alexander, Callan. "Passive acoustic monitoring of Australia’s largest owl: Using automatic species recognition to detect the powerful owl (Ninox strenua)." Thesis, Queensland University of Technology, 2022. https://eprints.qut.edu.au/227461/1/Callan_Alexander_Thesis.pdf.

Full text
Abstract:
This thesis utilises passive acoustic monitoring as a framework to study Powerful Owls (Ninox strenua) in south-east Queensland. The study quantitatively describes the vocalisations of adult and chick Powerful Owls and utilises open-source machine learning software to create automated species recognition tools for use in citizen science programs. The results indicate that call characteristics historically used to sex adult Powerful Owls are likely unreliable. Testing of the automated call recognisers resulted in highly promising outcomes, which suggests that they are likely to be valuable tools for future study.
APA, Harvard, Vancouver, ISO, and other styles
5

Lima, Alice de Moura. "Production and perception of acoustic signals in captive bottlenose dolphins (Tursiops truncatus) : contextual use of social signals and recognition of artificial labels." Thesis, Rennes 1, 2017. http://www.theses.fr/2017REN1B048/document.

Full text
Abstract:
Les études de bioacoustique animale, qui reposent traditionnellement sur des modèles primates non humains et oiseaux chanteurs, convergent vers l'idée que la vie sociale serait la principale force motrice de l'évolution de la complexité de la communication. La comparaison avec les cétacés est également particulièrement intéressante d'un point de vue évolutif. Ce sont des mammifères qui forment des liens sociaux complexes, ont des capacités de plasticité acoustique, mais qui ont dû s'adapter à la vie marine, faisant de l'habitat une autre force de sélection déterminante. Leur habitat naturel impose des contraintes sur la production sonore, l'utilisation et la perception des signaux acoustiques, mais, de la même manière, limite les observations éthologiques. Etudier les cétacés captifs devient alors une source importante de connaissances sur ces animaux. Au-delà de l'analyse des structures acoustiques, l'étude des contextes sociaux dans lesquels les différentes vocalisations sont utilisées est essentielle à la compréhension de la communication vocale. Par rapport aux primates et aux oiseaux, la fonction sociale des signaux acoustiques des dauphins reste largement méconnue. En outre, les adaptations morpho-anatomiques de l’appareil vocal et auditif des cétacés à une vie sous-marine sont uniques dans le règne animal. Leur capacité à percevoir les sons produits dans l'air reste controversée en raison du manque de démonstrations expérimentales. Les objectifs de cette thèse étaient, d'une part, d'explorer l'utilisation contextuelle spontanée des signaux acoustiques dans un groupe captif de dauphins et, d'autre part, de tester expérimentalement les capacités à percevoir les sons sous l’eau comme dans l’air. Notre première étude observationnelle décrit la vie quotidienne de dauphins en captivité et montre que les signaux vocaux reflètent, à grande échelle, la répartition temporelle des activités sociales et non sociales dans un établissement sous contrôle humain. 
Notre deuxième étude met l'accent sur le contexte d’émission des trois principales catégories acoustiques précédemment identifiées dans le répertoire vocal des dauphins, à savoir les sifflements, les sons pulsés et les séries de clics. Nous avons trouvé des associations préférentielles entre chaque catégorie vocale et certains types d'interactions sociales ainsi que des combinaisons sonores non aléatoires et également dépendantes du contexte. Notre troisième étude a testé expérimentalement, dans des conditions standardisées, la réponse des dauphins à des « labels » acoustiques individuels donnés par l’homme et diffusés dans l’eau et dans l’air. Nous avons constaté que les dauphins peuvent reconnaître et réagir uniquement à leur propre « label » sonore, même lorsqu'il est diffusé dans l’air. En plus de confirmer l'audition aérienne, ces résultats soutiennent l’idée que les dauphins possèdent une notion d'identité. Dans l'ensemble, les résultats obtenus au cours de cette thèse suggèrent que certains signaux sociaux dans le répertoire des dauphins peuvent être utilisés pour communiquer des informations spécifiques sur les contextes comportementaux des individus impliqués et que les individus sont capables de généraliser leur concept d'identité à des signaux générés par l'homme
Studies on animal bioacoustics, traditionally relying on non-human primate and songbird models, converge towards the idea that social life appears as the main driving force behind the evolution of complex communication. Comparisons with cetaceans is also particularly interesting from an evolutionary point of view. They are indeed mammals forming complex social bonds, with abilities in acoustic plasticity, but that had to adapt to marine life, making habitat another determining selection force. Their natural habitat constrains sound production, usage and perception but, in the same way, constrains ethological observations making studies of captive cetaceans an important source of knowledge on these animals. Beyond the analysis of acoustic structures, the study of the social contexts in which the different vocalizations are used is essential to the understanding of vocal communication. Compared to primates and birds, the social function of dolphins’ acoustic signals remains largely misunderstood. Moreover, the way cetaceans’ vocal apparatus and auditory system adapted morphoanatomically to an underwater life is unique in the animal kingdom. But their ability to perceive sounds produced in the air remains controversial due to the lack of experimental demonstrations. The objectives of this thesis were, on the one hand, to explore the spontaneous contextual usage of acoustic signals in a captive group of bottlenose dolphins and, on the other hand, to test experimentally underwater and aerial abilities in auditory perception. Our first observational study describes the daily life of our dolphins in captivity, and shows that vocal signalling reflects, at a large scale, the temporal distribution of social and non-social activities in a facility under human control. Our second observational study focuses on the immediate context of emission of the three main acoustic categories previously identified in the dolphins’ vocal repertoire, i.e. 
whistles, burst-pulses and click trains. We found preferential associations between each vocal category and specific types of social interactions and identified context-dependent patterns of sound combinations. Our third study experimentally tested, under standardized conditions, the response of dolphins to human-made individual sound labels broadcast under and above water. We found that dolphins were able to recognize and to react only to their own label, even when broadcast in the air. Apart from confirming aerial hearing, these findings are in line with studies supporting that dolphins possess a concept of identity. Overall, the results obtained during this thesis suggest that some social signals in the dolphin repertoire can be used to communicate specific information about the behavioural contexts of the individuals involved and that individuals are able to generalize their concept of identity to human-generated signals.
APA, Harvard, Vancouver, ISO, and other styles
6

Kahl, Stefan. "Identifying Birds by Sound: Large-scale Acoustic Event Recognition for Avian Activity Monitoring." Universitätsverlag Chemnitz, 2019. https://monarch.qucosa.de/id/qucosa%3A36986.

Full text
Abstract:
Automated observation of avian vocal activity and species diversity can be a transformative tool for ornithologists, conservation biologists, and bird watchers to assist in long-term monitoring of critical environmental niches. Deep artificial neural networks have surpassed traditional classifiers in the field of visual recognition and acoustic event classification. Still, deep neural networks require expert knowledge to design, train, and test powerful models. With this constraint and the requirements of future applications in mind, an extensive research platform for automated avian activity monitoring was developed: BirdNET. The resulting benchmark system yields state-of-the-art scores across various acoustic domains and was used to develop expert tools and public demonstrators that can help to advance the democratization of scientific progress and future conservation efforts.
Die automatisierte Überwachung der Vogelstimmenaktivität und der Artenvielfalt kann ein revolutionäres Werkzeug für Ornithologen, Naturschützer und Vogelbeobachter sein, um bei der langfristigen Überwachung kritischer Umweltnischen zu helfen. Tiefe künstliche neuronale Netzwerke haben die traditionellen Klassifikatoren im Bereich der visuellen Erkennung und akustische Ereignisklassifizierung übertroffen. Dennoch erfordern tiefe neuronale Netze Expertenwissen, um leistungsstarke Modelle zu entwickeln, trainieren und testen. Mit dieser Einschränkung und unter Berücksichtigung der Anforderungen zukünftiger Anwendungen wurde eine umfangreiche Forschungsplattform zur automatisierten Überwachung der Vogelaktivität entwickelt: BirdNET. Das daraus resultierende Benchmark-System liefert state-of-the-art Ergebnisse in verschiedenen akustischen Bereichen und wurde verwendet, um Expertenwerkzeuge und öffentliche Demonstratoren zu entwickeln, die dazu beitragen können, die Demokratisierung des wissenschaftlichen Fortschritts und zukünftige Naturschutzbemühungen voranzutreiben.
APA, Harvard, Vancouver, ISO, and other styles
7

Movin, Andreas, and Jonathan Jilg. "Kan datorer höra fåglar?" Thesis, KTH, Skolan för teknikvetenskap (SCI), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254800.

Full text
Abstract:
Ljudigenkänning möjliggörs genom spektralanalys, som beräknas av den snabba fouriertransformen (FFT), och har under senare år nått stora genombrott i samband med ökningen av datorprestanda och artificiell intelligens. Tekniken är nu allmänt förekommande, i synnerhet inom bioakustik för identifiering av djurarter, en viktig del av miljöövervakning. Det är fortfarande ett växande vetenskapsområde och särskilt igenkänning av fågelsång som återstår som en svårlöst utmaning. Även de främsta algoritmer i området är långt ifrån felfria. I detta kandidatexamensarbete implementerades och utvärderades enkla algoritmer för att para ihop ljud med en ljuddatabas. En filtreringsmetod utvecklades för att urskilja de karaktäristiska frekvenserna vid fem tidsramar som utgjorde basen för jämförelsen och proceduren för ihopparning. Ljuden som användes var förinspelad fågelsång (koltrast, näktergal, kråka och fiskmås) så väl som egeninspelad mänsklig röst (4 unga svenska män). Våra resultat visar att framgångsgraden normalt är 50–70%, den lägsta var fiskmåsen med 30% för en liten databas och den högsta var koltrasten med 90% för en stor databas. Rösterna var svårare för algoritmen att särskilja, men de hade överlag framgångsgrader mellan 50% och 80%. Dock gav en ökning av databasstorleken generellt inte en ökning av framgångsgraden. Sammanfattningsvis visar detta kandidatexamensarbete konceptbeviset bakom fågelsångigenkänning och illustrerar såväl styrkorna som bristerna av dessa enkla algoritmer som har utvecklats. Algoritmerna gav högre framgångsgrad än slumpen (25%) men det finns ändå utrymme för förbättring eftersom algoritmen vilseleddes av ljud av samma frekvenser. Ytterligare studier behövs för att bedöma den utvecklade algoritmens förmåga att identifiera ännu fler fåglar och röster.
Sound recognition is made possible through spectral analysis, computed by the fast Fourier transform (FFT), and has in recent years made major breakthroughs along with the rise of computational power and artificial intelligence. The technology is now used ubiquitously, in particular in the field of bioacoustics for identification of animal species, an important task for wildlife monitoring. It is still a growing field of science, and the recognition of bird song in particular remains a difficult challenge. Even state-of-the-art algorithms are far from error-free. In this thesis, simple algorithms to match sounds to a sound database were implemented and assessed. A filtering method was developed to pick out characteristic frequencies at five time frames, which were the basis for comparison and the matching procedure. The sounds used were pre-recorded bird songs (blackbird, nightingale, crow and seagull) as well as human voices (4 young Swedish males) that we recorded. Our findings show success rates typically at 50–70%, the lowest being the seagull at 30% for a small database and the highest being the blackbird at 90% for a large database. The voices were more difficult for the algorithms to distinguish, but they still had an overall success rate between 50% and 80%. Furthermore, increasing the database size did not improve success rates in general. In conclusion, this thesis shows the proof of concept and illustrates both the strengths and shortcomings of the simple algorithms developed. The algorithms gave better success rates than pure chance (25%), but there is room for improvement since the algorithms were easily misled by sounds of the same frequencies. Further research will be needed to assess the devised algorithms' ability to identify even more birds and voices.
APA, Harvard, Vancouver, ISO, and other styles
8

Sarmiento-Ponce, Edith Julieta. "An analysis of phonotactic behaviour in the cricket Gryllus bimaculatus." Thesis, University of Cambridge, 2019. https://www.repository.cam.ac.uk/handle/1810/290108.

Full text
Abstract:
This thesis represents a comprehensive examination of the phonotactic behaviour (i.e. attraction to sound) of the female Gryllus bimaculatus under laboratory conditions. Chapter 2 is the first study to analyse the effect of substrate texture on walking performance in crickets. Substrate texture is found to play an essential role in the phonotactic responses of G. bimaculatus. Smooth substrate texture has a detrimental effect due to slipping, whereas a rough texture results in optimal walking performance due to the friction with the walking legs. Chapter 3 represents the first detailed lifetime study analysing phonotaxis in crickets. My results demonstrate that the optimal age to test phonotaxis in G. bimaculatus females is from day 7 to 24 after the final moult. I also found that selectiveness was persistent with age. These findings contradict the female choosiness hypothesis. This study is also the first to describe the effect of senescence on phonotaxis in insects, as responsiveness decreases with age. Chapter 4 compares the phonotactic behaviour of female crickets from different laboratory-bred colonies. From six tested cricket lab colonies, I found three groups statistically different from each other. Females raised under laboratory conditions at the University of Cambridge and Anglia Ruskin University were most responsive at a frequency of 4.5 kHz, whereas females bred in Tokushima University in Japan were tuned towards a higher frequency of 5 kHz. These results suggest a degree of artificial allopatric speciation. Comparisons with crickets bred under low-quality conditions in a local pet shop demonstrate a loss of responsiveness, indicating that breeding conditions have a direct effect on phonotactic responsivity. Chapter 5 is the first study to report the presence of phonotaxis in males of G. bimaculatus. Previously it was unknown if G. bimaculatus males were able to perform phonotaxis, given that they were only recognised as endurance signal producers.
In the present study, only 20% of the studied males (N=70) performed a weak phonotactic response. This finding has potential ecological implications in terms of male cricket territory establishment, and male-male interactions in the wild, which are discussed. Chapter 6 explores the song pattern recognition of the female G. bimaculatus by changing the duration of either the first, second or third pulse of the chirps. A long first pulse decreased the phonotactic response whereas phonotaxis remained strong when the third pulse was long. Chirps with three pulses of increasing duration of 5, 20 and 50 ms elicited phonotaxis, but the chirps were not attractive when played in reverse order. The data are in agreement with a mechanism in which processing of a sound pulse has an effect on the processing of the subsequent pulse, as outlined in the flow of activity in a delay-line and coincidence-detector circuit.
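The delay-line and coincidence-detection mechanism invoked in the final sentence can be illustrated with a toy model: each pulse launches a delayed copy of itself, and the detector fires when the next pulse's onset coincides with that copy, so the processing of one pulse directly shapes the evaluation of the next. The 40 ms delay and 5 ms tolerance below are arbitrary illustrative values, not parameters from the thesis:

```python
def coincidence_score(pulse_onsets_ms, delay=40.0, tol=5.0):
    """Toy delay-line/coincidence-detector model: pulse trains whose
    inter-pulse interval matches the internal delay score highest."""
    intervals = [b - a for a, b in zip(pulse_onsets_ms, pulse_onsets_ms[1:])]
    if not intervals:
        return 0.0
    # A "coincidence" fires when the next onset lands within tol of the
    # previous onset plus the delay-line's built-in delay.
    hits = sum(1 for iv in intervals if abs(iv - delay) <= tol)
    return hits / len(intervals)
```

Under this sketch, a regular train at the preferred interval scores 1.0 while an irregular train scores near 0 — echoing the finding that the same pulses played in a different order can lose their attractiveness.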
APA, Harvard, Vancouver, ISO, and other styles
9

Huang, Ren-Zhuang, and 黃仁壯. "Automatic Recognition of Bioacoustic Sounds." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/86611469812526979080.

Full text
Abstract:
Master's thesis (碩士)
Chung Hua University (中華大學)
Master's Program, Department of Information Engineering (資訊工程學系碩士班)
Academic year 93 (2004)
In this paper we propose a method to automatically identify animals from the sounds they generate. First, each syllable, corresponding to a piece of vocalization, is segmented. The averaged LPCCs (ALPCC), averaged MFCCs (AMFCC), and averaged formants (AFormant) over all frames in a syllable are calculated as the vocalization features. Linear discriminant analysis (LDA) is exploited to increase the classification accuracy in a lower-dimensional feature vector space. In our experiments, the proposed AMFCC outperforms ALPCC and AFormant. If LDA is applied, AMFCC can achieve average classification accuracies of 97% and 98% for 30 frog calls and 34 cricket calls, respectively. If a set of feature vectors is used to represent the same bird species, the average classification accuracy is 87% for 420 bird species.
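The syllable-level averaging step this abstract describes can be sketched as below. For brevity we substitute a nearest-centroid classifier for the LDA projection the authors actually use, and all names are our own:

```python
import numpy as np

def syllable_descriptor(frame_features):
    """Average per-frame feature vectors (e.g. MFCCs) over all frames of
    one syllable, yielding a single fixed-length descriptor (the
    'averaged MFCC' idea)."""
    return np.mean(np.asarray(frame_features, dtype=float), axis=0)

def fit_centroids(descriptors, labels):
    """One centroid per species, from labelled syllable descriptors."""
    X, y = np.asarray(descriptors, dtype=float), np.asarray(labels)
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def classify(descriptor, centroids):
    """Species whose centroid is nearest in Euclidean distance."""
    return min(centroids,
               key=lambda c: np.linalg.norm(descriptor - centroids[c]))
```

Averaging over frames is what makes syllables of different durations comparable: every syllable collapses to one vector of the same length regardless of how many frames it spans.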
APA, Harvard, Vancouver, ISO, and other styles
10

Hobson, Rosalyn S. "A spatio-temporal artificial neural network for object recognition using bioacoustic signals /." 1998. http://wwwlib.umi.com/dissertations/fullcit/9824276.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Bioacoustic Recognition"

1

Main, Linda, and John Thornton. "A Cortically-Inspired Model for Bioacoustics Recognition." In Neural Information Processing, 348–55. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-26561-2_42.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Bioacoustic Recognition"

1

Altes, Richard A. "Bioacoustic Systems: Insights For Acoustical Imaging And Pattern Recognition." In Pattern Recognition and Acoustical Imaging, edited by Leonard A. Ferrari. SPIE, 1987. http://dx.doi.org/10.1117/12.940249.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Tacioli, Leandro, Luíz Toledo, and Claudia Medeiros. "An Architecture for Animal Sound Identification based on Multiple Feature Extraction and Classification Algorithms." In XI Brazilian e-Science Workshop. Sociedade Brasileira de Computação - SBC, 2020. http://dx.doi.org/10.5753/bresci.2017.9919.

Full text
Abstract:
Automatic identification of animals is extremely useful for scientists, providing ways to monitor species and changes in ecological communities. The choice of effective audio features and classification techniques is a challenge on any audio recognition system, especially in bioacoustics that commonly uses several algorithms. This paper presents a novel software architecture that supports multiple feature extraction and classification algorithms to help on the identification of animal species from their recorded sounds. This architecture was implemented by the WASIS software, freely available on the Web.
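The plug-in idea the abstract describes — multiple interchangeable feature extractors and classifiers behind one interface — might look like this in outline. The class and method names are ours, not WASIS's actual API:

```python
from typing import Any, Callable, Dict

class SoundIdentifier:
    """Registry-based sketch: extraction and classification algorithms
    are registered by name and freely combined at identification time."""

    def __init__(self):
        self._extractors: Dict[str, Callable[[Any], Any]] = {}
        self._classifiers: Dict[str, Callable[[Any], str]] = {}

    def register_extractor(self, name: str, fn: Callable[[Any], Any]):
        self._extractors[name] = fn

    def register_classifier(self, name: str, fn: Callable[[Any], str]):
        self._classifiers[name] = fn

    def identify(self, audio, extractor: str, classifier: str) -> str:
        # Any registered extractor can feed any registered classifier.
        features = self._extractors[extractor](audio)
        return self._classifiers[classifier](features)
```

A concrete pipeline would then register, say, an MFCC extractor alongside several classifiers under names of its choosing, letting users compare algorithm combinations without changing the surrounding system.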
APA, Harvard, Vancouver, ISO, and other styles
