A selection of scholarly literature on the topic "Bioacoustic Recognition"
Format your source in APA, MLA, Chicago, Harvard, and other citation styles
Browse lists of current articles, books, dissertations, conference papers, and other scholarly sources on the topic "Bioacoustic Recognition".
Journal articles on the topic "Bioacoustic Recognition"
Gong, Cihun-Siyong Alex, Chih-Hui Simon Su, Kuo-Wei Chao, Yi-Chu Chao, Chin-Kai Su, and Wei-Hang Chiu. "Exploiting deep neural network and long short-term memory methodologies in bioacoustic classification of LPC-based features." PLOS ONE 16, no. 12 (December 23, 2021): e0259140. http://dx.doi.org/10.1371/journal.pone.0259140.
Asakura, Takumi, and Shizuru Iida. "Hand gesture recognition by using bioacoustic responses." Acoustical Science and Technology 41, no. 2 (March 1, 2020): 521–24. http://dx.doi.org/10.1250/ast.41.521.
Chesmore, David. "Automated bioacoustic identification of species." Anais da Academia Brasileira de Ciências 76, no. 2 (June 2004): 436–40. http://dx.doi.org/10.1590/s0001-37652004000200037.
Noh, Hyung Wook, Chang-Geun Ahn, Seung-Hoon Chae, Yunseo Ku, and Joo Yong Sim. "Multichannel Acoustic Spectroscopy of the Human Body for Inviolable Biometric Authentication." Biosensors 12, no. 9 (August 31, 2022): 700. http://dx.doi.org/10.3390/bios12090700.
Ferguson, Elizabeth L., Peter Sugarman, Kevin R. Coffey, Jennifer Pettis Schallert, and Gabriela C. Alongi. "Development of deep neural networks for marine mammal call detection using an open-source, user friendly tool." Journal of the Acoustical Society of America 151, no. 4 (April 2022): A28. http://dx.doi.org/10.1121/10.0010547.
Crawford, John D., Aaron P. Cook, and Andrea S. Heberlein. "Bioacoustic behavior of African fishes (Mormyridae): Potential cues for species and individual recognition in Pollimyrus." Journal of the Acoustical Society of America 102, no. 2 (August 1997): 1200–1212. http://dx.doi.org/10.1121/1.419923.
Oba, Teruyo. "The sound environmental education aided by automated bioacoustic identification in view of soundscape recognition." Journal of the Acoustical Society of America 120, no. 5 (November 2006): 3239. http://dx.doi.org/10.1121/1.4788256.
Chesmore, E. D., and E. Ohya. "Automated identification of field-recorded songs of four British grasshoppers using bioacoustic signal recognition." Bulletin of Entomological Research 94, no. 4 (August 2004): 319–30. http://dx.doi.org/10.1079/ber2004306.
Larsen, Hanne Lyngholm, Cino Pertoldi, Niels Madsen, Ettore Randi, Astrid Vik Stronen, Holly Root-Gutteridge, and Sussie Pagh. "Bioacoustic Detection of Wolves: Identifying Subspecies and Individuals by Howls." Animals 12, no. 5 (March 2, 2022): 631. http://dx.doi.org/10.3390/ani12050631.
Cao, Tianyu, Xiaoqun Zhao, Yichen Yang, Caiyun Zhu, and Zhongwei Xu. "Adaptive Recognition of Bioacoustic Signals in Smart Aquaculture Engineering Based on r-Sigmoid and Higher-Order Cumulants." Sensors 22, no. 6 (March 15, 2022): 2277. http://dx.doi.org/10.3390/s22062277.
Повний текст джерелаДисертації з теми "Bioacoustic Recognition"
Mace, Michael. "Heterogeneous recognition of bioacoustic signals for human-machine interfaces." Thesis, Imperial College London, 2013. http://hdl.handle.net/10044/1/11095.
Bastas, Selin A. "Nocturnal Bird Call Recognition System for Wind Farm Applications." University of Toledo / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1325803309.
Hübner, Sebastian Valentin. "Wissensbasierte Modellierung von Audio-Signal-Klassifikatoren : zur Bioakustik von Tursiops truncatus. - 2., überarb. Aufl." Phd thesis, Universität Potsdam, 2007. http://opus.kobv.de/ubp/volltexte/2008/1663/.
Повний текст джерелаThe present thesis is dedicated to the problem of knowledge-based modeling of audio-signal-classifiers in the bioacoustics domain. It deals with an interdisciplinary problem that has many facets. To these belong questions of knowledge representation, bioacoustics and algorithmical issues. The main purpose of the work is to provide and evaluate a scientific method in which all these facets are taken into consideration. In addition, a number of algorithms, which implement all important steps of this method, are described. The problem of modeling audio-signal-classifiers is regarded from the KDD-perspective (Knowledge-Discovery in Databases). The fundamental idea is to use modified KDD- and Data-Mining-algorithms to facilitate the modeling of audio-signal-classifiers. A detailed mathematical formalism is presented and the KDD-paradigm is adopted to the problem of modeling audio-signal-classifiers. 19 new KDD-procedures form a comprehensive system for knowledge-based audio-signal-classifier design. An extensive collection of acoustic signals of the bottlenose-dolphin was recorded in Eilat (Israel). It forms the basis of four empirical studies: A phenomenological classification of acoustic phenomena, an experimental evaluation of accuracy and precision of classifiers, a cluster analysis of whistle sounds and a monitoring study to examine the nature of click sounds. Both, method and algorithms can be adopted to other branches in bioacoustics without changing their fundamental architecture.
Alexander, Callan. "Passive acoustic monitoring of Australia’s largest owl: Using automatic species recognition to detect the powerful owl (Ninox strenua)." Thesis, Queensland University of Technology, 2022. https://eprints.qut.edu.au/227461/1/Callan_Alexander_Thesis.pdf.
Lima, Alice de Moura. "Production and perception of acoustic signals in captive bottlenose dolphins (Tursiops truncatus) : contextual use of social signals and recognition of artificial labels." Thesis, Rennes 1, 2017. http://www.theses.fr/2017REN1B048/document.
Повний текст джерелаStudies on animal bioacoustics, traditionally relying on non-human primate and songbird models, converge towards the idea that social life appears as the main driving force behind the evolution of complex communication. Comparisons with cetaceans is also particularly interesting from an evolutionary point of view. They are indeed mammals forming complex social bonds, with abilities in acoustic plasticity, but that had to adapt to marine life, making habitat another determining selection force. Their natural habitat constrains sound production, usage and perception but, in the same way, constrains ethological observations making studies of captive cetaceans an important source of knowledge on these animals. Beyond the analysis of acoustic structures, the study of the social contexts in which the different vocalizations are used is essential to the understanding of vocal communication. Compared to primates and birds, the social function of dolphins’ acoustic signals remains largely misunderstood. Moreover, the way cetaceans’ vocal apparatus and auditory system adapted morphoanatomically to an underwater life is unique in the animal kingdom. But their ability to perceive sounds produced in the air remains controversial due to the lack of experimental demonstrations. The objectives of this thesis were, on the one hand, to explore the spontaneous contextual usage of acoustic signals in a captive group of bottlenose dolphins and, on the other hand, to test experimentally underwater and aerial abilities in auditory perception. Our first observational study describes the daily life of our dolphins in captivity, and shows that vocal signalling reflects, at a large scale, the temporal distribution of social and non-social activities in a facility under human control. Our second observational study focuses on the immediate context of emission of the three main acoustic categories previously identified in the dolphins’ vocal repertoire, i.e. 
whistles, burst-pulses and click trains. We found preferential associations between each vocal category and specific types of social interactions and identified context-dependent patterns of sound combinations. Our third study experimentally tested, under standardized conditions, the response of dolphins to human-made individual sound labels broadcast under and above water. We found that dolphins were able to recognize and to react only to their own label, even when broadcast in the air. Apart from confirming aerial hearing, these findings go in line with studies supporting that dolphins possess a concept of identity. Overall, the results obtained during this thesis suggest that some social signals in the dolphin repertoire can be used to communicate specific information about the behavioural contexts of the individuals involved and that individuals are able to generalize their concept of identity for human-generated signals
Kahl, Stefan. "Identifying Birds by Sound: Large-scale Acoustic Event Recognition for Avian Activity Monitoring." Universitätsverlag Chemnitz, 2019. https://monarch.qucosa.de/id/qucosa%3A36986.
Повний текст джерелаDie automatisierte Überwachung der Vogelstimmenaktivität und der Artenvielfalt kann ein revolutionäres Werkzeug für Ornithologen, Naturschützer und Vogelbeobachter sein, um bei der langfristigen Überwachung kritischer Umweltnischen zu helfen. Tiefe künstliche neuronale Netzwerke haben die traditionellen Klassifikatoren im Bereich der visuellen Erkennung und akustische Ereignisklassifizierung übertroffen. Dennoch erfordern tiefe neuronale Netze Expertenwissen, um leistungsstarke Modelle zu entwickeln, trainieren und testen. Mit dieser Einschränkung und unter Berücksichtigung der Anforderungen zukünftiger Anwendungen wurde eine umfangreiche Forschungsplattform zur automatisierten Überwachung der Vogelaktivität entwickelt: BirdNET. Das daraus resultierende Benchmark-System liefert state-of-the-art Ergebnisse in verschiedenen akustischen Bereichen und wurde verwendet, um Expertenwerkzeuge und öffentliche Demonstratoren zu entwickeln, die dazu beitragen können, die Demokratisierung des wissenschaftlichen Fortschritts und zukünftige Naturschutzbemühungen voranzutreiben.
Movin, Andreas, and Jonathan Jilg. "Kan datorer höra fåglar?" Thesis, KTH, Skolan för teknikvetenskap (SCI), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254800.
Sound recognition is made possible through spectral analysis, computed by the fast Fourier transform (FFT), and has in recent years made major breakthroughs along with the rise of computational power and artificial intelligence. The technology is now used ubiquitously, in particular in the field of bioacoustics for identification of animal species, an important task for wildlife monitoring. It is still a growing field of science, and the recognition of bird song in particular remains a hard challenge; even state-of-the-art algorithms are far from error-free. In this thesis, simple algorithms to match sounds against a sound database were implemented and assessed. A filtering method was developed to pick out characteristic frequencies at five time frames, which formed the basis for comparison and the matching procedure. The sounds used were pre-recorded bird songs (blackbird, nightingale, crow and seagull) as well as human voices (4 young Swedish males) that we recorded. Our findings show success rates typically at 50–70%, the lowest being the seagull at 30% for a small database and the highest the blackbird at 90% for a large database. The voices were more difficult for the algorithms to distinguish, but they still had an overall success rate between 50% and 80%. Furthermore, increasing the database size did not improve success rates in general. In conclusion, this thesis provides a proof of concept and illustrates both the strengths and shortcomings of the simple algorithms developed. The algorithms gave better success rates than the pure-chance rate of 25%, but there is room for improvement, since they were easily misled by sounds of the same frequencies. Further research is needed to assess the devised algorithms' ability to identify even more birds and voices.
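The matching scheme this abstract describes — a fingerprint of characteristic frequencies at five time frames, compared against a database — can be sketched as follows. This is a minimal illustration, not the thesis's actual code: the filtering step is reduced to taking the FFT peak of each of five equal windows, and `characteristic_frequencies`, `match`, and the pure-tone "songs" are all hypothetical stand-ins.

```python
import numpy as np

def characteristic_frequencies(signal, sr, n_frames=5):
    """Split the signal into n_frames windows and return the dominant
    frequency (FFT magnitude peak) of each window -- a five-value
    fingerprint in the spirit of the thesis."""
    frames = np.array_split(np.asarray(signal, dtype=float), n_frames)
    peaks = []
    for frame in frames:
        spectrum = np.abs(np.fft.rfft(frame))
        freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
        peaks.append(freqs[np.argmax(spectrum)])
    return np.array(peaks)

def match(fingerprint, database):
    """Return the database key whose fingerprint is closest in L2 distance."""
    return min(database, key=lambda k: np.linalg.norm(database[k] - fingerprint))

# Toy demo: two synthetic 'songs' as pure tones at different pitches.
sr = 8000
t = np.arange(sr) / sr
db = {"blackbird": characteristic_frequencies(np.sin(2 * np.pi * 2000 * t), sr),
      "crow":      characteristic_frequencies(np.sin(2 * np.pi * 400 * t), sr)}
query = np.sin(2 * np.pi * 2000 * t)
print(match(characteristic_frequencies(query, sr), db))  # blackbird
```

The demo also illustrates the weakness the authors report: any two sounds sharing the same dominant frequencies produce identical fingerprints and are indistinguishable to this matcher.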
Sarmiento-Ponce, Edith Julieta. "An analysis of phonotactic behaviour in the cricket Gryllus bimaculatus." Thesis, University of Cambridge, 2019. https://www.repository.cam.ac.uk/handle/1810/290108.
Повний текст джерелаHuang, Ren-Zhuang, and 黃仁壯. "Automatic Recognition of Bioacoustic Sounds." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/86611469812526979080.
Chung Hua University (中華大學)
Master's Program, Department of Computer Science and Information Engineering
2004 (ROC year 93)
In this paper we propose a method to automatically identify animals from the sounds they generate. First, each syllable corresponding to a piece of vocalization is segmented. The averaged LPCCs (ALPCC), averaged MFCCs (AMFCC), and averaged formants (AFormant) over all frames in a syllable are calculated as the vocalization features. Linear discriminant analysis (LDA) is exploited to increase the classification accuracy in a lower-dimensional feature-vector space. In our experiments, the proposed AMFCC outperforms ALPCC and AFormant. When LDA is applied, AMFCC achieves average classification accuracies of 97% and 98% for 30 frog calls and 34 cricket calls, respectively. When a set of feature vectors is used to represent the same bird species, the average classification accuracy is 87% for 420 bird species.
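The AMFCC-plus-LDA pipeline this abstract outlines can be sketched as follows. A minimal sketch, assuming scikit-learn is available: the synthetic vectors stand in for real averaged MFCCs of segmented syllables, and `averaged_mfcc` is a hypothetical helper, not code from the thesis.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def averaged_mfcc(mfcc_frames):
    """Average the per-frame MFCC vectors over all frames of one syllable
    (the AMFCC feature). mfcc_frames: array-like of shape (n_frames, n_coeffs)."""
    return np.asarray(mfcc_frames, dtype=float).mean(axis=0)

# Synthetic stand-in for AMFCC vectors of three species (placeholder data,
# not from the paper): each class clusters around its own mean vector.
rng = np.random.default_rng(0)
n_coeffs, per_class = 12, 40
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(per_class, n_coeffs))
               for c in (0.0, 1.0, 2.0)])
y = np.repeat([0, 1, 2], per_class)

# LDA projects the 12-D features to at most (n_classes - 1) = 2 dimensions
# and classifies in that reduced space, as the abstract describes.
lda = LinearDiscriminantAnalysis(n_components=2).fit(X, y)
print(lda.transform(X).shape)  # (120, 2)
print(lda.score(X, y))
```

The key property exploited here is that LDA's projection maximizes between-class scatter relative to within-class scatter, so well-separated call classes stay separable even after the dimensionality reduction.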
Hobson, Rosalyn S. "A spatio-temporal artificial neural network for object recognition using bioacoustic signals /." 1998. http://wwwlib.umi.com/dissertations/fullcit/9824276.
Book chapters on the topic "Bioacoustic Recognition"
Main, Linda, and John Thornton. "A Cortically-Inspired Model for Bioacoustics Recognition." In Neural Information Processing, 348–55. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-26561-2_42.
Conference papers on the topic "Bioacoustic Recognition"
Altes, Richard A. "Bioacoustic Systems: Insights For Acoustical Imaging And Pattern Recognition." In Pattern Recognition and Acoustical Imaging, edited by Leonard A. Ferrari. SPIE, 1987. http://dx.doi.org/10.1117/12.940249.
Повний текст джерелаTacioli, Leandro, Luíz Toledo, and Claudua Medeiros. "An Architecture for Animal Sound Identification based on Multiple Feature Extraction and Classification Algorithms." In XI Brazilian e-Science Workshop. Sociedade Brasileira de Computação - SBC, 2020. http://dx.doi.org/10.5753/bresci.2017.9919.