Academic literature on the topic 'Audio analysis'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Audio analysis.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "Audio analysis"
Pečiulytė, Indrė, and Juozas Ruževičius. "Kokybės auditas: koncepcija ir metodologijos tobulinimas." Informacijos mokslai 68 (January 1, 2014): 23–43. http://dx.doi.org/10.15388/im.2014..3922.
Braga, Marcelo, and Otavio Lube dos Santos. "MÉTODO DE ANÁLISE DE ÁUDIO PARA DETECÇÃO DE FALHAS DE COMUNICAÇÃO." Revista Científica Faesa 15, no. 2 (July 22, 2019): 81–98. http://dx.doi.org/10.5008/1809.7367.159.
Aleliūnas, Irmantas, and Zenona Atkočiūnienė. "Informacijos auditas kitų audito rūšių kontekste." Informacijos mokslai 54 (January 1, 2010): 7–16. http://dx.doi.org/10.15388/im.2010.0.3178.
Jolly, Jasmine, and Mehbooba P. Shareef. "Audio Watermarking Schemes: A Comparative Analysis." International Journal of Engineering Technology and Management Sciences 4, no. 4 (July 28, 2020): 57–61. http://dx.doi.org/10.46647/ijetms.2020.v04i04.009.
Barrio Fraile, Estrella, Ana María Enrique Jiménez, María Luz Barbeito Veloso, and Anna Fajula Payet. "Sonic identity and audio branding elements in Spanish radio advertising." Anàlisi 65 (December 22, 2021): 103–19. http://dx.doi.org/10.5565/rev/analisi.3330.
Greenspun, Philip, and Leigh Klotz. "Audio Analysis VI: Testing Audio Cables." Computer Music Journal 12, no. 1 (1988): 58. http://dx.doi.org/10.2307/3679837.
Kischinhevsky, Marcelo, Itala Maduell Vieira, João Guilherme Bastos dos Santos, Viktor Chagas, Miguel de Andrade Freitas, and Alessandra Aldé. "WhatsApp audios and the remediation of radio: Disinformation in Brazilian 2018 presidential election." Radio Journal: International Studies in Broadcast & Audio Media 18, no. 2 (October 1, 2020): 139–58. http://dx.doi.org/10.1386/rjao_00021_1.
Sierra García, Laura, Emiliano Ruiz Barbadillo, and Manuel Orta Pérez. "Análisis de la Influencia de la Función de Auditoría Interna sobre las Cuotas de Auditoría." Revista de Contabilidad 22, no. 1 (January 1, 2019): 100–111. http://dx.doi.org/10.6018/rc-sar.22.1.354351.
Suhandinata, Sebastian, Reyhan Achmad Rizal, Dedy Ongky Wijaya, Prabhu Warren, and Srinjiwi Srinjiwi. "ANALISIS PERFORMA KRIPTOGRAFI HYBRID ALGORITMA BLOWFISH DAN ALGORITMA RSA." JURTEKSI (Jurnal Teknologi dan Sistem Informasi) 6, no. 1 (December 10, 2019): 1–10. http://dx.doi.org/10.33330/jurteksi.v6i1.395.
Auken, V. M. "Анализ взаимодействия государственных доходов и аудита." INTERNATIONAL JOURNAL OF INFORMATION AND COMMUNICATION TECHNOLOGIES, no. 8(8) (March 4, 2022): 43–46. http://dx.doi.org/10.54309/ijict.2021.8.8.008.
Dissertations / Theses on the topic "Audio analysis"
CHEMLA ROMEU-SANTOS, AXEL CLAUDE ANDRÉ. "MANIFOLD REPRESENTATIONS OF MUSICAL SIGNALS AND GENERATIVE SPACES." Doctoral thesis, Università degli Studi di Milano, 2020. http://hdl.handle.net/2434/700444.
Among the diverse research fields within computer music, synthesis and generation of audio signals epitomize the cross-disciplinarity of this domain, jointly nourishing both scientific and artistic practices since its creation. Inherent in computer music since its genesis, audio generation has inspired numerous approaches, evolving both with musical practices and scientific/technical advances. Moreover, some synthesis processes also naturally handle the reverse process, named analysis, such that synthesis parameters can be partially or totally extracted from actual sounds, providing an alternative representation of the analyzed audio signals. On top of that, the recent rise of machine learning algorithms has earnestly questioned the field of scientific research, bringing powerful data-centred methods that raised several epistemological questions amongst researchers, in spite of their efficiency. In particular, a family of machine learning methods, called generative models, focuses on the generation of original content using features extracted from an existing dataset. Such methods not only questioned previous approaches to generation, but also the way of integrating these methods into existing creative processes. While these new generative frameworks have progressively been introduced in the domain of image generation, the application of such generative techniques in audio synthesis is still marginal. In this work, we aim to propose a new audio analysis-synthesis framework based on these modern generative models, enhanced by recent advances in machine learning. We first review existing approaches, both in sound synthesis and in generative machine learning, and focus on how our work inserts itself in both practices and what can be expected from their combination.
Subsequently, we focus more closely on generative models, and on how modern advances in the domain can be exploited to let us learn complex sound distributions while remaining sufficiently flexible to be integrated into the creative flow of the user. We then propose an inference / generation process, mirroring the analysis/synthesis paradigms that are natural in the audio processing domain, using latent models based on a continuous higher-level space that we use to control the generation. We first provide preliminary results of our method applied to spectral information extracted from several datasets, and evaluate the obtained results both qualitatively and quantitatively. Subsequently, we study how to make these methods more suitable for learning audio data, tackling successively three different aspects. First, we propose two different latent regularization strategies specifically designed for audio, based on signal / symbol translation and on perceptual constraints. Then, we propose different methods to address the inner temporality of musical signals, based on the extraction of multi-scale representations and on prediction, so that the obtained generative spaces also model the dynamics of the signal. In the last chapter, we shift from a scientific approach to a more research & creation-oriented point of view: first, we describe the architecture and the design of our open-source library, vsacids, which aims to be usable by expert and non-expert music makers as an integrated creation tool. Then, we propose a first musical use of our system through the creation of a real-time performance, called aego, based jointly on our framework vsacids and on an explorative agent trained during the performance using reinforcement learning. Finally, we draw some conclusions on the different ways to improve and reinforce the proposed generation method, as well as on possible further creative applications.
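The analysis/synthesis loop this abstract describes, encoding sounds into a continuous latent space and decoding latent points back into spectra, can be sketched in a few lines. This is a deliberately simple linear stand-in (PCA via SVD) for the deep generative models the thesis actually uses; all names, dimensions, and the toy data are illustrative assumptions, not the thesis's method.

```python
import numpy as np

# Sketch of a latent analysis/synthesis loop over magnitude spectra.
# PCA is a linear stand-in for a learned generative latent space;
# dimensions and data below are illustrative only.

def fit_latent_space(spectra, n_latent=8):
    """Fit a linear latent basis to a (frames x bins) matrix of spectra."""
    mean = spectra.mean(axis=0)
    # Principal directions of the centred spectral frames
    _, _, vt = np.linalg.svd(spectra - mean, full_matrices=False)
    return mean, vt[:n_latent]              # basis: (n_latent x bins)

def encode(frame, mean, basis):
    return basis @ (frame - mean)           # "analysis": frame -> latent point

def decode(z, mean, basis):
    return mean + basis.T @ z               # "synthesis": latent point -> frame

rng = np.random.default_rng(0)
spectra = np.abs(rng.normal(size=(200, 64)))  # toy magnitude spectra
mean, basis = fit_latent_space(spectra)
z = encode(spectra[0], mean, basis)
recon = decode(z, mean, basis)
print(z.shape, recon.shape)                 # (8,) (64,)
```

In a learned model the `encode`/`decode` pair would be neural networks trained on real spectra, but the control idea is the same: moving `z` through the latent space steers the generated sound.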
TERENZI, Alessandro. "Innovative Digital Signal Processing Methodologies for Identification and Analysis of Real Audio Systems." Doctoral thesis, Università Politecnica delle Marche, 2021. http://hdl.handle.net/11566/287822.
Many real-world audio systems exist; each has its own characteristics, but almost all of them can be identified by the fact that they are able to generate or modify a sound. If a natural or artificial system can be defined as a sound system, then it is possible to apply digital signal processing techniques to study and emulate that system. In this thesis, innovative methodologies for digital signal processing applied to real audio systems will be discussed. In particular, three different audio systems will be considered: the world of vacuum-tube-based nonlinear audio devices, with particular attention to guitar and hi-fi amplifiers; the room acoustic environment and its effect on sound propagation; and finally the sound emitted by honey bees in a beehive. Regarding the first system, innovative approaches for the identification of Volterra series and Hammerstein models will be proposed, in particular an approach to overcome some limitations of Volterra series identification. The application of a sub-band structure to reduce the computational cost and increase the convergence speed of an adaptive Hammerstein model identification will be proposed as well. Finally, an innovative approach for the measurement of several distortion parameters using a single measurement, exploiting a generalized Hammerstein model, will be presented. For the second system, the results of applying a multi-point equalizer to two different situations will be presented. In the first case, it will be shown how multi-point equalization can be used not only to compensate for the acoustical anomalies of a room, but also to improve the frequency response of vibrating transducers mounted on a rigid surface. The second contribution will show how a sub-band approach can be used to reduce the computational cost and improve the speed of an adaptive algorithm for a multi-point, multi-channel equalizer.
Finally, the focus will be on a natural sound system, i.e., a honey bee colony. In this case, an innovative acquisition system for monitoring honey bee sounds will be presented. Then, the approaches developed for sound analysis will be described and applied to the recorded sounds in two different situations. Finally, the results obtained with the application of classification algorithms will be presented. In the final part of the work, some minor contributions still related to signal processing applied to real sound systems are presented. In particular, an implementation of an active noise control system is discussed, and two algorithms for digital effects are presented: the former improves the sound performance of compact loudspeakers, and the latter generates a stereophonic effect for electric guitars.
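The generalized Hammerstein structure mentioned in this abstract, parallel branches in which a static power nonlinearity feeds a branch-specific linear FIR filter, can be sketched as follows. The kernels and test signal are illustrative placeholders, not coefficients from the thesis.

```python
import numpy as np

# Sketch of a generalized Hammerstein model: the input raised to the k-th
# power is filtered by a branch FIR kernel h_k, and branch outputs are summed.
# Kernel values below are illustrative, not identified from a real device.

def generalized_hammerstein(x, kernels):
    """Sum of FIR-filtered powers of the input signal x."""
    y = np.zeros(len(x))
    for k, h in enumerate(kernels, start=1):
        y += np.convolve(x ** k, h)[: len(x)]  # truncate 'full' convolution
    return y

sr = 8000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 100 * t)
# Branch 1: mild two-tap low-pass; branch 2: weak quadratic distortion
kernels = [np.array([0.9, 0.1]), np.array([0.05])]
y = generalized_hammerstein(x, kernels)
print(y.shape)  # (8000,)
```

Identification then amounts to estimating the kernels `h_k` from measured input/output pairs; with a single-tap first branch of `[1.0]` and no other branches the model reduces to the identity, which makes a handy sanity check.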
Djebbar, Fatiha. "Contributions to Audio Steganography : Algorithms and Robustness Analysis." Thesis, Brest, 2012. http://www.theses.fr/2012BRES0005.
Digital steganography is a young, flourishing science that has emerged as a prominent means of data security. The primary goal of steganography is to reliably send hidden information secretly, not merely to obscure its presence. It exploits the characteristics of digital media files such as image, audio, video, and text by utilizing them as carriers to communicate data secretly. Encryption and watermarking techniques are already used to address concerns related to data security. However, constantly changing attacks on the integrity of digital data require new techniques to break the cycle of malicious attempts and expand the scope of involved applications. The main objective of steganographic systems is to provide secure, undetectable, and imperceptible ways to conceal a high rate of data in a digital medium. Steganography is used under the assumption that it will not be detected if no one is attempting to uncover it. Steganography techniques have found their way into various and versatile applications; some of these applications are used for the benefit of people, while others are used maliciously. The threat posed by criminals, hackers, terrorists, and spies using steganography is indeed real. To defeat malicious attempts at secret communication, researchers' work has lately been extended to include a new and parallel research branch that counters steganography techniques, called steganalysis. The main purpose of a steganalysis technique is to detect the presence or absence of a hidden message, not necessarily to extract it successfully. Digital speech, in particular, constitutes a prominent carrier for data hiding across novel telecommunication technologies such as covert voice-over-IP, audio conferencing, etc.
This thesis investigates digital speech steganography and steganalysis and aims at: (1) presenting an algorithm that meets the high data capacity, undetectability, and imperceptibility requirements of steganographic systems; (2) controlling the distortion induced by the embedding process; (3) presenting new concepts of spectral embedding areas in the Fourier domain, applicable to both magnitude and phase spectra; and (4) introducing a simple yet effective speech steganalysis algorithm based on lossless data compression techniques. The steganographic algorithm's performance is measured by perceptual and statistical evaluation methods. On the other hand, the steganalysis algorithm's performance is measured by how well the system can distinguish between stego- and cover-audio signals. The results are very promising and show interesting performance tradeoffs compared to related methods. Future work is based mainly on strengthening the proposed steganalysis algorithm to be able to detect small hiding capacities. As for our steganographic algorithm, we aim at integrating it into some emerging devices such as the iPhone, and at further enhancing its capabilities to ensure hidden-data integrity under severe compression, noise, and channel distortion.
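The compression-based steganalysis idea in point (4) can be illustrated with a toy experiment: embedding hidden data raises the entropy of the samples, so a stego signal lossless-compresses worse than its cover. The signals and the LSB-style perturbation below are synthetic stand-ins, not the thesis's actual embedding or detector.

```python
import zlib
import numpy as np

# Toy illustration of steganalysis via lossless compression: a clean,
# highly regular cover compresses well, while samples perturbed by an
# LSB-style embedding compress worse. Signals are synthetic placeholders.

def compressibility(samples):
    """Compressed-to-raw size ratio of 16-bit samples (lower = more regular)."""
    raw = np.asarray(samples, dtype=np.int16).tobytes()
    return len(zlib.compress(raw, 9)) / len(raw)

n = np.arange(8000)
# 200 Hz tone at 8 kHz: exactly periodic every 40 samples, hence compressible
cover = np.round(3000 * np.sin(2 * np.pi * 200 * n / 8000)).astype(np.int16)
rng = np.random.default_rng(0)
stego = cover + rng.integers(-4, 5, size=cover.shape).astype(np.int16)
print(compressibility(cover) < compressibility(stego))  # True
```

A practical detector would threshold such a regularity measure (or feed it to a classifier) to decide cover versus stego; real speech is far less regular than a pure tone, which is exactly why detecting small hiding capacities is hard.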
Kafentzis, George. "Adaptive Sinusoidal Models for Speech with Applications in Speech Modifications and Audio Analysis." Thesis, Rennes 1, 2014. http://www.theses.fr/2014REN1S085/document.
Sinusoidal modeling is one of the most widely used parametric methods for speech and audio signal processing. The accurate estimation of sinusoidal parameters (amplitudes, frequencies, and phases) is a critical task for a close representation of the analyzed signal. In this thesis, based on recent advances in sinusoidal analysis, we propose high-resolution adaptive sinusoidal models for analysis, synthesis, and modification systems of speech. Our goal is to provide systems that represent speech in a highly accurate and compact way. Inspired by the recently introduced adaptive Quasi-Harmonic Model (aQHM) and adaptive Harmonic Model (aHM), we overview the theory of adaptive sinusoidal modeling and propose a model named the extended adaptive Quasi-Harmonic Model (eaQHM), a non-parametric model able to adjust the instantaneous amplitudes and phases of its basis functions to the underlying time-varying characteristics of the speech signal, thus significantly alleviating the so-called local stationarity hypothesis. The eaQHM is shown to outperform the aQHM in the analysis and resynthesis of voiced speech. Based on the eaQHM, a hybrid analysis/synthesis system of speech is presented (eaQHNM), along with a hybrid version of the aHM (aHNM). Moreover, we present motivation for a full-band representation of speech using the eaQHM, that is, representing all parts of speech as high-resolution AM-FM sinusoids. Experiments show that adaptation and quasi-harmonicity are sufficient to provide transparent quality in unvoiced speech resynthesis. The full-band eaQHM analysis and synthesis system is presented next, which outperforms state-of-the-art systems, hybrid or full-band, in speech reconstruction, providing transparent quality confirmed by objective and subjective evaluations. Regarding applications, the eaQHM and the aHM are applied to speech modifications (time and pitch scaling).
The resulting modifications are of high quality and follow very simple rules compared to other state-of-the-art modification systems. Results show that harmonicity is preferred over quasi-harmonicity in speech modifications due to the embedded simplicity of representation. Moreover, the full-band eaQHM is applied to the problem of modeling audio signals, and specifically musical instrument sounds. The eaQHM is evaluated and compared to state-of-the-art systems and is shown to outperform them in terms of resynthesis quality, successfully representing the attack, transient, and stationary parts of a musical instrument sound. Finally, another application is suggested, namely the analysis and classification of emotional speech. The eaQHM is applied to the analysis of emotional speech, providing its instantaneous parameters as features that can be used in recognition and Vector-Quantization-based classification of the emotional content of speech. Although sinusoidal models are not commonly used in such tasks, the results are promising.
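The core resynthesis step shared by all such sinusoidal models, summing partials whose instantaneous amplitude and frequency vary per sample, can be sketched as below. This linear-track toy is not the eaQHM itself (which adapts its basis functions to the signal); the sample rate, tracks, and decay shapes are illustrative assumptions.

```python
import numpy as np

# Sketch of sinusoidal resynthesis from instantaneous parameter tracks:
# each partial's phase is the integral of its instantaneous frequency.
# Tracks below are illustrative, not estimated from real speech.

def synthesize(amps, freqs, sr=16000):
    """Sum of partials with per-sample amplitude/frequency tracks.
    amps, freqs: arrays of shape (n_partials, n_samples)."""
    phases = 2 * np.pi * np.cumsum(freqs, axis=1) / sr  # integrate frequency
    return np.sum(amps * np.cos(phases), axis=0)

n = 16000
amps = np.vstack([np.linspace(1.0, 0.0, n),    # decaying fundamental
                  np.linspace(0.5, 0.0, n)])   # decaying second harmonic
freqs = np.vstack([np.full(n, 220.0),
                   np.full(n, 440.0)])
y = synthesize(amps, freqs)
print(y.shape)  # (16000,)
```

Time and pitch scaling then become simple track operations: resampling the amplitude/frequency tracks along time stretches the sound, and scaling the frequency track shifts its pitch, which is why the abstract calls these modification rules "very simple".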
SIMONETTA, FEDERICO. "MUSIC INTERPRETATION ANALYSIS. A MULTIMODAL APPROACH TO SCORE-INFORMED RESYNTHESIS OF PIANO RECORDINGS." Doctoral thesis, Università degli Studi di Milano, 2022. http://hdl.handle.net/2434/918909.
Song, Guanghan. "Effect of sound in videos on gaze : contribution to audio-visual saliency modelling." Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENT013/document.
Humans receive large quantities of information from the environment through sight and hearing. To help us react rapidly and properly, mechanisms exist in the brain to bias attention towards particular regions, namely the salient regions. This attentional bias is not only influenced by vision, but also by audio-visual interaction. According to the existing literature, visual attention can be studied through eye movements; however, the effect of sound on eye movements in videos is little known. The aim of this thesis is to investigate the influence of sound in videos on eye movements and to propose an audio-visual saliency model to predict salient regions in videos more accurately. For this purpose, we designed a first audio-visual eye-tracking experiment. We created a database of short video excerpts selected from various films. These excerpts were viewed by participants either with their original soundtrack (AV condition) or without a soundtrack (V condition). We analyzed the difference in eye positions between participants in the AV and V conditions. The results show that there does exist an effect of sound on eye movements, and the effect is greater for the on-screen speech class. Then, we designed a second audio-visual experiment with thirteen classes of sound. By comparing the difference in eye positions between participants in the AV and V conditions, we conclude that the effect of sound differs depending on the type of sound, and that the classes with human voice (i.e., the speech, singer, human noise, and singers classes) have the greatest effect. More precisely, the sound source significantly attracted eye position only when the sound was a human voice. Moreover, participants in the AV condition had a shorter average duration of fixation than in the V condition. Finally, we proposed a preliminary audio-visual saliency model based on the findings of the above experiments.
In this model, two fusion strategies for audio and visual information were described: one for the speech sound class, and one for the musical instrument sound class. The audio-visual fusion strategies defined in the model improve its predictive power in the AV condition.
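One way to picture such class-dependent fusion is a convex combination of a visual saliency map and an audio-derived map, with the audio weight chosen per sound class. The weights, class names, and maps below are illustrative placeholders, not the strategies or values estimated in the thesis.

```python
import numpy as np

# Sketch of class-dependent audio-visual saliency fusion. Weights and
# class names are made-up placeholders, not values from the thesis.
FUSION_WEIGHTS = {"speech": 0.6, "musical_instrument": 0.2}

def fuse(visual, audio, sound_class):
    """Convex combination of visual and audio saliency maps, renormalized."""
    w = FUSION_WEIGHTS.get(sound_class, 0.0)   # unknown class: vision only
    fused = (1.0 - w) * visual + w * audio
    return fused / fused.sum()                  # keep it a distribution

visual = np.ones((4, 4)) / 16                   # uniform visual saliency
audio = np.zeros((4, 4)); audio[1, 2] = 1.0     # localized sound source
speech_map = fuse(visual, audio, "speech")
print(round(float(speech_map.sum()), 6), int(speech_map.argmax()))  # 1.0 6
```

Giving voice classes a larger audio weight mirrors the experimental finding above that the sound source attracts gaze mainly when it is a human voice.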
Elfitri, I. "Analysis by synthesis spatial audio coding." Thesis, University of Surrey, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.590657.
Fazekas, György. "Semantic audio analysis utilities and applications." Thesis, Queen Mary, University of London, 2012. http://qmro.qmul.ac.uk/xmlui/handle/123456789/8443.
Steinhour, Jacob B. "The Social and Pedagogical Advantages of Audio Forensics and Restoration Education." Ohio University Honors Tutorial College / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=ouhonors1276014966.
Xiao, Zhongzhe, and Liming Chen. "Recognition of emotions in audio signals." Ecully : Ecole Centrale de Lyon, 2008. http://bibli.ec-lyon.fr/exl-doc/zxiao.pdf.
Books on the topic "Audio analysis"
Schuller, Björn W. Intelligent Audio Analysis. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-36806-6.
Maher, Robert C. Principles of Forensic Audio Analysis. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-99453-6.
Lerch, Alexander. Audio content analysis: An introduction. Hoboken, N.J: Wiley, 2012.
Lerch, Alexander. An Introduction to Audio Content Analysis. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2012. http://dx.doi.org/10.1002/9781118393550.
Pollock, Alan. Surgical audit. London: Butterworths, 1989.
Cotton, Courtenay. Characterizing Audio Events for Video Soundtrack Analysis. [New York, N.Y.?]: [publisher not identified], 2013.
Pikrakis, Aggelos, ed. Introduction to audio analysis: A MATLAB approach. Kidlington, Oxford: Academic Press, 2014.
Willborn, Walter W. O. Audit standards: A comparative analysis. 2nd ed. Milwaukee, Wis: ASQC Quality Press, 1993.
Willborn, Walter W. O. Audit standards: A comparative analysis. Milwaukee, Wis: American Society for Quality Control, 1987.
Book chapters on the topic "Audio analysis"
Dawson, Catherine. "Audio analysis." In A–Z of Digital Research Methods, 10–16. Abingdon, Oxon; New York, NY: Routledge, 2019. http://dx.doi.org/10.4324/9781351044677-3.
Schuller, Björn. "Audio Data." In Intelligent Audio Analysis, 23–40. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-36806-6_5.
Schuller, Björn. "Audio Features." In Intelligent Audio Analysis, 41–97. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-36806-6_6.
Schuller, Björn. "Audio Recognition." In Intelligent Audio Analysis, 99–138. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-36806-6_7.
Schuller, Björn. "Audio Source Separation." In Intelligent Audio Analysis, 139–47. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-36806-6_8.
Zhang, Tong, and C. C. Jay Kuo. "Audio Feature Analysis." In Content-Based Audio Classification and Retrieval for Audiovisual Data Parsing, 35–54. Boston, MA: Springer US, 2001. http://dx.doi.org/10.1007/978-1-4757-3339-6_3.
Lu, Lie, and Alan Hanjalic. "Audio Content Analysis." In Encyclopedia of Database Systems, 1–3. New York, NY: Springer New York, 2016. http://dx.doi.org/10.1007/978-1-4899-7993-3_1528-2.
Lu, Lie, and Alan Hanjalic. "Audio Content Analysis." In Encyclopedia of Database Systems, 154–56. Boston, MA: Springer US, 2009. http://dx.doi.org/10.1007/978-0-387-39940-9_1528.
Braddock, John Paul. "Mastering audio analysis." In Mastering in Music, 104–18. New York: Focal Press, 2020. http://dx.doi.org/10.4324/9780429276590-7.
Lu, Lie, and Alan Hanjalic. "Audio Content Analysis." In Encyclopedia of Database Systems, 198–201. New York, NY: Springer New York, 2018. http://dx.doi.org/10.1007/978-1-4614-8265-9_1528.
Conference papers on the topic "Audio analysis"
Barreto, Cephas A. S., Victor V. Targino, Tales V. de M. Alves, Lucas V. Bazante, Rafael V. R. de Oliveira, Ricardo A. R. do A. Junior, João C. Xavier-Júnior, and Anne Magály de P. Canuto. "Applying Feature Selection Combination in Audios of Whale for Improving Classification." In Encontro Nacional de Inteligência Artificial e Computacional. Sociedade Brasileira de Computação - SBC, 2022. http://dx.doi.org/10.5753/eniac.2022.227616.
Zhan, Yunzhen, and Xiaochen Yuan. "Audio post-processing detection and identification based on audio features." In 2017 International Conference on Wavelet Analysis and Pattern Recognition (ICWAPR). IEEE, 2017. http://dx.doi.org/10.1109/icwapr.2017.8076681.
Couvreur, L., F. Bettens, J. Hancq, and M. Mancas. "Normalized auditory attention levels for automatic audio surveillance." In RISK ANALYSIS 2008. Southampton, UK: WIT Press, 2008. http://dx.doi.org/10.2495/risk080441.
Delgado, Alejandro, SKoT McDonald, Ning Xu, and Mark Sandler. "A New Dataset for Amateur Vocal Percussion Analysis." In AM'19: Audio Mostly. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3356590.3356844.
Pfeiffer, Silvia, Stephan Fischer, and Wolfgang Effelsberg. "Automatic audio content analysis." In the fourth ACM international conference. New York, New York, USA: ACM Press, 1996. http://dx.doi.org/10.1145/244130.244139.
Ramirez, Rafael. "Session details: Audio analysis." In MM '10: ACM Multimedia Conference. New York, NY, USA: ACM, 2010. http://dx.doi.org/10.1145/3258351.
Perez, Mauricio, Rodolfo Coelho De Souza, and Regis Rossi Alves Faria. "Digital Design of Audio Signal Processing Using Time Delay." In Simpósio Brasileiro de Computação Musical. Sociedade Brasileira de Computação - SBC, 2019. http://dx.doi.org/10.5753/sbcm.2019.10449.
Antsiferova, V., T. Pesetskaya, I. Yuldoshev, Syanyan Lu, Cin Van, and V. Lavlinskiy. "ANALYSIS OF RELEVANT CHARACTERISTICS OF INTERJECTION AUDIO SIGNALS ON THE EXAMPLE OF THE RUSSIAN LANGUAGE." In Modern aspects of modeling systems and processes. FSBE Institution of Higher Education Voronezh State University of Forestry and Technologies named after G.F. Morozov, 2021. http://dx.doi.org/10.34220/mamsp_9-14.
Belloch, Jose A., Christian Antoñanzas, Pablo Gutierrez-Parera, and Mª Angeles Simarro. "Audiovisual Tool for understanding Audio concepts for being used in bachelor's degree programmes." In HEAd'16 - International Conference on Higher Education Advances. Valencia: Universitat Politècnica València, 2016. http://dx.doi.org/10.4995/head16.2016.2923.
Engeln, Lars, Nhat Long Le, Matthew McGinity, and Rainer Groh. "Similarity Analysis of Visual Sketch-based Search for Sounds." In AM '21: Audio Mostly 2021. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3478384.3478423.
Reports on the topic "Audio analysis"
Wisher, Robert A., Annette N. Priest, and Edward C. Glover. Audio Teletraining for Unit Clerks: A Cost-Effectiveness Analysis. Fort Belvoir, VA: Defense Technical Information Center, June 1997. http://dx.doi.org/10.21236/ada337689.
Tololiu, Kevin Efrain, Arie Kurnianto, and Krisztina Csokasi. Audio Intervention for Acute Pain Management - Protocol of Systematic Review and Meta-Analysis. INPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, January 2023. http://dx.doi.org/10.37766/inplasy2023.1.0002.
Brady-Herbst, Brenene. An Analysis of Spondee Recognition Thresholds in Auditory-only and Audio-visual Conditions. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.7094.
Paranjothi, Gokul, Jonathan Morgenstein, and Hallie Lucas. MFVI Energy Efficiency Audit Training - Module 1.2: Lighting Analysis. Office of Scientific and Technical Information (OSTI), September 2022. http://dx.doi.org/10.2172/1886871.
Sapiro, Guillermo. Structured and Collaborative Signal Models: Theory and Applications in Image, Video, and Audio Analysis. Fort Belvoir, VA: Defense Technical Information Center, January 2013. http://dx.doi.org/10.21236/ada586672.
Jajodia, Sushi. Integration of Audit Data Analysis and Mining Techniques into Aide. Fort Belvoir, VA: Defense Technical Information Center, July 2006. http://dx.doi.org/10.21236/ada456840.
Paranjothi, Gokul, Jonathan Morgenstein, and Hallie Lucas. MFVI Energy Efficiency Audit Training Module 2.2: Plug Loads Analysis. Office of Scientific and Technical Information (OSTI), September 2022. http://dx.doi.org/10.2172/1889268.
Khan, Mahreen. Evaluating External Government Audit. Institute of Development Studies, September 2022. http://dx.doi.org/10.19088/k4d.2022.140.
Qi, Yuan. Learning Algorithms for Audio and Video Processing: Independent Component Analysis and Support Vector Machine Based Approaches. Fort Belvoir, VA: Defense Technical Information Center, August 2000. http://dx.doi.org/10.21236/ada458739.
Kiianovska, N. M. The development of theory and methods of using cloud-based information and communication technologies in teaching mathematics of engineering students in the United States. Видавничий центр ДВНЗ «Криворізький національний університет», December 2014. http://dx.doi.org/10.31812/0564/1094.