Academic literature on the topic "Audio effects modelling"
Create a correct reference in APA, MLA, Chicago, Harvard, and several other styles
Contents
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic "Audio effects modelling".
Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the scholarly publication as a PDF and read its abstract online, whenever this information is included in the metadata.
Journal articles on the topic "Audio effects modelling"
Tur, Ada. "Deep Learning for Style Transfer and Experimentation with Audio Effects and Music Creation". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 21 (24 March 2024): 23766–67. http://dx.doi.org/10.1609/aaai.v38i21.30558.
Vanhatalo, Tara, Pierrick Legrand, Myriam Desainte-Catherine, Pierre Hanna, and Guillaume Pille. "Evaluation of Real-Time Aliasing Reduction Methods in Neural Networks for Nonlinear Audio Effects Modelling". Journal of the Audio Engineering Society 72, no. 3 (7 March 2024): 114–22. http://dx.doi.org/10.17743/jaes.2022.0122.
Zhong, Jiaxin, Ray Kirby, Mahmoud Karimi, and Haishan Zou. "A spherical wave expansion for a steerable parametric array loudspeaker using Zernike polynomials". Journal of the Acoustical Society of America 152, no. 4 (October 2022): 2296–308. http://dx.doi.org/10.1121/10.0014832.
Kovsh, Oleksandr, and Oleksii Kopachinskyi. "Features of Editing in Modern Audiovisual Production: Special Effects and Transitions". Bulletin of Kyiv National University of Culture and Arts. Series in Audiovisual Art and Production 6, no. 1 (30 April 2023): 105–17. http://dx.doi.org/10.31866/2617-2674.6.1.2023.279255.
Sasse, Heide, and Miriam Leuchter. "Capturing Primary School Students' Emotional Responses with a Sensor Wristband". Frontline Learning Research 9, no. 3 (25 May 2021): 31–51. http://dx.doi.org/10.14786/flr.v9i3.723.
Azizi, Zahra, Rebecca J. Hirst, Fiona N. Newell, Rose Anne Kenny, and Annalisa Setti. "Audio-visual integration is more precise in older adults with a high level of long-term physical activity". PLOS ONE 18, no. 10 (4 October 2023): e0292373. http://dx.doi.org/10.1371/journal.pone.0292373.
Marselia, Maya, and Cita Meysiana. "Pembuatan Animasi 3D Sosialisasi Penggunaan Jalur Simpangan dan Bundaran Ketika Berkendara". VOCATECH: Vocational Education and Technology Journal 2, no. 2 (27 April 2021): 108–13. http://dx.doi.org/10.38038/vocatech.v2i2.55.
Israel, Kai, Christopher Zerres, and Dieter K. Tscheulin. "Presenting hotels in virtual reality: does it influence the booking intention?" Journal of Hospitality and Tourism Technology 10, no. 3 (17 September 2019): 443–63. http://dx.doi.org/10.1108/jhtt-03-2018-0020.
Wang, Shunguo, Mehrdad Bastani, Steven Constable, Thomas Kalscheuer, and Alireza Malehmir. "Boat-towed radio-magnetotelluric and controlled source audio-magnetotelluric study to resolve fracture zones at Äspö Hard Rock Laboratory site, Sweden". Geophysical Journal International 218, no. 2 (23 April 2019): 1008–31. http://dx.doi.org/10.1093/gji/ggz162.
Shahid Iqbal Rai, Maida Maqsood, Bushra Hanif, Muhammad Ali Adam, Muhammad Arslan, Hira Shafiq, and Muhammad Sijawal. "Computational linguistics at the crossroads: A comprehensive review of NLP advancements". World Journal of Advanced Engineering Technology and Sciences 11, no. 2 (30 April 2024): 578–91. http://dx.doi.org/10.30574/wjaets.2024.11.2.0146.
Theses on the topic "Audio effects modelling"
Vanhatalo, Tara. "Simulation en temps réel d'effets audio non-linéaires par intelligence artificielle". Electronic thesis or dissertation, Bordeaux, 2024. http://www.theses.fr/2024BORD0077.
Certain products in the realm of music technology have uniquely desirable sonic characteristics that musicians often seek out. These characteristics are frequently due to the nonlinearities of their electronic circuits. We are concerned with preserving the sound of this gear through digital simulations and making it widely available to musicians. In recent years, this field has seen a large rise in the use of neural networks for such simulation, and this work applies them to the task. In particular, we focus on real-time-capable black-box methods for nonlinear effects modelling, with the guitarist in mind. We review the current state of the art and identify areas warranting improvement or further study, with a final goal of product development. A first step of identifying architectures capable of real-time processing in a streaming manner is followed by augmenting and improving these architectures and their training pipeline through several methods, including continuous integration with unit testing, automatic hyperparameter optimisation, and transfer learning. A real-time prototype with a custom C++ backend is built using these methods. A study of real-time anti-aliasing for black-box models is presented, as these networks were found to exhibit high amounts of aliasing distortion. Work on incorporating user controls is also begun, enabling a comprehensive simulation of the analogue systems and a full range of tone-shaping possibilities for the end user. The performance of the approaches presented is assessed through both objective and subjective evaluation. Finally, a number of possible directions for future work are presented.
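The streaming requirement described in this abstract is worth making concrete: a stateful model must produce exactly the same output whether the signal is processed offline or in small real-time blocks, with the internal state carried across calls. The sketch below is purely illustrative (a one-pole filter feeding a tanh waveshaper stands in for the neural black-box model; the function and parameter names are this example's own, not the thesis's):

```python
import numpy as np

def process(x, state=0.0, a=0.5, drive=4.0):
    """Toy black-box nonlinear effect: one-pole lowpass (memory)
    followed by a static tanh waveshaper (distortion).

    Returning the recurrent state lets the effect run in a
    streaming manner, block by block, with output identical
    to offline processing of the whole signal.
    """
    y = np.empty_like(x)
    for n, s in enumerate(x):
        state = a * state + (1.0 - a) * s   # recurrent state update
        y[n] = np.tanh(drive * state)       # saturating nonlinearity
    return y, state

# Offline: the whole signal at once.
x = np.sin(2 * np.pi * 110 * np.arange(1024) / 44100)
y_full, _ = process(x)

# Streaming: the same signal in 256-sample blocks, carrying the state.
state, blocks = 0.0, []
for start in range(0, len(x), 256):
    y_blk, state = process(x[start:start + 256], state)
    blocks.append(y_blk)
y_stream = np.concatenate(blocks)
assert np.allclose(y_full, y_stream)  # block size does not change the output
```

The same block-invariance check is a natural unit test for the real-time architectures the thesis mentions, since any hidden dependence on buffer size would break a streaming deployment.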
Song, Guanghan. "Effect of sound in videos on gaze: contribution to audio-visual saliency modelling". Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENT013/document.
Humans receive a large quantity of information from the environment through sight and hearing. To help us react rapidly and appropriately, mechanisms in the brain bias attention towards particular regions, namely the salient regions. This attentional bias is influenced not only by vision but also by audio-visual interaction. According to the existing literature, visual attention can be studied through eye movements, yet the effect of sound on eye movements in videos is little known. The aim of this thesis is to investigate the influence of sound in videos on eye movements and to propose an audio-visual saliency model that predicts salient regions in videos more accurately. For this purpose, we designed a first audio-visual eye-tracking experiment. We created a database of short video excerpts selected from various films. Participants viewed these excerpts either with their original soundtrack (AV condition) or without a soundtrack (V condition). We analysed the difference in eye positions between participants in the AV and V conditions. The results show that there is indeed an effect of sound on eye movements, and that the effect is greatest for the on-screen speech class. We then designed a second audio-visual experiment with thirteen classes of sound. By comparing the difference in eye positions between participants in the AV and V conditions, we conclude that the effect of sound depends on the type of sound, and that classes containing human voice (the speech, singer, human noise, and singers classes) have the greatest effect. More precisely, the sound source significantly attracted eye position only when the sound was a human voice. Moreover, participants in the AV condition had a shorter average fixation duration than those in the V condition. Finally, we proposed a preliminary audio-visual saliency model based on the findings of these experiments. In this model, two fusion strategies for audio and visual information are described: one for the speech sound class and one for the musical instrument sound class. The audio-visual fusion strategies defined in the model improve its predictive performance in the AV condition.
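The listing does not specify the two fusion strategies themselves. As a purely illustrative sketch of the general idea of class-dependent fusion (the class names, weights, and function names below are assumptions, not the thesis's actual model), a linear fusion of normalised saliency maps might look like:

```python
import numpy as np

# Illustrative audio weights per sound class; human-voice classes
# attract gaze more strongly, so they get a larger weight.
FUSION_WEIGHTS = {"speech": 0.6, "instrument": 0.2}

def fuse_saliency(visual_map, audio_map, sound_class):
    """Linearly fuse normalised visual and audio saliency maps.

    Unknown sound classes fall back to the visual map alone.
    """
    w = FUSION_WEIGHTS.get(sound_class, 0.0)
    fused = (1.0 - w) * visual_map + w * audio_map
    return fused / fused.sum()  # renormalise to a probability map

visual = np.array([[0.1, 0.3],
                   [0.4, 0.2]])   # bottom-up visual saliency
audio  = np.array([[0.0, 0.0],
                   [0.0, 1.0]])   # localised on-screen sound source
fused = fuse_saliency(visual, audio, "speech")
```

With a "speech" label, the fused map concentrates probability on the sound-source cell far more than an "instrument" label would, mirroring the finding that human voice pulls gaze towards its source.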
Wallin, Emil. "Evaluation of Physically Inspired Models in Video Game Melee Weapon SFX". Thesis, Luleå tekniska universitet, Medier, ljudteknik och teater, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-78968.
Book chapters on the topic "Audio effects modelling"
Anderson, Raymond A. "Finalization". In Credit Intelligence & Modelling, 795–826. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780192844194.003.0026.
Conference papers on the topic "Audio effects modelling"
Comunità, Marco, Christian J. Steinmetz, Huy Phan, and Joshua D. Reiss. "Modelling Black-Box Audio Effects with Time-Varying Feature Modulation". In ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2023. http://dx.doi.org/10.1109/icassp49357.2023.10097173.
Armstrong D'Souza, Dony, and V. Veena Devi Shastrimath. "Modelling of Audio Effects for Vocal and Music Synthesis in Real Time". In 2019 3rd International Conference on Computing Methodologies and Communication (ICCMC). IEEE, 2019. http://dx.doi.org/10.1109/iccmc.2019.8819852.
Runji, Joel Murithi, and Genda Chen. "Augmented Reality with Live Video Streaming for Beyond Visual Line of Sight Inspection for a Steel Bridge". In Structural Health Monitoring 2023. DEStech Publications, Inc., 2023. http://dx.doi.org/10.12783/shm2023/36980.
Fitzpatrick, Joe, and Flaithri Neff. "A Web Guide to Perceptually Congruent Sonification". In ICAD 2021: The 26th International Conference on Auditory Display. icad.org: International Community for Auditory Display, 2021. http://dx.doi.org/10.21785/icad2021.014.
Reports on the topic "Audio effects modelling"
Murad, M. Hassan, Stephanie M. Chang, Celia Fiordalisi, Jennifer S. Lin, Timothy J. Wilt, Amy Tsou, Brian Leas, et al. Improving the Utility of Evidence Synthesis for Decision Makers in the Face of Insufficient Evidence. Agency for Healthcare Research and Quality (AHRQ), April 2021. http://dx.doi.org/10.23970/ahrqepcwhitepaperimproving.
Rankin, Nicole, Deborah McGregor, Candice Donnelly, Bethany Van Dort, Richard De Abreu Lourenco, Anne Cust, and Emily Stone. Lung cancer screening using low-dose computed tomography for high risk populations: Investigating effectiveness and screening program implementation considerations. An Evidence Check rapid review brokered by the Sax Institute (www.saxinstitute.org.au) for the Cancer Institute NSW. The Sax Institute, October 2019. http://dx.doi.org/10.57022/clzt5093.