Selected scholarly literature on the topic "Audio effects modelling"
Cite a source in APA, MLA, Chicago, Harvard and many other citation styles
Consult the list of current articles, books, theses, conference proceedings and other scholarly sources relevant to the topic "Audio effects modelling".
Next to every source in the reference list there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf and read its abstract (summary) online, if one is present in the metadata.
Journal articles on the topic "Audio effects modelling"
Tur, Ada. "Deep Learning for Style Transfer and Experimentation with Audio Effects and Music Creation". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 21 (March 24, 2024): 23766–67. http://dx.doi.org/10.1609/aaai.v38i21.30558.
Vanhatalo, Tara, Pierrick Legrand, Myriam Desainte-Catherine, Pierre Hanna and Guillaume Pille. "Evaluation of Real-Time Aliasing Reduction Methods in Neural Networks for Nonlinear Audio Effects Modelling". Journal of the Audio Engineering Society 72, no. 3 (March 7, 2024): 114–22. http://dx.doi.org/10.17743/jaes.2022.0122.
Zhong, Jiaxin, Ray Kirby, Mahmoud Karimi and Haishan Zou. "A spherical wave expansion for a steerable parametric array loudspeaker using Zernike polynomials". Journal of the Acoustical Society of America 152, no. 4 (October 2022): 2296–308. http://dx.doi.org/10.1121/10.0014832.
Kovsh, Oleksandr, and Oleksii Kopachinskyi. "Features of Editing in Modern Audiovisual Production: Special Effects and Transitions". Bulletin of Kyiv National University of Culture and Arts. Series in Audiovisual Art and Production 6, no. 1 (April 30, 2023): 105–17. http://dx.doi.org/10.31866/2617-2674.6.1.2023.279255.
Sasse, Heide, and Miriam Leuchter. "Capturing Primary School Students’ Emotional Responses with a Sensor Wristband". Frontline Learning Research 9, no. 3 (May 25, 2021): 31–51. http://dx.doi.org/10.14786/flr.v9i3.723.
Azizi, Zahra, Rebecca J. Hirst, Fiona N. Newell, Rose Anne Kenny and Annalisa Setti. "Audio-visual integration is more precise in older adults with a high level of long-term physical activity". PLOS ONE 18, no. 10 (October 4, 2023): e0292373. http://dx.doi.org/10.1371/journal.pone.0292373.
Marselia, Maya, and Cita Meysiana. "Pembuatan Animasi 3D Sosialisasi Penggunaan Jalur Simpangan dan Bundaran Ketika Berkendara". VOCATECH: Vocational Education and Technology Journal 2, no. 2 (April 27, 2021): 108–13. http://dx.doi.org/10.38038/vocatech.v2i2.55.
Israel, Kai, Christopher Zerres and Dieter K. Tscheulin. "Presenting hotels in virtual reality: does it influence the booking intention?" Journal of Hospitality and Tourism Technology 10, no. 3 (September 17, 2019): 443–63. http://dx.doi.org/10.1108/jhtt-03-2018-0020.
Wang, Shunguo, Mehrdad Bastani, Steven Constable, Thomas Kalscheuer and Alireza Malehmir. "Boat-towed radio-magnetotelluric and controlled source audio-magnetotelluric study to resolve fracture zones at Äspö Hard Rock Laboratory site, Sweden". Geophysical Journal International 218, no. 2 (April 23, 2019): 1008–31. http://dx.doi.org/10.1093/gji/ggz162.
Shahid Iqbal Rai, Maida Maqsood, Bushra Hanif, Muhammad Ali Adam, Muhammad Arslan, Hira Shafiq and Muhammad Sijawal. "Computational linguistics at the crossroads: A comprehensive review of NLP advancements". World Journal of Advanced Engineering Technology and Sciences 11, no. 2 (April 30, 2024): 578–91. http://dx.doi.org/10.30574/wjaets.2024.11.2.0146.
Theses on the topic "Audio effects modelling"
Vanhatalo, Tara. "Simulation en temps réel d'effets audio non-linéaires par intelligence artificielle". Electronic Thesis or Diss., Bordeaux, 2024. http://www.theses.fr/2024BORD0077.
Certain products in the realm of music technology have uniquely desirable sonic characteristics that are often sought after by musicians. These characteristics often stem from the nonlinearities of their electronic circuits. We are concerned with preserving the sound of this gear through digital simulations and making it widely available to musicians. In recent years this field has seen a sharp rise in the use of neural networks for such simulation. This work applies neural networks to the task, focusing on real-time-capable black-box methods for nonlinear effects modelling, with the guitarist in mind. We cover the current state of the art and identify areas warranting improvement or study, with a final goal of product development. A first step of identifying architectures capable of real-time processing in a streaming manner is followed by augmenting and improving these architectures and their training pipeline through a number of methods, including continuous integration with unit testing, automatic hyperparameter optimisation, and transfer learning. A real-time prototype using a custom C++ backend is created with these methods. A study of real-time anti-aliasing for black-box models is presented, as these networks were found to exhibit high amounts of aliasing distortion. Work on incorporating user controls is also begun, towards a comprehensive simulation of the analogue systems that enables a full range of tone-shaping possibilities for the end user. The performance of the approaches presented is assessed through both objective and subjective evaluation. Finally, a number of possible directions for future work are presented.
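The streaming requirement the abstract describes can be illustrated with a minimal sketch: a toy stateful nonlinear effect (a one-pole lowpass followed by a tanh waveshaper, standing in for the thesis's neural models) whose internal state is carried across buffers, so block-by-block real-time processing matches offline processing exactly. The class and its parameters are hypothetical, purely for illustration.

```python
import math

class StreamingDistortion:
    """Toy stateful nonlinear effect: one-pole lowpass, then tanh drive.

    A stand-in for the streaming black-box models discussed above: the
    filter state persists between buffers, so chunked (real-time style)
    processing is sample-identical to offline processing.
    """

    def __init__(self, coeff=0.2, drive=4.0):
        self.coeff = coeff   # one-pole smoothing coefficient (assumed value)
        self.drive = drive   # pre-gain before the nonlinearity (assumed value)
        self.state = 0.0     # filter memory carried across buffers

    def process(self, buffer):
        out = []
        for x in buffer:
            # one-pole lowpass: y[n] = y[n-1] + c * (x[n] - y[n-1])
            self.state += self.coeff * (x - self.state)
            # static nonlinearity providing the "distortion"
            out.append(math.tanh(self.drive * self.state))
        return out

# Offline vs. streamed (two half-buffers) must agree sample-for-sample.
signal = [math.sin(2 * math.pi * 440 * n / 48000) for n in range(256)]
offline = StreamingDistortion().process(signal)
fx = StreamingDistortion()
streamed = fx.process(signal[:128]) + fx.process(signal[128:])
assert streamed == offline
```

The same invariant (chunked output equals offline output) is what makes a recurrent or convolutional model usable in a real-time audio callback, where the signal only ever arrives one small buffer at a time.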
Song, Guanghan. "Effect of sound in videos on gaze : contribution to audio-visual saliency modelling". Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENT013/document.
Humans receive a large quantity of information from the environment through sight and hearing. To help us react rapidly and appropriately, mechanisms in the brain bias attention towards particular regions, namely the salient regions. This attentional bias is influenced not only by vision but also by audio-visual interaction. According to the existing literature, visual attention can be studied through eye movements, yet the effect of sound on eye movements in videos is little known. The aim of this thesis is to investigate the influence of sound in videos on eye movements and to propose an audio-visual saliency model that predicts salient regions in videos more accurately. For this purpose, we designed a first audio-visual eye-tracking experiment. We created a database of short video excerpts selected from various films. These excerpts were viewed by participants either with their original soundtrack (AV condition) or without it (V condition). We analysed the difference in eye positions between participants in the AV and V conditions. The results show that sound does affect eye movement, and that the effect is greatest for the on-screen speech class. We then designed a second audio-visual experiment with thirteen classes of sound. By comparing the difference in eye positions between participants in the AV and V conditions, we conclude that the effect of sound depends on the type of sound, and that the classes containing human voice (i.e. the speech, singer, human noise and singers classes) have the greatest effect. More precisely, the sound source significantly attracted eye position only when the sound was a human voice. Moreover, participants in the AV condition had a shorter average fixation duration than those in the V condition. Finally, we proposed a preliminary audio-visual saliency model based on the findings of these experiments. In this model, two fusion strategies for audio and visual information are described: one for the speech sound class and one for the musical instrument sound class. The audio-visual fusion strategies defined in the model improve its predictions in the AV condition.
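The class-dependent fusion the abstract describes can be sketched as a weighted combination of per-pixel visual and audio saliency maps, with the audio map weighted more heavily for speech than for instrumental sound (since human voice attracted gaze most strongly in the experiments). The function name and weight values here are hypothetical, not the thesis's actual model.

```python
def fuse_saliency(visual, audio, sound_class):
    """Fuse per-pixel visual and audio saliency maps (values in [0, 1]).

    Hypothetical class-dependent weights: the audio map gets a larger
    weight for the 'speech' class than for 'instrument', reflecting the
    finding that human voice attracts gaze most strongly.
    """
    weights = {"speech": 0.6, "instrument": 0.3}   # illustrative values only
    w_audio = weights.get(sound_class, 0.0)        # other classes: visual only
    return [
        (1.0 - w_audio) * v + w_audio * a          # convex combination per pixel
        for v, a in zip(visual, audio)
    ]

visual_map = [0.1, 0.8, 0.3]
audio_map = [0.9, 0.2, 0.0]   # e.g. saliency concentrated around a speaker
fused_speech = fuse_saliency(visual_map, audio_map, "speech")
fused_music = fuse_saliency(visual_map, audio_map, "instrument")
# Speech fusion pulls saliency toward the sound source more than music fusion.
assert fused_speech[0] > fused_music[0]
```

Because each output is a convex combination of two values in [0, 1], the fused map stays a valid saliency map without renormalisation.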
Wallin, Emil. "Evaluation of Physically Inspired Models in Video Game Melee Weapon SFX". Thesis, Luleå tekniska universitet, Medier, ljudteknik och teater, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-78968.
Book chapters on the topic "Audio effects modelling"
Anderson, Raymond A. "Finalization". In Credit Intelligence & Modelling, 795–826. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780192844194.003.0026.
Conference papers on the topic "Audio effects modelling"
Comunità, Marco, Christian J. Steinmetz, Huy Phan and Joshua D. Reiss. "Modelling Black-Box Audio Effects with Time-Varying Feature Modulation". In ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2023. http://dx.doi.org/10.1109/icassp49357.2023.10097173.
Armstrong D'Souza, Dony, and V. Veena Devi Shastrimath. "Modelling of Audio Effects for Vocal and Music Synthesis in Real Time". In 2019 3rd International Conference on Computing Methodologies and Communication (ICCMC). IEEE, 2019. http://dx.doi.org/10.1109/iccmc.2019.8819852.
Runji, Joel Murithi, and Genda Chen. "Augmented Reality with Live Video Streaming for Beyond Visual Line of Sight Inspection for a Steel Bridge". In Structural Health Monitoring 2023. Destech Publications, Inc., 2023. http://dx.doi.org/10.12783/shm2023/36980.
Fitzpatrick, Joe, and Flaithri Neff. "A Web Guide to Perceptually Congruent Sonification". In ICAD 2021: The 26th International Conference on Auditory Display. icad.org: International Community for Auditory Display, 2021. http://dx.doi.org/10.21785/icad2021.014.
Testo completoRapporti di organizzazioni sul tema "Audio effects modelling"
Murad, M. Hassan, Stephanie M. Chang, Celia Fiordalisi, Jennifer S. Lin, Timothy J. Wilt, Amy Tsou, Brian Leas et al. Improving the Utility of Evidence Synthesis for Decision Makers in the Face of Insufficient Evidence. Agency for Healthcare Research and Quality (AHRQ), April 2021. http://dx.doi.org/10.23970/ahrqepcwhitepaperimproving.
Rankin, Nicole, Deborah McGregor, Candice Donnelly, Bethany Van Dort, Richard De Abreu Lourenco, Anne Cust and Emily Stone. Lung cancer screening using low-dose computed tomography for high risk populations: Investigating effectiveness and screening program implementation considerations: An Evidence Check rapid review brokered by the Sax Institute (www.saxinstitute.org.au) for the Cancer Institute NSW. The Sax Institute, October 2019. http://dx.doi.org/10.57022/clzt5093.