Selected scientific literature on the topic "Audio effects modelling"
Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles
Consult the list of current articles, books, theses, conference proceedings, and other scientific sources relevant to the topic "Audio effects modelling".
Next to each source in the reference list there is an "Add to bibliography" button. Click it and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication in .pdf format and read the abstract online if it is present in the metadata.
Journal articles on the topic "Audio effects modelling"
Tur, Ada. "Deep Learning for Style Transfer and Experimentation with Audio Effects and Music Creation". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 21 (March 24, 2024): 23766–67. http://dx.doi.org/10.1609/aaai.v38i21.30558.
Vanhatalo, Tara, Pierrick Legrand, Myriam Desainte-Catherine, Pierre Hanna, and Guillaume Pille. "Evaluation of Real-Time Aliasing Reduction Methods in Neural Networks for Nonlinear Audio Effects Modelling". Journal of the Audio Engineering Society 72, no. 3 (March 7, 2024): 114–22. http://dx.doi.org/10.17743/jaes.2022.0122.
Zhong, Jiaxin, Ray Kirby, Mahmoud Karimi, and Haishan Zou. "A spherical wave expansion for a steerable parametric array loudspeaker using Zernike polynomials". Journal of the Acoustical Society of America 152, no. 4 (October 2022): 2296–308. http://dx.doi.org/10.1121/10.0014832.
Kovsh, Oleksandr, and Oleksii Kopachinskyi. "Features of Editing in Modern Audiovisual Production: Special Effects and Transitions". Bulletin of Kyiv National University of Culture and Arts. Series in Audiovisual Art and Production 6, no. 1 (April 30, 2023): 105–17. http://dx.doi.org/10.31866/2617-2674.6.1.2023.279255.
Sasse, Heide, and Miriam Leuchter. "Capturing Primary School Students' Emotional Responses with a Sensor Wristband". Frontline Learning Research 9, no. 3 (May 25, 2021): 31–51. http://dx.doi.org/10.14786/flr.v9i3.723.
Azizi, Zahra, Rebecca J. Hirst, Fiona N. Newell, Rose Anne Kenny, and Annalisa Setti. "Audio-visual integration is more precise in older adults with a high level of long-term physical activity". PLOS ONE 18, no. 10 (October 4, 2023): e0292373. http://dx.doi.org/10.1371/journal.pone.0292373.
Marselia, Maya, and Cita Meysiana. "Pembuatan Animasi 3D Sosialisasi Penggunaan Jalur Simpangan dan Bundaran Ketika Berkendara". VOCATECH: Vocational Education and Technology Journal 2, no. 2 (April 27, 2021): 108–13. http://dx.doi.org/10.38038/vocatech.v2i2.55.
Israel, Kai, Christopher Zerres, and Dieter K. Tscheulin. "Presenting hotels in virtual reality: does it influence the booking intention?" Journal of Hospitality and Tourism Technology 10, no. 3 (September 17, 2019): 443–63. http://dx.doi.org/10.1108/jhtt-03-2018-0020.
Wang, Shunguo, Mehrdad Bastani, Steven Constable, Thomas Kalscheuer, and Alireza Malehmir. "Boat-towed radio-magnetotelluric and controlled source audio-magnetotelluric study to resolve fracture zones at Äspö Hard Rock Laboratory site, Sweden". Geophysical Journal International 218, no. 2 (April 23, 2019): 1008–31. http://dx.doi.org/10.1093/gji/ggz162.
Shahid Iqbal Rai, Maida Maqsood, Bushra Hanif, Muhammad Ali Adam, Muhammad Arslan, Hira Shafiq, and Muhammad Sijawal. "Computational linguistics at the crossroads: A comprehensive review of NLP advancements". World Journal of Advanced Engineering Technology and Sciences 11, no. 2 (April 30, 2024): 578–91. http://dx.doi.org/10.30574/wjaets.2024.11.2.0146.
Theses / dissertations on the topic "Audio effects modelling"
Vanhatalo, Tara. "Simulation en temps réel d'effets audio non-linéaires par intelligence artificielle". Electronic Thesis or Diss., Bordeaux, 2024. http://www.theses.fr/2024BORD0077.
Certain products in the realm of music technology have uniquely desirable sonic characteristics that are often sought after by musicians. These characteristics are often due to the nonlinearities of their electronic circuits. We are concerned with preserving the sound of this gear through digital simulations and making them widely available to numerous musicians. This field of study has seen a large rise in the use of neural networks for the simulation in recent years. This work applies neural networks for the task. Particularly, we focus on real-time capable black-box methods for nonlinear effects modelling, with the guitarist in mind. We cover the current state of the art and identify areas warranting improvement or study, with a final goal of product development. A first step of identifying architectures capable of real-time processing in a streaming manner is followed by augmenting and improving these architectures and their training pipeline through a number of methods. These methods include continuous integration with unit testing, automatic hyperparameter optimisation, and the use of transfer learning. A real-time prototype utilising a custom C++ backend is created using these methods. A study in real-time anti-aliasing for black-box models is presented, as it was found that these networks exhibit high amounts of aliasing distortion. Work on user control incorporation is also started for a comprehensive simulation of the analogue systems. This enables a full range of tone-shaping possibilities for the end user. The performance of the approaches presented is assessed through both objective and subjective evaluation. Finally, a number of possible directions for future work are also presented.
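The abstract's central engineering constraint, streaming real-time operation, means a model must process audio block by block while carrying internal state between blocks. The minimal sketch below illustrates that property with a stand-in model (a one-pole filter driving a tanh waveshaper); the thesis itself uses trained recurrent neural networks, and all coefficients here are illustrative, not values from the work.

```python
import numpy as np

class StreamingNonlinearModel:
    """Toy stand-in for a black-box nonlinear effect model.

    Internal state persists across successive audio blocks, so the
    model can be fed small real-time buffers and produce exactly the
    same output as offline processing of the full signal.
    """

    def __init__(self, coeff=0.9, gain=4.0):
        self.coeff = coeff  # one-pole low-pass coefficient (illustrative)
        self.gain = gain    # pre-gain driving the nonlinearity
        self.state = 0.0    # carried between blocks

    def process_block(self, block):
        out = np.empty_like(block)
        for i, x in enumerate(block):
            # one-pole smoothing followed by a saturating nonlinearity
            self.state = self.coeff * self.state + (1.0 - self.coeff) * x
            out[i] = np.tanh(self.gain * self.state)
        return out

# Processing one long buffer at once must equal processing it in
# small blocks with carried state -- the core streaming property.
signal = np.sin(2 * np.pi * 220 * np.arange(1024) / 44100)
full = StreamingNonlinearModel().process_block(signal)
streamer = StreamingNonlinearModel()
streamed = np.concatenate(
    [streamer.process_block(b) for b in np.split(signal, 8)])
```

The equivalence of `full` and `streamed` is what the thesis's "streaming manner" requirement demands of any candidate architecture; stateless networks with long receptive fields need extra buffering to achieve it.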
Song, Guanghan. "Effect of sound in videos on gaze : contribution to audio-visual saliency modelling". Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENT013/document.
Humans receive a large quantity of information from the environment through sight and hearing. To help us react rapidly and appropriately, mechanisms in the brain bias attention towards particular regions, namely the salient regions. This attentional bias is influenced not only by vision but also by audio-visual interaction. According to the existing literature, visual attention can be studied through eye movements, yet the effect of sound on eye movements in videos remains little known. The aim of this thesis is to investigate the influence of sound in videos on eye movements and to propose an audio-visual saliency model that predicts salient regions in videos more accurately. For this purpose, we designed a first audio-visual eye-tracking experiment. We created a database of short video excerpts selected from various films. These excerpts were viewed by participants either with their original soundtrack (AV condition) or without it (V condition). We analyzed the difference in eye positions between participants in the AV and V conditions. The results show that sound does affect eye movements, and the effect is greater for the on-screen speech class. We then designed a second audio-visual experiment with thirteen classes of sound. By comparing the difference in eye positions between participants in the AV and V conditions, we conclude that the effect of sound depends on the type of sound, and that classes containing human voice (i.e. speech, singer, human noise, and singers) have the greatest effect. More precisely, the sound source significantly attracted eye position only when the sound was human voice. Moreover, participants in the AV condition had a shorter average fixation duration than in the V condition. Finally, we proposed a preliminary audio-visual saliency model based on the findings of the above experiments. In this model, two fusion strategies of audio and visual information are described: one for the speech sound class and one for the musical instrument sound class. The audio-visual fusion strategies defined in the model improve its predictions in the AV condition.
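The fusion idea the abstract describes, combining a visual saliency map with an audio-driven map, with the weighting depending on the sound class, can be sketched as a simple linear blend. The class names and weights below are purely hypothetical illustrations of the principle (voice classes get more audio weight), not the author's actual strategies.

```python
import numpy as np

def fuse_saliency(visual_map, audio_map, sound_class):
    """Hypothetical linear fusion of visual and audio saliency maps.

    The thesis found that human voice attracts gaze most strongly, so
    a speech-like class is given a larger audio weight here; weights
    and class names are illustrative, not values from the work.
    """
    weights = {"speech": 0.6, "music": 0.3, "other": 0.1}
    w = weights.get(sound_class, 0.1)
    fused = (1.0 - w) * visual_map + w * audio_map
    return fused / fused.sum()  # renormalise to a probability map

# Visual saliency over a coarse 36x64 grid, plus an audio map that
# peaks at the (hypothetical) on-screen sound-source location.
visual = np.random.default_rng(0).random((36, 64))
audio = np.zeros((36, 64))
audio[18, 32] = 1.0  # sound-source position
fused = fuse_saliency(visual, audio, "speech")
```

Under this scheme the predicted saliency at the sound source is higher for a speech class than for a non-voice class, mirroring the experimental finding that only human voice significantly attracted eye position.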
Wallin, Emil. "Evaluation of Physically Inspired Models in Video Game Melee Weapon SFX". Thesis, Luleå tekniska universitet, Medier, ljudteknik och teater, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-78968.
Book chapters on the topic "Audio effects modelling"
Anderson, Raymond A. "Finalization". In Credit Intelligence & Modelling, 795–826. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780192844194.003.0026.
Conference papers on the topic "Audio effects modelling"
Comunità, Marco, Christian J. Steinmetz, Huy Phan, and Joshua D. Reiss. "Modelling Black-Box Audio Effects with Time-Varying Feature Modulation". In ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2023. http://dx.doi.org/10.1109/icassp49357.2023.10097173.
Armstrong D'Souza, Dony, and V. Veena Devi Shastrimath. "Modelling of Audio Effects for Vocal and Music Synthesis in Real Time". In 2019 3rd International Conference on Computing Methodologies and Communication (ICCMC). IEEE, 2019. http://dx.doi.org/10.1109/iccmc.2019.8819852.
Runji, Joel Murithi, and Genda Chen. "Augmented Reality with Live Video Streaming for Beyond Visual Line of Sight Inspection for a Steel Bridge". In Structural Health Monitoring 2023. Destech Publications, Inc., 2023. http://dx.doi.org/10.12783/shm2023/36980.
Fitzpatrick, Joe, and Flaithri Neff. "A Web Guide to Perceptually Congruent Sonification". In ICAD 2021: The 26th International Conference on Auditory Display. icad.org: International Community for Auditory Display, 2021. http://dx.doi.org/10.21785/icad2021.014.
Reports of organizations on the topic "Audio effects modelling"
Murad, M. Hassan, Stephanie M. Chang, Celia Fiordalisi, Jennifer S. Lin, Timothy J. Wilt, Amy Tsou, Brian Leas et al. Improving the Utility of Evidence Synthesis for Decision Makers in the Face of Insufficient Evidence. Agency for Healthcare Research and Quality (AHRQ), April 2021. http://dx.doi.org/10.23970/ahrqepcwhitepaperimproving.
Texto completo da fonteRankin, Nicole, Deborah McGregor, Candice Donnelly, Bethany Van Dort, Richard De Abreu Lourenco, Anne Cust e Emily Stone. Lung cancer screening using low-dose computed tomography for high risk populations: Investigating effectiveness and screening program implementation considerations: An Evidence Check rapid review brokered by the Sax Institute (www.saxinstitute.org.au) for the Cancer Institute NSW. The Sax Institute, outubro de 2019. http://dx.doi.org/10.57022/clzt5093.