A selection of scholarly literature on the topic "Audio effects modelling"
Format your citation in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference papers, and other scholarly sources on the topic "Audio effects modelling".
Next to each work in the bibliography you will find an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf and read its abstract online, when these are available in the metadata.
Journal articles on the topic "Audio effects modelling"
Tur, Ada. "Deep Learning for Style Transfer and Experimentation with Audio Effects and Music Creation." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 21 (March 24, 2024): 23766–67. http://dx.doi.org/10.1609/aaai.v38i21.30558.
Vanhatalo, Tara, Pierrick Legrand, Myriam Desainte-Catherine, Pierre Hanna, and Guillaume Pille. "Evaluation of Real-Time Aliasing Reduction Methods in Neural Networks for Nonlinear Audio Effects Modelling." Journal of the Audio Engineering Society 72, no. 3 (March 7, 2024): 114–22. http://dx.doi.org/10.17743/jaes.2022.0122.
Zhong, Jiaxin, Ray Kirby, Mahmoud Karimi, and Haishan Zou. "A spherical wave expansion for a steerable parametric array loudspeaker using Zernike polynomials." Journal of the Acoustical Society of America 152, no. 4 (October 2022): 2296–308. http://dx.doi.org/10.1121/10.0014832.
Kovsh, Oleksandr, and Oleksii Kopachinskyi. "Features of Editing in Modern Audiovisual Production: Special Effects and Transitions." Bulletin of Kyiv National University of Culture and Arts. Series in Audiovisual Art and Production 6, no. 1 (April 30, 2023): 105–17. http://dx.doi.org/10.31866/2617-2674.6.1.2023.279255.
Sasse, Heide, and Miriam Leuchter. "Capturing Primary School Students’ Emotional Responses with a Sensor Wristband." Frontline Learning Research 9, no. 3 (May 25, 2021): 31–51. http://dx.doi.org/10.14786/flr.v9i3.723.
Azizi, Zahra, Rebecca J. Hirst, Fiona N. Newell, Rose Anne Kenny, and Annalisa Setti. "Audio-visual integration is more precise in older adults with a high level of long-term physical activity." PLOS ONE 18, no. 10 (October 4, 2023): e0292373. http://dx.doi.org/10.1371/journal.pone.0292373.
Marselia, Maya, and Cita Meysiana. "Pembuatan Animasi 3D Sosialisasi Penggunaan Jalur Simpangan dan Bundaran Ketika Berkendara." VOCATECH: Vocational Education and Technology Journal 2, no. 2 (April 27, 2021): 108–13. http://dx.doi.org/10.38038/vocatech.v2i2.55.
Israel, Kai, Christopher Zerres, and Dieter K. Tscheulin. "Presenting hotels in virtual reality: does it influence the booking intention?" Journal of Hospitality and Tourism Technology 10, no. 3 (September 17, 2019): 443–63. http://dx.doi.org/10.1108/jhtt-03-2018-0020.
Wang, Shunguo, Mehrdad Bastani, Steven Constable, Thomas Kalscheuer, and Alireza Malehmir. "Boat-towed radio-magnetotelluric and controlled source audio-magnetotelluric study to resolve fracture zones at Äspö Hard Rock Laboratory site, Sweden." Geophysical Journal International 218, no. 2 (April 23, 2019): 1008–31. http://dx.doi.org/10.1093/gji/ggz162.
Повний текст джерелаShahid Iqbal Rai, Maida Maqsood, Bushra Hanif, Muhammad Ali Adam, Muhammad Arslan, Hira Shafiq, and Muhammad Sijawal. "Computational linguistics at the crossroads: A comprehensive review of NLP advancements." World Journal of Advanced Engineering Technology and Sciences 11, no. 2 (April 30, 2024): 578–91. http://dx.doi.org/10.30574/wjaets.2024.11.2.0146.
Повний текст джерелаДисертації з теми "Audio effects modelling"
Vanhatalo, Tara. "Simulation en temps réel d'effets audio non-linéaires par intelligence artificielle." Electronic Thesis or Diss., Bordeaux, 2024. http://www.theses.fr/2024BORD0077.
Certain products in the realm of music technology have uniquely desirable sonic characteristics that are often sought after by musicians. These characteristics are often due to the nonlinearities of their electronic circuits. We are concerned with preserving the sound of this gear through digital simulations and making it widely available to musicians. In recent years, this field of study has seen a large rise in the use of neural networks for such simulations. This work applies neural networks to the task. In particular, we focus on real-time-capable black-box methods for nonlinear effects modelling, with the guitarist in mind. We cover the current state of the art and identify areas warranting improvement or study, with a final goal of product development. A first step of identifying architectures capable of real-time processing in a streaming manner is followed by augmenting and improving these architectures and their training pipeline through a number of methods, including continuous integration with unit testing, automatic hyperparameter optimisation, and the use of transfer learning. A real-time prototype with a custom C++ backend is created using these methods. A study of real-time anti-aliasing for black-box models is presented, as these networks were found to exhibit high amounts of aliasing distortion. Work on incorporating user controls is also started, towards a comprehensive simulation of the analogue systems that enables a full range of tone-shaping possibilities for the end user. The performance of the approaches presented is assessed through both objective and subjective evaluation. Finally, a number of possible directions for future work are presented.
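The aliasing distortion discussed in this abstract arises because a nonlinearity generates harmonics above the Nyquist limit, which fold back into the audible band. A common mitigation is to run the nonlinearity at a higher sample rate. The sketch below is purely illustrative, not code from the thesis: the tanh soft-clip curve, the drive parameter, and the naive linear-interpolation upsampling are all assumptions standing in for a trained black-box model and a proper resampling filter.

```python
import math

def soft_clip(x, drive=4.0):
    # Static tanh nonlinearity standing in for the modelled circuit.
    return math.tanh(drive * x)

def process_oversampled(samples, factor=4, drive=4.0):
    # Upsample (naive linear interpolation), apply the nonlinearity at the
    # higher rate, then decimate. Running the nonlinearity at `factor` times
    # the original rate leaves more headroom below Nyquist for the harmonics
    # it creates, reducing the energy that folds back as aliasing.
    up = []
    for i in range(len(samples) - 1):
        a, b = samples[i], samples[i + 1]
        for k in range(factor):
            t = k / factor
            up.append(soft_clip(a + (b - a) * t, drive))
    up.append(soft_clip(samples[-1], drive))
    # Crude decimation: keep every `factor`-th sample. A real implementation
    # would low-pass filter before discarding samples.
    return up[::factor]
```

A production system would use polyphase or windowed-sinc resampling; the structure (upsample, distort, decimate) is the point here.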
Song, Guanghan. "Effect of sound in videos on gaze : contribution to audio-visual saliency modelling." Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENT013/document.
Humans receive a large quantity of information from the environment through sight and hearing. To help us react rapidly and appropriately, the brain has mechanisms that bias attention towards particular regions, namely the salient regions. This attentional bias is influenced not only by vision but also by audio-visual interaction. According to the existing literature, visual attention can be studied through eye movements; however, the effect of sound on eye movements in videos is little known. The aim of this thesis is to investigate the influence of sound in videos on eye movements and to propose an audio-visual saliency model that predicts salient regions in videos more accurately. For this purpose, we designed a first audio-visual eye-tracking experiment. We created a database of short video excerpts selected from various films. These excerpts were viewed by participants either with their original soundtrack (AV condition) or without soundtrack (V condition). We analyzed the difference in eye positions between participants in the AV and V conditions. The results show that sound does affect eye movements, and that the effect is greatest for the on-screen speech class. We then designed a second audio-visual experiment with thirteen classes of sound. Comparing the difference in eye positions between participants in the AV and V conditions, we conclude that the effect of sound depends on the type of sound, and that the classes with human voice (i.e. the speech, singer, human noise, and singers classes) have the greatest effect. More precisely, the sound source significantly attracted eye position only when the sound was a human voice. Moreover, participants in the AV condition had a shorter average fixation duration than those in the V condition. Finally, we proposed a preliminary audio-visual saliency model based on the findings of these experiments. The model describes two strategies for fusing audio and visual information: one for the speech sound class, and one for the musical instrument sound class. The audio-visual fusion strategies defined in the model improve its predictions under the AV condition.
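The class-dependent fusion the abstract describes can be sketched as a weighted combination of per-pixel saliency maps, with the audio map weighted more heavily for voice-bearing sound classes. This is a minimal illustration under stated assumptions: the linear fusion rule, the weight values, and the class names are hypothetical, not the thesis's actual model parameters.

```python
def fuse_saliency(visual, audio, speech_weight=0.7):
    # Linear fusion of two same-shaped saliency maps (lists of rows).
    # `speech_weight` sets how strongly the audio-derived map pulls
    # predicted gaze towards the sound source; a voice class would get a
    # higher weight than, say, ambient noise (values here are illustrative).
    w = speech_weight
    return [[(1 - w) * v + w * a for v, a in zip(v_row, a_row)]
            for v_row, a_row in zip(visual, audio)]
```

For a non-voice class the same function would be called with a much smaller weight, reflecting the finding that only human voice significantly attracted gaze to the sound source.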
Wallin, Emil. "Evaluation of Physically Inspired Models in Video Game Melee Weapon SFX." Thesis, Luleå tekniska universitet, Medier, ljudteknik och teater, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-78968.
Book chapters on the topic "Audio effects modelling"
Anderson, Raymond A. "Finalization." In Credit Intelligence & Modelling, 795–826. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780192844194.003.0026.
Conference papers on the topic "Audio effects modelling"
Comunità, Marco, Christian J. Steinmetz, Huy Phan, and Joshua D. Reiss. "Modelling Black-Box Audio Effects with Time-Varying Feature Modulation." In ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2023. http://dx.doi.org/10.1109/icassp49357.2023.10097173.
Armstrong D'Souza, Dony, and V. Veena Devi Shastrimath. "Modelling of Audio Effects for Vocal and Music Synthesis in Real Time." In 2019 3rd International Conference on Computing Methodologies and Communication (ICCMC). IEEE, 2019. http://dx.doi.org/10.1109/iccmc.2019.8819852.
Runji, Joel Murithi, and Genda Chen. "Augmented Reality with Live Video Streaming for Beyond Visual Line of Sight Inspection for a Steel Bridge." In Structural Health Monitoring 2023. Destech Publications, Inc., 2023. http://dx.doi.org/10.12783/shm2023/36980.
Fitzpatrick, Joe, and Flaithri Neff. "A Web Guide to Perceptually Congruent Sonification." In ICAD 2021: The 26th International Conference on Auditory Display. icad.org: International Community for Auditory Display, 2021. http://dx.doi.org/10.21785/icad2021.014.
Повний текст джерелаЗвіти організацій з теми "Audio effects modelling"
Murad, M. Hassan, Stephanie M. Chang, Celia Fiordalisi, Jennifer S. Lin, Timothy J. Wilt, Amy Tsou, Brian Leas, et al. Improving the Utility of Evidence Synthesis for Decision Makers in the Face of Insufficient Evidence. Agency for Healthcare Research and Quality (AHRQ), April 2021. http://dx.doi.org/10.23970/ahrqepcwhitepaperimproving.
Rankin, Nicole, Deborah McGregor, Candice Donnelly, Bethany Van Dort, Richard De Abreu Lourenco, Anne Cust, and Emily Stone. Lung cancer screening using low-dose computed tomography for high risk populations: Investigating effectiveness and screening program implementation considerations: An Evidence Check rapid review brokered by the Sax Institute (www.saxinstitute.org.au) for the Cancer Institute NSW. The Sax Institute, October 2019. http://dx.doi.org/10.57022/clzt5093.