Journal articles on the topic "Neural audio synthesis"
Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles
Consult the top 50 journal articles for your research on the topic "Neural audio synthesis."
Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a ".pdf" file and read its abstract online, whenever the relevant parameters are available in the metadata.
Browse journal articles from a wide variety of disciplines and put together an appropriate bibliography.
Li, Dongze, Kang Zhao, Wei Wang, Bo Peng, Yingya Zhang, Jing Dong, and Tieniu Tan. "AE-NeRF: Audio Enhanced Neural Radiance Field for Few Shot Talking Head Synthesis". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 4 (March 24, 2024): 3037–45. http://dx.doi.org/10.1609/aaai.v38i4.28086.
Vyawahare, Prof D. G. "Image to Audio Conversion for Blind People Using Neural Network". International Journal for Research in Applied Science and Engineering Technology 11, no. 12 (December 31, 2023): 1949–57. http://dx.doi.org/10.22214/ijraset.2023.57712.
Kiefer, Chris. "Sample-level sound synthesis with recurrent neural networks and conceptors". PeerJ Computer Science 5 (July 8, 2019): e205. http://dx.doi.org/10.7717/peerj-cs.205.
Liu, Yunyi, and Craig Jin. "Impact on quality and diversity from integrating a reconstruction loss into neural audio synthesis". Journal of the Acoustical Society of America 154, no. 4_supplement (October 1, 2023): A99. http://dx.doi.org/10.1121/10.0022922.
Khandelwal, Karan, Krishiv Pandita, Kshitij Priyankar, Kumar Parakram, and Tejaswini K. "Svara Rachana - Audio Driven Facial Expression Synthesis". International Journal for Research in Applied Science and Engineering Technology 12, no. 5 (May 31, 2024): 2024–29. http://dx.doi.org/10.22214/ijraset.2024.62019.
VOITKO, Viktoriia, Svitlana BEVZ, Sergii BURBELO, and Pavlo STAVYTSKYI. "AUDIO GENERATION TECHNOLOGY OF A SYSTEM OF SYNTHESIS AND ANALYSIS OF MUSIC COMPOSITIONS". Herald of Khmelnytskyi National University 305, no. 1 (February 23, 2022): 64–67. http://dx.doi.org/10.31891/2307-5732-2022-305-1-64-67.
Li, Naihan, Yanqing Liu, Yu Wu, Shujie Liu, Sheng Zhao, and Ming Liu. "RobuTrans: A Robust Transformer-Based Text-to-Speech Model". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 8228–35. http://dx.doi.org/10.1609/aaai.v34i05.6337.
Hryhorenko, N., N. Larionov, and V. Bredikhin. "RESEARCH OF THE PROCESS OF VISUAL ART TRANSMISSION IN MUSIC AND THE CREATION OF COLLECTIONS FOR PEOPLE WITH VISUAL IMPAIRMENTS". Municipal economy of cities 6, no. 180 (December 4, 2023): 2–6. http://dx.doi.org/10.33042/2522-1809-2023-6-180-2-6.
Andreu, Sergi, and Monica Villanueva Aylagas. "Neural Synthesis of Sound Effects Using Flow-Based Deep Generative Models". Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment 18, no. 1 (October 11, 2022): 2–9. http://dx.doi.org/10.1609/aiide.v18i1.21941.
Li, Naihan, Shujie Liu, Yanqing Liu, Sheng Zhao, and Ming Liu. "Neural Speech Synthesis with Transformer Network". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 6706–13. http://dx.doi.org/10.1609/aaai.v33i01.33016706.
Li, Yusen, Ying Shen, and Dongqing Wang. "DIFFBAS: An Advanced Binaural Audio Synthesis Model Focusing on Binaural Differences Recovery". Applied Sciences 14, no. 8 (April 17, 2024): 3385. http://dx.doi.org/10.3390/app14083385.
Roebel, Axel, and Frederik Bous. "Neural Vocoding for Singing and Speaking Voices with the Multi-Band Excited WaveNet". Information 13, no. 3 (February 23, 2022): 103. http://dx.doi.org/10.3390/info13030103.
García, Víctor, Inma Hernáez, and Eva Navas. "Evaluation of Tacotron Based Synthesizers for Spanish and Basque". Applied Sciences 12, no. 3 (February 7, 2022): 1686. http://dx.doi.org/10.3390/app12031686.
Prihasto, Bima, and Nur Fajri Azhar. "Evaluation of Recurrent Neural Network Based on Indonesian Speech Synthesis for Small Datasets". Advances in Science and Technology 104 (February 2021): 17–25. http://dx.doi.org/10.4028/www.scientific.net/ast.104.17.
Venkatesh, Satvik, David Moffat, and Eduardo Reck Miranda. "Investigating the Effects of Training Set Synthesis for Audio Segmentation of Radio Broadcast". Electronics 10, no. 7 (March 31, 2021): 827. http://dx.doi.org/10.3390/electronics10070827.
Tao Chen. "Music Tone Synthesis based Anti-Interference Dynamic Integral Neural Network optimized with Artificial Hummingbird Optimization Algorithm". Journal of Electrical Systems 20, no. 3s (April 4, 2024): 2665–76. http://dx.doi.org/10.52783/jes.3162.
Serebryanaya, L. V., and I. E. Lasy. "Automatic recognition and representation of text in the form of audio stream". Doklady BGUIR 19, no. 6 (October 1, 2021): 51–58. http://dx.doi.org/10.35596/1729-7648-2021-19-6-51-58.
Patnaik, W. Shivani. "Background Noise Suppression in Audio File using LSTM Network". International Journal for Research in Applied Science and Engineering Technology 10, no. 6 (June 30, 2022): 1310–16. http://dx.doi.org/10.22214/ijraset.2022.44109.
Mu, Jin. "Pose Estimation-Assisted Dance Tracking System Based on Convolutional Neural Network". Computational Intelligence and Neuroscience 2022 (June 3, 2022): 1–10. http://dx.doi.org/10.1155/2022/2301395.
Shejole, Prof Sakshi, Piyush Jaiswal, Neha Karmal, Vivek Patil, and Samnan Shaikh. "Autotuned Voice Cloning Enabling Multilingualism". International Journal for Research in Applied Science and Engineering Technology 11, no. 5 (May 31, 2023): 5945–49. http://dx.doi.org/10.22214/ijraset.2023.52906.
Rodríguez Fernández-Peña, Alfonso Carlos. "AI is great, isn’t it? Tone direction and illocutionary force delivery of tag questions in Amazon’s AI NTTS Polly". Journal of Experimental Phonetics 32 (November 28, 2023): 227–42. http://dx.doi.org/10.1344/efe-2023-32-227-242.
Modi, Rohan. "Transcript Anatomization with Multi-Linguistic and Speech Synthesis Features". International Journal for Research in Applied Science and Engineering Technology 9, no. VI (June 20, 2021): 1755–58. http://dx.doi.org/10.22214/ijraset.2021.35371.
Kazakova, M. A., and A. P. Sultanova. "Analysis of natural language processing technology: modern problems and approaches". Advanced Engineering Research 22, no. 2 (July 11, 2022): 169–76. http://dx.doi.org/10.23947/2687-1653-2022-22-2-169-176.
Mandeel, Ali Raheem, Mohammed Salah Al-Radhi, and Tamás Gábor Csapó. "Speaker Adaptation Experiments with Limited Data for End-to-End Text-To-Speech Synthesis using Tacotron2". Infocommunications journal 14, no. 3 (2022): 55–62. http://dx.doi.org/10.36244/icj.2022.3.7.
Thoidis, Iordanis, Lazaros Vrysis, Dimitrios Markou, and George Papanikolaou. "Temporal Auditory Coding Features for Causal Speech Enhancement". Electronics 9, no. 10 (October 16, 2020): 1698. http://dx.doi.org/10.3390/electronics9101698.
Vishwakama, Ramesh, Ramashish Yadav, Harsheet Sharma, and Dr Saurabh Suman. "Automated Leaf Disease Detection System with Machine Learning". International Journal for Research in Applied Science and Engineering Technology 12, no. 2 (February 29, 2024): 814–19. http://dx.doi.org/10.22214/ijraset.2024.58449.
Kane, Joseph, Michael N. Johnstone, and Patryk Szewczyk. "Voice Synthesis Improvement by Machine Learning of Natural Prosody". Sensors 24, no. 5 (March 1, 2024): 1624. http://dx.doi.org/10.3390/s24051624.
Ravikiran K, Neerav Nishant, M Sreedhar, N.Kavitha, Mathur N. Kathiravan, and Geetha A. "Deep learning methods and integrated digital image processing techniques for detecting and evaluating wheat stripe rust disease". Scientific Temper 14, no. 03 (September 30, 2023): 864–69. http://dx.doi.org/10.58414/scientifictemper.2023.14.3.47.
Gromov, N. V., and T. A. Levanova. "WaveNet vocoder for prediction of time series with extreme events". Genes & Cells 18, no. 4 (December 15, 2023): 847–49. http://dx.doi.org/10.17816/gc623433.
Hakim, Heba, and Ali Marhoon. "Indoor Low Cost Assistive Device using 2D SLAM Based on LiDAR for Visually Impaired People". Iraqi Journal for Electrical and Electronic Engineering 15, no. 2 (December 1, 2019): 115–21. http://dx.doi.org/10.37917/ijeee.15.2.12.
Bai, Jinqiang, Zhaoxiang Liu, Yimin Lin, Ye Li, Shiguo Lian, and Dijun Liu. "Wearable Travel Aid for Environment Perception and Navigation of Visually Impaired People". Electronics 8, no. 6 (June 20, 2019): 697. http://dx.doi.org/10.3390/electronics8060697.
Nicol, Rozenn, and Jean-Yves Monfort. "Acoustic research for telecoms: bridging the heritage to the future". Acta Acustica 7 (2023): 64. http://dx.doi.org/10.1051/aacus/2023056.
Yu, Junxiao, Zhengyuan Xu, Xu He, Jian Wang, Bin Liu, Rui Feng, Songsheng Zhu, Wei Wang, and Jianqing Li. "DIA-TTS: Deep-Inherited Attention-Based Text-to-Speech Synthesizer". Entropy 25, no. 1 (December 26, 2022): 41. http://dx.doi.org/10.3390/e25010041.
Wang, Tianmeng. "Research and Application Analysis of Correlative Optimization Algorithms for GAN". Highlights in Science, Engineering and Technology 57 (July 11, 2023): 141–47. http://dx.doi.org/10.54097/hset.v57i.9992.
Dorofeeva, S. V. "Neuroplasticity and the developmental dyslexia intervention". Genes & Cells 18, no. 4 (December 15, 2023): 706–9. http://dx.doi.org/10.17816/gc623418.
Hood, Graeme, Kieran Hand, Emma Cramp, Philip Howard, Susan Hopkins, and Diane Ashiru-Oredope. "Measuring Appropriate Antibiotic Prescribing in Acute Hospitals: Development of a National Audit Tool Through a Delphi Consensus". Antibiotics 8, no. 2 (April 29, 2019): 49. http://dx.doi.org/10.3390/antibiotics8020049.
Li, Wanting, Yiting Chen, and Buzhou Tang. "Improving Generative Adversarial Network based Vocoding Through Multi-Scale Convolution". ACM Transactions on Asian and Low-Resource Language Information Processing, August 16, 2023. http://dx.doi.org/10.1145/3610532.
Lluís, Francesc, Vasileios Chatziioannou, and Alex Hofmann. "Points2Sound: from mono to binaural audio using 3D point cloud scenes". EURASIP Journal on Audio, Speech, and Music Processing 2022, no. 1 (December 29, 2022). http://dx.doi.org/10.1186/s13636-022-00265-4.
Khanjani, Zahra, Gabrielle Watson, and Vandana P. Janeja. "Audio deepfakes: A survey". Frontiers in Big Data 5 (January 9, 2023). http://dx.doi.org/10.3389/fdata.2022.1001063.
Dyer, Mark. "Neural Synthesis as a Methodology for Art-Anthropology in Contemporary Music". Organised Sound, September 16, 2022, 1–8. http://dx.doi.org/10.1017/s1355771822000371.
Comanducci, Luca, Fabio Antonacci, and Augusto Sarti. "Synthesis of soundfields through irregular loudspeaker arrays based on convolutional neural networks". EURASIP Journal on Audio, Speech, and Music Processing 2024, no. 1 (March 28, 2024). http://dx.doi.org/10.1186/s13636-024-00337-7.
Patole, Prof Mrunalinee, Akhilesh Pandey, Kaustubh Bhagwat, Mukesh Vaishnav, and Salikram Chadar. "A Survey on “Text-to-Speech Systems for Real-Time Audio Synthesis”". International Journal of Advanced Research in Science, Communication and Technology, June 10, 2021, 375–79. http://dx.doi.org/10.48175/ijarsct-1400.
Angrick, Miguel, Maarten C. Ottenhoff, Lorenz Diener, Darius Ivucic, Gabriel Ivucic, Sophocles Goulis, Jeremy Saal, et al. "Real-time synthesis of imagined speech processes from minimally invasive recordings of neural activity". Communications Biology 4, no. 1 (September 23, 2021). http://dx.doi.org/10.1038/s42003-021-02578-0.
Zhang, Ni. "Informatization Integration Strategy of Modern Vocal Music Teaching and Traditional Music Culture in Colleges and Universities in the Era of Artificial Intelligence". Applied Mathematics and Nonlinear Sciences, December 2, 2023. http://dx.doi.org/10.2478/amns.2023.2.01333.
Hayes, Ben, Jordie Shier, György Fazekas, Andrew McPherson, and Charalampos Saitis. "A review of differentiable digital signal processing for music and speech synthesis". Frontiers in Signal Processing 3 (January 11, 2024). http://dx.doi.org/10.3389/frsip.2023.1284100.
Kohler, Jonas, Maarten C. Ottenhoff, Sophocles Goulis, Miguel Angrick, Albert J. Colon, Louis Wagner, Simon Tousseyn, Pieter L. Kubben, and Christian Herff. "Synthesizing Speech from Intracranial Depth Electrodes using an Encoder-Decoder Framework". Neurons, Behavior, Data analysis, and Theory, December 9, 2022. http://dx.doi.org/10.51628/001c.57524.
Simionato, Riccardo, Stefano Fasciani, and Sverre Holm. "Physics-informed differentiable method for piano modeling". Frontiers in Signal Processing 3 (February 13, 2024). http://dx.doi.org/10.3389/frsip.2023.1276748.
Кожирбаев, Ж. М. "ҚАЗАҚ ТІЛІ ҮШІН ИНТЕГРАЛДЫҚ (END-TO-END) СӨЙЛЕУ СИНТЕЗІ". BULLETIN Series Physical and Mathematical Sciences 79, no. 3(2022) (September 25, 2023). http://dx.doi.org/10.51889/9340.2022.21.68.023.
Alsaadawı, Hussein Farooq Tayeb, and Resul Daş. "Multimodal Emotion Recognition Using Bi-LG-GCN for MELD Dataset". Balkan Journal of Electrical and Computer Engineering, October 16, 2023. http://dx.doi.org/10.17694/bajece.1372107.
Mithoowani, Siraj, Andrew Mulloy, Augustin Toma, and Ameen Patel. "To err is human: A case-based review of cognitive bias and its role in clinical decision making". Canadian Journal of General Internal Medicine 12, no. 2 (August 30, 2017). http://dx.doi.org/10.22374/cjgim.v12i2.166.