Journal articles on the topic "Neural audio synthesis"
Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 journal articles for research on the topic "Neural audio synthesis".
Next to each source in the list of references, there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online, if it is available in the metadata.
Browse journal articles from a wide range of scientific fields and compile an accurate bibliography.
Li, Dongze, Kang Zhao, Wei Wang, Bo Peng, Yingya Zhang, Jing Dong, and Tieniu Tan. "AE-NeRF: Audio Enhanced Neural Radiance Field for Few Shot Talking Head Synthesis". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 4 (March 24, 2024): 3037–45. http://dx.doi.org/10.1609/aaai.v38i4.28086.
Vyawahare, Prof D. G. "Image to Audio Conversion for Blind People Using Neural Network". International Journal for Research in Applied Science and Engineering Technology 11, no. 12 (December 31, 2023): 1949–57. http://dx.doi.org/10.22214/ijraset.2023.57712.
Kiefer, Chris. "Sample-level sound synthesis with recurrent neural networks and conceptors". PeerJ Computer Science 5 (July 8, 2019): e205. http://dx.doi.org/10.7717/peerj-cs.205.
Liu, Yunyi, and Craig Jin. "Impact on quality and diversity from integrating a reconstruction loss into neural audio synthesis". Journal of the Acoustical Society of America 154, no. 4_supplement (October 1, 2023): A99. http://dx.doi.org/10.1121/10.0022922.
Khandelwal, Karan, Krishiv Pandita, Kshitij Priyankar, Kumar Parakram, and Tejaswini K. "Svara Rachana - Audio Driven Facial Expression Synthesis". International Journal for Research in Applied Science and Engineering Technology 12, no. 5 (May 31, 2024): 2024–29. http://dx.doi.org/10.22214/ijraset.2024.62019.
VOITKO, Viktoriia, Svitlana BEVZ, Sergii BURBELO, and Pavlo STAVYTSKYI. "AUDIO GENERATION TECHNOLOGY OF A SYSTEM OF SYNTHESIS AND ANALYSIS OF MUSIC COMPOSITIONS". Herald of Khmelnytskyi National University 305, no. 1 (February 23, 2022): 64–67. http://dx.doi.org/10.31891/2307-5732-2022-305-1-64-67.
Li, Naihan, Yanqing Liu, Yu Wu, Shujie Liu, Sheng Zhao, and Ming Liu. "RobuTrans: A Robust Transformer-Based Text-to-Speech Model". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 8228–35. http://dx.doi.org/10.1609/aaai.v34i05.6337.
Hryhorenko, N., N. Larionov, and V. Bredikhin. "RESEARCH OF THE PROCESS OF VISUAL ART TRANSMISSION IN MUSIC AND THE CREATION OF COLLECTIONS FOR PEOPLE WITH VISUAL IMPAIRMENTS". Municipal economy of cities 6, no. 180 (December 4, 2023): 2–6. http://dx.doi.org/10.33042/2522-1809-2023-6-180-2-6.
Andreu, Sergi, and Monica Villanueva Aylagas. "Neural Synthesis of Sound Effects Using Flow-Based Deep Generative Models". Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment 18, no. 1 (October 11, 2022): 2–9. http://dx.doi.org/10.1609/aiide.v18i1.21941.
Li, Naihan, Shujie Liu, Yanqing Liu, Sheng Zhao, and Ming Liu. "Neural Speech Synthesis with Transformer Network". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 6706–13. http://dx.doi.org/10.1609/aaai.v33i01.33016706.
Li, Yusen, Ying Shen, and Dongqing Wang. "DIFFBAS: An Advanced Binaural Audio Synthesis Model Focusing on Binaural Differences Recovery". Applied Sciences 14, no. 8 (April 17, 2024): 3385. http://dx.doi.org/10.3390/app14083385.
Roebel, Axel, and Frederik Bous. "Neural Vocoding for Singing and Speaking Voices with the Multi-Band Excited WaveNet". Information 13, no. 3 (February 23, 2022): 103. http://dx.doi.org/10.3390/info13030103.
García, Víctor, Inma Hernáez, and Eva Navas. "Evaluation of Tacotron Based Synthesizers for Spanish and Basque". Applied Sciences 12, no. 3 (February 7, 2022): 1686. http://dx.doi.org/10.3390/app12031686.
Prihasto, Bima, and Nur Fajri Azhar. "Evaluation of Recurrent Neural Network Based on Indonesian Speech Synthesis for Small Datasets". Advances in Science and Technology 104 (February 2021): 17–25. http://dx.doi.org/10.4028/www.scientific.net/ast.104.17.
Venkatesh, Satvik, David Moffat, and Eduardo Reck Miranda. "Investigating the Effects of Training Set Synthesis for Audio Segmentation of Radio Broadcast". Electronics 10, no. 7 (March 31, 2021): 827. http://dx.doi.org/10.3390/electronics10070827.
Tao Chen. "Music Tone Synthesis based Anti-Interference Dynamic Integral Neural Network optimized with Artificial Hummingbird Optimization Algorithm". Journal of Electrical Systems 20, no. 3s (April 4, 2024): 2665–76. http://dx.doi.org/10.52783/jes.3162.
Serebryanaya, L. V., and I. E. Lasy. "Automatic recognition and representation of text in the form of audio stream". Doklady BGUIR 19, no. 6 (October 1, 2021): 51–58. http://dx.doi.org/10.35596/1729-7648-2021-19-6-51-58.
Patnaik, W. Shivani. "Background Noise Suppression in Audio File using LSTM Network". International Journal for Research in Applied Science and Engineering Technology 10, no. 6 (June 30, 2022): 1310–16. http://dx.doi.org/10.22214/ijraset.2022.44109.
Mu, Jin. "Pose Estimation-Assisted Dance Tracking System Based on Convolutional Neural Network". Computational Intelligence and Neuroscience 2022 (June 3, 2022): 1–10. http://dx.doi.org/10.1155/2022/2301395.
Shejole, Prof Sakshi, Piyush Jaiswal, Neha Karmal, Vivek Patil, and Samnan Shaikh. "Autotuned Voice Cloning Enabling Multilingualism". International Journal for Research in Applied Science and Engineering Technology 11, no. 5 (May 31, 2023): 5945–49. http://dx.doi.org/10.22214/ijraset.2023.52906.
Rodríguez Fernández-Peña, Alfonso Carlos. "AI is great, isn’t it? Tone direction and illocutionary force delivery of tag questions in Amazon’s AI NTTS Polly". Journal of Experimental Phonetics 32 (November 28, 2023): 227–42. http://dx.doi.org/10.1344/efe-2023-32-227-242.
Modi, Rohan. "Transcript Anatomization with Multi-Linguistic and Speech Synthesis Features". International Journal for Research in Applied Science and Engineering Technology 9, no. VI (June 20, 2021): 1755–58. http://dx.doi.org/10.22214/ijraset.2021.35371.
Kazakova, M. A., and A. P. Sultanova. "Analysis of natural language processing technology: modern problems and approaches". Advanced Engineering Research 22, no. 2 (July 11, 2022): 169–76. http://dx.doi.org/10.23947/2687-1653-2022-22-2-169-176.
Mandeel, Ali Raheem, Mohammed Salah Al-Radhi, and Tamás Gábor Csapó. "Speaker Adaptation Experiments with Limited Data for End-to-End Text-To-Speech Synthesis using Tacotron2". Infocommunications journal 14, no. 3 (2022): 55–62. http://dx.doi.org/10.36244/icj.2022.3.7.
Thoidis, Iordanis, Lazaros Vrysis, Dimitrios Markou, and George Papanikolaou. "Temporal Auditory Coding Features for Causal Speech Enhancement". Electronics 9, no. 10 (October 16, 2020): 1698. http://dx.doi.org/10.3390/electronics9101698.
Vishwakama, Ramesh, Ramashish Yadav, Harsheet Sharma, and Dr Saurabh Suman. "Automated Leaf Disease Detection System with Machine Learning". International Journal for Research in Applied Science and Engineering Technology 12, no. 2 (February 29, 2024): 814–19. http://dx.doi.org/10.22214/ijraset.2024.58449.
Kane, Joseph, Michael N. Johnstone, and Patryk Szewczyk. "Voice Synthesis Improvement by Machine Learning of Natural Prosody". Sensors 24, no. 5 (March 1, 2024): 1624. http://dx.doi.org/10.3390/s24051624.
Ravikiran K, Neerav Nishant, M Sreedhar, N. Kavitha, Mathur N. Kathiravan, and Geetha A. "Deep learning methods and integrated digital image processing techniques for detecting and evaluating wheat stripe rust disease". Scientific Temper 14, no. 03 (September 30, 2023): 864–69. http://dx.doi.org/10.58414/scientifictemper.2023.14.3.47.
Gromov, N. V., and T. A. Levanova. "WaveNet vocoder for prediction of time series with extreme events". Genes & Cells 18, no. 4 (December 15, 2023): 847–49. http://dx.doi.org/10.17816/gc623433.
Hakim, Heba, and Ali Marhoon. "Indoor Low Cost Assistive Device using 2D SLAM Based on LiDAR for Visually Impaired People". Iraqi Journal for Electrical and Electronic Engineering 15, no. 2 (December 1, 2019): 115–21. http://dx.doi.org/10.37917/ijeee.15.2.12.
Bai, Jinqiang, Zhaoxiang Liu, Yimin Lin, Ye Li, Shiguo Lian, and Dijun Liu. "Wearable Travel Aid for Environment Perception and Navigation of Visually Impaired People". Electronics 8, no. 6 (June 20, 2019): 697. http://dx.doi.org/10.3390/electronics8060697.
Nicol, Rozenn, and Jean-Yves Monfort. "Acoustic research for telecoms: bridging the heritage to the future". Acta Acustica 7 (2023): 64. http://dx.doi.org/10.1051/aacus/2023056.
Yu, Junxiao, Zhengyuan Xu, Xu He, Jian Wang, Bin Liu, Rui Feng, Songsheng Zhu, Wei Wang, and Jianqing Li. "DIA-TTS: Deep-Inherited Attention-Based Text-to-Speech Synthesizer". Entropy 25, no. 1 (December 26, 2022): 41. http://dx.doi.org/10.3390/e25010041.
Wang, Tianmeng. "Research and Application Analysis of Correlative Optimization Algorithms for GAN". Highlights in Science, Engineering and Technology 57 (July 11, 2023): 141–47. http://dx.doi.org/10.54097/hset.v57i.9992.
Dorofeeva, S. V. "Neuroplasticity and the developmental dyslexia intervention". Genes & Cells 18, no. 4 (December 15, 2023): 706–9. http://dx.doi.org/10.17816/gc623418.
Hood, Graeme, Kieran Hand, Emma Cramp, Philip Howard, Susan Hopkins, and Diane Ashiru-Oredope. "Measuring Appropriate Antibiotic Prescribing in Acute Hospitals: Development of a National Audit Tool Through a Delphi Consensus". Antibiotics 8, no. 2 (April 29, 2019): 49. http://dx.doi.org/10.3390/antibiotics8020049.
Li, Wanting, Yiting Chen, and Buzhou Tang. "Improving Generative Adversarial Network based Vocoding Through Multi-Scale Convolution". ACM Transactions on Asian and Low-Resource Language Information Processing, August 16, 2023. http://dx.doi.org/10.1145/3610532.
Lluís, Francesc, Vasileios Chatziioannou, and Alex Hofmann. "Points2Sound: from mono to binaural audio using 3D point cloud scenes". EURASIP Journal on Audio, Speech, and Music Processing 2022, no. 1 (December 29, 2022). http://dx.doi.org/10.1186/s13636-022-00265-4.
Khanjani, Zahra, Gabrielle Watson, and Vandana P. Janeja. "Audio deepfakes: A survey". Frontiers in Big Data 5 (January 9, 2023). http://dx.doi.org/10.3389/fdata.2022.1001063.
Dyer, Mark. "Neural Synthesis as a Methodology for Art-Anthropology in Contemporary Music". Organised Sound, September 16, 2022, 1–8. http://dx.doi.org/10.1017/s1355771822000371.
Comanducci, Luca, Fabio Antonacci, and Augusto Sarti. "Synthesis of soundfields through irregular loudspeaker arrays based on convolutional neural networks". EURASIP Journal on Audio, Speech, and Music Processing 2024, no. 1 (March 28, 2024). http://dx.doi.org/10.1186/s13636-024-00337-7.
Patole, Prof Mrunalinee, Akhilesh Pandey, Kaustubh Bhagwat, Mukesh Vaishnav, and Salikram Chadar. "A Survey on “Text-to-Speech Systems for Real-Time Audio Synthesis”". International Journal of Advanced Research in Science, Communication and Technology, June 10, 2021, 375–79. http://dx.doi.org/10.48175/ijarsct-1400.
Angrick, Miguel, Maarten C. Ottenhoff, Lorenz Diener, Darius Ivucic, Gabriel Ivucic, Sophocles Goulis, Jeremy Saal et al. "Real-time synthesis of imagined speech processes from minimally invasive recordings of neural activity". Communications Biology 4, no. 1 (September 23, 2021). http://dx.doi.org/10.1038/s42003-021-02578-0.
Zhang, Ni. "Informatization Integration Strategy of Modern Vocal Music Teaching and Traditional Music Culture in Colleges and Universities in the Era of Artificial Intelligence". Applied Mathematics and Nonlinear Sciences, December 2, 2023. http://dx.doi.org/10.2478/amns.2023.2.01333.
Hayes, Ben, Jordie Shier, György Fazekas, Andrew McPherson, and Charalampos Saitis. "A review of differentiable digital signal processing for music and speech synthesis". Frontiers in Signal Processing 3 (January 11, 2024). http://dx.doi.org/10.3389/frsip.2023.1284100.
Kohler, Jonas, Maarten C. Ottenhoff, Sophocles Goulis, Miguel Angrick, Albert J. Colon, Louis Wagner, Simon Tousseyn, Pieter L. Kubben, and Christian Herff. "Synthesizing Speech from Intracranial Depth Electrodes using an Encoder-Decoder Framework". Neurons, Behavior, Data analysis, and Theory, December 9, 2022. http://dx.doi.org/10.51628/001c.57524.
Simionato, Riccardo, Stefano Fasciani, and Sverre Holm. "Physics-informed differentiable method for piano modeling". Frontiers in Signal Processing 3 (February 13, 2024). http://dx.doi.org/10.3389/frsip.2023.1276748.
Кожирбаев, Ж. М. "ҚАЗАҚ ТІЛІ ҮШІН ИНТЕГРАЛДЫҚ (END-TO-END) СӨЙЛЕУ СИНТЕЗІ". BULLETIN Series Physical and Mathematical Sciences 79, no. 3(2022) (September 25, 2023). http://dx.doi.org/10.51889/9340.2022.21.68.023.
Alsaadawı, Hussein Farooq Tayeb, and Resul Daş. "Multimodal Emotion Recognition Using Bi-LG-GCN for MELD Dataset". Balkan Journal of Electrical and Computer Engineering, October 16, 2023. http://dx.doi.org/10.17694/bajece.1372107.
Mithoowani, Siraj, Andrew Mulloy, Augustin Toma, and Ameen Patel. "To err is human: A case-based review of cognitive bias and its role in clinical decision making". Canadian Journal of General Internal Medicine 12, no. 2 (August 30, 2017). http://dx.doi.org/10.22374/cjgim.v12i2.166.