Academic literature on the topic "Transfert de style zero-shot"
Journal articles on the topic "Transfert de style zero-shot"
Zhang, Yu, Rongjie Huang, Ruiqi Li, JinZheng He, Yan Xia, Feiyang Chen, Xinyu Duan, Baoxing Huai, and Zhou Zhao. "StyleSinger: Style Transfer for Out-of-Domain Singing Voice Synthesis". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 17 (March 24, 2024): 19597–605. http://dx.doi.org/10.1609/aaai.v38i17.29932.
Xi, Jier, Xiufen Ye, and Chuanlong Li. "Sonar Image Target Detection Based on Style Transfer Learning and Random Shape of Noise under Zero Shot Target". Remote Sensing 14, no. 24 (December 10, 2022): 6260. http://dx.doi.org/10.3390/rs14246260.
Wang, Wenjing, Jizheng Xu, Li Zhang, Yue Wang, and Jiaying Liu. "Consistent Video Style Transfer via Compound Regularization". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 12233–40. http://dx.doi.org/10.1609/aaai.v34i07.6905.
Park, Jangkyoung, Ammar Ul Hassan, and Jaeyoung Choi. "CCFont: Component-Based Chinese Font Generation Model Using Generative Adversarial Networks (GANs)". Applied Sciences 12, no. 16 (August 10, 2022): 8005. http://dx.doi.org/10.3390/app12168005.
Azizah, Kurniawati, and Wisnu Jatmiko. "Transfer Learning, Style Control, and Speaker Reconstruction Loss for Zero-Shot Multilingual Multi-Speaker Text-to-Speech on Low-Resource Languages". IEEE Access 10 (2022): 5895–911. http://dx.doi.org/10.1109/access.2022.3141200.
Yang, Zhenhua, Dezhi Peng, Yuxin Kong, Yuyi Zhang, Cong Yao, and Lianwen Jin. "FontDiffuser: One-Shot Font Generation via Denoising Diffusion with Multi-Scale Content Aggregation and Style Contrastive Learning". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 7 (March 24, 2024): 6603–11. http://dx.doi.org/10.1609/aaai.v38i7.28482.
Cheng, Jikang, Zhen Han, Zhongyuan Wang, and Liang Chen. "“One-Shot” Super-Resolution via Backward Style Transfer for Fast High-Resolution Style Transfer". IEEE Signal Processing Letters 28 (2021): 1485–89. http://dx.doi.org/10.1109/lsp.2021.3098230.
Yu, Yong. "Few Shot POP Chinese Font Style Transfer using CycleGAN". Journal of Physics: Conference Series 2171, no. 1 (January 1, 2022): 012031. http://dx.doi.org/10.1088/1742-6596/2171/1/012031.
Zhu, Anna, Xiongbo Lu, Xiang Bai, Seiichi Uchida, Brian Kenji Iwana, and Shengwu Xiong. "Few-Shot Text Style Transfer via Deep Feature Similarity". IEEE Transactions on Image Processing 29 (2020): 6932–46. http://dx.doi.org/10.1109/tip.2020.2995062.
Feng, Wancheng, Yingchao Liu, Jiaming Pei, Wenxuan Liu, Chunpeng Tian, and Lukun Wang. "Local Consistency Guidance: Personalized Stylization Method of Face Video (Student Abstract)". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 21 (March 24, 2024): 23486–87. http://dx.doi.org/10.1609/aaai.v38i21.30440.
Theses on the topic "Transfert de style zero-shot"
Fares, Mireille. "Multimodal Expressive Gesturing With Style". Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS017.
The generation of expressive gestures allows Embodied Conversational Agents (ECAs) to articulate speech intent and content in a human-like fashion. The central theme of the manuscript is to leverage and control the ECAs' behavioral expressivity by modelling the complex multimodal behavior that humans employ during communication. The driving forces of the thesis are twofold: (1) to exploit speech prosody, visual prosody, and language with the aim of synthesizing expressive and human-like behaviors for ECAs; (2) to control the style of the synthesized gestures so that they can be generated in the style of any speaker. With these motivations in mind, we first propose a semantically aware, speech-driven facial and head gesture synthesis model trained on the TEDx Corpus, which we collected. We then propose ZS-MSTM 1.0, an approach to synthesize stylized upper-body gestures driven by the content of a source speaker's speech and corresponding to the style of any target speaker, seen or unseen by our model. It is trained on the PATS Corpus, which includes multimodal data of speakers with different behavioral styles. ZS-MSTM 1.0 is not limited to PATS speakers and can generate gestures in the style of any new speaker without further training or fine-tuning, rendering our approach zero-shot. Behavioral style is modelled from multimodal speaker data - language, body gestures, and speech - and is independent of the speaker's identity ("ID"). We additionally propose ZS-MSTM 2.0 to generate stylized facial gestures in addition to the upper-body gestures. We train ZS-MSTM 2.0 on the PATS Corpus, which we extended to include dialog acts and 2D facial landmarks.
Lakew, Surafel Melaku. "Multilingual Neural Machine Translation for Low Resource Languages". Doctoral thesis, Università degli studi di Trento, 2020. http://hdl.handle.net/11572/257906.
Book chapters on the topic "Transfert de style zero-shot"
Huang, Yaoxiong, Mengchao He, Lianwen Jin, and Yongpan Wang. "RD-GAN: Few/Zero-Shot Chinese Character Style Transfer via Radical Decomposition and Rendering". In Computer Vision – ECCV 2020, 156–72. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58539-6_10.
Xu, Ruiqi, Yongfeng Huang, Xin Chen, and Lin Zhang. "Specializing Small Language Models Towards Complex Style Transfer via Latent Attribute Pre-Training". In Frontiers in Artificial Intelligence and Applications. IOS Press, 2023. http://dx.doi.org/10.3233/faia230591.
Esmaeili Shayan, Mostafa. "Solar Energy and Its Purpose in Net-Zero Energy Building". In Zero-Energy Buildings - New Approaches and Technologies. IntechOpen, 2020. http://dx.doi.org/10.5772/intechopen.93500.
Das, Pranjit, P. S. Ramapraba, K. Seethalakshmi, M. Anitha Mary, S. Karthick, and Boopathi Sampath. "Sustainable Advanced Techniques for Enhancing the Image Process". In Fostering Cross-Industry Sustainability With Intelligent Technologies, 350–74. IGI Global, 2023. http://dx.doi.org/10.4018/979-8-3693-1638-2.ch022.
Conference papers on the topic "Transfert de style zero-shot"
Tang, Hao, Songhua Liu, Tianwei Lin, Shaoli Huang, Fu Li, Dongliang He, and Xinchao Wang. "Master: Meta Style Transformer for Controllable Zero-Shot and Few-Shot Artistic Style Transfer". In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2023. http://dx.doi.org/10.1109/cvpr52729.2023.01758.
Lee, Sang-Hoon, Ha-Yeong Choi, Hyung-Seok Oh, and Seong-Whan Lee. "HierVST: Hierarchical Adaptive Zero-shot Voice Style Transfer". In INTERSPEECH 2023. ISCA, 2023. http://dx.doi.org/10.21437/interspeech.2023-1608.
Sun, Haochen, Lei Wu, Xiang Li, and Xiangxu Meng. "Style-woven Attention Network for Zero-shot Ink Wash Painting Style Transfer". In ICMR '22: International Conference on Multimedia Retrieval. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3512527.3531391.
Izumi, Kota, and Keiji Yanai. "Zero-Shot Font Style Transfer with a Differentiable Renderer". In MMAsia '22: ACM Multimedia Asia. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3551626.3564961.
Liu, Kunhao, Fangneng Zhan, Yiwen Chen, Jiahui Zhang, Yingchen Yu, Abdulmotaleb El Saddik, Shijian Lu, and Eric Xing. "StyleRF: Zero-Shot 3D Style Transfer of Neural Radiance Fields". In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2023. http://dx.doi.org/10.1109/cvpr52729.2023.00806.
Fares, Mireille, Catherine Pelachaud, and Nicolas Obin. "Zero-Shot Style Transfer for Multimodal Data-Driven Gesture Synthesis". In 2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG). IEEE, 2023. http://dx.doi.org/10.1109/fg57933.2023.10042658.
Sheng, Lu, Ziyi Lin, Jing Shao, and Xiaogang Wang. "Avatar-Net: Multi-scale Zero-Shot Style Transfer by Feature Decoration". In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2018. http://dx.doi.org/10.1109/cvpr.2018.00860.
Yang, Serin, Hyunmin Hwang, and Jong Chul Ye. "Zero-Shot Contrastive Loss for Text-Guided Diffusion Image Style Transfer". In 2023 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2023. http://dx.doi.org/10.1109/iccv51070.2023.02091.
Song, Kun, Yi Ren, Yi Lei, Chunfeng Wang, Kun Wei, Lei Xie, Xiang Yin, and Zejun Ma. "StyleS2ST: Zero-shot Style Transfer for Direct Speech-to-speech Translation". In INTERSPEECH 2023. ISCA, 2023. http://dx.doi.org/10.21437/interspeech.2023-648.
Chen, Liyang, Zhiyong Wu, Runnan Li, Weihong Bao, Jun Ling, Xu Tan, and Sheng Zhao. "VAST: Vivify Your Talking Avatar via Zero-Shot Expressive Facial Style Transfer". In 2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW). IEEE, 2023. http://dx.doi.org/10.1109/iccvw60793.2023.00320.