Academic literature on the topic "Zero-Shot Style Transfer"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the topical lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Zero-Shot Style Transfer".
Next to each source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.
Journal articles on the topic "Zero-Shot Style Transfer"
Zhang, Yu, Rongjie Huang, Ruiqi Li, JinZheng He, Yan Xia, Feiyang Chen, Xinyu Duan, Baoxing Huai, and Zhou Zhao. "StyleSinger: Style Transfer for Out-of-Domain Singing Voice Synthesis". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 17 (March 24, 2024): 19597–605. http://dx.doi.org/10.1609/aaai.v38i17.29932.
Xi, Jier, Xiufen Ye, and Chuanlong Li. "Sonar Image Target Detection Based on Style Transfer Learning and Random Shape of Noise under Zero Shot Target". Remote Sensing 14, no. 24 (December 10, 2022): 6260. http://dx.doi.org/10.3390/rs14246260.
Wang, Wenjing, Jizheng Xu, Li Zhang, Yue Wang, and Jiaying Liu. "Consistent Video Style Transfer via Compound Regularization". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 12233–40. http://dx.doi.org/10.1609/aaai.v34i07.6905.
Park, Jangkyoung, Ammar Ul Hassan, and Jaeyoung Choi. "CCFont: Component-Based Chinese Font Generation Model Using Generative Adversarial Networks (GANs)". Applied Sciences 12, no. 16 (August 10, 2022): 8005. http://dx.doi.org/10.3390/app12168005.
Azizah, Kurniawati, and Wisnu Jatmiko. "Transfer Learning, Style Control, and Speaker Reconstruction Loss for Zero-Shot Multilingual Multi-Speaker Text-to-Speech on Low-Resource Languages". IEEE Access 10 (2022): 5895–911. http://dx.doi.org/10.1109/access.2022.3141200.
Bai, Zhongyu, Hongli Xu, Qichuan Ding, and Xiangyue Zhang. "Side-Scan Sonar Image Classification with Zero-Shot and Style Transfer". IEEE Transactions on Instrumentation and Measurement, 2024, 1. http://dx.doi.org/10.1109/tim.2024.3352693.
Fares, Mireille, Catherine Pelachaud, and Nicolas Obin. "Zero-shot style transfer for gesture animation driven by text and speech using adversarial disentanglement of multimodal style encoding". Frontiers in Artificial Intelligence 6 (June 12, 2023). http://dx.doi.org/10.3389/frai.2023.1142997.
Xu, Hongli, Zhongyu Bai, Xiangyue Zhang, and Qichuan Ding. "MFSANet: Zero-Shot Side-Scan Sonar Image Recognition Based on Style Transfer". IEEE Geoscience and Remote Sensing Letters, 2023, 1. http://dx.doi.org/10.1109/lgrs.2023.3318051.
Zhang, Qing, Jing Zhang, Xiangdong Su, Feilong Bao, and Guanglai Gao. "Contour detection network for zero-shot sketch-based image retrieval". Complex & Intelligent Systems, June 2, 2023. http://dx.doi.org/10.1007/s40747-023-01096-2.
Theses on the topic "Zero-Shot Style Transfer"
Lakew, Surafel Melaku. "Multilingual Neural Machine Translation for Low Resource Languages". Doctoral thesis, Università degli studi di Trento, 2020. http://hdl.handle.net/11572/257906.
Fares, Mireille. "Multimodal Expressive Gesturing With Style". Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS017.
The generation of expressive gestures allows Embodied Conversational Agents (ECA) to articulate the speech intent and content in a human-like fashion. The central theme of the manuscript is to leverage and control the ECAs' behavioral expressivity by modelling the complex multimodal behavior that humans employ during communication. The driving forces of the thesis are twofold: (1) to exploit speech prosody, visual prosody, and language with the aim of synthesizing expressive and human-like behaviors for ECAs; (2) to control the style of the synthesized gestures so that they can be generated in the style of any speaker. With these motivations in mind, we first propose a semantically aware and speech-driven facial and head gesture synthesis model trained on the TEDx Corpus, which we collected. We then propose ZS-MSTM 1.0, an approach to synthesize stylized upper-body gestures driven by the content of a source speaker's speech and corresponding to the style of any target speaker, seen or unseen by our model. It is trained on the PATS Corpus, which includes multimodal data of speakers with different behavioral styles. ZS-MSTM 1.0 is not limited to PATS speakers and can generate gestures in the style of any new speaker without further training or fine-tuning, rendering our approach zero-shot. Behavioral style is modelled from multimodal speaker data (language, body gestures, and speech) independently of the speaker's identity ("ID"). We additionally propose ZS-MSTM 2.0 to generate stylized facial gestures in addition to the upper-body gestures. We train ZS-MSTM 2.0 on the PATS Corpus, which we extended to include dialog acts and 2D facial landmarks.
Book chapters on the topic "Zero-Shot Style Transfer"
Huang, Yaoxiong, Mengchao He, Lianwen Jin, and Yongpan Wang. "RD-GAN: Few/Zero-Shot Chinese Character Style Transfer via Radical Decomposition and Rendering". In Computer Vision – ECCV 2020, 156–72. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58539-6_10.
Conference proceedings on the topic "Zero-Shot Style Transfer"
Lee, Sang-Hoon, Ha-Yeong Choi, Hyung-Seok Oh, and Seong-Whan Lee. "HierVST: Hierarchical Adaptive Zero-shot Voice Style Transfer". In INTERSPEECH 2023. ISCA, 2023. http://dx.doi.org/10.21437/interspeech.2023-1608.
Tang, Hao, Songhua Liu, Tianwei Lin, Shaoli Huang, Fu Li, Dongliang He, and Xinchao Wang. "Master: Meta Style Transformer for Controllable Zero-Shot and Few-Shot Artistic Style Transfer". In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2023. http://dx.doi.org/10.1109/cvpr52729.2023.01758.
Izumi, Kota, and Keiji Yanai. "Zero-Shot Font Style Transfer with a Differentiable Renderer". In MMAsia '22: ACM Multimedia Asia. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3551626.3564961.
Sun, Haochen, Lei Wu, Xiang Li, and Xiangxu Meng. "Style-woven Attention Network for Zero-shot Ink Wash Painting Style Transfer". In ICMR '22: International Conference on Multimedia Retrieval. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3512527.3531391.
Liu, Kunhao, Fangneng Zhan, Yiwen Chen, Jiahui Zhang, Yingchen Yu, Abdulmotaleb El Saddik, Shijian Lu, and Eric Xing. "StyleRF: Zero-Shot 3D Style Transfer of Neural Radiance Fields". In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2023. http://dx.doi.org/10.1109/cvpr52729.2023.00806.
Fares, Mireille, Catherine Pelachaud, and Nicolas Obin. "Zero-Shot Style Transfer for Multimodal Data-Driven Gesture Synthesis". In 2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG). IEEE, 2023. http://dx.doi.org/10.1109/fg57933.2023.10042658.
Sheng, Lu, Ziyi Lin, Jing Shao, and Xiaogang Wang. "Avatar-Net: Multi-scale Zero-Shot Style Transfer by Feature Decoration". In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2018. http://dx.doi.org/10.1109/cvpr.2018.00860.
Yang, Serin, Hyunmin Hwang, and Jong Chul Ye. "Zero-Shot Contrastive Loss for Text-Guided Diffusion Image Style Transfer". In 2023 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2023. http://dx.doi.org/10.1109/iccv51070.2023.02091.
Song, Kun, Yi Ren, Yi Lei, Chunfeng Wang, Kun Wei, Lei Xie, Xiang Yin, and Zejun Ma. "StyleS2ST: Zero-shot Style Transfer for Direct Speech-to-speech Translation". In INTERSPEECH 2023. ISCA, 2023. http://dx.doi.org/10.21437/interspeech.2023-648.
Chen, Liyang, Zhiyong Wu, Runnan Li, Weihong Bao, Jun Ling, Xu Tan, and Sheng Zhao. "VAST: Vivify Your Talking Avatar via Zero-Shot Expressive Facial Style Transfer". In 2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW). IEEE, 2023. http://dx.doi.org/10.1109/iccvw60793.2023.00320.