Academic literature on the topic 'Transfert de style zero-shot'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Transfert de style zero-shot.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Transfert de style zero-shot"
Zhang, Yu, Rongjie Huang, Ruiqi Li, JinZheng He, Yan Xia, Feiyang Chen, Xinyu Duan, Baoxing Huai, and Zhou Zhao. "StyleSinger: Style Transfer for Out-of-Domain Singing Voice Synthesis." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 17 (March 24, 2024): 19597–605. http://dx.doi.org/10.1609/aaai.v38i17.29932.
Xi, Jier, Xiufen Ye, and Chuanlong Li. "Sonar Image Target Detection Based on Style Transfer Learning and Random Shape of Noise under Zero Shot Target." Remote Sensing 14, no. 24 (December 10, 2022): 6260. http://dx.doi.org/10.3390/rs14246260.
Wang, Wenjing, Jizheng Xu, Li Zhang, Yue Wang, and Jiaying Liu. "Consistent Video Style Transfer via Compound Regularization." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 12233–40. http://dx.doi.org/10.1609/aaai.v34i07.6905.
Park, Jangkyoung, Ammar Ul Hassan, and Jaeyoung Choi. "CCFont: Component-Based Chinese Font Generation Model Using Generative Adversarial Networks (GANs)." Applied Sciences 12, no. 16 (August 10, 2022): 8005. http://dx.doi.org/10.3390/app12168005.
Azizah, Kurniawati, and Wisnu Jatmiko. "Transfer Learning, Style Control, and Speaker Reconstruction Loss for Zero-Shot Multilingual Multi-Speaker Text-to-Speech on Low-Resource Languages." IEEE Access 10 (2022): 5895–911. http://dx.doi.org/10.1109/access.2022.3141200.
Yang, Zhenhua, Dezhi Peng, Yuxin Kong, Yuyi Zhang, Cong Yao, and Lianwen Jin. "FontDiffuser: One-Shot Font Generation via Denoising Diffusion with Multi-Scale Content Aggregation and Style Contrastive Learning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 7 (March 24, 2024): 6603–11. http://dx.doi.org/10.1609/aaai.v38i7.28482.
Cheng, Jikang, Zhen Han, Zhongyuan Wang, and Liang Chen. "“One-Shot” Super-Resolution via Backward Style Transfer for Fast High-Resolution Style Transfer." IEEE Signal Processing Letters 28 (2021): 1485–89. http://dx.doi.org/10.1109/lsp.2021.3098230.
Yu, Yong. "Few Shot POP Chinese Font Style Transfer using CycleGAN." Journal of Physics: Conference Series 2171, no. 1 (January 1, 2022): 012031. http://dx.doi.org/10.1088/1742-6596/2171/1/012031.
Zhu, Anna, Xiongbo Lu, Xiang Bai, Seiichi Uchida, Brian Kenji Iwana, and Shengwu Xiong. "Few-Shot Text Style Transfer via Deep Feature Similarity." IEEE Transactions on Image Processing 29 (2020): 6932–46. http://dx.doi.org/10.1109/tip.2020.2995062.
Feng, Wancheng, Yingchao Liu, Jiaming Pei, Wenxuan Liu, Chunpeng Tian, and Lukun Wang. "Local Consistency Guidance: Personalized Stylization Method of Face Video (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 21 (March 24, 2024): 23486–87. http://dx.doi.org/10.1609/aaai.v38i21.30440.
Dissertations / Theses on the topic "Transfert de style zero-shot"
Fares, Mireille. "Multimodal Expressive Gesturing With Style." Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS017.
The generation of expressive gestures allows Embodied Conversational Agents (ECAs) to articulate speech intent and content in a human-like fashion. The central theme of the manuscript is to leverage and control the ECAs' behavioral expressivity by modelling the complex multimodal behavior that humans employ during communication. The driving forces of the thesis are twofold: (1) to exploit speech prosody, visual prosody, and language with the aim of synthesizing expressive and human-like behaviors for ECAs; (2) to control the style of the synthesized gestures so that we can generate them in the style of any speaker. With these motivations in mind, we first propose a semantically aware and speech-driven facial and head gesture synthesis model trained on the TEDx Corpus, which we collected. We then propose ZS-MSTM 1.0, an approach to synthesize stylized upper-body gestures, driven by the content of a source speaker's speech and matching the style of any target speaker, seen or unseen by our model. It is trained on the PATS Corpus, which includes multimodal data of speakers with different behavioral styles. ZS-MSTM 1.0 is not limited to PATS speakers and can generate gestures in the style of any new speaker without further training or fine-tuning, rendering our approach zero-shot. Behavioral style is modelled from multimodal speaker data - language, body gestures, and speech - and is independent of the speaker's identity ("ID"). We additionally propose ZS-MSTM 2.0 to generate stylized facial gestures in addition to the upper-body gestures. We train ZS-MSTM 2.0 on the PATS Corpus, which we extended to include dialog acts and 2D facial landmarks.
Lakew, Surafel Melaku. "Multilingual Neural Machine Translation for Low Resource Languages." Doctoral thesis, Università degli studi di Trento, 2020. http://hdl.handle.net/11572/257906.
Book chapters on the topic "Transfert de style zero-shot"
Huang, Yaoxiong, Mengchao He, Lianwen Jin, and Yongpan Wang. "RD-GAN: Few/Zero-Shot Chinese Character Style Transfer via Radical Decomposition and Rendering." In Computer Vision – ECCV 2020, 156–72. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58539-6_10.
Xu, Ruiqi, Yongfeng Huang, Xin Chen, and Lin Zhang. "Specializing Small Language Models Towards Complex Style Transfer via Latent Attribute Pre-Training." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2023. http://dx.doi.org/10.3233/faia230591.
Esmaeili Shayan, Mostafa. "Solar Energy and Its Purpose in Net-Zero Energy Building." In Zero-Energy Buildings - New Approaches and Technologies. IntechOpen, 2020. http://dx.doi.org/10.5772/intechopen.93500.
Das, Pranjit, P. S. Ramapraba, K. Seethalakshmi, M. Anitha Mary, S. Karthick, and Boopathi Sampath. "Sustainable Advanced Techniques for Enhancing the Image Process." In Fostering Cross-Industry Sustainability With Intelligent Technologies, 350–74. IGI Global, 2023. http://dx.doi.org/10.4018/979-8-3693-1638-2.ch022.
Conference papers on the topic "Transfert de style zero-shot"
Tang, Hao, Songhua Liu, Tianwei Lin, Shaoli Huang, Fu Li, Dongliang He, and Xinchao Wang. "Master: Meta Style Transformer for Controllable Zero-Shot and Few-Shot Artistic Style Transfer." In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2023. http://dx.doi.org/10.1109/cvpr52729.2023.01758.
Lee, Sang-Hoon, Ha-Yeong Choi, Hyung-Seok Oh, and Seong-Whan Lee. "HierVST: Hierarchical Adaptive Zero-shot Voice Style Transfer." In INTERSPEECH 2023. ISCA, 2023. http://dx.doi.org/10.21437/interspeech.2023-1608.
Sun, Haochen, Lei Wu, Xiang Li, and Xiangxu Meng. "Style-woven Attention Network for Zero-shot Ink Wash Painting Style Transfer." In ICMR '22: International Conference on Multimedia Retrieval. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3512527.3531391.
Izumi, Kota, and Keiji Yanai. "Zero-Shot Font Style Transfer with a Differentiable Renderer." In MMAsia '22: ACM Multimedia Asia. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3551626.3564961.
Liu, Kunhao, Fangneng Zhan, Yiwen Chen, Jiahui Zhang, Yingchen Yu, Abdulmotaleb El Saddik, Shijian Lu, and Eric Xing. "StyleRF: Zero-Shot 3D Style Transfer of Neural Radiance Fields." In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2023. http://dx.doi.org/10.1109/cvpr52729.2023.00806.
Fares, Mireille, Catherine Pelachaud, and Nicolas Obin. "Zero-Shot Style Transfer for Multimodal Data-Driven Gesture Synthesis." In 2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG). IEEE, 2023. http://dx.doi.org/10.1109/fg57933.2023.10042658.
Sheng, Lu, Ziyi Lin, Jing Shao, and Xiaogang Wang. "Avatar-Net: Multi-scale Zero-Shot Style Transfer by Feature Decoration." In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2018. http://dx.doi.org/10.1109/cvpr.2018.00860.
Yang, Serin, Hyunmin Hwang, and Jong Chul Ye. "Zero-Shot Contrastive Loss for Text-Guided Diffusion Image Style Transfer." In 2023 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2023. http://dx.doi.org/10.1109/iccv51070.2023.02091.
Song, Kun, Yi Ren, Yi Lei, Chunfeng Wang, Kun Wei, Lei Xie, Xiang Yin, and Zejun Ma. "StyleS2ST: Zero-shot Style Transfer for Direct Speech-to-speech Translation." In INTERSPEECH 2023. ISCA, 2023. http://dx.doi.org/10.21437/interspeech.2023-648.
Chen, Liyang, Zhiyong Wu, Runnan Li, Weihong Bao, Jun Ling, Xu Tan, and Sheng Zhao. "VAST: Vivify Your Talking Avatar via Zero-Shot Expressive Facial Style Transfer." In 2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW). IEEE, 2023. http://dx.doi.org/10.1109/iccvw60793.2023.00320.