Academic literature on the topic "Zero-Shot Style Transfer"
Create a reference in APA, MLA, Chicago, Harvard, and other citation styles
Table of contents
Consult the lists of recent articles, books, dissertations, reports, and other scholarly sources on the topic "Zero-Shot Style Transfer".
Next to each work in the list of references you will find an "Add to bibliography" option. Select it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the publication as a PDF and read its online annotation, provided the relevant parameters are available in the metadata.
Journal articles on the topic "Zero-Shot Style Transfer"
Zhang, Yu, Rongjie Huang, Ruiqi Li, JinZheng He, Yan Xia, Feiyang Chen, Xinyu Duan, Baoxing Huai, and Zhou Zhao. "StyleSinger: Style Transfer for Out-of-Domain Singing Voice Synthesis". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 17 (March 24, 2024): 19597–605. http://dx.doi.org/10.1609/aaai.v38i17.29932.
Xi, Jier, Xiufen Ye, and Chuanlong Li. "Sonar Image Target Detection Based on Style Transfer Learning and Random Shape of Noise under Zero Shot Target". Remote Sensing 14, no. 24 (December 10, 2022): 6260. http://dx.doi.org/10.3390/rs14246260.
Wang, Wenjing, Jizheng Xu, Li Zhang, Yue Wang, and Jiaying Liu. "Consistent Video Style Transfer via Compound Regularization". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 12233–40. http://dx.doi.org/10.1609/aaai.v34i07.6905.
Park, Jangkyoung, Ammar Ul Hassan, and Jaeyoung Choi. "CCFont: Component-Based Chinese Font Generation Model Using Generative Adversarial Networks (GANs)". Applied Sciences 12, no. 16 (August 10, 2022): 8005. http://dx.doi.org/10.3390/app12168005.
Azizah, Kurniawati, and Wisnu Jatmiko. "Transfer Learning, Style Control, and Speaker Reconstruction Loss for Zero-Shot Multilingual Multi-Speaker Text-to-Speech on Low-Resource Languages". IEEE Access 10 (2022): 5895–911. http://dx.doi.org/10.1109/access.2022.3141200.
Bai, Zhongyu, Hongli Xu, Qichuan Ding, and Xiangyue Zhang. "Side-Scan Sonar Image Classification with Zero-Shot and Style Transfer". IEEE Transactions on Instrumentation and Measurement, 2024, 1. http://dx.doi.org/10.1109/tim.2024.3352693.
Fares, Mireille, Catherine Pelachaud, and Nicolas Obin. "Zero-shot style transfer for gesture animation driven by text and speech using adversarial disentanglement of multimodal style encoding". Frontiers in Artificial Intelligence 6 (June 12, 2023). http://dx.doi.org/10.3389/frai.2023.1142997.
Xu, Hongli, Zhongyu Bai, Xiangyue Zhang, and Qichuan Ding. "MFSANet: Zero-Shot Side-Scan Sonar Image Recognition Based on Style Transfer". IEEE Geoscience and Remote Sensing Letters, 2023, 1. http://dx.doi.org/10.1109/lgrs.2023.3318051.
Zhang, Qing, Jing Zhang, Xiangdong Su, Feilong Bao, and Guanglai Gao. "Contour detection network for zero-shot sketch-based image retrieval". Complex & Intelligent Systems, June 2, 2023. http://dx.doi.org/10.1007/s40747-023-01096-2.
Dissertations / Theses on the topic "Zero-Shot Style Transfer"
Lakew, Surafel Melaku. „Multilingual Neural Machine Translation for Low Resource Languages“. Doctoral thesis, Università degli studi di Trento, 2020. http://hdl.handle.net/11572/257906.
Fares, Mireille. "Multimodal Expressive Gesturing With Style". Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS017.
The generation of expressive gestures allows Embodied Conversational Agents (ECA) to articulate the speech intent and content in a human-like fashion. The central theme of the manuscript is to leverage and control the ECAs' behavioral expressivity by modelling the complex multimodal behavior that humans employ during communication. The driving forces of the thesis are twofold: (1) to exploit speech prosody, visual prosody, and language with the aim of synthesizing expressive and human-like behaviors for ECAs; (2) to control the style of the synthesized gestures so that they can be generated in the style of any speaker. With these motivations in mind, we first propose a semantically aware and speech-driven facial and head gesture synthesis model trained on the TEDx Corpus, which we collected. We then propose ZS-MSTM 1.0, an approach to synthesize stylized upper-body gestures, driven by the content of a source speaker's speech and corresponding to the style of any target speaker, seen or unseen by our model. It is trained on the PATS Corpus, which includes multimodal data of speakers with different behavioral styles. ZS-MSTM 1.0 is not limited to PATS speakers and can generate gestures in the style of any new speaker without further training or fine-tuning, making our approach zero-shot. Behavioral style is modelled from multimodal speaker data - language, body gestures, and speech - and is independent of the speaker's identity ("ID"). We additionally propose ZS-MSTM 2.0 to generate stylized facial gestures in addition to the upper-body gestures. We train ZS-MSTM 2.0 on the PATS Corpus, which we extended to include dialogue acts and 2D facial landmarks.
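The zero-shot mechanism the abstract describes - encoding a target speaker's multimodal data into an identity-free style embedding that conditions gesture generation without any retraining - can be illustrated with a deliberately simplified sketch. Fixed random linear maps stand in for the trained ZS-MSTM networks; all names, dimensions, and data here are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins for trained networks: fixed linear maps.
CONTENT_DIM, STYLE_DIM, GESTURE_DIM = 16, 8, 12
W_content = rng.normal(size=(CONTENT_DIM, 32))   # "content encoder"
W_style = rng.normal(size=(STYLE_DIM, 32))       # "style encoder"
W_decode = rng.normal(size=(GESTURE_DIM, CONTENT_DIM + STYLE_DIM))

def encode_content(speech_features):
    """Map source-speaker speech features to a content embedding."""
    return W_content @ speech_features

def encode_style(multimodal_features):
    """Map a target speaker's multimodal data (speech, text, gesture
    statistics) to a speaker-identity-free style embedding."""
    return W_style @ multimodal_features

def generate_gestures(speech_features, style_embedding):
    """Decode gestures from source content conditioned on target style."""
    z = np.concatenate([encode_content(speech_features), style_embedding])
    return W_decode @ z

# Zero-shot usage: the style embedding for an *unseen* speaker is computed
# from that speaker's data alone -- no retraining or fine-tuning occurs.
source_speech = rng.normal(size=32)
unseen_speaker_data = rng.normal(size=32)
style = encode_style(unseen_speaker_data)
gestures = generate_gestures(source_speech, style)
print(gestures.shape)  # (12,)
```

The point of the sketch is the data flow only: content and style are encoded separately, and swapping in a new speaker's style embedding changes the output without touching the model weights.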
Book chapters on the topic "Zero-Shot Style Transfer"
Huang, Yaoxiong, Mengchao He, Lianwen Jin, and Yongpan Wang. "RD-GAN: Few/Zero-Shot Chinese Character Style Transfer via Radical Decomposition and Rendering". In Computer Vision – ECCV 2020, 156–72. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58539-6_10.
Conference papers on the topic "Zero-Shot Style Transfer"
Lee, Sang-Hoon, Ha-Yeong Choi, Hyung-Seok Oh, and Seong-Whan Lee. "HierVST: Hierarchical Adaptive Zero-shot Voice Style Transfer". In INTERSPEECH 2023. ISCA, 2023. http://dx.doi.org/10.21437/interspeech.2023-1608.
Tang, Hao, Songhua Liu, Tianwei Lin, Shaoli Huang, Fu Li, Dongliang He, and Xinchao Wang. "Master: Meta Style Transformer for Controllable Zero-Shot and Few-Shot Artistic Style Transfer". In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2023. http://dx.doi.org/10.1109/cvpr52729.2023.01758.
Izumi, Kota, and Keiji Yanai. "Zero-Shot Font Style Transfer with a Differentiable Renderer". In MMAsia '22: ACM Multimedia Asia. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3551626.3564961.
Sun, Haochen, Lei Wu, Xiang Li, and Xiangxu Meng. "Style-woven Attention Network for Zero-shot Ink Wash Painting Style Transfer". In ICMR '22: International Conference on Multimedia Retrieval. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3512527.3531391.
Liu, Kunhao, Fangneng Zhan, Yiwen Chen, Jiahui Zhang, Yingchen Yu, Abdulmotaleb El Saddik, Shijian Lu, and Eric Xing. "StyleRF: Zero-Shot 3D Style Transfer of Neural Radiance Fields". In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2023. http://dx.doi.org/10.1109/cvpr52729.2023.00806.
Fares, Mireille, Catherine Pelachaud, and Nicolas Obin. "Zero-Shot Style Transfer for Multimodal Data-Driven Gesture Synthesis". In 2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG). IEEE, 2023. http://dx.doi.org/10.1109/fg57933.2023.10042658.
Sheng, Lu, Ziyi Lin, Jing Shao, and Xiaogang Wang. "Avatar-Net: Multi-scale Zero-Shot Style Transfer by Feature Decoration". In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2018. http://dx.doi.org/10.1109/cvpr.2018.00860.
Yang, Serin, Hyunmin Hwang, and Jong Chul Ye. "Zero-Shot Contrastive Loss for Text-Guided Diffusion Image Style Transfer". In 2023 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2023. http://dx.doi.org/10.1109/iccv51070.2023.02091.
Song, Kun, Yi Ren, Yi Lei, Chunfeng Wang, Kun Wei, Lei Xie, Xiang Yin, and Zejun Ma. "StyleS2ST: Zero-shot Style Transfer for Direct Speech-to-speech Translation". In INTERSPEECH 2023. ISCA, 2023. http://dx.doi.org/10.21437/interspeech.2023-648.
Chen, Liyang, Zhiyong Wu, Runnan Li, Weihong Bao, Jun Ling, Xu Tan, and Sheng Zhao. "VAST: Vivify Your Talking Avatar via Zero-Shot Expressive Facial Style Transfer". In 2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW). IEEE, 2023. http://dx.doi.org/10.1109/iccvw60793.2023.00320.