A selection of scientific literature on the topic "Multi-domain image translation"
Browse lists of current articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Multi-domain image translation".
Journal articles on the topic "Multi-domain image translation"
Shao, Mingwen, Youcai Zhang, Huan Liu, Chao Wang, Le Li, and Xun Shao. "DMDIT: Diverse multi-domain image-to-image translation." Knowledge-Based Systems 229 (October 2021): 107311. http://dx.doi.org/10.1016/j.knosys.2021.107311.
Liu, Huajun, Lei Chen, Haigang Sui, Qing Zhu, Dian Lei, and Shubo Liu. "Unsupervised multi-domain image translation with domain representation learning." Signal Processing: Image Communication 99 (November 2021): 116452. http://dx.doi.org/10.1016/j.image.2021.116452.
Cai, Naxin, Houjin Chen, Yanfeng Li, Yahui Peng, and Linqiang Guo. "Registration on DCE-MRI images via multi-domain image-to-image translation." Computerized Medical Imaging and Graphics 104 (March 2023): 102169. http://dx.doi.org/10.1016/j.compmedimag.2022.102169.
Xia, Weihao, Yujiu Yang, and Jing-Hao Xue. "Unsupervised multi-domain multimodal image-to-image translation with explicit domain-constrained disentanglement." Neural Networks 131 (November 2020): 50–63. http://dx.doi.org/10.1016/j.neunet.2020.07.023.
Shen, Yangyun, Runnan Huang, and Wenkai Huang. "GD-StarGAN: Multi-domain image-to-image translation in garment design." PLOS ONE 15, no. 4 (April 21, 2020): e0231719. http://dx.doi.org/10.1371/journal.pone.0231719.
Zhang, Yifei, Weipeng Li, Daling Wang, and Shi Feng. "Unsupervised Image Translation Using Multi-Scale Residual GAN." Mathematics 10, no. 22 (November 19, 2022): 4347. http://dx.doi.org/10.3390/math10224347.
Xu, Wenju, and Guanghui Wang. "A Domain Gap Aware Generative Adversarial Network for Multi-Domain Image Translation." IEEE Transactions on Image Processing 31 (2022): 72–84. http://dx.doi.org/10.1109/tip.2021.3125266.
Komatsu, Rina, and Tad Gonsalves. "Multi-CartoonGAN with Conditional Adaptive Instance-Layer Normalization for Conditional Artistic Face Translation." AI 3, no. 1 (January 24, 2022): 37–52. http://dx.doi.org/10.3390/ai3010003.
Feng, Long, Guohua Geng, Qihang Li, Yi Jiang, Zhan Li, and Kang Li. "CRPGAN: Learning image-to-image translation of two unpaired images by cross-attention mechanism and parallelization strategy." PLOS ONE 18, no. 1 (January 6, 2023): e0280073. http://dx.doi.org/10.1371/journal.pone.0280073.
Tao, Rentuo, Ziqiang Li, Renshuai Tao, and Bin Li. "ResAttr-GAN: Unpaired Deep Residual Attributes Learning for Multi-Domain Face Image Translation." IEEE Access 7 (2019): 132594–608. http://dx.doi.org/10.1109/access.2019.2941272.
Повний текст джерелаДисертації з теми "Multi-domain image translation"
Liu, Yahui. "Exploring Multi-Domain and Multi-Modal Representations for Unsupervised Image-to-Image Translation." Doctoral thesis, Università degli studi di Trento, 2022. http://hdl.handle.net/11572/342634.
Повний текст джерелаWu, Po-Wui, and 吳柏威. "RA-GAN: Multi-domain Image-to-Image Translation via Relative Attributes." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/6q5k8e.
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
107 (ROC academic year)
Multi-domain image-to-image translation has gained increasing attention recently. Previous methods take an image and a set of target attributes as inputs and generate an output image with the desired attributes. However, they have one limitation: they require specifying the entire set of attributes, even though most of them will not be changed. To address this limitation, we propose RA-GAN, a novel and practical formulation of multi-domain image-to-image translation. The key idea is the use of relative attributes, which describe the desired change in selected attributes. To this end, we propose an adversarial framework that learns a single generator to translate images so that they not only match the relative attributes but also exhibit better quality. Moreover, our generator can modify images by changing particular attributes of interest in a continuous manner while preserving the others. Experimental results demonstrate the effectiveness of our approach, both qualitatively and quantitatively, on the tasks of facial attribute transfer and interpolation.
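The relative-attribute formulation in the abstract above can be illustrated with a toy sketch (all names here are illustrative, not the thesis's actual code): instead of specifying every attribute, the generator is conditioned on a sparse vector of desired changes, and scaling that vector yields continuous interpolation.

```python
import numpy as np

def relative_attributes(source_attrs, changes):
    """Build a relative-attribute vector: zero everywhere except the
    attributes the user explicitly asks to change."""
    rel = np.zeros_like(source_attrs, dtype=float)
    for idx, delta in changes.items():
        rel[idx] = delta
    return rel

def interpolate(rel, alpha):
    """Scale the relative vector to apply only part of the edit,
    enabling continuous attribute interpolation."""
    return alpha * rel

# Toy example: 5 binary attributes; turn attribute 2 on, attribute 4 off.
src = np.array([0, 1, 0, 0, 1], dtype=float)
rel = relative_attributes(src, {2: +1.0, 4: -1.0})
# A generator G(image, rel) would alter only attributes 2 and 4;
# entries with rel == 0 are left untouched by construction.
half = interpolate(rel, 0.5)  # half-strength edit
```

Unchanged attributes never need to be enumerated, which is exactly the limitation of full-target-vector methods that relative attributes remove.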
Hsu, Shu-Yu, and 許書宇. "SemiStarGAN: Semi-Supervised Generative Adversarial Networks for Multi-Domain Image-to-Image Translation." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/n4zqyy.
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
106 (ROC academic year)
Recent studies have shown significant advances in multi-domain image-to-image translation, and generative adversarial networks (GANs) are widely used to address this problem. However, existing methods all require a large number of domain-labeled images to train an effective image generator, and collecting that much labeled data for real-world problems takes time and effort. In this thesis, we propose SemiStarGAN, a semi-supervised GAN to tackle this issue. The proposed method utilizes unlabeled images by incorporating a novel discriminator/classifier network architecture, the Y model, and two existing semi-supervised learning techniques: pseudo-labeling and self-ensembling. Experimental results on the CelebA dataset using facial-attribute domains show that the proposed method achieves performance comparable to state-of-the-art methods while using considerably fewer labeled training images.
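The two semi-supervised techniques named in the abstract can be sketched in a few lines (a generic illustration of the techniques, with hypothetical helper names, not SemiStarGAN's implementation): pseudo-labeling keeps only confident classifier predictions as training labels, and self-ensembling maintains an exponential moving average of model parameters as a stable teacher.

```python
import numpy as np

def pseudo_label(probs, threshold=0.95):
    """Keep the classifier's prediction as a pseudo-label only when it is
    confident enough; low-confidence samples get -1 (stay unlabeled)."""
    preds = probs.argmax(axis=1)
    confident = probs.max(axis=1) >= threshold
    return np.where(confident, preds, -1)

def ema_update(teacher, student, decay=0.99):
    """Self-ensembling step: the teacher is an exponential moving average
    of the student's parameters, giving smoother prediction targets."""
    return decay * teacher + (1.0 - decay) * student

# Toy example: two unlabeled images, three attribute domains.
probs = np.array([[0.97, 0.02, 0.01],   # confident -> pseudo-labeled as 0
                  [0.50, 0.30, 0.20]])  # uncertain  -> left unlabeled
labels = pseudo_label(probs)
```

In a semi-supervised loop, the pseudo-labeled samples are mixed into the labeled batch, which is how unlabeled data reduces the amount of annotation required.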
Yung-Yu Chang and 張詠裕. "Multi-Domain Image-to-Image Translations based on Generative Adversarial Networks." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/89654d.
National Cheng Kung University
Department of Engineering Science
106 (ROC academic year)
In recent years, domain translation has seen breakthroughs in the field of deep learning. However, most approaches so far are dedicated to a single scenario and are trained on paired datasets. Their results are significant, but the drawback is that the architectures lack scalability, and updating the paired data later is difficult. Demand for computer vision assistance systems is increasing, and some environments involve more than one task. In this thesis, we propose a multi-domain image translation model that is flexible in two respects: the depth of the architecture can be designed according to expectations, and the number of domains can be chosen according to the number of tasks. We demonstrate the effectiveness of our approach on dehazing, deblurring, and denoising tasks.
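Making the number of domains a design parameter is commonly realized (e.g. in StarGAN-style models) by conditioning a single generator on a one-hot domain code tiled over the image; a minimal sketch with illustrative names, not this thesis's actual architecture:

```python
import numpy as np

def with_domain_code(image, domain_idx, num_domains):
    """Tile a one-hot domain code over the spatial grid and append it as
    extra channels, so one generator can be steered toward any of the
    `num_domains` target tasks: (H, W, C) -> (H, W, C + num_domains)."""
    h, w, _ = image.shape
    code = np.zeros((h, w, num_domains), dtype=image.dtype)
    code[:, :, domain_idx] = 1.0
    return np.concatenate([image, code], axis=-1)

# Toy 4x4 RGB input, 3 task domains (e.g. dehazing/deblurring/denoising);
# selecting domain 1 asks the generator for the second task.
img = np.zeros((4, 4, 3))
x = with_domain_code(img, domain_idx=1, num_domains=3)
```

Adding a task then only widens the code vector rather than requiring a new network per domain, which is the scalability benefit such models claim over single-pair translators.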
Book chapters on the topic "Multi-domain image translation"
He, Ziliang, Zhenguo Yang, Xudong Mao, Jianming Lv, Qing Li, and Wenyin Liu. "Self-attention StarGAN for Multi-domain Image-to-Image Translation." In Lecture Notes in Computer Science, 537–49. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30508-6_43.
Cao, Jie, Huaibo Huang, Yi Li, Ran He, and Zhenan Sun. "Informative Sample Mining Network for Multi-domain Image-to-Image Translation." In Computer Vision – ECCV 2020, 404–19. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58529-7_24.
Pan, Bing, Zexuan Ji, and Qiang Chen. "MultiGAN: Multi-domain Image Translation from OCT to OCTA." In Pattern Recognition and Computer Vision, 336–47. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-18910-4_28.
Tang, Hao, Dan Xu, Wei Wang, Yan Yan, and Nicu Sebe. "Dual Generator Generative Adversarial Networks for Multi-domain Image-to-Image Translation." In Computer Vision – ACCV 2018, 3–21. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-20887-5_1.
Hsu, Shu-Yu, Chih-Yuan Yang, Chi-Chia Huang, and Jane Yung-jen Hsu. "SemiStarGAN: Semi-supervised Generative Adversarial Networks for Multi-domain Image-to-Image Translation." In Computer Vision – ACCV 2018, 338–53. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-20870-7_21.
Luo, Lei, and William H. Hsu. "AMMUNIT: An Attention-Based Multimodal Multi-domain UNsupervised Image-to-Image Translation Framework." In Lecture Notes in Computer Science, 358–70. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-15931-2_30.
Ge, Hongwei, Yao Yao, Zheng Chen, and Liang Sun. "Unsupervised Transformation Network Based on GANs for Target-Domain Oriented Multi-domain Image Translation." In Computer Vision – ACCV 2018, 398–413. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-20890-5_26.
Повний текст джерелаТези доповідей конференцій з теми "Multi-domain image translation"
Lin, Jianxin, Yingce Xia, Yijun Wang, Tao Qin, and Zhibo Chen. "Image-to-Image Translation with Multi-Path Consistency Regularization." In Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19). California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/413.
Zhu, Yuanlue, Mengchao Bai, Linlin Shen, and Zhiwei Wen. "SwitchGAN for Multi-domain Facial Image Translation." In 2019 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2019. http://dx.doi.org/10.1109/icme.2019.00209.
Gomez, Raul, Yahui Liu, Marco De Nadai, Dimosthenis Karatzas, Bruno Lepri, and Nicu Sebe. "Retrieval Guided Unsupervised Multi-domain Image to Image Translation." In MM '20: The 28th ACM International Conference on Multimedia. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3394171.3413785.
Ge, Yingjun, Xiaodong Wang, and Jiting Zhou. "Federated learning based multi-domain image-to-image translation." In International Conference on Mechanisms and Robotics (ICMAR 2022), edited by Zeguang Pei. SPIE, 2022. http://dx.doi.org/10.1117/12.2652535.
Hui, Le, Xiang Li, Jiaxin Chen, Hongliang He, and Jian Yang. "Unsupervised Multi-Domain Image Translation with Domain-Specific Encoders/Decoders." In 2018 24th International Conference on Pattern Recognition (ICPR). IEEE, 2018. http://dx.doi.org/10.1109/icpr.2018.8545169.
Rahman, Mohammad Mahfujur, Clinton Fookes, Mahsa Baktashmotlagh, and Sridha Sridharan. "Multi-Component Image Translation for Deep Domain Generalization." In 2019 IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE, 2019. http://dx.doi.org/10.1109/wacv.2019.00067.
Lin, Yu-Jing, Po-Wei Wu, Che-Han Chang, Edward Chang, and Shih-Wei Liao. "RelGAN: Multi-Domain Image-to-Image Translation via Relative Attributes." In 2019 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2019. http://dx.doi.org/10.1109/iccv.2019.00601.
Fu, Huiyuan, Ting Yu, Xin Wang, and Huadong Ma. "Cross-Granularity Learning for Multi-Domain Image-to-Image Translation." In MM '20: The 28th ACM International Conference on Multimedia. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3394171.3413656.
Nguyen, The-Phuc, Stephane Lathuiliere, and Elisa Ricci. "Multi-Domain Image-to-Image Translation with Adaptive Inference Graph." In 2020 25th International Conference on Pattern Recognition (ICPR). IEEE, 2021. http://dx.doi.org/10.1109/icpr48806.2021.9412713.
Yang, Xuewen, Dongliang Xie, and Xin Wang. "Crossing-Domain Generative Adversarial Networks for Unsupervised Multi-Domain Image-to-Image Translation." In MM '18: ACM Multimedia Conference. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3240508.3240716.