Academic literature on the topic "Cross-modality Translation"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference papers, and other academic sources on the topic "Cross-modality Translation".
Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "Cross-modality Translation"
Holubenko, Nataliia. "Modality from the Cross-cultural Studies Perspective: a Practical Approach to Intersemiotic Translation". World Journal of English Language 13, no. 2 (January 27, 2023): 86. http://dx.doi.org/10.5430/wjel.v13n2p86.
Liu, Ajian, Zichang Tan, Jun Wan, Yanyan Liang, Zhen Lei, Guodong Guo, and Stan Z. Li. "Face Anti-Spoofing via Adversarial Cross-Modality Translation". IEEE Transactions on Information Forensics and Security 16 (2021): 2759–72. http://dx.doi.org/10.1109/tifs.2021.3065495.
Rabadán, Rosa. "Modality and modal verbs in contrast". Languages in Contrast 6, no. 2 (December 15, 2006): 261–306. http://dx.doi.org/10.1075/lic.6.2.04rab.
Wang, Yu, and Jianping Zhang. "CMMCSegNet: Cross-Modality Multicascade Indirect LGE Segmentation on Multimodal Cardiac MR". Computational and Mathematical Methods in Medicine 2021 (June 5, 2021): 1–14. http://dx.doi.org/10.1155/2021/9942149.
Danni, Yu. "A Genre Approach to the Translation of Political Speeches Based on a Chinese-Italian-English Trilingual Parallel Corpus". SAGE Open 10, no. 2 (April 2020): 215824402093360. http://dx.doi.org/10.1177/2158244020933607.
Wu, Kevin E., Kathryn E. Yost, Howard Y. Chang, and James Zou. "BABEL enables cross-modality translation between multiomic profiles at single-cell resolution". Proceedings of the National Academy of Sciences 118, no. 15 (April 7, 2021): e2023070118. http://dx.doi.org/10.1073/pnas.2023070118.
Sharma, Akanksha, and Neeru Jindal. "Cross-Modality Breast Image Translation with Improved Resolution Using Generative Adversarial Networks". Wireless Personal Communications 119, no. 4 (March 29, 2021): 2877–91. http://dx.doi.org/10.1007/s11277-021-08376-5.
Mai, Sijie, Haifeng Hu, and Songlong Xing. "Modality to Modality Translation: An Adversarial Representation Learning and Graph Fusion Network for Multimodal Fusion". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (April 3, 2020): 164–72. http://dx.doi.org/10.1609/aaai.v34i01.5347.
Lee, Yong-Hyeok, Dong-Won Jang, Jae-Bin Kim, Rae-Hong Park, and Hyung-Min Park. "Audio–Visual Speech Recognition Based on Dual Cross-Modality Attentions with the Transformer Model". Applied Sciences 10, no. 20 (October 17, 2020): 7263. http://dx.doi.org/10.3390/app10207263.
Wang, Yabing, Fan Wang, Jianfeng Dong, and Hao Luo. "CL2CM: Improving Cross-Lingual Cross-Modal Retrieval via Cross-Lingual Knowledge Transfer". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 6 (March 24, 2024): 5651–59. http://dx.doi.org/10.1609/aaai.v38i6.28376.
Theses on the topic "Cross-modality Translation"
Longuefosse, Arthur. "Apprentissage profond pour la conversion d’IRM vers TDM en imagerie thoracique". Electronic Thesis or Diss., Bordeaux, 2024. http://www.theses.fr/2024BORD0489.
Thoracic imaging faces significant challenges, with each imaging modality presenting its own limitations. CT, the gold standard for lung imaging, delivers high spatial resolution but relies on ionizing radiation, posing risks for patients requiring frequent scans. Conversely, lung MRI offers a radiation-free alternative but is hindered by technical issues such as low contrast and artifacts, limiting its broader clinical use. Recently, UTE-MRI has shown promise in addressing some of these limitations, but it still lacks the high resolution and image quality of CT, particularly for detailed structural assessment. The primary objective of this thesis is to develop and validate deep learning-based models for synthesizing CT-like images from UTE-MRI. Specifically, we aim to assess the image quality, anatomical accuracy, and clinical applicability of these synthetic CT images in comparison to the original UTE-MRI and real CT scans in thoracic imaging. Initially, we explored the fundamentals of medical image synthesis, establishing the groundwork for MR-to-CT translation. We implemented a 2D GAN model based on the pix2pixHD framework, optimizing it with SPADE normalization and refining preprocessing techniques such as resampling and registration. Clinical evaluation with expert radiologists showed promising results when comparing synthetic images to real CT scans. Synthesis was further enhanced by introducing a perceptual loss, which improved structural details and visual quality, and by incorporating 2.5D strategies to balance between 2D and 3D synthesis. Additionally, we emphasized a rigorous validation process using task-specific metrics, challenging traditional intensity-based and global metrics by focusing on the accurate reconstruction of anatomical structures. In the final stage, we developed a robust and scalable 3D synthesis framework by adapting nnU-Net for CT generation, along with an anatomical feature-prioritized loss function, enabling superior reconstruction of critical structures such as airways and vessels. Our work highlights the potential of deep learning-based models for generating high-quality synthetic CT images from UTE-MRI, offering a significant improvement in non-invasive lung imaging. These advances could greatly enhance the clinical applicability of UTE-MRI, providing a safer alternative to CT for the follow-up of chronic lung diseases. Furthermore, a patent is currently in preparation for the adoption of our method, paving the way for potential clinical use.
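As a rough illustration of the kind of paired adversarial MR-to-CT synthesis this abstract describes, the sketch below shows a pix2pix-style training step (adversarial loss plus L1 reconstruction) in PyTorch. It is a minimal sketch, not the thesis's pix2pixHD/SPADE pipeline; the tiny networks, the loss weight of 100, and the random "slices" are all illustrative assumptions.

```python
# Minimal paired MR -> CT translation step with a pix2pix-style conditional GAN.
# NOT the thesis implementation: network sizes, loss weights and data are toy placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGenerator(nn.Module):
    """Toy encoder-decoder standing in for a full pix2pixHD/SPADE generator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, mr):
        return self.net(mr)

class TinyDiscriminator(nn.Module):
    """Toy PatchGAN-style critic over concatenated (MR, CT) pairs."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, mr, ct):
        return self.net(torch.cat([mr, ct], dim=1))

gen, disc = TinyGenerator(), TinyDiscriminator()
opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

# Dummy paired 2D slices; a real pipeline would load resampled, registered volumes.
mr, ct = torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64)

# Discriminator step: real (MR, CT) pairs vs. generated pairs.
fake_ct = gen(mr).detach()
d_real, d_fake = disc(mr, ct), disc(mr, fake_ct)
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad()
loss_d.backward()
opt_d.step()

# Generator step: fool the discriminator while staying close to the real CT (L1).
fake_ct = gen(mr)
d_fake = disc(mr, fake_ct)
loss_g = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * F.l1_loss(fake_ct, ct)
opt_g.zero_grad()
loss_g.backward()
opt_g.step()
```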
"Cross-modality semantic integration and robust interpretation of multimodal user interactions". Thesis, 2010. http://library.cuhk.edu.hk/record=b6075023.
Texto completoWe have also performed a latent semantic modeling (LSM) for interpreting multimodal user input consisting of speech and pen gestures. Each modality of a multimodal input carries semantics related to a domain-specific task goal (TG). Each input is annotated manually with a TG based on the semantics. Multimodal input usually has a simpler syntactic structure and different order of semantic constituents from unimodal input. Therefore, we proposed to use LSM to derive the latent semantics from the multimodal inputs. In order to achieve this, we characterized the cross-modal integration pattern as 3-tuple multimodal terms taking into account SLR, pen gesture type and their temporal relation. The correlation term matrix is then decomposed using singular value decomposition (SVD) to derive the latent semantics automatically. TG inference on disjoint test set based on the latent semantics achieves accurate performance for 99% of the multimodal inquiries.
Hui, Pui Yu.
Adviser: Helen Meng.
Source: Dissertation Abstracts International, Volume: 73-02, Section: B, page: .
Thesis (Ph.D.)--Chinese University of Hong Kong, 2010.
Includes bibliographical references (leaves 294-306).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [201-] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Abstract also in Chinese.
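As a rough illustration of the latent semantic modeling described in the abstract above, the sketch below builds a bag-of-terms matrix over multimodal terms, decomposes it with SVD, and infers the task goal (TG) of a new input by cosine similarity in the latent space (NumPy). It is a minimal sketch, not the dissertation's system; the term strings (standing in for 3-tuples of spoken reference, pen gesture type and temporal relation), the TG labels, and the choice of k are invented placeholders.

```python
# Minimal latent semantic modeling (LSM) sketch for multimodal task-goal inference.
# NOT the dissertation's system: terms, labels and k are toy placeholders.
import numpy as np

train_docs = [
    (["route+point+overlap", "route+point+follow"], "ROUTE_QUERY"),
    (["info+circle+overlap", "info+point+overlap"], "INFO_QUERY"),
    (["route+line+follow", "route+point+overlap"], "ROUTE_QUERY"),
]
vocab = sorted({t for terms, _ in train_docs for t in terms})
index = {t: i for i, t in enumerate(vocab)}

def to_vec(terms):
    """Bag-of-terms vector over the multimodal-term vocabulary."""
    v = np.zeros(len(vocab))
    for t in terms:
        if t in index:
            v[index[t]] += 1.0
    return v

X = np.stack([to_vec(terms) for terms, _ in train_docs])  # inputs x terms
U, s, Vt = np.linalg.svd(X, full_matrices=False)          # latent decomposition
k = 2                                                     # latent dimensions kept
doc_latent = U[:, :k] * s[:k]                             # training inputs in latent space

def infer_tg(terms):
    """Fold a new multimodal input into the latent space and return the TG
    of the most similar (cosine) training input."""
    q = to_vec(terms) @ Vt[:k].T
    sims = doc_latent @ q / (
        np.linalg.norm(doc_latent, axis=1) * np.linalg.norm(q) + 1e-9)
    return train_docs[int(np.argmax(sims))][1]

print(infer_tg(["route+point+follow"]))                   # -> ROUTE_QUERY
```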
Book chapters on the topic "Cross-modality Translation"
Zhang, Ran, Laetitia Meng-Papaxanthos, Jean-Philippe Vert, and William Stafford Noble. "Semi-supervised Single-Cell Cross-modality Translation Using Polarbear". In Lecture Notes in Computer Science, 20–35. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04749-7_2.
Kang, Bogyeong, Hyeonyeong Nam, Ji-Wung Han, Keun-Soo Heo, and Tae-Eui Kam. "Multi-view Cross-Modality MR Image Translation for Vestibular Schwannoma and Cochlea Segmentation". In Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries, 100–108. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-44153-0_10.
Yang, Tao, and Lisheng Wang. "Koos Classification of Vestibular Schwannoma via Image Translation-Based Unsupervised Cross-Modality Domain Adaptation". In Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries, 59–67. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-44153-0_6.
Zhao, Ziyuan, Kaixin Xu, Huai Zhe Yeo, Xulei Yang, and Cuntai Guan. "MS-MT: Multi-scale Mean Teacher with Contrastive Unpaired Translation for Cross-Modality Vestibular Schwannoma and Cochlea Segmentation". In Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries, 68–78. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-44153-0_7.
Zhu, Lei, Ling Ling Chan, Teck Khim Ng, Meihui Zhang, and Beng Chin Ooi. "Deep Co-Training for Cross-Modality Medical Image Segmentation". In Frontiers in Artificial Intelligence and Applications. IOS Press, 2023. http://dx.doi.org/10.3233/faia230633.
Texto completoActas de conferencias sobre el tema "Cross-modality Translation"
Li, Yingtai, Shuo Yang, Xiaoyan Wu, Shan He, and S. Kevin Zhou. "Taming Stable Diffusion for MRI Cross-Modality Translation". In 2024 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), 2134–41. IEEE, 2024. https://doi.org/10.1109/bibm62325.2024.10822349.
Hassanzadeh, Reihaneh, Anees Abrol, Hamid Reza Hassanzadeh, and Vince D. Calhoun. "Cross-Modality Translation with Generative Adversarial Networks to Unveil Alzheimer’s Disease Biomarkers". In 2024 46th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 1–4. IEEE, 2024. https://doi.org/10.1109/embc53108.2024.10781737.
Xiang, Yixin, Xianhua Zeng, Dajiang Lei, and Tao Fu. "MOADM: Manifold Optimization Adversarial Diffusion Model for Cross-Modality Medical Image Translation". In 2024 IEEE International Conference on Medical Artificial Intelligence (MedAI), 380–85. IEEE, 2024. https://doi.org/10.1109/medai62885.2024.00057.
Zhao, Pu, Hong Pan, and Siyu Xia. "MRI-Trans-GAN: 3D MRI Cross-Modality Translation". In 2021 40th Chinese Control Conference (CCC). IEEE, 2021. http://dx.doi.org/10.23919/ccc52363.2021.9550256.
Qi, Jinwei, and Yuxin Peng. "Cross-modal Bidirectional Translation via Reinforcement Learning". In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/365.
Tang, Shi, Xinchen Ye, Fei Xue, and Rui Xu. "Cross-Modality Depth Estimation via Unsupervised Stereo RGB-to-infrared Translation". In ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2023. http://dx.doi.org/10.1109/icassp49357.2023.10095982.
Ye, Jinhui, Wenxiang Jiao, Xing Wang, Zhaopeng Tu, and Hui Xiong. "Cross-modality Data Augmentation for End-to-End Sign Language Translation". In Findings of the Association for Computational Linguistics: EMNLP 2023. Stroudsburg, PA, USA: Association for Computational Linguistics, 2023. http://dx.doi.org/10.18653/v1/2023.findings-emnlp.904.
Maji, Prasenjit, Kunal Dhibar, and Hemanta Kumar Mondal. "Revolutionizing and Enhancing Medical Diagnostics with Conditional GANs for Cross-Modality Image Translation". In 2024 11th International Conference on Computing for Sustainable Global Development (INDIACom). IEEE, 2024. http://dx.doi.org/10.23919/indiacom61295.2024.10498844.
Xu, Siwei, Junhao Liu, and Jing Zhang. "scACT: Accurate Cross-modality Translation via Cycle-consistent Training from Unpaired Single-cell Data". In CIKM '24: The 33rd ACM International Conference on Information and Knowledge Management, 2722–31. New York, NY, USA: ACM, 2024. http://dx.doi.org/10.1145/3627673.3679576.
Cheng, Xize, Tao Jin, Rongjie Huang, Linjun Li, Wang Lin, Zehan Wang, Ye Wang, Huadai Liu, Aoxiong Yin, and Zhou Zhao. "MixSpeech: Cross-Modality Self-Learning with Audio-Visual Stream Mixup for Visual Speech Translation and Recognition". In 2023 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2023. http://dx.doi.org/10.1109/iccv51070.2023.01442.