Academic literature on the topic 'Cross-modality Translation'
Below are lists of relevant journal articles, theses, book chapters, and conference papers on the topic 'Cross-modality Translation.'
Journal articles on the topic "Cross-modality Translation"
Holubenko, Nataliia. "Modality from the Cross-cultural Studies Perspective: a Practical Approach to Intersemiotic Translation." World Journal of English Language 13, no. 2 (January 27, 2023): 86. http://dx.doi.org/10.5430/wjel.v13n2p86.
Liu, Ajian, Zichang Tan, Jun Wan, Yanyan Liang, Zhen Lei, Guodong Guo, and Stan Z. Li. "Face Anti-Spoofing via Adversarial Cross-Modality Translation." IEEE Transactions on Information Forensics and Security 16 (2021): 2759–72. http://dx.doi.org/10.1109/tifs.2021.3065495.
Rabadán, Rosa. "Modality and modal verbs in contrast." Languages in Contrast 6, no. 2 (December 15, 2006): 261–306. http://dx.doi.org/10.1075/lic.6.2.04rab.
Wang, Yu, and Jianping Zhang. "CMMCSegNet: Cross-Modality Multicascade Indirect LGE Segmentation on Multimodal Cardiac MR." Computational and Mathematical Methods in Medicine 2021 (June 5, 2021): 1–14. http://dx.doi.org/10.1155/2021/9942149.
Danni, Yu. "A Genre Approach to the Translation of Political Speeches Based on a Chinese-Italian-English Trilingual Parallel Corpus." SAGE Open 10, no. 2 (April 2020): 215824402093360. http://dx.doi.org/10.1177/2158244020933607.
Wu, Kevin E., Kathryn E. Yost, Howard Y. Chang, and James Zou. "BABEL enables cross-modality translation between multiomic profiles at single-cell resolution." Proceedings of the National Academy of Sciences 118, no. 15 (April 7, 2021): e2023070118. http://dx.doi.org/10.1073/pnas.2023070118.
Sharma, Akanksha, and Neeru Jindal. "Cross-Modality Breast Image Translation with Improved Resolution Using Generative Adversarial Networks." Wireless Personal Communications 119, no. 4 (March 29, 2021): 2877–91. http://dx.doi.org/10.1007/s11277-021-08376-5.
Mai, Sijie, Haifeng Hu, and Songlong Xing. "Modality to Modality Translation: An Adversarial Representation Learning and Graph Fusion Network for Multimodal Fusion." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (April 3, 2020): 164–72. http://dx.doi.org/10.1609/aaai.v34i01.5347.
Lee, Yong-Hyeok, Dong-Won Jang, Jae-Bin Kim, Rae-Hong Park, and Hyung-Min Park. "Audio–Visual Speech Recognition Based on Dual Cross-Modality Attentions with the Transformer Model." Applied Sciences 10, no. 20 (October 17, 2020): 7263. http://dx.doi.org/10.3390/app10207263.
Wang, Yabing, Fan Wang, Jianfeng Dong, and Hao Luo. "CL2CM: Improving Cross-Lingual Cross-Modal Retrieval via Cross-Lingual Knowledge Transfer." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 6 (March 24, 2024): 5651–59. http://dx.doi.org/10.1609/aaai.v38i6.28376.
Full textDissertations / Theses on the topic "Cross-modality Translation"
Longuefosse, Arthur. "Apprentissage profond pour la conversion d’IRM vers TDM en imagerie thoracique [Deep learning for MRI-to-CT conversion in thoracic imaging]." Electronic Thesis or Diss., Bordeaux, 2024. http://www.theses.fr/2024BORD0489.
Thoracic imaging faces significant challenges, with each imaging modality presenting its own limitations. CT, the gold standard for lung imaging, delivers high spatial resolution but relies on ionizing radiation, posing risks for patients requiring frequent scans. Conversely, lung MRI offers a radiation-free alternative but is hindered by technical issues such as low contrast and artifacts, limiting its broader clinical use. Recently, UTE-MRI has shown promise in addressing some of these limitations, but it still lacks the high resolution and image quality of CT, particularly for detailed structural assessment. The primary objective of this thesis is to develop and validate deep learning-based models for synthesizing CT-like images from UTE-MRI. Specifically, we aim to assess the image quality, anatomical accuracy, and clinical applicability of these synthetic CT images in comparison with the original UTE-MRI and real CT scans in thoracic imaging. Initially, we explored the fundamentals of medical image synthesis, establishing the groundwork for MR-to-CT translation. We implemented a 2D GAN model based on the pix2pixHD framework, optimizing it with SPADE normalization and refining preprocessing techniques such as resampling and registration. Clinical evaluation with expert radiologists showed promising results when comparing synthetic images to real CT scans. Synthesis was further enhanced by introducing a perceptual loss, which improved structural details and visual quality, and by incorporating 2.5D strategies to balance 2D and 3D synthesis. Additionally, we emphasized a rigorous validation process using task-specific metrics, challenging traditional intensity-based and global metrics by focusing on the accurate reconstruction of anatomical structures. In the final stage, we developed a robust and scalable 3D synthesis framework by adapting nnU-Net for CT generation, along with an anatomical feature-prioritized loss function, enabling superior reconstruction of critical structures such as airways and vessels. Our work highlights the potential of deep learning-based models for generating high-quality synthetic CT images from UTE-MRI, offering a significant improvement in non-invasive lung imaging. These advances could greatly enhance the clinical applicability of UTE-MRI, providing a safer alternative to CT for the follow-up of chronic lung diseases. Furthermore, a patent is currently in preparation for the adoption of our method, paving the way for potential clinical use.
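The "anatomical feature-prioritized loss" mentioned in this abstract lends itself to a compact illustration. The PyTorch sketch below is a hedged reconstruction, not the thesis's published code: the function name, the binary-mask format, and the weighting factor are all assumptions introduced for illustration.

```python
# A minimal sketch of an anatomical feature-prioritized reconstruction loss,
# assuming a binary mask of critical structures (airways, vessels) is available.
# The function name, mask format, and weighting factor are hypothetical.
import torch
import torch.nn.functional as F

def anatomy_prioritized_l1(
    fake_ct: torch.Tensor,          # synthesized CT volume, shape (B, 1, D, H, W)
    real_ct: torch.Tensor,          # reference CT volume, same shape
    anatomy_mask: torch.Tensor,     # binary mask of critical structures, same shape
    structure_weight: float = 5.0,  # assumed up-weighting factor for masked voxels
) -> torch.Tensor:
    """Voxel-wise L1 loss that up-weights voxels inside anatomical masks."""
    voxel_l1 = F.l1_loss(fake_ct, real_ct, reduction="none")
    weights = 1.0 + structure_weight * anatomy_mask  # emphasize airways/vessels
    return (weights * voxel_l1).mean()
```

In a pix2pixHD-style setup, a term like this would typically be summed with the adversarial and perceptual losses under tunable coefficients.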
"Cross-modality semantic integration and robust interpretation of multimodal user interactions." Thesis, 2010. http://library.cuhk.edu.hk/record=b6075023.
We have also performed latent semantic modeling (LSM) for interpreting multimodal user input consisting of speech and pen gestures. Each modality of a multimodal input carries semantics related to a domain-specific task goal (TG), and each input is annotated manually with a TG based on its semantics. Multimodal input usually has a simpler syntactic structure and a different order of semantic constituents than unimodal input. We therefore proposed to use LSM to derive the latent semantics from the multimodal inputs. To achieve this, we characterized the cross-modal integration pattern as 3-tuple multimodal terms taking into account the SLR, the pen gesture type, and their temporal relation. The term correlation matrix is then decomposed using singular value decomposition (SVD) to derive the latent semantics automatically. TG inference on a disjoint test set based on the latent semantics is accurate for 99% of the multimodal inquiries (a minimal sketch of this approach follows the record below).
Hui, Pui Yu.
Adviser: Helen Meng.
Source: Dissertation Abstracts International, Volume: 73-02, Section: B, page: .
Thesis (Ph.D.)--Chinese University of Hong Kong, 2010.
Includes bibliographical references (leaves 294-306).
Abstract also in Chinese.
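As a rough illustration of the SVD-based latent semantic modeling described in the abstract above, the Python sketch below builds a toy term-by-inquiry matrix, derives a latent space via truncated SVD, and infers a task goal by cosine similarity. The matrix values, the number of latent dimensions, and all names are hypothetical; the thesis's actual 3-tuple term vocabulary (SLR, pen gesture type, temporal relation) and data are not reproduced here.

```python
# A minimal sketch, assuming a toy term-by-inquiry count matrix; values and
# dimensions are illustrative, not taken from the thesis.
import numpy as np

# Rows: multimodal terms; columns: multimodal inquiries (toy counts).
term_by_inquiry = np.array([
    [2, 0, 1, 0],
    [0, 1, 0, 2],
    [1, 1, 0, 0],
    [0, 0, 2, 1],
], dtype=float)

# Derive latent semantics via truncated SVD, as in latent semantic modeling.
U, s, Vt = np.linalg.svd(term_by_inquiry, full_matrices=False)
k = 2                                        # number of latent dimensions (assumed)
inquiry_vecs = (np.diag(s[:k]) @ Vt[:k]).T   # each inquiry embedded in latent space

def infer_task_goal(query_vec, labeled_vecs, labels):
    """Assign the task goal (TG) of the most cosine-similar labeled inquiry."""
    sims = labeled_vecs @ query_vec / (
        np.linalg.norm(labeled_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-12
    )
    return labels[int(np.argmax(sims))]

# Treat the first inquiry as unlabeled and infer its TG from the other three.
print(infer_task_goal(inquiry_vecs[0], inquiry_vecs[1:], ["TG1", "TG2", "TG3"]))
```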
Book chapters on the topic "Cross-modality Translation"
Zhang, Ran, Laetitia Meng-Papaxanthos, Jean-Philippe Vert, and William Stafford Noble. "Semi-supervised Single-Cell Cross-modality Translation Using Polarbear." In Lecture Notes in Computer Science, 20–35. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04749-7_2.
Kang, Bogyeong, Hyeonyeong Nam, Ji-Wung Han, Keun-Soo Heo, and Tae-Eui Kam. "Multi-view Cross-Modality MR Image Translation for Vestibular Schwannoma and Cochlea Segmentation." In Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries, 100–108. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-44153-0_10.
Yang, Tao, and Lisheng Wang. "Koos Classification of Vestibular Schwannoma via Image Translation-Based Unsupervised Cross-Modality Domain Adaptation." In Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries, 59–67. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-44153-0_6.
Zhao, Ziyuan, Kaixin Xu, Huai Zhe Yeo, Xulei Yang, and Cuntai Guan. "MS-MT: Multi-scale Mean Teacher with Contrastive Unpaired Translation for Cross-Modality Vestibular Schwannoma and Cochlea Segmentation." In Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries, 68–78. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-44153-0_7.
Zhu, Lei, Ling Ling Chan, Teck Khim Ng, Meihui Zhang, and Beng Chin Ooi. "Deep Co-Training for Cross-Modality Medical Image Segmentation." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2023. http://dx.doi.org/10.3233/faia230633.
Full textConference papers on the topic "Cross-modality Translation"
Li, Yingtai, Shuo Yang, Xiaoyan Wu, Shan He, and S. Kevin Zhou. "Taming Stable Diffusion for MRI Cross-Modality Translation." In 2024 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), 2134–41. IEEE, 2024. https://doi.org/10.1109/bibm62325.2024.10822349.
Hassanzadeh, Reihaneh, Anees Abrol, Hamid Reza Hassanzadeh, and Vince D. Calhoun. "Cross-Modality Translation with Generative Adversarial Networks to Unveil Alzheimer’s Disease Biomarkers." In 2024 46th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 1–4. IEEE, 2024. https://doi.org/10.1109/embc53108.2024.10781737.
Xiang, Yixin, Xianhua Zeng, Dajiang Lei, and Tao Fu. "MOADM: Manifold Optimization Adversarial Diffusion Model for Cross-Modality Medical Image Translation." In 2024 IEEE International Conference on Medical Artificial Intelligence (MedAI), 380–85. IEEE, 2024. https://doi.org/10.1109/medai62885.2024.00057.
Zhao, Pu, Hong Pan, and Siyu Xia. "MRI-Trans-GAN: 3D MRI Cross-Modality Translation." In 2021 40th Chinese Control Conference (CCC). IEEE, 2021. http://dx.doi.org/10.23919/ccc52363.2021.9550256.
Qi, Jinwei, and Yuxin Peng. "Cross-modal Bidirectional Translation via Reinforcement Learning." In Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18). California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/365.
Tang, Shi, Xinchen Ye, Fei Xue, and Rui Xu. "Cross-Modality Depth Estimation via Unsupervised Stereo RGB-to-Infrared Translation." In ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2023. http://dx.doi.org/10.1109/icassp49357.2023.10095982.
Ye, Jinhui, Wenxiang Jiao, Xing Wang, Zhaopeng Tu, and Hui Xiong. "Cross-modality Data Augmentation for End-to-End Sign Language Translation." In Findings of the Association for Computational Linguistics: EMNLP 2023. Stroudsburg, PA, USA: Association for Computational Linguistics, 2023. http://dx.doi.org/10.18653/v1/2023.findings-emnlp.904.
Maji, Prasenjit, Kunal Dhibar, and Hemanta Kumar Mondal. "Revolutionizing and Enhancing Medical Diagnostics with Conditional GANs for Cross-Modality Image Translation." In 2024 11th International Conference on Computing for Sustainable Global Development (INDIACom). IEEE, 2024. http://dx.doi.org/10.23919/indiacom61295.2024.10498844.
Xu, Siwei, Junhao Liu, and Jing Zhang. "scACT: Accurate Cross-modality Translation via Cycle-consistent Training from Unpaired Single-cell Data." In CIKM '24: The 33rd ACM International Conference on Information and Knowledge Management, 2722–31. New York, NY, USA: ACM, 2024. http://dx.doi.org/10.1145/3627673.3679576.
Cheng, Xize, Tao Jin, Rongjie Huang, Linjun Li, Wang Lin, Zehan Wang, Ye Wang, Huadai Liu, Aoxiong Yin, and Zhou Zhao. "MixSpeech: Cross-Modality Self-Learning with Audio-Visual Stream Mixup for Visual Speech Translation and Recognition." In 2023 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2023. http://dx.doi.org/10.1109/iccv51070.2023.01442.