Journal articles on the topic 'Cross-modality Translation'

Consult the top 50 journal articles for your research on the topic 'Cross-modality Translation.'


1

Holubenko, Nataliia. "Modality from the Cross-cultural Studies Perspective: a Practical Approach to Intersemiotic Translation." World Journal of English Language 13, no. 2 (January 27, 2023): 86. http://dx.doi.org/10.5430/wjel.v13n2p86.

Abstract:
Any scientific question should be understood as a process of dynamic semiosis in search of truth. The revelatory web is goal-oriented (teleological), but with no stable outcome, static method, redefinition, or fixed agent. All outcomes, methods, and agents are temporary “trends” in translation studies that can be abandoned for new ones. The translation can be categorized as a fragmented record or metaphorically as a mosaic, whose components allow the construction of a figurative, diegetic, dramatic world in intersemiotic translation, to be inscribed in the diagram of the narrative. The translation adopts the repetitive and non-repeating behavior patterns of a particular culture, rejecting trendy or outdated translation tools. The same applies to intersemiotic translation with interpretive and reinterpretive meaning. The ideas of the classics about a global approach to semiolinguistics have turned the whole traditional approach to translation studies upside down. The traditional view of the question of intercultural, intersemiotic translation focused on untested dichotomies labeled as dogmatic forms of double self-reflection. Intersemiotic translation offers experimental and temporal responses of a skeptical and evolutionary nature at the boundaries of the translated and untranslatable, correspondence and non-correspondence, conformity and unconformity, the starring role and purpose of intelligence, the dynamism and emotionality of the fallibilist spirit and the fallibilist heart of the translator. It focuses on the concepts of translation and retranslation, the fate of the intercultural text, the fate of the target text, and other semiotic issues of translation in the broadest sense, in the sense of an encoded phenomenon rather than an intersemiotic code. This paper analyzes cultural and linguistic transsemiosis from the perspective of translation and transduction to reveal the essence of intersemiosis. One considers the extrapolarity and complexity phenomenon of modality in terms of cognitive-discursive and semiotic features of its manifestation during translation. In the contemporary scientific paradigm, the linguistic category of modality is considered as a functional-semantic, semantic-pragmatic, semantic-syntactic, syntactic, grammatical or logical category. One defines it as the inner attitude of the narrator to the content. The essence of modality in intersemiotic translation is related to inner linguistic thinking. Accordingly, intersemiotic translation is the recoding of the original text by means of another sign (semiotic) system.
2

Liu, Ajian, Zichang Tan, Jun Wan, Yanyan Liang, Zhen Lei, Guodong Guo, and Stan Z. Li. "Face Anti-Spoofing via Adversarial Cross-Modality Translation." IEEE Transactions on Information Forensics and Security 16 (2021): 2759–72. http://dx.doi.org/10.1109/tifs.2021.3065495.

3

Rabadán, Rosa. "Modality and modal verbs in contrast." Languages in Contrast 6, no. 2 (December 15, 2006): 261–306. http://dx.doi.org/10.1075/lic.6.2.04rab.

Abstract:
This paper addresses the question of how English and Spanish encode the modal meanings of possibility and necessity. English modals and Spanish modal periphrases emerge as ‘cross-linguistic equivalents’ in this area. Data from two monolingual ‘comparable’ corpora — the Bank of English and CREA — reveal (i) differences in grammatical conceptualization in the English and the Spanish traditions and (ii) the relative inadequacy of classifications of modality for a translation-oriented contrast in this area. An English-Spanish contrastive map of the semantics (and expressive means) of modality will be an effective way to make relevant and accurate cross-linguistic information available. It is also the first step towards identifying potential translation pitfalls.
4

Wang, Yu, and Jianping Zhang. "CMMCSegNet: Cross-Modality Multicascade Indirect LGE Segmentation on Multimodal Cardiac MR." Computational and Mathematical Methods in Medicine 2021 (June 5, 2021): 1–14. http://dx.doi.org/10.1155/2021/9942149.

Abstract:
Since Late-Gadolinium Enhancement (LGE) of cardiac magnetic resonance (CMR) visualizes myocardial infarction, and the balanced Steady-State Free Precession (bSSFP) cine sequence can capture cardiac motions and present clear boundaries, multimodal CMR segmentation has played an important role in the assessment of myocardial viability and clinical diagnosis. However, automatic and accurate CMR segmentation remains challenging due to the very small amount of labeled LGE data and the relatively low contrast of LGE. The main purpose of our work is to learn the real/fake bSSFP modality with ground truths in order to indirectly segment the LGE modality of cardiac MR, using a proposed cross-modality multicascade framework consisting of a cross-modality translation network and an automatic segmentation network. In the segmentation stage, a novel multicascade pix2pix network is designed to segment the fake bSSFP sequence obtained from the cross-modality translation network. Moreover, we propose a perceptual loss that measures the difference between ground-truth and prediction features extracted from a pretrained VGG network in the segmentation stage. We evaluate the performance of the proposed method on the multimodal CMR dataset and verify its superiority over other state-of-the-art approaches under different network structures and different types of adversarial losses in terms of Dice accuracy in testing. Therefore, the proposed network is promising for indirect cardiac LGE segmentation in clinical applications.
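
The perceptual loss mentioned in this abstract is a standard construction; a minimal PyTorch sketch is given below, assuming a frozen ImageNet-pretrained VGG-16 truncated at relu3_3 and an L1 distance on the features. The layer choice and distance are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a VGG-based perceptual loss of the kind described above.
# The layer cut-off and the L1 distance are illustrative assumptions.
import torch.nn as nn
from torchvision.models import vgg16

class PerceptualLoss(nn.Module):
    def __init__(self, layer_index=16):          # features[:16] ends at relu3_3 (assumed choice)
        super().__init__()
        features = vgg16(weights="DEFAULT").features[:layer_index]
        for p in features.parameters():
            p.requires_grad = False               # frozen, pretrained feature extractor
        self.features = features.eval()
        self.criterion = nn.L1Loss()

    def forward(self, prediction, ground_truth):
        # Inputs: (N, 3, H, W); single-channel maps would be repeated to 3 channels first.
        return self.criterion(self.features(prediction), self.features(ground_truth))
```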
5

Danni, Yu. "A Genre Approach to the Translation of Political Speeches Based on a Chinese-Italian-English Trilingual Parallel Corpus." SAGE Open 10, no. 2 (April 2020): 215824402093360. http://dx.doi.org/10.1177/2158244020933607.

Abstract:
Using a trilingual parallel corpus, this article investigates the translation of Chinese political speeches into Italian and English, with the aim of exploring cross-linguistic variations regarding translation shifts of key functional elements in the genre of political speeches. The genre-based methodology includes a rhetorical move analysis, which is used to highlight key functional elements of the genre, and a functional grammar analysis of translation shifts of the lexico-grammatical elements identified in the previous stage. The findings show that the core communicative function of the genre is “Proposing deontic statements,” and modality of obligation is essential for the realization of this rhetorical function. Afterwards, the analysis of translation shifts of deontic modality reveals that the English translation is characterized by higher modality value shifts in comparison to the Italian translation. This difference may be related to the degree of autonomy in translation choice and understanding of the communicative purposes of the translation genre. In terms of methodological implications, this functionalist approach attempts to provide insights into the communicative purposes of the translation genre by focusing on how key functional elements are translated.
6

Wu, Kevin E., Kathryn E. Yost, Howard Y. Chang, and James Zou. "BABEL enables cross-modality translation between multiomic profiles at single-cell resolution." Proceedings of the National Academy of Sciences 118, no. 15 (April 7, 2021): e2023070118. http://dx.doi.org/10.1073/pnas.2023070118.

Abstract:
Simultaneous profiling of multiomic modalities within a single cell is a grand challenge for single-cell biology. While there have been impressive technical innovations demonstrating feasibility—for example, generating paired measurements of single-cell transcriptome (single-cell RNA sequencing [scRNA-seq]) and chromatin accessibility (single-cell assay for transposase-accessible chromatin using sequencing [scATAC-seq])—widespread application of joint profiling is challenging due to its experimental complexity, noise, and cost. Here, we introduce BABEL, a deep learning method that translates between the transcriptome and chromatin profiles of a single cell. Leveraging an interoperable neural network model, BABEL can predict single-cell expression directly from a cell’s scATAC-seq and vice versa after training on relevant data. This makes it possible to computationally synthesize paired multiomic measurements when only one modality is experimentally available. Across several paired single-cell ATAC and gene expression datasets in human and mouse, we validate that BABEL accurately translates between these modalities for individual cells. BABEL also generalizes well to cell types within new biological contexts not seen during training. Starting from scATAC-seq of patient-derived basal cell carcinoma (BCC), BABEL generated single-cell expression that enabled fine-grained classification of complex cell states, despite having never seen BCC data. These predictions are comparable to analyses of experimental BCC scRNA-seq data for diverse cell types related to BABEL’s training data. We further show that BABEL can incorporate additional single-cell data modalities, such as protein epitope profiling, thus enabling translation across chromatin, RNA, and protein. BABEL offers a powerful approach for data exploration and hypothesis generation.
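
The interoperable design described here can be pictured as one encoder and one decoder per modality over a shared latent space, so that any encoder can be chained with any decoder. The sketch below is a schematic reconstruction from the abstract with arbitrary MLPs and dimensions; it is not BABEL's actual architecture or training objective.

```python
# Toy sketch of cross-modality translation through a shared latent space, in the
# spirit of the abstract above (not BABEL's actual architecture or objective).
import torch.nn as nn

def mlp(d_in, d_out, hidden=256):
    return nn.Sequential(nn.Linear(d_in, hidden), nn.ReLU(), nn.Linear(hidden, d_out))

class CrossModalTranslator(nn.Module):
    def __init__(self, dim_rna, dim_atac, latent=64):
        super().__init__()
        self.encoders = nn.ModuleDict({"rna": mlp(dim_rna, latent), "atac": mlp(dim_atac, latent)})
        self.decoders = nn.ModuleDict({"rna": mlp(latent, dim_rna), "atac": mlp(latent, dim_atac)})

    def translate(self, x, source, target):
        # Any encoder can be paired with any decoder via the shared latent space.
        return self.decoders[target](self.encoders[source](x))

# Example with hypothetical dimensions: predict expression from chromatin accessibility.
# model = CrossModalTranslator(dim_rna=2000, dim_atac=50000)
# expr_hat = model.translate(atac_profile, source="atac", target="rna")
```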
7

Sharma, Akanksha, and Neeru Jindal. "Cross-Modality Breast Image Translation with Improved Resolution Using Generative Adversarial Networks." Wireless Personal Communications 119, no. 4 (March 29, 2021): 2877–91. http://dx.doi.org/10.1007/s11277-021-08376-5.

8

Mai, Sijie, Haifeng Hu, and Songlong Xing. "Modality to Modality Translation: An Adversarial Representation Learning and Graph Fusion Network for Multimodal Fusion." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (April 3, 2020): 164–72. http://dx.doi.org/10.1609/aaai.v34i01.5347.

Abstract:
Learning joint embedding space for various modalities is of vital importance for multimodal fusion. Mainstream modality fusion approaches fail to achieve this goal, leaving a modality gap which heavily affects cross-modal fusion. In this paper, we propose a novel adversarial encoder-decoder-classifier framework to learn a modality-invariant embedding space. Since the distributions of various modalities vary in nature, to reduce the modality gap, we translate the distributions of source modalities into that of target modality via their respective encoders using adversarial training. Furthermore, we exert additional constraints on embedding space by introducing reconstruction loss and classification loss. Then we fuse the encoded representations using hierarchical graph neural network which explicitly explores unimodal, bimodal and trimodal interactions in multi-stage. Our method achieves state-of-the-art performance on multiple datasets. Visualization of the learned embeddings suggests that the joint embedding space learned by our method is discriminative.
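
The adversarial part of this scheme, in which encoders are trained so that a discriminator cannot distinguish translated source-modality embeddings from target-modality embeddings, might be sketched as follows. Feature dimensions are placeholders, and the reconstruction loss, classification loss and hierarchical graph fusion network from the paper are omitted.

```python
# Simplified sketch of adversarial embedding alignment between modalities.
# Dimensions are placeholders; only the adversarial term is shown.
import torch
import torch.nn as nn

enc_target = nn.Sequential(nn.Linear(300, 128), nn.ReLU(), nn.Linear(128, 64))  # e.g. text
enc_source = nn.Sequential(nn.Linear(74, 128), nn.ReLU(), nn.Linear(128, 64))   # e.g. audio
disc = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))
bce = nn.BCEWithLogitsLoss()

def adversarial_losses(target_feats, source_feats):
    z_tgt, z_src = enc_target(target_feats), enc_source(source_feats)
    # Discriminator: label target-modality embeddings 1, translated source embeddings 0.
    d_loss = bce(disc(z_tgt.detach()), torch.ones(len(z_tgt), 1)) + \
             bce(disc(z_src.detach()), torch.zeros(len(z_src), 1))
    # The source encoder is trained to fool the discriminator.
    g_loss = bce(disc(z_src), torch.ones(len(z_src), 1))
    return d_loss, g_loss
```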
9

Lee, Yong-Hyeok, Dong-Won Jang, Jae-Bin Kim, Rae-Hong Park, and Hyung-Min Park. "Audio–Visual Speech Recognition Based on Dual Cross-Modality Attentions with the Transformer Model." Applied Sciences 10, no. 20 (October 17, 2020): 7263. http://dx.doi.org/10.3390/app10207263.

Abstract:
Since the attention mechanism was introduced in neural machine translation, attention has been combined with the long short-term memory (LSTM) or has replaced the LSTM in the transformer model to overcome the sequence-to-sequence (seq2seq) problems of the LSTM. In contrast to neural machine translation, audio–visual speech recognition (AVSR) may provide improved performance by learning the correlation between audio and visual modalities. Because the audio signal carries richer information than the lip-related video, it is hard to train AVSR attention with balanced modalities. In order to raise the role of the visual modality to the level of the audio modality by fully exploiting input information in learning attention, we propose a dual cross-modality (DCM) attention scheme that utilizes both an audio context vector using a video query and a video context vector using an audio query. Furthermore, we introduce a connectionist-temporal-classification (CTC) loss in combination with our attention-based model to force the monotonic alignments required in AVSR. Recognition experiments on the LRS2-BBC and LRS3-TED datasets showed that the proposed model with the DCM attention scheme and the hybrid CTC/attention architecture achieved at least a relative improvement of 7.3% on average in the word error rate (WER) compared to competing methods based on the transformer model.
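
The dual cross-modality attention can be pictured as two cross-attention blocks with the query and key/value roles swapped between the audio and video streams. The sketch below uses PyTorch's stock multi-head attention; the dimensions, head count and the assumption of frame-synchronised streams are illustrative rather than the paper's exact configuration.

```python
# Rough sketch of dual cross-modality (DCM) attention: each stream queries the other.
# Hyperparameters are illustrative.
import torch.nn as nn

class DualCrossModalityAttention(nn.Module):
    def __init__(self, d_model=256, n_heads=4):
        super().__init__()
        self.a_from_v = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.v_from_a = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, audio, video):
        # audio, video: (batch, time, d_model), assumed already frame-synchronised
        audio_ctx, _ = self.a_from_v(query=video, key=audio, value=audio)  # audio context via video query
        video_ctx, _ = self.v_from_a(query=audio, key=video, value=video)  # video context via audio query
        return audio_ctx, video_ctx
```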
10

Wang, Yabing, Fan Wang, Jianfeng Dong, and Hao Luo. "CL2CM: Improving Cross-Lingual Cross-Modal Retrieval via Cross-Lingual Knowledge Transfer." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 6 (March 24, 2024): 5651–59. http://dx.doi.org/10.1609/aaai.v38i6.28376.

Abstract:
Cross-lingual cross-modal retrieval has garnered increasing attention recently, which aims to achieve the alignment between vision and target language (V-T) without using any annotated V-T data pairs. Current methods employ machine translation (MT) to construct pseudo-parallel data pairs, which are then used to learn a multi-lingual and multi-modal embedding space that aligns visual and target-language representations. However, the large heterogeneous gap between vision and text, along with the noise present in target language translations, poses significant challenges in effectively aligning their representations. To address these challenges, we propose a general framework, Cross-Lingual to Cross-Modal (CL2CM), which improves the alignment between vision and target language using cross-lingual transfer. This approach allows us to fully leverage the merits of multi-lingual pre-trained models (e.g., mBERT) and the benefits of the same modality structure, i.e., smaller gap, to provide reliable and comprehensive semantic correspondence (knowledge) for the cross-modal network. We evaluate our proposed approach on two multilingual image-text datasets, Multi30K and MSCOCO, and one video-text dataset, VATEX. The results clearly demonstrate the effectiveness of our proposed method and its high potential for large-scale retrieval.
11

Lyu, Zhen, Sabin Dahal, Shuai Zeng, Juexin Wang, Dong Xu, and Trupti Joshi. "CrossMP: Enabling Cross-Modality Translation between Single-Cell RNA-Seq and Single-Cell ATAC-Seq through Web-Based Portal." Genes 15, no. 7 (July 5, 2024): 882. http://dx.doi.org/10.3390/genes15070882.

Abstract:
In recent years, there has been a growing interest in profiling multiomic modalities within individual cells simultaneously. One such example is integrating combined single-cell RNA sequencing (scRNA-seq) data and single-cell transposase-accessible chromatin sequencing (scATAC-seq) data. Integrated analysis of diverse modalities has helped researchers make more accurate predictions and gain a more comprehensive understanding than with single-modality analysis. However, generating such multimodal data is technically challenging and expensive, leading to limited availability of single-cell co-assay data. Here, we propose a model for cross-modal prediction between the transcriptome and chromatin profiles in single cells. Our model is based on a deep neural network architecture that learns the latent representations from the source modality and then predicts the target modality. It demonstrates reliable performance in accurately translating between these modalities across multiple paired human scATAC-seq and scRNA-seq datasets. Additionally, we developed CrossMP, a web-based portal allowing researchers to upload their single-cell modality data through an interactive web interface and predict the other type of modality data, using high-performance computing resources plugged at the backend.
12

Zhuge, Huimin, Brian Summa, Jihun Hamm, and J. Quincy Brown. "Deep learning 2D and 3D optical sectioning microscopy using cross-modality Pix2Pix cGAN image translation." Biomedical Optics Express 12, no. 12 (November 12, 2021): 7526. http://dx.doi.org/10.1364/boe.439894.

13

Liu, Yue, and Xinbo Huang. "Efficient Cross-Modality Insulator Augmentation for Multi-Domain Insulator Defect Detection in UAV Images." Sensors 24, no. 2 (January 10, 2024): 428. http://dx.doi.org/10.3390/s24020428.

Abstract:
Regular inspection of the insulator operating status is essential to ensure the safe and stable operation of the power system. Unmanned aerial vehicle (UAV) inspection has played an important role in transmission line inspection, replacing former manual inspection. With the development of deep learning technologies, deep learning-based insulator defect detection methods have drawn more and more attention and gained great improvement. However, former insulator defect detection methods mostly focus on designing complex refined network architecture, which will increase inference complexity in real applications. In this paper, we propose a novel efficient cross-modality insulator augmentation algorithm for multi-domain insulator defect detection to mimic real complex scenarios. It also alleviates the overfitting problem without adding the inference resources. The high-resolution insulator cross-modality translation (HICT) module is designed to generate multi-modality insulator images with rich texture information to eliminate the adverse effects of existing modality discrepancy. We propose the multi-domain insulator multi-scale spatial augmentation (MMA) module to simultaneously augment multi-domain insulator images with different spatial scales and leverage these fused images and location information to help the target model locate defects with various scales more accurately. Experimental results prove that the proposed cross-modality insulator augmentation algorithm can achieve superior performance in public UPID and SFID insulator defect datasets. Moreover, the proposed algorithm also gives a new perspective for improving insulator defect detection precision without adding inference resources, which is of great significance for advancing the detection of transmission lines.
14

Zhang, Yi, Shizhou Zhang, Ying Li, and Yanning Zhang. "Single- and Cross-Modality Near Duplicate Image Pairs Detection via Spatial Transformer Comparing CNN." Sensors 21, no. 1 (January 2, 2021): 255. http://dx.doi.org/10.3390/s21010255.

Abstract:
Recently, both single modality and cross modality near-duplicate image detection tasks have received wide attention in the community of pattern recognition and computer vision. Existing deep neural networks-based methods have achieved remarkable performance in this task. However, most of the methods mainly focus on the learning of each image from the image pair, thus leading to less use of the information between the near duplicate image pairs to some extent. In this paper, to make more use of the correlations between image pairs, we propose a spatial transformer comparing convolutional neural network (CNN) model to compare near-duplicate image pairs. Specifically, we firstly propose a comparing CNN framework, which is equipped with a cross-stream to fully learn the correlation information between image pairs, while considering the features of each image. Furthermore, to deal with the local deformations led by cropping, translation, scaling, and non-rigid transformations, we additionally introduce a spatial transformer comparing CNN model by incorporating a spatial transformer module to the comparing CNN architecture. To demonstrate the effectiveness of the proposed method on both the single-modality and cross-modality (Optical-InfraRed) near-duplicate image pair detection tasks, we conduct extensive experiments on three popular benchmark datasets, namely CaliforniaND (ND means near duplicate), Mir-Flickr Near Duplicate, and TNO Multi-band Image Data Collection. The experimental results show that the proposed method can achieve superior performance compared with many state-of-the-art methods on both tasks.
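
The spatial transformer module the authors add to their comparing CNN follows the usual pattern: a small localisation network predicts an affine warp that is then applied to the feature map. A minimal sketch under assumed layer sizes is shown below; it is not the paper's exact module.

```python
# Minimal spatial transformer of the kind described above: a localisation network
# predicts an affine transform that resamples the input. Layer sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialTransformer(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.loc = nn.Sequential(
            nn.Conv2d(channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 6))
        # Initialise to the identity transform so training starts from "no warp".
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

    def forward(self, x):
        theta = self.loc(x).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)
```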
15

Bilá, Magdaléna, and Ingrida Vaňková. "Tourist Notices in the Spotlight of Linguistic Landscape and Translation Studies." Russian Journal of Linguistics 23, no. 3 (December 15, 2019): 681–97. http://dx.doi.org/10.22363/2312-9182-2019-23-3-681-697.

Abstract:
In the 21st century, even local tourist spots are globally accessible and need to be communicated in a globally shared language, a lingua franca (Ben-Rafael & Ben Rafael 2015). The most obvious language of choice among speakers from different linguacultural backgrounds is English. When translating notices in national parks into English, translators should predominantly consider the function of the TT (target text) and the target audience (not exclusively L1 speakers of English but speakers of a variety of languacultures communicating in English as a lingua franca (ELF)), and opt for translation solutions that would account for visitors representing a diversity of languacultures. The present paper aims at finding out what modifications in the translation of visitors’ rules may be necessary if the target readership is to be considered, and at explicating the translation process through applying a transdisciplinary perspective of ELF studies, linguistic landscape (LL) studies, cross-field studies on conceptualization, translanguaging and translation studies. The study shows that these modifications affect the significance and hierarchy of the four principles operating in LL (presentation-of-self, power-relations, good reasons and collective-identity) and are projected into specific LL-tailored translation solutions (shifts in modality, lexis, style and discourse markers). The modifications are achievable in ELF, which, as a form and function, a de-regionalized and de-culturalized artifact of the global village, is capable of catering for a variety of languacultures with their specific societal conventions, practices, and the whole explicit and implicit axio-sphere.
16

Ivaska, Laura, Sakari Katajamäki, Tiina Holopainen, Hanna Karhu, Lauri A. Niskanen, and Outi Paloposki. "Trekstuaaliset tilat." Mikael: Kääntämisen ja tulkkauksen tutkimuksen aikakauslehti 13 (April 1, 2020): 124–37. http://dx.doi.org/10.61200/mikael.129325.

Abstract:
Translation studies and textual scholarship have much common ground. In this paper, we set out to explore how three of their central concepts, translation, transmission and text, can be approached from a multidisciplinary point of view. Combining these three central terms, we call this approach trextual. This paper is based on the workshop on trextuality held at the KäTu Symposium in Tampere in 2019. In the workshop, the discussion built upon an introduction, which examined the foundations of trextuality, and four papers, which addressed genetic translation criticism, retranslation, polyphony and the translation of intertextuality, as well as the multi-modality of audiovisual translation, respectively. The workshop revealed that juxtaposing two text-oriented disciplines, translation studies and textual studies, and comparing their similarities and differences exposes unexplored areas and weaknesses in their axiomatic foundations. In particular, the papers presented in the workshop invited us to reconsider transmission, definitions of text, and source text-target text pairs in different contexts. The workshop provided a starting point for further trextual studies exploring such cross-disciplinary questions.
17

Wang, Suzhe, Xueying Zhang, Haisheng Hui, Fenglian Li, and Zelin Wu. "Multimodal CT Image Synthesis Using Unsupervised Deep Generative Adversarial Networks for Stroke Lesion Segmentation." Electronics 11, no. 16 (August 20, 2022): 2612. http://dx.doi.org/10.3390/electronics11162612.

Abstract:
Deep learning-based techniques can obtain high precision for multimodal stroke segmentation tasks. However, the performance often requires a large number of training examples. Additionally, existing data extension approaches for the segmentation are less efficient in creating much more realistic images. To overcome these limitations, an unsupervised adversarial data augmentation mechanism (UTC-GAN) is developed to synthesize multimodal computed tomography (CT) brain scans. In our approach, the CT samples generation and cross-modality translation differentiation are accomplished simultaneously by integrating a Siamesed auto-encoder architecture into the generative adversarial network. In addition, a Gaussian mixture translation module is further proposed, which incorporates a translation loss to learn an intrinsic mapping between the latent space and the multimodal translation function. Finally, qualitative and quantitative experiments show that UTC-GAN significantly improves the generation ability. The stroke dataset enriched by the proposed model also provides a superior improvement in segmentation accuracy, compared with the performance of current competing unsupervised models.
18

Колосов, С. А., and Я. В. Туманов. "ONOMATOPOEIC LEXICAL UNITS IN MANGA COMICS: AUTHORIAL USE AND IMPLICATIONS FOR TRANSLATION." Вестник Тверского государственного университета. Серия: Филология, no. 2(77) (June 5, 2023): 182–90. http://dx.doi.org/10.26456/vtfilol/2023.2.182.

Abstract:
The article discusses the meaning-related, functional and stylistic aspects of the use of onomatopoeic lexical units in Japanese manga comics. The authorial use of onomatopoeia in manga and its implications for translation are of particular interest. Onomatopoeia is effectively used not only to verbally represent the sound modality of the storyline but also to enhance the speech portrayal of characters, produce wordplay, and generate cross references and semantic links within the plot.
19

McNaughton, Jake, Justin Fernandez, Samantha Holdsworth, Benjamin Chong, Vickie Shim, and Alan Wang. "Machine Learning for Medical Image Translation: A Systematic Review." Bioengineering 10, no. 9 (September 12, 2023): 1078. http://dx.doi.org/10.3390/bioengineering10091078.

Abstract:
Background: CT scans are often the first and only form of brain imaging that is performed to inform treatment plans for neurological patients due to its time- and cost-effective nature. However, MR images give a more detailed picture of tissue structure and characteristics and are more likely to pick up abnormalities and lesions. The purpose of this paper is to review studies which use deep learning methods to generate synthetic medical images of modalities such as MRI and CT. Methods: A literature search was performed in March 2023, and relevant articles were selected and analyzed. The year of publication, dataset size, input modality, synthesized modality, deep learning architecture, motivations, and evaluation methods were analyzed. Results: A total of 103 studies were included in this review, all of which were published since 2017. Of these, 74% of studies investigated MRI to CT synthesis, and the remaining studies investigated CT to MRI, Cross MRI, PET to CT, and MRI to PET. Additionally, 58% of studies were motivated by synthesizing CT scans from MRI to perform MRI-only radiation therapy. Other motivations included synthesizing scans to aid diagnosis and completing datasets by synthesizing missing scans. Conclusions: Considerably more research has been carried out on MRI to CT synthesis, despite CT to MRI synthesis yielding specific benefits. A limitation on medical image synthesis is that medical datasets, especially paired datasets of different modalities, are lacking in size and availability; it is therefore recommended that a global consortium be developed to obtain and make available more datasets for use. Finally, it is recommended that work be carried out to establish all uses of the synthesis of medical scans in clinical practice and discover which evaluation methods are suitable for assessing the synthesized images for these needs.
20

Zhao, Rui, Liang Zhang, Biao Fu, Cong Hu, Jinsong Su, and Yidong Chen. "Conditional Variational Autoencoder for Sign Language Translation with Cross-Modal Alignment." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 17 (March 24, 2024): 19643–51. http://dx.doi.org/10.1609/aaai.v38i17.29937.

Abstract:
Sign language translation (SLT) aims to convert continuous sign language videos into textual sentences. As a typical multi-modal task, there exists an inherent modality gap between sign language videos and spoken language text, which makes the cross-modal alignment between visual and textual modalities crucial. However, previous studies tend to rely on an intermediate sign gloss representation to help alleviate the cross-modal problem, thereby neglecting the alignment across modalities, which may lead to compromised results. To address this issue, we propose a novel framework based on a Conditional Variational Autoencoder for SLT (CV-SLT) that facilitates direct and sufficient cross-modal alignment between sign language videos and spoken language text. Specifically, our CV-SLT consists of two paths with two Kullback-Leibler (KL) divergences to regularize the outputs of the encoder and decoder, respectively. In the prior path, the model solely relies on visual information to predict the target text; whereas in the posterior path, it simultaneously encodes visual information and textual knowledge to reconstruct the target text. The first KL divergence optimizes the conditional variational autoencoder and regularizes the encoder outputs, while the second KL divergence performs a self-distillation from the posterior path to the prior path, ensuring the consistency of decoder outputs. We further enhance the integration of textual information into the posterior path by employing a shared Attention Residual Gaussian Distribution (ARGD), which considers the textual information in the posterior path as a residual component relative to the prior path. Extensive experiments conducted on public datasets demonstrate the effectiveness of our framework, achieving new state-of-the-art results while significantly alleviating the cross-modal representation discrepancy. The code and models are available at https://github.com/rzhao-zhsq/CV-SLT.
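
The two-path objective sketched in this abstract (a prior path conditioned on video only, a posterior path conditioned on video plus text, and two KL terms tying them together) can be written compactly as below. Diagonal Gaussian latents and unit loss weights are assumptions made for illustration; this is an interpretation of the abstract, not the released CV-SLT code.

```python
# Sketch of the two-KL objective: one KL regularises the conditional VAE (posterior
# vs. prior latent), the other distils the posterior-path decoder outputs into the
# prior path. Diagonal Gaussians and unit weights are assumed for illustration.
import torch.nn.functional as F

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    # KL( N(mu_q, var_q) || N(mu_p, var_p) ), summed over latent dims, averaged over batch
    var_q, var_p = logvar_q.exp(), logvar_p.exp()
    kl = 0.5 * (logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)
    return kl.sum(dim=-1).mean()

def two_path_loss(ce_posterior, post_stats, prior_stats, logits_posterior, logits_prior):
    kl_latent = gaussian_kl(*post_stats, *prior_stats)           # CVAE regulariser
    kl_distil = F.kl_div(F.log_softmax(logits_prior, dim=-1),    # prior path mimics
                         F.softmax(logits_posterior.detach(), dim=-1),
                         reduction="batchmean")                  # the posterior path
    return ce_posterior + kl_latent + kl_distil
```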
21

Zhulavska, Olha, and Alla Martynyuk. "Linguacultural isomorphism / anisomorphism and synesthetic metaphor translation procedures." International Journal of Translation and Interpreting Research 15, no. 1 (February 28, 2023): 275–87. http://dx.doi.org/10.12807/ti.115201.2023.a14.

Abstract:
In this paper, we combine analytical tools of conceptual metaphor theory with the affordances of corpus-based linguistics and quantitative analysis to investigate the translation of synesthetic metaphors found in Donna Tartt’s novels into Ukrainian. A synesthetic metaphor is addressed as a linguistic expression representing a sensation of one modality in terms of another. We claim that the choice of a translation procedure (retention, removal, omission, modification, or addition) is partly determined by linguacultural similarity (i.e. isomorphism) or specificity (i.e. anisomorphism) of the cross-sensory mappings that underlie the source-text and target-text linguistic expressions, and partly by the translator’s free choice, which cannot be explained by objective reasons. The obtained results show the following trends. Original metaphors as well as conventional metaphors based on isomorphic cross-sensory mappings are mostly retained. Conventional metaphors that rest on anisomorphic mappings are mostly modified or removed/omitted. However, the translator can choose to remove/modify a synesthetic metaphor that rests on an isomorphic mapping. Added synesthetic metaphors usually root in isomorphic mappings. The applied methodology minimizes subjectivity of judgment in differentiating between the compulsory (i.e. imposed by the linguacultural specificity) and free strategic choices, which contributes to the potential impact of this research.
22

Penney, Catherine G. "Interactions between Presentation Modality and Encoding Task in Frequency Judgements." Quarterly Journal of Experimental Psychology Section A 46, no. 3 (August 1993): 517–32. http://dx.doi.org/10.1080/14640749308401061.

Abstract:
In two experiments, presentation modality of a list of items and encoding task were varied, and subjects judged the frequency with which certain words had been presented in the list. In Experiment 1, auditory presentation led to higher judgements of frequency than did visual presentation when subjects counted the consonants in the words but not when they rated imageability or when they kept a running count of the number of presentations of each word. In Experiment 2, encoding questions about the rhyme or spelling patterns of target words produced opposite effects for auditory and visual items. The results are interpreted as indicating that cross-modal translation during encoding produces a bias towards higher-frequency judgements and may also produce better frequency discrimination.
23

Shlesinger, Miriam, and Noam Ordan. "More spoken or more translated?" Target. International Journal of Translation Studies 24, no. 1 (September 7, 2012): 43–60. http://dx.doi.org/10.1075/target.24.1.04shl.

Abstract:
Since the early 1990s, with the advance of computerized corpora, translation scholars have been using corpus-based methodologies to look into the possible existence of overriding patterns (tentatively described as universals or as laws) in translated texts. The application of such methodologies to interpreted texts has been much slower in developing than in the case of translated ones, but significant progress has been made in recent years. After presenting the fundamental methodological hurdles—and advantages—of working on machine-readable (transcribed) oral corpora, we present and discuss several recent studies using cross-modal comparisons, and examine the viability of using interpreted outputs to explore the features that set simultaneous interpreting apart from other forms of translation. We then set out to test the hypothesis that modality may exert a stronger effect than ontology—i.e. that being oral (vs. written) is a more powerful influence than being translated (vs. original).
24

Javadi, Mohammad Saleh, Zulaikha Kadim, Hon Hock Woon, Khairunnisa Mohamed Johari, and Norshuhada Samudin. "An Automatic Robust Image Registration Algorithm for Aerial Mapping." International Journal of Image and Graphics 15, no. 02 (April 2015): 1540002. http://dx.doi.org/10.1142/s0219467815400021.

Abstract:
Aerial mapping is attracting more attention due to the development of unmanned aerial vehicles (UAVs), their availability, and the many applications that require a wide aerial photograph of a region at a specific time. Cross-modality, as well as translation, rotation, scale change and illumination, are the main challenges in aerial image registration. This paper concentrates on an algorithm for aerial image registration that overcomes the aforementioned issues. The proposed method is able to automatically sample and align the sensed images to form the final map. The results are compared with satellite images and show reasonable performance with geometrically correct registration.
25

MCSHANE, MARJORIE, SERGEI NIRENBURG, and RON ZACHARSKI. "Mood and modality: out of theory and into the fray." Natural Language Engineering 10, no. 1 (February 23, 2004): 57–89. http://dx.doi.org/10.1017/s1351324903003279.

Abstract:
The topic of mood and modality (MOD) is a difficult aspect of language description because, among other reasons, the inventory of modal meanings is not stable across languages, moods do not map neatly from one language to another, modality may be realised morphologically or by free-standing words, and modality interacts in complex ways with other modules of the grammar, like tense and aspect. Describing MOD is especially difficult if one attempts to develop a unified approach that not only provides cross-linguistic coverage, but is also useful in practical natural language processing systems. This article discusses an approach to MOD that was developed for and implemented in the Boas Knowledge-Elicitation (KE) system. Boas elicits knowledge about any language, L, from an informant who need not be a trained linguist. That knowledge then serves as the static resources for an L-to-English translation system. The KE methodology used throughout Boas is driven by a resident inventory of parameters, value sets, and means of their realisation for a wide range of language phenomena. MOD is one of those parameters, whose values are the inventory of attested and not yet attested moods (e.g. indicative, conditional, imperative), and whose realisations include flective morphology, agglutinating morphology, isolating morphology, words, phrases and constructions. Developing the MOD elicitation procedures for Boas amounted to wedding the extensive theoretical and descriptive research on MOD with practical approaches to guiding an untrained informant through this non-trivial task. We believe that our experience in building the MOD module of Boas offers insights not only into cross-linguistic aspects of MOD that have not previously been detailed in the natural language processing literature, but also into KE methodologies that could be applied more broadly.
26

Sanfelici, Rachele, Dominic Dwyer, Linda A. Antonucci, and Nikolaos Koutsouleris. "T107. INDIVIDUALIZED DIAGNOSTIC AND PROGNOSTIC MODELS FOR PATIENTS WITH PSYCHOSIS RISK SYNDROMES: A META-ANALYTIC VIEW ON THE STATE-OF-THE-ART." Schizophrenia Bulletin 46, Supplement_1 (April 2020): S271—S272. http://dx.doi.org/10.1093/schbul/sbaa029.667.

Abstract:
Background: The Clinical High Risk (CHR) paradigm has led research into the biological and clinical underpinnings of the risk for psychosis, aiming at predicting and possibly preventing transition to the disorder. Statistical methods like machine learning (ML) and Cox proportional hazard regression have enabled the construction of diagnostic and prognostic models based on different data modalities, e.g., clinical risk factors, neurocognitive performance, or neurobiological data. However, their translation to clinical practice is still hindered by the heterogeneity of both CHR populations and methodologies. One way to tackle this issue is to use a meta-analytic approach to quantitatively investigate models’ performance across different outcomes, algorithms and data modalities. The aim of this work was thus to investigate the effects of (I) data modality, (II) type of algorithm, and (III) validation paradigm on prognostic and diagnostic models’ performance. We expect our results to facilitate a deeper understanding of the state of the art within the CHR research field and clarify the methodological bottlenecks that impede the clinical translation of diagnostic and prognostic tools. Methods: We systematically reviewed the literature on diagnostic and prognostic models built on Cox regression and ML. Further, we conducted a meta-analysis of accuracy performance investigating the effects of the following moderators: age, sex, data modality, algorithm, presence of cross-validation (CV), being a multisite study, and year of publication. For prognostic studies we also investigated follow-up time and prognostic target. All analyses were conducted with R v3.6.0, and results were corrected for False Discovery Rate. Results: 44 articles were included, for a total of 3707 individuals in prognostic and 1052 in diagnostic studies (572 CHR and 480 healthy controls, HC). CHR could be classified against HC with 78% sensitivity (95%-CI: 63%-83%) and 77% specificity (95%-CI: 68%-84%). Across prognostic models, sensitivity reached 67% (95%-CI: 63%-70%) and specificity 78% (95%-CI: 73%-82%). Our results point to a higher sensitivity of ML models compared to Cox regression in prognostic studies (p = .009; χ2(2) = 6.96, p = 0.031). This effect was collinear with that of CV, due to the overlap of this factor with algorithm type. Notably, there was a publication bias for prognostic studies (R2 = 0.26, p < .001), yet no significant effects of data modality, CHR or CV type, prognostic target, or any other confounding variable (e.g., age distribution, sex, year of publication or follow-up interval time) on accuracy performance. Discussion: Our results point to good model performance overall and no effects of data modality or patient population. ML outperformed Cox regression in prognostic studies, which, however, showed a publication bias. These results may be driven by the substantial clinical and methodological heterogeneity currently affecting several aspects of the CHR field. A comprehensive change within the current CHR paradigm is required to enable the clinical application of diagnostic and prognostic models for the at-risk state. First, the field requires study design harmonization, which demands, for instance, reliable methodological approaches like cross- or external validation to ensure generalizability. Second, efforts may be made to unify the CHR definition, both theoretically and practically, and also to embrace relevant non-transition outcomes in order to broaden the prognostic scope. Future studies are needed to investigate whether harmonising procedures within precision psychiatry will lead to more reliable and reproducible translational research in the field.
27

Zhang, Ruien. "Research on the Principles of Translation and Dubbing of Translated Films under the Threshold of Intercultural Perspective." Studies in Linguistics and Literature 8, no. 2 (April 11, 2024): p48. http://dx.doi.org/10.22158/sll.v8n2p48.

Abstract:
Amidst the deepening tide of globalization, cultural exchange emerges as a pivotal conduit knitting together disparate nations and peoples. The art of dubbing, serving as a significant modality of cultural dissemination, finds its role in intercultural communication increasingly conspicuous. Dubbing transcends mere linguistic conversion; it engenders the transmission and reshaping of cultures, constituting a process of cultural creation in its own right. Consequently, the exploration of intercultural principles in dubbing assumes paramount significance in fostering global cultural exchange. Against the backdrop of globalization, the dubbing of films faces unprecedented challenges and opportunities. On one hand, audiences from diverse cultural backgrounds exhibit varying degrees of comprehension and acceptance towards works, necessitating dubbing to not only ensure linguistic accuracy but also embrace cultural adaptability and innovation. On the other hand, with technological advancements and the application of innovative strategies, the methods and means of dubbing are in constant evolution, offering myriad possibilities for cross-cultural dissemination.
28

Prasad, Seema, Shivam Puri, Keerthana Kapiley, Riya Rafeekh, and Ramesh Mishra. "Looking Without Knowing: Evidence for Language-Mediated Eye Movements to Masked Words in Hindi-English Bilinguals." Languages 10, no. 2 (February 19, 2025): 32. https://doi.org/10.3390/languages10020032.

Abstract:
Cross-linguistic activation has been frequently demonstrated in bilinguals through eye movements using the visual world paradigm. In this study, we explored if such activations could operate below thresholds of awareness, at least in the visual modality. Participants listened to a spoken word in Hindi or English and viewed a display containing masked printed words. One of the printed words was a phonological cohort of the translation equivalent of the spoken word (TE cohort). Previous studies using this paradigm with clearly visible words on a similar sample have demonstrated robust activation of TE cohorts. We tracked eye movements to a blank screen where the masked written words had appeared accompanied by spoken words. Analyses of fixation proportions and dwell times revealed that participants looked more often and for longer duration at quadrants that contained the TE cohorts compared to distractors. This is one of the few studies to show that cross-linguistic activation occurs even with masked visual information. We discuss the implications for bilingual parallel activation and unconscious processing of habitual visual information.
29

Klochkova, Elena, and Tatiana Evtushenko. "The peculiarities of realization of modal values of necessity and epistemic possibility in the Russian and Chinese languages." Litera, no. 2 (February 2021): 42–52. http://dx.doi.org/10.25136/2409-8698.2021.2.34928.

Abstract:
This article examines the language-specific parameters of the linguistic means for expressing the modal values of necessity and epistemic possibility in the Russian and Chinese languages. Particular attention is given to the analysis of the quantitative and functional-semantic characteristics of these means of expressing modality in Russian and Chinese from a comparative perspective. The goal of this research lies in examining how the functionality of the means of objective and subjective modality in the Russian language is reflected in the Chinese language. The research is based on the material of a user-built parallel corpus, which contains Russian and Chinese literary texts with translations, as well as on the results of a student poll conducted to determine native speakers’ meta-representations of the functionality of a number of linguistic units of the corresponding microfields. The results of the comprehensive analysis demonstrate that the core and periphery of the functional-semantic fields of necessity and epistemic possibility in the Russian and Chinese languages are similar with regard to the types of linguistic units that comprise the field (the core zone consists of modal verbs and modal words); however, the allocation of elements within the field differs. From the functional-semantic perspective, the author identifies a group of modal values with accurate cross-lingual correspondences and a group of words with different meanings, and also indicates the semantic lacunas. The survey results confirm a varying degree of consensus among native speakers on the value of modality markers.
30

COSTA, Samantha, Tarryn GUINESS, and Anusch YAZDANI. "COVID 19- The Rise of Telehealth Within the Specialist Practice: Is it Here to Stay?" Fertility & Reproduction 04, no. 03n04 (September 2022): 130. http://dx.doi.org/10.1142/s2661318222740425.

Abstract:
Background: The World Health Organization declared COVID-19 a global pandemic on March 11, 2020, and the world as we knew it came to a stop as Australia entered “Lock Down”, forcing a change in how consultations were delivered to patients and driving the rise of telehealth in private specialist practice. Aim: To investigate the impact of COVID-19 and telehealth on patient access to care in the specialist private practice sector. Method: A retrospective single-unit cross-sectional analysis of patient consultation type, comparing patient access to telehealth pre-pandemic (FY19) and in the subsequent pandemic financial years (FY20/21, FY21/22). Results: Initial telehealth appointments increased by 72% in the 2020/21 financial year and by 288% in the 2021/22 financial year. Follow-up appointments via telehealth also increased, by 106% and 160% respectively. Greater access to telehealth appointments was accompanied by a decreasing trend in face-to-face appointments, both for initial consultations and for follow-up appointments. Conclusion: The use of telehealth appointments rose significantly during the timeframes when access to the clinic was reduced due to lockdown, but telehealth continues to be an important modality of access to care for both clinicians and patients. Further research needs to address the enablers and detractors of telehealth and the translation of traditional face-to-face consultations into this modality.
31

Ruskan, Anna, and Audronė Šolienė. "Evidential and epistemic adverbials in Lithuanian: evidence from intra-linguistic and cross-linguistic analysis." Kalbotyra 70, no. 70 (January 9, 2018): 127. http://dx.doi.org/10.15388/klbt.2017.11197.

Abstract:
In the recent decade the realisations of evidentiality and epistemic modality in European languages have received a great scholarly interest and resulted in important investigations concerning the relation between evidentiality and epistemic modality, their means of expression and meaning extensions in various types of discourse. The present paper deals with the adverbials akivaizdžiai ‘evidently’, aiškiai ‘clearly’, ryškiai ‘visibly, clearly’, matyt ‘apparently, evidently’ and regis ‘seemingly’, which derive from the source domain of perception, and the epistemic necessity adverbials tikriausiai/veikiausiai/greičiausiai ‘most probably’, būtinai ‘necessarily’ and neabejotinai ‘undoubtedly’. The aim of the paper is to explore the morphosyntactic properties of the adverbials when they are used as evidential or epistemic markers and compare the distribution of their evidential and epistemic functions in Lithuanian fiction, news and academic discourse. The data have been drawn from the Corpus of the Contemporary Lithuanian Language, the Corpus of Academic Lithuanian and the bidirectional translation corpus ParaCorpEN→LT→EN (Šolienė 2012, 2015). The quantitative findings reveal distributional differences of the adverbials under study across different types of discourse. Functional variation of the evidential perception-based adverbials is determined to a great extent by the degree of epistemic commitment, evidenced not only by intra-linguistic but also cross-linguistic data. The non-perception based adverbials tikriausiai/veikiausiai/greičiausiai ‘most probably’, būtinai ‘necessarily’ and neabejotinai ‘undoubtedly’ are the primary adverbial markers of epistemic necessity in Lithuanian, though some of them may have evidential meaning extensions. A parallel and comparable corpus-based analysis has once again proved to be a very efficient tool for diagnosing language-specific features and describing an inventory used to code language-specific evidential and epistemic meanings.
32

Stanzione, Arnaldo, Roberta Galatola, Renato Cuocolo, Valeria Romeo, Francesco Verde, Pier Paolo Mainenti, Arturo Brunetti, and Simone Maurea. "Radiomics in Cross-Sectional Adrenal Imaging: A Systematic Review and Quality Assessment Study." Diagnostics 12, no. 3 (February 24, 2022): 578. http://dx.doi.org/10.3390/diagnostics12030578.

Abstract:
In this study, we aimed to systematically review the current literature on radiomics applied to cross-sectional adrenal imaging and assess its methodological quality. Scopus, PubMed and Web of Science were searched to identify original research articles investigating radiomics applications on cross-sectional adrenal imaging (search end date February 2021). For qualitative synthesis, details regarding study design, aim, sample size and imaging modality were recorded as well as those regarding the radiomics pipeline (e.g., segmentation and feature extraction strategy). The methodological quality of each study was evaluated using the radiomics quality score (RQS). After duplicate removal and selection criteria application, 25 full-text articles were included and evaluated. All were retrospective studies, mostly based on CT images (17/25, 68%), with manual (19/25, 76%) and two-dimensional segmentation (13/25, 52%) being preferred. Machine learning was paired to radiomics in about half of the studies (12/25, 48%). The median total and percentage RQS scores were 2 (interquartile range, IQR = −5–8) and 6% (IQR = 0–22%), respectively. The highest and lowest scores registered were 12/36 (33%) and −5/36 (0%). The most critical issues were the absence of proper feature selection, the lack of appropriate model validation and poor data openness. The methodological quality of radiomics studies on adrenal cross-sectional imaging is heterogeneous and lower than desirable. Efforts toward building higher quality evidence are essential to facilitate the future translation into clinical practice.
33

Zhao, Zikang, Yujia Zhang, Tianjun Wu, Hao Guo, and Yao Li. "Emotionally Controllable Talking Face Generation from an Arbitrary Emotional Portrait." Applied Sciences 12, no. 24 (December 14, 2022): 12852. http://dx.doi.org/10.3390/app122412852.

Abstract:
With the continuous development of cross-modality generation, audio-driven talking face generation has made substantial advances in terms of speech content and mouth shape, but existing research on talking face emotion generation is still relatively unsophisticated. In this work, we present Emotionally Controllable Talking Face Generation from an Arbitrary Emotional Portrait to synthesize lip-sync and an emotionally controllable high-quality talking face. Specifically, we take a facial reenactment perspective, using facial landmarks as an intermediate representation driving the expression generation of talking faces through the landmark features of an arbitrary emotional portrait. Meanwhile, decoupled design ideas are used to divide the model into three sub-networks to improve emotion control. They are the lip-sync landmark animation generation network, the emotional landmark animation generation network, and the landmark-to-animation translation network. The two landmark animation generation networks are responsible for generating content-related lip area landmarks and facial expression landmarks to correct the landmark sequences of the target portrait. Following this, the corrected landmark sequences and the target portrait are fed into the translation network to generate an emotionally controllable talking face. Our method controls the expressions of talking faces by driving the emotional portrait images while ensuring the generation of animated lip-sync, and can handle new audio and portraits not seen during training. A multi-perspective user study and extensive quantitative and qualitative evaluations demonstrate the superiority of the system in terms of visual emotion representation and video authenticity.
APA, Harvard, Vancouver, ISO, and other styles
34

Varghese, Tomy, J. A. Zagzebski, P. Rahko, and C. S. Breburda. "Ultrasonic Imaging of Myocardial Strain Using Cardiac Elastography." Ultrasonic Imaging 25, no. 1 (January 2003): 1–16. http://dx.doi.org/10.1177/016173460302500101.

Full text
Abstract:
Clinical assessment of myocardial ischemia based on visually assessed wall motion scoring from echocardiography is semiquantitative, operator dependent, and heavily weighted by operator experience and expertise. Cardiac motion estimation methods such as tissue Doppler imaging, used to assess myocardial muscle velocity, provide quantitative parameters such as the strain rate and strain derived from Doppler velocity. However, tissue Doppler imaging does not differentiate between active contraction and simple rotation or translation of the heart wall, nor does it differentiate tethered (passively following) tissue from actively contracting tissue. In this paper, we present a strain imaging modality called cardiac elastography that provides two-dimensional strain information. A method for obtaining and displaying both directional and magnitude cardiac elastograms, and for displaying strain over the entire cross-section of the heart, is described. Elastograms from a patient with coronary artery disease are compared with those from a healthy volunteer. Though observational, the differences suggest that cardiac elastography may be a useful tool for assessment of myocardial function. The method is two-dimensional, operates in real time, and avoids the observer-dependent judgment of myocardial contraction and relaxation required by conventional echocardiography.
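For readers unfamiliar with strain imaging, the quantity displayed in an elastogram is essentially a spatial derivative of tissue displacement. The following NumPy sketch computes axial strain from an already-estimated displacement field; the displacement estimation itself (e.g., by echo cross-correlation) and the directional/magnitude display used in the paper are not reproduced here.

```python
# Illustrative sketch: axial strain as the spatial gradient of an axial
# displacement field. The displacement field is assumed to have been
# estimated beforehand; this is not the authors' processing pipeline.
import numpy as np

def axial_strain(displacement_mm, pixel_spacing_mm=0.1):
    """Differentiate displacement along the axial (row) direction."""
    # np.gradient with a scalar spacing and axis=0 returns d(displacement)/d(depth)
    return np.gradient(displacement_mm, pixel_spacing_mm, axis=0)

# Synthetic example: linearly increasing displacement -> roughly 1% uniform strain
depth_px, width_px = 200, 120
displacement = np.tile(np.linspace(0.0, 0.2, depth_px)[:, None], (1, width_px))
strain = axial_strain(displacement, pixel_spacing_mm=0.1)
print(strain.mean())  # ~0.01 (dimensionless, mm/mm)
```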
APA, Harvard, Vancouver, ISO, and other styles
35

Martin-Lac, Victor, Jacques Petit-Frere, and Jean-Marc Le Caillec. "A Generic, Multimodal Geospatial Data Alignment System for Aerial Navigation." Remote Sensing 15, no. 18 (September 13, 2023): 4510. http://dx.doi.org/10.3390/rs15184510.

Full text
Abstract:
We present a template-matching algorithm based on local descriptors for aligning two geospatial products of different modalities with a large area asymmetry. Our system is generic with regard to the modalities of the geospatial products and is applicable to the self-localization of aerial devices such as drones and missiles. The algorithm consists of finding a superposition such that the average dissimilarity of the superposed points is minimal. The dissimilarity of two points belonging to two different geospatial products is the distance between their respective local descriptors, which are learned. We performed experiments that consisted of estimating a translation aligning optical (Pléiades) and SAR (Miranda) images onto vector data (OpenStreetMap), onto optical images (DOP) and onto SAR images (KOMPSAT-5). Each remote sensing image to be aligned covered 0.64 km2, and each reference geospatial product spanned 225 km2. We conducted a total of 381 alignment experiments, with six unique modality combinations. In aggregate, the precision reached was finer than 10 m with 72% probability and finer than 20 m with 96% probability, considerably better than traditional methods such as normalized cross-correlation and mutual information.
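The core matching step, choosing the translation that minimises the average dissimilarity of superposed local descriptors, can be written compactly. The brute-force sketch below uses random stand-in descriptors; the learned descriptors and any coarse-to-fine machinery of the actual system are not shown.

```python
# Sketch of descriptor-based template matching by exhaustive translation search:
# the chosen offset minimises the mean distance between local descriptors of
# superposed points. Random vectors stand in for learned descriptors (an assumption).
import numpy as np

def best_translation(ref_desc, tmpl_desc, stride=4):
    """ref_desc: (H, W, D) and tmpl_desc: (h, w, D) dense descriptor maps."""
    H, W, _ = ref_desc.shape
    h, w, _ = tmpl_desc.shape
    best_cost, best_offset = np.inf, (0, 0)
    for dy in range(0, H - h + 1, stride):
        for dx in range(0, W - w + 1, stride):
            patch = ref_desc[dy:dy + h, dx:dx + w]
            # average descriptor (Euclidean) dissimilarity over the overlap
            cost = np.linalg.norm(patch - tmpl_desc, axis=-1).mean()
            if cost < best_cost:
                best_cost, best_offset = cost, (dy, dx)
    return best_offset, best_cost

rng = np.random.default_rng(0)
reference = rng.normal(size=(128, 128, 16))
dy, dx = 40, 24
template = reference[dy:dy + 32, dx:dx + 32] + 0.05 * rng.normal(size=(32, 32, 16))
print(best_translation(reference, template))  # expect an offset near (40, 24)
```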
APA, Harvard, Vancouver, ISO, and other styles
36

Bonde, Anders, and Allan Grutt Hansen. "Audio logo recognition, reduced articulation and coding orientation: Rudiments of quantitative research integrating branding theory, social semiotics and music psychology." SoundEffects - An Interdisciplinary Journal of Sound and Sound Experience 3, no. 1-2 (December 1, 2013): 112–35. http://dx.doi.org/10.7146/se.v3i1-2.15644.

Full text
Abstract:
In this paper we explore an interdisciplinary theoretical framework for the analysis of corporate audio logos and their effectiveness regarding recognisability and identification. This is done by combining three different academic disciplines: 1) social semiotics, 2) branding theory and 3) music psychology. Admittedly, the idea of integrating sonic semiotics with marketing or branding has been proposed elsewhere (cf. Jekosch, 2005; Arning & Gordon, 2006; Winter, 2011), though it appears novel to apply this cross-disciplinary field from a social-semiotic perspective while, at the same time, focusing on musicological descriptors. We consider as a starting point Kress and Van Leeuwen’s (1996, 2006) conceptualisation of ‘modality’, which is central to their ‘visual grammar’ theory and subsequently extended to auditory expressions such as spoken language, music and sound effects (Van Leeuwen, 1999). While originally developed on the basis of linguistics and systemic-functional grammar (Halliday, 1978, 1985) and further reinforced by theories of ‘intersemiotic translation’ (cf. Jakobson, 1959; Eco, 2001) and ‘coding orientation’ (Bernstein, 1971, 1981), Kress and Van Leeuwen’s idea of modality is in this paper connected to notions of brand recognisability and brand identification, thus resulting in the concept of ‘Reduced Articulation Form’ (RAF). The concept has been tested empirically through a survey of 137 upper secondary school students. On the basis of a conditioning experiment, manipulating five existing audio logos in terms of tempo, rhythm, pitch and timbre, the students filled out a structured questionnaire and assessed at which condition they were able to recognise the logos and the corresponding brands. The results indicated that pitch is a much more recognisable trait than rhythm. Also, while timbre turned out to be a decisive element, RAF did actually cause logo and brand recognition in a substantial way. Finally, there seems to be a connection between the level of melodic distinctiveness and logo and brand recognition. The empirical findings are interpreted and discussed in light of the theoretical framework and the concept of coding orientation.
APA, Harvard, Vancouver, ISO, and other styles
37

Hakiki, Bahia, Silvia Pancani, Agnese De Nisco, Anna Maria Romoli, Francesca Draghi, Daniela Maccanti, Anna Estraneo, et al. "Cross-cultural adaptation and multicentric validation of the Italian version of the Simplified Evaluation of CONsciousness Disorders (SECONDs)." PLOS ONE 20, no. 2 (February 10, 2025): e0317626. https://doi.org/10.1371/journal.pone.0317626.

Full text
Abstract:
Introduction: The Coma Recovery Scale-Revised (CRS-R) is the recommended tool to assess consciousness in patients with prolonged Disorders of Consciousness (pDoC). However, the time needed to administer it may limit its use. A shorter tool has been validated: the Simplified Evaluation of CONsciousness Disorders (SECONDs). This multicentre study aimed to develop and validate a cross-cultural adaptation of the SECONDs into Italian. Methods: An interdisciplinary expert team, from both Fondazione Don Carlo Gnocchi and Istituto Neurologico Carlo Besta, led the translation processes. Independent certified translators were also involved in a blinded modality. Patients diagnosed with Unresponsive Wakefulness Syndrome (UWS) or Minimally Conscious State (MCS) admitted to 3 Italian rehabilitation units were enrolled. The CRS-R and SECONDs were administered in 5 sessions over two weeks by 3 blinded examiners at each center (3 times, with 2 sessions conducted by the same examiner). Weighted Fleiss’ kappa and Spearman correlation coefficients were used to assess intrarater and interrater reliability and concurrent validity. Results: Sixty adults with pDoC were assessed: 23 women; median age: 64 years; 14 trauma, median post-onset time: 2 months. Intrarater and interrater reliability showed almost perfect agreement (kappa coefficients 0.968 and 0.935, respectively; p<0.001). The comparison of CRS-R vs. SECONDs on the same day or the best out of 5 SECONDs/CRS-R led to a substantial to almost perfect agreement both for the total score of the CRS-R and the SECONDs’ Additional Index (ρ = 0.772–1.000; p<0.001) and for the consciousness diagnosis (k = 0.784–0.935; p<0.001). The disagreement rate between the overall best diagnosis of the SECONDs and the best CRS-R diagnosis was 6.7%. Conclusion: The Italian version of the SECONDs has been cross-culturally adapted to serve as a shorter assessment tool for the diagnosis of pDoC. Our study shows its excellent reliability and concurrent validity when compared to the CRS-R.
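The agreement statistics named here (weighted kappa for reliability, Spearman correlation for concurrent validity) are available in standard Python libraries. A toy sketch on made-up scores follows, using Cohen's linearly weighted kappa as the two-rater analogue of the weighted Fleiss' kappa reported in the study.

```python
# Toy sketch of the agreement statistics named in the abstract, computed on
# made-up scores (not the study's data).
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score

rater_a = np.array([3, 5, 2, 6, 4, 1, 5, 3])   # e.g., SECONDs scores, examiner 1
rater_b = np.array([3, 4, 2, 6, 4, 1, 5, 2])   # same patients, examiner 2

# Linearly weighted kappa for ordinal interrater agreement (two-rater analogue)
kappa = cohen_kappa_score(rater_a, rater_b, weights="linear")

# Spearman correlation for concurrent validity against another scale
crs_r_total = np.array([10, 17, 6, 20, 13, 3, 16, 9])
rho, p_value = spearmanr(rater_a, crs_r_total)

print(f"weighted kappa = {kappa:.3f}, Spearman rho = {rho:.3f} (p = {p_value:.3f})")
```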
APA, Harvard, Vancouver, ISO, and other styles
38

Forrin, Noah D., and Colin M. MacLeod. "Cross-modality translations improve recognition by reducing false alarms." Memory 26, no. 1 (May 2, 2017): 53–58. http://dx.doi.org/10.1080/09658211.2017.1321129.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Rao, Yi, Seth Gammon, Niki M. Zacharias, Tracy Liu, Travis Salzillo, Yuanxin Xi, Jing Wang, Pratip Bhattacharya, and David Piwnica-Worms. "Hyperpolarized [1-13C]pyruvate-to-[1-13C]lactate conversion is rate-limited by monocarboxylate transporter-1 in the plasma membrane." Proceedings of the National Academy of Sciences 117, no. 36 (August 24, 2020): 22378–89. http://dx.doi.org/10.1073/pnas.2003537117.

Full text
Abstract:
Hyperpolarized [1-13C]pyruvate magnetic resonance spectroscopic imaging (MRSI) is a noninvasive metabolic-imaging modality that probes carbon flux in tissues and infers the state of metabolic reprograming in tumors. Prevailing models attribute elevated hyperpolarized [1-13C]pyruvate-to-[1-13C]lactate conversion rates in aggressive tumors to enhanced glycolytic flux and lactate dehydrogenase A (LDHA) activity (Warburg effect). By contrast, we find by cross-sectional analysis using genetic and pharmacological tools in mechanistic studies applied to well-defined genetically engineered cell lines and tumors that initial hyperpolarized [1-13C]pyruvate-to-[1-13C]lactate conversion rates as well as global conversion were highly dependent on and critically rate-limited by the transmembrane influx of [1-13C]pyruvate mediated predominately by monocarboxylate transporter-1 (MCT1). Specifically, in a cell-encapsulated alginate bead model, induced short hairpin (shRNA) knockdown or overexpression of MCT1 quantitatively inhibited or enhanced, respectively, unidirectional pyruvate influxes and [1-13C]pyruvate-to-[1-13C]lactate conversion rates, independent of glycolysis or LDHA activity. Similarly, in tumor models in vivo, hyperpolarized [1-13C]pyruvate-to-[1-13C]lactate conversion was highly dependent on and critically rate-limited by the induced transmembrane influx of [1-13C]pyruvate mediated by MCT1. Thus, hyperpolarized [1-13C]pyruvate MRSI measures primarily MCT1-mediated [1-13C]pyruvate transmembrane influx in vivo, not glycolytic flux or LDHA activity, driving a reinterpretation of this maturing new technology during clinical translation. Indeed, Kaplan–Meier survival analysis for patients with pancreatic, renal, lung, and cervical cancers showed that high-level expression of MCT1 correlated with poor overall survival, and only in selected tumors, coincident with LDHA expression. Thus, hyperpolarized [1-13C]pyruvate MRSI provides a noninvasive functional assessment primarily of MCT1 as a clinical biomarker in relevant patient populations.
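The rate-limitation argument can be illustrated with a toy two-step kinetic model in which transmembrane influx (a stand-in for MCT1 transport, rate constant k_in) precedes intracellular conversion to lactate (k_pl). This is a deliberately simplified sketch, not the compartmental model used in hyperpolarized MRSI analysis.

```python
# Toy two-step kinetics: extracellular pyruvate --k_in--> intracellular
# pyruvate --k_pl--> lactate. Illustrates how slow influx limits lactate
# appearance; not the model used in the study.
from scipy.integrate import solve_ivp

def two_step(t, y, k_in, k_pl):
    pyr_ext, pyr_int, lac = y
    return [-k_in * pyr_ext,
            k_in * pyr_ext - k_pl * pyr_int,
            k_pl * pyr_int]

def lactate_at(t_end, k_in, k_pl):
    sol = solve_ivp(two_step, (0.0, t_end), [1.0, 0.0, 0.0], args=(k_in, k_pl))
    return sol.y[2, -1]

print(lactate_at(10.0, k_in=0.5, k_pl=0.5))   # baseline conversion
print(lactate_at(10.0, k_in=0.05, k_pl=0.5))  # 10x slower influx -> far less lactate by t=10
```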
APA, Harvard, Vancouver, ISO, and other styles
40

Grooteman, M. P., M. J. Nubé, M. R. Daha, J. Van Limbeek, M. van Deuren, M. Schoorl, P. M. Bet, and A. J. Van Houte. "Cytokine profiles during clinical high-flux dialysis: no evidence for cytokine generation by circulating monocytes." Journal of the American Society of Nephrology 8, no. 11 (November 1997): 1745–54. http://dx.doi.org/10.1681/asn.v8111745.

Full text
Abstract:
Secretion of cytokines by monocytes has been implicated in the pathogenesis of dialysis-related morbidity. Cytokine generation is presumed to take place in two steps: induction of mRNA transcription for cytokines by C5a and direct membrane contact, followed by lipopolysaccharide (LPS)-induced translation of mRNA (priming/second signal theory, Kidney Int 37: 85-93, 1990). However, the in vitro conditions on which this theory was based differed markedly from clinical dialysis. To test this postulate for routine hemodialysis, 13 patients were studied cross-over with high-flux cuprammonium (CU), cellulose triacetate (CTA), and polysulfon dialyzers, using standard bicarbonate dialysate, as well as CTA with filtered dialysate (fCTA). Besides leukocytes, C3a, C5a, and limulus amebocyte lysate reactivity, tumor necrosis factor (TNF)-alpha, interleukin (IL)-1 beta, IL-6, IL-1RA, soluble TNF receptors, and IL-1 beta mRNA were assessed. Only during dialysis with CU did C5a increase significantly (561 to 8185 ng/ml, P < 0.001). Endotoxin content of standard bicarbonate was higher than filtered dialysate (median, 24.3 and < 5 pg/ml respectively, P = 0.002), whereas limulus amebocyte lysate reactivity was not detected in the blood, except in the case of CU. TNF-alpha levels were elevated before, and remained stable during, dialysis, independent of the modality used. IL-1 beta, IL-6, and mRNA coding for IL-1 beta could not be demonstrated. IL-1RA and soluble TNF receptors (p55/p75) were markedly elevated compared with normal control subjects, but showed no differences between fCTA and CTA. To summarize, no evidence was found for production and release of cytokines by monocytes during clinical high-flux bicarbonate hemodialysis, neither with complement-activating membranes nor with unfiltered dialysate. Therefore, this study sheds some doubt on the relevance of the "priming/second signal" theory for clinical practice. The data presented suggest that reluctance to prescribe the use of high-flux dialyzers, as advocated in many reports, may not be warranted.
APA, Harvard, Vancouver, ISO, and other styles
41

Sidiropoulou, Maria. "Translanguaging aspects of modality." Translation and Translanguaging in Multilingual Contexts 1, no. 1 (March 30, 2015): 27–48. http://dx.doi.org/10.1075/ttmc.1.1.02sid.

Full text
Abstract:
This article explores aspects of modal marker use in English and Greek and suggests that parallel data may significantly contribute to raising learners’ intercultural sensitivity in the FL classroom, as an instance of TOLC (Translation in Other Language Contexts). Parallel data seem to assume a dynamic potential (privileging learner autonomy and developing self-study skills), which other traditional approaches to the use of the modal system lack, leaving important aspects of cross-cultural variation out of the perspective of the learner. The study focuses on two aspects of intercultural variation in the use of the modal systems of English and Greek, namely shifting degrees of possibility-certainty and the shift across epistemic-deontic, as manifested through a 2013–2014 sample of parallel data from newspapers. It offers a set of sample exercises highlighting the potential of translation to contribute valuable insights to L2/additional language learning (ALL) and syllabus design, assuming an ecological ethic in acknowledging the primacy of context, including L1, especially if L1 is a less widely spoken language.
APA, Harvard, Vancouver, ISO, and other styles
42

Charles, Jocelyn, Einat Danieli, Kiara Fine, Jaipreet Kohli, Stacy Landau, Jagger Smith, and Naomi Ziegler. "Neighbourhood Care Teams: Integrating Health Care and Social Services for Seniors in Toronto Community Housing." International Journal of Integrated Care 23, S1 (December 28, 2023): 428. http://dx.doi.org/10.5334/ijic.icic23501.

Full text
Abstract:
Toronto Seniors Housing Corporation (TSHC), owned by the City of Toronto, provides housing for 15,000 low-income seniors in 83 seniors-designated buildings across Toronto. The North Toronto Ontario Health Team (including primary care, hospital, community and home-care) partnered with one of the TSHC buildings in North Toronto to develop and implement a Neighbourhood Care Team (NCT) model, co-designed with the tenants, to support TSHC's Integrated Service Model in addressing tenants' health and social needs. The goal of the NCT is to provide an integrated model of care that is accountable for meeting the needs of people living within a specific neighbourhood, so that people experience one system offering simple access to services and care that is coordinated, with streamlined communication among health care providers. The NCT objectives include: increasing primary care provider connections; increasing access to mental health and addictions care and support options; increasing digital health access and literacy to support primary care and specialist access, reduce social isolation and increase wellness; and reducing avoidable ED and hospital use. The service design is guided by a co-design process with the tenants, comprising: a door-to-door survey to engage tenants in identifying their barriers and the services and supports most meaningful to them; eliciting and voting on key education and support initiatives at an influenza vaccination clinic; communication back to tenants on the survey results and on how the strategies and activities planned for the building were prioritized based on their feedback; a well-attended multi-organization education fair focused on the top issues raised during the vaccination clinic survey; regular educational sessions in response to tenant interest, combined with a self-screening component to link the information to a concrete service or intervention that promotes better health; translation support to enable access and engagement by tenants from a variety of cultural backgrounds; and an ongoing commitment to continue co-designing services and eliciting tenants' feedback. The team has also designed structures to strengthen coordination and collaboration among the various delivery partners: multi-organizational bi-weekly huddles to discuss residents identified with unmet needs (with consent, or anonymized without consent), identify options for improving their access to health care and social services, and respond to their needs in a timely manner; pathways for ensuring attachment to primary care, access to primary care and specialist support, access to home care services, and assistance with social determinants of health; and mechanisms to obtain informed consent and enable information sharing between delivery partners. Multi-modality tenant engagement to tailor services and supports to a TSHC building has led to increased involvement by tenants and a growing interest among tenants in strategies to improve their health and social inclusion. Cross-sector collaboration is an efficient and effective way to establish needs-based integration of health and social care services in this setting. Strong leadership, co-developed processes, frequent building meetings and cross-sector huddles were effective ways of sharing innovative approaches to meeting needs with limited resources.
APA, Harvard, Vancouver, ISO, and other styles
43

Hirao, Yutaro, and Takashi Kawai. "Augmented Cross-modality: Translating the Physiological Responses, Knowledge and Impression to Audio-visual Information in Virtual Reality." Electronic Imaging 2019, no. 2 (January 13, 2019): 60402–1. http://dx.doi.org/10.2352/j.imagingsci.technol.2018.62.6.060402.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Rajaram, Sara, and Cassie S. Mitchell. "Data Augmentation with Cross-Modal Variational Autoencoders (DACMVA) for Cancer Survival Prediction." Information 15, no. 1 (December 21, 2023): 7. http://dx.doi.org/10.3390/info15010007.

Full text
Abstract:
The ability to translate Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs) into different modalities and data types is essential to improve Deep Learning (DL) for predictive medicine. This work presents DACMVA, a novel framework to conduct data augmentation in a cross-modal dataset by translating between modalities and oversampling imputations of missing data. DACMVA was inspired by previous work on the alignment of latent spaces in Autoencoders. DACMVA is a DL data augmentation pipeline that improves the performance in a downstream prediction task. The unique DACMVA framework leverages a cross-modal loss to improve the imputation quality and employs training strategies to enable regularized latent spaces. Oversampling of augmented data is integrated into the prediction training. It is empirically demonstrated that the new DACMVA framework is effective in the often-neglected scenario of DL training on tabular data with continuous labels. Specifically, DACMVA is applied towards cancer survival prediction on tabular gene expression data where there is a portion of missing data in a given modality. DACMVA significantly (p << 0.001, one-sided Wilcoxon signed-rank test) outperformed the non-augmented baseline and competing augmentation methods with varying percentages of missing data (4%, 90%, 95% missing). As such, DACMVA provides significant performance improvements, even in very-low-data regimes, over existing state-of-the-art methods, including TDImpute and oversampling alone.
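The cross-modal imputation idea at the heart of this pipeline, encode one modality, decode into the other, and penalise the discrepancy so the latent can fill in missing samples, can be sketched in a few lines of PyTorch. Layer sizes, the loss weighting, and the plain (non-variational) autoencoder used here are simplifying assumptions, not the DACMVA implementation.

```python
# Minimal sketch of a cross-modal imputation loss in the spirit of the abstract:
# encode modality A, decode into modality B, and penalise the reconstruction
# error so A's latent can impute missing B samples. Sizes are placeholders.
import torch
import torch.nn as nn

class CrossModalAE(nn.Module):
    def __init__(self, dim_a=200, dim_b=100, latent=32):
        super().__init__()
        self.enc_a = nn.Sequential(nn.Linear(dim_a, 128), nn.ReLU(), nn.Linear(128, latent))
        self.dec_a = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(), nn.Linear(128, dim_a))
        self.dec_b = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(), nn.Linear(64, dim_b))
    def forward(self, x_a):
        z = self.enc_a(x_a)
        return self.dec_a(z), self.dec_b(z)

model = CrossModalAE()
mse = nn.MSELoss()
x_a = torch.randn(16, 200)            # e.g., gene-expression-like features
x_b = torch.randn(16, 100)            # paired samples from the other modality
recon_a, imputed_b = model(x_a)
loss = mse(recon_a, x_a) + 1.0 * mse(imputed_b, x_b)   # within- plus cross-modal terms
loss.backward()
```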
APA, Harvard, Vancouver, ISO, and other styles
45

Bai, Jing, Mengjie Wang, and Dexin Kong. "Deep Common Semantic Space Embedding for Sketch-Based 3D Model Retrieval." Entropy 21, no. 4 (April 4, 2019): 369. http://dx.doi.org/10.3390/e21040369.

Full text
Abstract:
Sketch-based 3D model retrieval has become an important research topic in many applications, such as computer graphics and computer-aided design. Although sketches and 3D models have huge interdomain visual perception discrepancies, and sketches of the same object show remarkable intradomain visual perception diversity, the 3D models and sketches of the same class share common semantic content. Motivated by these findings, we propose a novel approach for sketch-based 3D model retrieval that constructs a deep common semantic space embedding using a triplet network. First, a common data space is constructed by representing every 3D model as a group of views. Second, a common modality space is generated by translating views to sketches according to a cross-entropy evaluation. Third, a common semantic space embedding for the two domains is learned with a triplet network. Finally, based on the learned features of sketches and 3D models, four kinds of distance metrics between sketches and 3D models are designed, and sketch-based 3D model retrieval results are obtained. Experimental results on the Shape Retrieval Contest (SHREC) 2013 and SHREC 2014 datasets reveal the superiority of our proposed method over state-of-the-art methods.
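The shared semantic space is learned with a triplet objective, for which PyTorch ships a ready-made loss. The sketch below uses a placeholder embedding network and random stand-in features; in the paper the anchors, positives, and negatives come from sketch and 3D-view feature extractors.

```python
# Sketch of the triplet objective used to embed sketches and 3D-model views
# in a shared semantic space. The encoder is a placeholder MLP; input features
# are random stand-ins, not real sketch/view descriptors.
import torch
import torch.nn as nn

embed = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 64))
triplet = nn.TripletMarginLoss(margin=0.2)

anchor_feat   = torch.randn(32, 512)  # sketch features of class c
positive_feat = torch.randn(32, 512)  # 3D-model view features, same class c
negative_feat = torch.randn(32, 512)  # view features from a different class

loss = triplet(embed(anchor_feat), embed(positive_feat), embed(negative_feat))
loss.backward()
```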
APA, Harvard, Vancouver, ISO, and other styles
46

Ortega, Gerardo, and Gary Morgan. "Input processing at first exposure to a sign language." Second Language Research 31, no. 4 (March 26, 2015): 443–63. http://dx.doi.org/10.1177/0267658315576822.

Full text
Abstract:
There is growing interest in learners’ cognitive capacities to process a second language (L2) at first exposure to the target language. Evidence suggests that L2 learners are capable of processing novel words by exploiting phonological information from their first language (L1). Hearing adult learners of a sign language, however, cannot fall back on their L1 to process novel signs because the modality differences between speech (aural–oral) and sign (visual-manual) do not allow for direct cross-linguistic influence. Sign language learners might use alternative strategies to process input expressed in the manual channel. Learners may rely on iconicity, the direct relationship between a sign and its referent. Evidence up to now has shown that iconicity facilitates learning in non-signers, but it is unclear whether it also facilitates sign production. In order to fill this gap, the present study investigated how iconicity influenced articulation of the phonological components of signs. In Study 1, hearing non-signers viewed a set of iconic and arbitrary signs along with their English translations and repeated the signs as accurately as possible immediately after. The results show that participants imitated iconic signs significantly less accurately than arbitrary signs. In Study 2, a second group of hearing non-signers imitated the same set of signs but without the accompanying English translations. The same lower accuracy for iconic signs was observed. We argue that learners rely on iconicity to process manual input because it brings familiarity to the target (sign) language. However, this reliance comes at a cost as it leads to a more superficial processing of the signs’ full phonetic form. The present findings add to our understanding of learners’ cognitive capacities at first exposure to a signed L2, and raises new theoretical questions in the field of second language acquisition.
APA, Harvard, Vancouver, ISO, and other styles
47

Wang, Heng, Jianbo Ma, Santiago Pascual, Richard Cartwright, and Weidong Cai. "V2A-Mapper: A Lightweight Solution for Vision-to-Audio Generation by Connecting Foundation Models." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 14 (March 24, 2024): 15492–501. http://dx.doi.org/10.1609/aaai.v38i14.29475.

Full text
Abstract:
Building artificial intelligence (AI) systems on top of a set of foundation models (FMs) is becoming a new paradigm in AI research. Their representational and generative abilities, learnt from vast amounts of data, can be easily adapted and transferred to a wide range of downstream tasks without extra training from scratch. However, leveraging FMs for cross-modal generation remains under-researched when the audio modality is involved. On the other hand, automatically generating semantically relevant sound from visual input is an important problem in cross-modal generation studies. To solve this vision-to-audio (V2A) generation problem, existing methods tend to design and build complex systems from scratch using modestly sized datasets. In this paper, we propose a lightweight solution that leverages foundation models, specifically CLIP, CLAP, and AudioLDM. We first investigate the domain gap between the latent spaces of the visual CLIP and the auditory CLAP models. Then we propose a simple yet effective mapper mechanism (V2A-Mapper) to bridge the domain gap by translating the visual input from the CLIP space to the CLAP space. Conditioned on the translated CLAP embedding, the pretrained audio generative FM AudioLDM is adopted to produce high-fidelity and visually aligned sound. Compared to previous approaches, our method only requires a quick training of the V2A-Mapper. We further analyze and conduct extensive experiments on the choice of the V2A-Mapper and show that a generative mapper is better at fidelity and variability (FD) while a regression mapper is slightly better at relevance (CS). Both objective and subjective evaluations on two V2A datasets demonstrate the superiority of our proposed method over current state-of-the-art approaches: trained with 86% fewer parameters, it achieves 53% and 19% improvements in FD and CS, respectively. Supplementary materials such as audio samples are provided at our demo website: https://v2a-mapper.github.io/.
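The contribution is deliberately small: a mapper that translates a frozen CLIP image embedding into the CLAP embedding space, which then conditions AudioLDM. A hedged regression-style sketch follows; the 512-dimensional embeddings, the MLP shape, and the MSE objective are assumptions, and the foundation models themselves are not loaded here.

```python
# Hedged sketch of a regression-style mapper from a CLIP image embedding to
# the CLAP embedding space. Embedding sizes and the MSE objective are
# assumptions; the actual V2A-Mapper and the AudioLDM conditioning step
# are not reproduced.
import torch
import torch.nn as nn

class V2AMapperSketch(nn.Module):
    def __init__(self, clip_dim=512, clap_dim=512, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(clip_dim, hidden), nn.GELU(),
                                 nn.Linear(hidden, clap_dim))
    def forward(self, clip_emb):
        return self.net(clip_emb)

mapper = V2AMapperSketch()
clip_emb = torch.randn(8, 512)        # stand-in for frozen CLIP image features
clap_target = torch.randn(8, 512)     # stand-in for paired CLAP audio features
loss = nn.functional.mse_loss(mapper(clip_emb), clap_target)
loss.backward()
# The mapped embedding would then condition a pretrained audio generator.
```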
APA, Harvard, Vancouver, ISO, and other styles
48

Du Bois, A., and J. Pfisterer. "Future options for first-line therapy of advanced ovarian cancer." International Journal of Gynecologic Cancer 15, Suppl 1 (2005): 42–50. http://dx.doi.org/10.1136/ijgc-00009577-200505001-00008.

Full text
Abstract:
The current standard of treatment for patients with advanced ovarian cancer has been established in light of the results from various clinical trials. After debulking surgery, a combination of carboplatin and paclitaxel is considered to be the best treatment option in terms of survival and quality of life. However, since most patients on this chemotherapy modality will experience relapse, several studies have explored, and continue to do so, various modifications and alternatives to standard therapy in order to attain improved efficacy. Various modifications of dose, schedule, or route of standard regimens have shown no benefit, apart from intraperitoneal therapy, which has produced mixed results and would benefit from a definitive trial. Studies of maintenance/consolidation therapy have been mainly negative, although a small number of trials have produced enough positive data to prompt two new studies powered to detect survival benefits. Various phase II trials have investigated “targeted therapies,” but until now no positive results have been recorded. Translational studies are needed to identify patients who will benefit from such specific treatment strategies. The current most evaluated modification of standard therapy is the addition of a third non–cross-resistant drug to carboplatin and paclitaxel. Data for the addition of anthracyclines have either been negative (epirubicin) or not yet analyzed (pegylated liposomal doxorubicin), while evaluable data are shortly expected for the addition of topotecan. Data on the addition of gemcitabine are eagerly awaited from two phase III trials.
APA, Harvard, Vancouver, ISO, and other styles
49

Prada, Sergio Iván, José Joaquín Toro, Evelyn E. Peña-Zárate, Laura Libreros-Peña, Juliana Alarcón, and María Fernanda Escobar. "Impact of a teaching hospital-based multidisciplinary telemedicine programme in Southwestern Colombia: a cross-sectional resource analysis." BMJ Open 14, no. 5 (May 2024): e084447. http://dx.doi.org/10.1136/bmjopen-2024-084447.

Full text
Abstract:
Background: Telemedicine, a method of healthcare service delivery bridging geographic distances between patients and providers, has gained prominence. This modality is particularly advantageous for outpatient consultations, addressing inherent barriers of travel time and cost. Objective: We aim to describe the economic outcomes, from the patients' perspective, of implementing a multidisciplinary telemedicine service in a high-complexity hospital in Latin America. Design: A cross-sectional study was conducted, analysing institutional data obtained over a period of 9 months, between April 2020 and December 2020. Setting: A high-complexity teaching hospital located in Cali, Colombia. Participants: Individuals who received care via telemedicine. The population was categorised into three groups based on their place of residence: Cali, Valle del Cauca excluding Cali, and Outside of Valle del Cauca. Outcome measures: Travel distance, time, fuel and public round-trip cost savings, and potential loss of productivity were estimated from the patient’s perspective. Results: A total of 62 258 teleconsultations were analysed. Telemedicine led to total savings of 4 514 903 km in travel distance and 132 886 hours in travel time. The estimated cost savings were US$680 822 for private transportation and US$1 087 821 for public transportation. Patients in the Outside of Valle del Cauca group experienced an estimated average time savings of 21.2 hours, translating to an average fuel savings of US$149.02 or an average savings of US$156.62 in public transportation costs. Areas with exclusive air access achieved a mean cost savings of US$362.9 per teleconsultation, specifically related to transportation costs. Conclusion: Telemedicine emerges as a powerful tool for achieving substantial travel savings for patients, especially in regions confronting geographical and socioeconomic obstacles. These findings underscore the potential of telemedicine to bridge healthcare accessibility gaps in low-income and middle-income countries, calling for further investment in and expansion of telemedicine services in such areas.
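The per-consultation averages implied by the totals quoted above follow from simple division; a short worked check (figures taken directly from the abstract):

```python
# Worked arithmetic with the aggregate figures quoted in the abstract:
# per-teleconsultation averages are the totals divided by the consult count.
teleconsultations = 62_258
total_km_saved = 4_514_903
total_hours_saved = 132_886
private_transport_usd = 680_822
public_transport_usd = 1_087_821

print(f"km saved per consult:    {total_km_saved / teleconsultations:.1f}")
print(f"hours saved per consult: {total_hours_saved / teleconsultations:.2f}")
print(f"USD saved per consult:   {private_transport_usd / teleconsultations:.2f} (private)")
print(f"                         {public_transport_usd / teleconsultations:.2f} (public)")
```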
APA, Harvard, Vancouver, ISO, and other styles
50

Yang, Qianye, Nannan Li, Zixu Zhao, Xingyu Fan, Eric I.-Chao Chang, and Yan Xu. "MRI Cross-Modality Image-to-Image Translation." Scientific Reports 10, no. 1 (February 28, 2020). http://dx.doi.org/10.1038/s41598-020-60520-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
