Journal articles on the topic 'Machine translations'

To see the other types of publications on this topic, follow the link: Machine translations.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Machine translations.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Ardi, Havid, Muhd Al Hafizh, Iftahur Rezqi, and Raihana Tuzzikriah. "CAN MACHINE TRANSLATIONS TRANSLATE HUMOROUS TEXTS?" Humanus 21, no. 1 (May 11, 2022): 99. http://dx.doi.org/10.24036/humanus.v21i1.115698.

Full text
Abstract:
Machine translation (MT) has attracted many researchers’ attention in various ways. Although advances in technology have improved MT output, its quality is still criticized. One type of text that poses great challenges and translation problems is humorous text. Humorous texts that trigger a smile or a laugh should have the same effect in another language. Humor draws on linguistic, cultural, and universal aspects to create a joke. This raises the question of how machines translate humorous texts from English into Indonesian. This article aims to compare the translation results and errors produced by three prominent machine translation systems (Google Translate, Yandex Translate, and Bing Microsoft Translator) when translating humorous texts. The research applied a qualitative descriptive method. The data were obtained by comparing the translation results produced by the three online machine translation systems for four humorous texts. The findings show that Google Translate produced the best translation results, although some lexical, syntactic, semantic, and pragmatic errors remain in the translations. The implication of this finding is that machine translation still needs human post-editing to produce a similar effect and preserve the humor.
APA, Harvard, Vancouver, ISO, and other styles
2

Jiang, Yue, and Jiang Niu. "A corpus-based search for machine translationese in terms of discourse coherence." Across Languages and Cultures 23, no. 2 (November 7, 2022): 148–66. http://dx.doi.org/10.1556/084.2022.00182.

Full text
Abstract:
Earlier studies have corroborated that human translation exhibits unique linguistic features, usually referred to as translationese. However, research on machine translationese, in spite of some sparse efforts, is still in its infancy. By comparing machine translation with human translation and original target language texts, this study aims to investigate if machine translation has unique linguistic features of its own too, to what extent machine translations are different from human translations and target-language originals, and what characteristics are typical of machine translations. To this end, we collected a corpus containing English translations of modern Chinese literary texts produced by neural machine translation systems and human professional translators and comparable original texts in the target language. Based on the corpus, a quantitative study of discourse coherence was conducted by observing metrics in three dimensions borrowed from Coh-Metrix, including connectives, latent semantic analysis and the situation/mental model. The results support the existence of translationese in both human and machine translations when they are compared with original texts. However, machine translationese is not the same as human translationese in some metrics of discourse coherence. Additionally, machine translation systems, such as Google and DeepL, when compared with each other, show unique features in some coherence metrics, although on the whole they are not significantly different from each other in those coherence metrics.
APA, Harvard, Vancouver, ISO, and other styles
3

Halimah, Halimah. "COMPARISON OF HUMAN TRANSLATION WITH GOOGLE TRANSLATION OF IMPERATIVE SENTENCES IN PROCEDURES TEXT." BAHTERA : Jurnal Pendidikan Bahasa dan Sastra 17, no. 1 (January 31, 2018): 11–29. http://dx.doi.org/10.21009/bahtera.171.2.

Full text
Abstract:
This study aims to analyze the similarity between human translation and machine translation in translating procedural text. The research uses a content analysis approach. The analysis was performed on an English procedural text for a "VIXAL Lebih Wangi" cleaning product translated into Indonesian by Nia Kurniawati (representing human translation), while Google Translate was used to represent machine translation. The similarities examined in this study concern the phrasing and the meaning of whole sentences in the two translation results. The results show that the similarity between human translation and machine translation in translating procedural text is low, i.e. 29%. Machine translation still requires human effort to produce better translations. Keywords: equality aspect, human translation, machine translation, text procedure
APA, Harvard, Vancouver, ISO, and other styles
4

Wang, Lan. "The Impacts and Challenges of Artificial Intelligence Translation Tool on Translation Professionals." SHS Web of Conferences 163 (2023): 02021. http://dx.doi.org/10.1051/shsconf/202316302021.

Full text
Abstract:
Machine translation, especially translation based on neural network technology, has made major breakthroughs and is increasingly accepted and widely used. The development of artificial intelligence (AI) translation has had a definite impact on translation jobs, and people, even professional translators, are relying on AI translation. However, there has been no research on whether machine translation software is superior to professional translators in translating various types of documents. In this study, we design an experiment to determine the respective advantages and disadvantages of AI translations and human translations. The results show the impact of the development of AI on the translation industry and suggest that human-AI partnerships will help achieve better translation results and output high-quality translations in the era of rapid AI development.
APA, Harvard, Vancouver, ISO, and other styles
5

Persaud, Ajax, and Steven O'Brien. "Quality and Acceptance of Crowdsourced Translation of Web Content." International Journal of Technology and Human Interaction 13, no. 1 (January 2017): 100–115. http://dx.doi.org/10.4018/ijthi.2017010106.

Full text
Abstract:
Organizations make extensive use of websites to communicate with people. Often, visitors to their sites speak many different languages and expect to be served in their native language. Translation of web content is a major challenge for many organizations because of high costs and frequent changes in the content. Currently, organizations rely on professional translators or machines to translate their content. The challenge is that professional translation is costly and slow, while machine translation does not produce high-quality or accurate translations even though it may be faster and less expensive. Crowdsourcing has emerged as a technique with many applications. The purpose of this research is to test whether crowdsourcing can produce translations of equivalent or better quality than professional or machine translators. A crowdsourcing study was undertaken, and the results indicate that the quality of crowdsourced translations was equivalent to professional translations and far better than machine translations. The research and managerial implications are discussed.
APA, Harvard, Vancouver, ISO, and other styles
6

Kovács, Tímea. "A Comparative Analysis of the Use of ‘Thereof’ in an English Non-translated Text and the English Machine- and Human-translated Versions of the Hungarian Criminal Code." International Journal of Law, Language & Discourse 10, no. 2 (October 14, 2022): 43–54. http://dx.doi.org/10.56498/1022022411.

Full text
Abstract:
Owing to the recent rise of neural language translation, a paradigm shift has been witnessed regarding the role of translators and reviewers. As neural machine translation is increasingly more capable of modelling how natural languages work, the traditional tasks of translators are being gradually replaced by new challenges. More emphasis is placed on pre- and post-editing (revision) skills and competences, presumably enabling the production of higher quality, near human-made translations. In my paper, I attempt to demonstrate, through a qualitative and quantitative comparison of machine-translated legal texts (acts) with human-translated ones, the relevant challenges and dynamic contrasts arising in the process of translating. Through the qualitative and quantitative analysis of the original Hungarian (source language) Criminal Code and its English (target language) machine and human translations, I aim to highlight the peculiar challenges emerging in the process of translation. I also aim to demonstrate what patterns can be observed in translations produced by human and non-human translators.
APA, Harvard, Vancouver, ISO, and other styles
7

Luo, Jinru, and Dechao Li. "Universals in machine translation?" International Journal of Corpus Linguistics 27, no. 1 (February 14, 2022): 31–58. http://dx.doi.org/10.1075/ijcl.19127.luo.

Full text
Abstract:
By examining and comparing the linguistic patterns in a self-built corpus of Chinese-English translations produced by WeChat Translate, the latest online machine translation app from the most popular social media platform (WeChat) in China, this study explores such questions as whether or not and to what extent simplification and normalization (hypothesized Translation Universals) exhibit themselves in these translations. The results show that, whereas simplification cannot be substantiated, the tendency of normalization to occur in the WeChat translations can be confirmed. The research finds that these results are caused by the operating mechanism of machine translation (MT) systems. Certain salient words tend to prime WeChat’s MT system to repetitively resort to typical language patterns, which leads to a significant overuse of lexical chunks. It is hoped that the present study can shed new light on the development of MT systems and encourage more corpus-based product-oriented research on MT.
APA, Harvard, Vancouver, ISO, and other styles
8

Al-Shalabi, Riyad, Ghassan Kanaan, Huda Al-Sarhan, Alaa Drabsh, and Islam Al-Husban. "Evaluating Machine Translations from Arabic into English and Vice Versa." International Research Journal of Electronics and Computer Engineering 3, no. 2 (June 24, 2017): 1. http://dx.doi.org/10.24178/irjece.2017.3.2.01.

Full text
Abstract:
Machine translation (MT) allows direct communication between two people without the need for a third party or a pocket dictionary, which can bring significant performance improvements. Since most traditional approaches to translation are word-sensitive, it is very important to consider word order in addition to word selection in the evaluation of any machine translation. To evaluate MT performance, it is necessary to observe the translation produced by the machine translation tool with respect to word order, word selection, and, furthermore, sentence length. However, applying a sound evaluation with respect to all of the previous points is a very challenging issue. In this paper, we first summarize various approaches to evaluating machine translation. We then propose a practical solution by selecting an appropriate, powerful tool called iBLEU to evaluate the degree of accuracy of well-known MT tools (i.e. Google, Bing, Systranet and Babylon). Based on this solution structure, we further discuss the performance ranking of these tools in both directions, Arabic to English and English to Arabic. After extensive testing, we can determine which direction gives more accurate translation results for the selected machine translation systems. Finally, we found that Google showed the best system performance and Systranet the worst. Index Terms: Machine Translation, MTs, Evaluation for Machine Translation, Google, Bing, Systranet and Babylon, Machine Translation tools, BLEU, iBLEU.
APA, Harvard, Vancouver, ISO, and other styles
9

Pathak, Amarnath, and Partha Pakray. "Neural Machine Translation for Indian Languages." Journal of Intelligent Systems 28, no. 3 (July 26, 2019): 465–77. http://dx.doi.org/10.1515/jisys-2018-0065.

Full text
Abstract:
Machine Translation bridges communication barriers and eases interaction among people having different linguistic backgrounds. Machine Translation mechanisms exploit a range of techniques and linguistic resources for translation prediction. Neural machine translation (NMT), in particular, seeks optimality in translation through the training of a neural network, using a parallel corpus with a considerable number of instances in the form of parallel running source and target sentences. The easy availability of parallel corpora for major Indian languages and the ability of NMT systems to better analyze context and produce fluent translations make NMT a prominent choice for the translation of Indian languages. We have trained, tested, and analyzed NMT systems for English to Tamil, English to Hindi, and English to Punjabi translations. Predicted translations have been evaluated using Bilingual Evaluation Understudy (BLEU) and by human evaluators to assess the quality of translation in terms of its adequacy, fluency, and correspondence with human-predicted translation.
APA, Harvard, Vancouver, ISO, and other styles
10

Wang, Yiren, Fei Tian, Di He, Tao Qin, ChengXiang Zhai, and Tie-Yan Liu. "Non-Autoregressive Machine Translation with Auxiliary Regularization." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5377–84. http://dx.doi.org/10.1609/aaai.v33i01.33015377.

Full text
Abstract:
As a new neural machine translation approach, Non-Autoregressive machine Translation (NAT) has attracted attention recently due to its high efficiency in inference. However, the high efficiency has come at the cost of not capturing the sequential dependency on the target side of translation, which causes NAT to suffer from two kinds of translation errors: 1) repeated translations (due to indistinguishable adjacent decoder hidden states), and 2) incomplete translations (due to incomplete transfer of source side information via the decoder hidden states). In this paper, we propose to address these two problems by improving the quality of decoder hidden representations via two auxiliary regularization terms in the training process of an NAT model. First, to make the hidden states more distinguishable, we regularize the similarity between consecutive hidden states based on the corresponding target tokens. Second, to force the hidden states to contain all the information in the source sentence, we leverage the dual nature of translation tasks (e.g., English to German and German to English) and minimize a backward reconstruction error to ensure that the hidden states of the NAT decoder are able to recover the source side sentence. Extensive experiments conducted on several benchmark datasets show that both regularization strategies are effective and can alleviate the issues of repeated translations and incomplete translations in NAT models. The accuracy of NAT models is therefore improved significantly over the state-of-the-art NAT models with even better efficiency for inference.
APA, Harvard, Vancouver, ISO, and other styles
11

Banik, Debajyoty, Asif Ekbal, Pushpak Bhattacharyya, and Siddhartha Bhattacharyya. "Assembling translations from multi-engine machine translation outputs." Applied Soft Computing 78 (May 2019): 230–39. http://dx.doi.org/10.1016/j.asoc.2019.02.031.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Han, Dong, Junhui Li, Yachao Li, Min Zhang, and Guodong Zhou. "Explicitly Modeling Word Translations in Neural Machine Translation." ACM Transactions on Asian and Low-Resource Language Information Processing 19, no. 1 (January 9, 2020): 1–17. http://dx.doi.org/10.1145/3342353.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Kozhirbayev, Zhanibek, and Talgat Islamgozhayev. "Cascade Speech Translation for the Kazakh Language." Applied Sciences 13, no. 15 (August 2, 2023): 8900. http://dx.doi.org/10.3390/app13158900.

Full text
Abstract:
Speech translation systems have become indispensable in facilitating seamless communication across language barriers. This paper presents a cascade speech translation system tailored specifically for translating speech from the Kazakh language to Russian. The system aims to enable effective cross-lingual communication between Kazakh and Russian speakers, addressing the unique challenges posed by these languages. To develop the cascade speech translation system, we first created a dedicated speech translation dataset ST-kk-ru based on the ISSAI Corpus. The ST-kk-ru dataset comprises a large collection of Kazakh speech recordings along with their corresponding Russian translations. The automatic speech recognition (ASR) module of the system utilizes deep learning techniques to convert spoken Kazakh input into text. The machine translation (MT) module employs state-of-the-art neural machine translation methods, leveraging the parallel Kazakh-Russian translations available in the dataset to generate accurate translations. By conducting extensive experiments and evaluations, we have thoroughly assessed the performance of the cascade speech translation system on the ST-kk-ru dataset. The outcomes of our evaluation highlight the effectiveness of incorporating additional datasets for both the ASR and MT modules. This augmentation leads to a significant improvement in the performance of the cascade speech translation system, increasing the BLEU score by approximately 2 points when translating from Kazakh to Russian. These findings underscore the importance of leveraging supplementary data to enhance the capabilities of speech translation systems.
APA, Harvard, Vancouver, ISO, and other styles
14

Amancio, Diego R., Lucas Antiqueira, Thiago A. S. Pardo, Luciano da F. Costa, Osvaldo N. Oliveira, and Maria G. V. Nunes. "COMPLEX NETWORKS ANALYSIS OF MANUAL AND MACHINE TRANSLATIONS." International Journal of Modern Physics C 19, no. 04 (April 2008): 583–98. http://dx.doi.org/10.1142/s0129183108012285.

Full text
Abstract:
Complex networks have been increasingly used in text analysis, including in connection with natural language processing tools, as important text features appear to be captured by the topology and dynamics of the networks. Following previous works that apply complex networks concepts to text quality measurement, summary evaluation, and author characterization, we now focus on machine translation (MT). In this paper we assess the possible representation of texts as complex networks to evaluate cross-linguistic issues inherent in manual and machine translation. We show that different quality translations generated by MT tools can be distinguished from their manual counterparts by means of metrics such as in- (ID) and out-degrees (OD), clustering coefficient (CC), and shortest paths (SP). For instance, we demonstrate that the average OD in networks of automatic translations consistently exceeds the values obtained for manual ones, and that the CC values of source texts are not preserved for manual translations, but are for good automatic translations. This probably reflects the text rearrangements humans perform during manual translation. We envisage that such findings could lead to better MT tools and automatic evaluation metrics.
APA, Harvard, Vancouver, ISO, and other styles
15

Song, Yonsuk. "Ethics of journalistic translation and its implications for machine translation." APTIF 9 - Reality vs. Illusion 66, no. 4-5 (October 2, 2020): 829–46. http://dx.doi.org/10.1075/babel.00188.son.

Full text
Abstract:
Journalistic translation is governed by a target-oriented norm that allows varying degrees of intervention by journalists. Given the public’s expectations for the fidelity of translated news, this norm entails ethical issues. This paper examines the ethical dimensions of journalistic translation through a case study of political news translation in the South Korean context. It investigates how newspapers translated a US president’s references to two South Korean presidents in accordance with the newspapers’ ideologies and then came to apply the translations as negative labels as the political situation evolved over time. The study demonstrates how even word-level translation can require an intricate understanding of the sociopolitical context and cumulative meanings of a word. It then draws its implications for machine translation by comparing the human translations with machine translations of the references in question. It concludes by discussing why machine translation cannot yet replace human translation, at least between Korean and English, and what translation studies should do regarding the ethics of journalistic translation.
APA, Harvard, Vancouver, ISO, and other styles
16

Aida, Taichi, and Kazuhide Yamamoto. "Estimating Machine Translation Quality of Any Input Sentence." International Journal of Asian Language Processing 30, no. 01 (March 2020): 2050002. http://dx.doi.org/10.1142/s2717554520500022.

Full text
Abstract:
Current methods of neural machine translation may generate sentences with different levels of quality. Methods for automatically evaluating translation output from machine translation can be broadly classified into two types: a method that uses human post-edited translations for training an evaluation model, and a method that uses a reference translation that is the correct answer during evaluation. On the one hand, it is difficult to prepare post-edited translations because it is necessary to tag each word in comparison with the original translated sentences. On the other hand, users who actually employ the machine translation system do not have a correct reference translation. Therefore, we propose a method that trains the evaluation model without using human post-edited sentences and, at test time, estimates the quality of output sentences without using reference translations. We define some indices and predict the quality of translations with a regression model. For the quality of the translated sentences, we employ the BLEU score calculated from the number of word n-gram matches between the translated sentence and the reference translation. After that, we compute the correlation between quality scores predicted by our method and BLEU actually computed from references. According to the experimental results, the correlation with BLEU is the highest when XGBoost uses all the indices. Moreover, looking at each index, we find that the sentence log-likelihood and the model uncertainty, which are based on the joint probability of generating the translated sentence, are important in BLEU estimation.
APA, Harvard, Vancouver, ISO, and other styles
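For readers unfamiliar with the estimation target described in the abstract above, the following is a minimal, illustrative sketch (not the authors' code): it computes sentence-level BLEU from word n-gram matches against a reference using NLTK and fits a generic regression model on a few sentence-level indices to predict that score. The feature values, example sentences, and the use of scikit-learn's GradientBoostingRegressor in place of the paper's XGBoost model are all assumptions made purely for illustration.

```python
# Illustrative sketch only: predict sentence-level BLEU from simple sentence features.
# Assumes nltk and scikit-learn are installed; data and features are toy placeholders.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from sklearn.ensemble import GradientBoostingRegressor

def sentence_level_bleu(hypothesis: str, reference: str) -> float:
    """BLEU computed from word n-gram matches between one hypothesis and one reference."""
    smooth = SmoothingFunction().method1  # avoids zero scores for short sentences
    return sentence_bleu([reference.split()], hypothesis.split(), smoothing_function=smooth)

# Hypothetical indices per sentence (e.g. length, log-likelihood, model uncertainty).
train_features = [[12, -8.1, 0.21], [7, -3.9, 0.05], [20, -15.2, 0.34]]
train_pairs = [
    ("the cat sat on the mat", "the cat sat on the mat"),
    ("a dog barks loudly", "the dog barks loudly"),
    ("machine translation output here", "a rather different reference sentence"),
]
train_bleu = [sentence_level_bleu(h, r) for h, r in train_pairs]

# A generic regressor stands in for the XGBoost model reported in the paper.
model = GradientBoostingRegressor().fit(train_features, train_bleu)
print(model.predict([[10, -6.0, 0.15]]))  # estimated BLEU for a new, reference-free sentence
```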
17

Kaka-Khan, Kanaan Mikael, and Fatima Jalal Taher. "Evaluation of inkurdish Machine Translation System." Journal of University of Human Development 3, no. 2 (June 30, 2017): 862. http://dx.doi.org/10.21928/juhd.v3n2y2017.pp862-868.

Full text
Abstract:
The lack of a reliable machine translation system for the Kurdish language is a major gap in Kurdish natural language processing (KNLP). inkurdish is the first machine translation system for Kurdish capable of translating English sentences into Kurdish. Building the inkurdish machine translation system was an important step for Kurdish language processing, but like any other translation system it has strengths as well as many shortcomings and issues. This paper tries to evaluate the inkurdish machine translation system with respect to both linguistic and computational issues, which may help other researchers interested in doing research in this field. It attempts to evaluate inkurdish from different perspectives, for example by giving uncommon words, sentences, phrases and paragraphs to the system to check whether it provides the correct translation or not. A general evaluation can be made after obtaining a valid sample with its translations from the system and comparing them to the meanings of the words outside the system.
APA, Harvard, Vancouver, ISO, and other styles
18

Rikters, Matīss, Mark Fishel, and Ondřej Bojar. "Visualizing Neural Machine Translation Attention and Confidence." Prague Bulletin of Mathematical Linguistics 109, no. 1 (October 1, 2017): 39–50. http://dx.doi.org/10.1515/pralin-2017-0037.

Full text
Abstract:
In this article, we describe a tool for visualizing the output and attention weights of neural machine translation systems and for estimating confidence about the output based on the attention. Our aim is to help researchers and developers better understand the behaviour of their NMT systems without the need for any reference translations. Our tool includes command-line and web-based interfaces that allow users to systematically evaluate translation outputs from various engines and experiments. We also present a web demo of our tool with examples of good and bad translations: http://ej.uz/nmt-attention.
APA, Harvard, Vancouver, ISO, and other styles
19

Garcia, Ignacio. "Is machine translation ready yet?" Target. International Journal of Translation Studies 22, no. 1 (June 30, 2010): 7–21. http://dx.doi.org/10.1075/target.22.1.02gar.

Full text
Abstract:
The default option of the Google Translator Toolkit (GTT), released in June 2009, is to “pre-fill with machine translation” all segments for which a ‘no match’ has been returned by the memories, while the Settings window clearly advises that “[m]ost users should not modify this”. To confirm whether this approach indeed benefits translators and translation quality, we designed and performed tests whereby trainee translators used the GTT to translate passages from English into Chinese either entirely from the source text, or after seeding of empty segments by the Google Translate engine as recommended. The translations were timed, and their quality assessed by independent experienced markers following Australian NAATI test criteria. Our results show that, while time differences were not significant, the machine-translation-seeded passages were more favourably assessed by the markers in thirty-three of fifty-six cases. This indicates that, at least for certain tasks and language combinations—and against the received wisdom of translation professionals and translator trainers—translating by proofreading machine translation may be advantageous.
APA, Harvard, Vancouver, ISO, and other styles
20

Biçici, Ergun, and Lucia Specia. "QuEst for High Quality Machine Translation." Prague Bulletin of Mathematical Linguistics 103, no. 1 (April 1, 2015): 43–64. http://dx.doi.org/10.1515/pralin-2015-0003.

Full text
Abstract:
In this paper we describe the use of QuEst, a framework that aims to obtain predictions on the quality of translations, to improve the performance of machine translation (MT) systems without changing their internal functioning. We apply QuEst to experiments with: i. multiple system translation ranking, where translations produced by different MT systems are ranked according to their estimated quality, leading to gains of up to 2.72 BLEU, 3.66 BLEUs, and 2.17 F1 points; ii. n-best list re-ranking, where n-best list translations produced by an MT system are re-ranked based on predicted quality scores to get the best translation ranked top, which leads to improvements in sentence NIST score of 0.41 points; iii. n-best list combination, where segments from an n-best list are combined using a lattice-based re-scoring approach that minimizes word error, obtaining gains of 0.28 BLEU points; and iv. the ITERPE strategy, which attempts to identify translation errors regardless of prediction errors (ITERPE) and build sentence-specific SMT systems (SSSS) on the ITERPE-sorted instances identified as having more potential for improvement, achieving gains of up to 1.43 BLEU, 0.54 F1, 2.9 NIST, 0.64 sentence BLEU, and 4.7 sentence NIST points in English to German over the top 100 ITERPE-sorted instances.
APA, Harvard, Vancouver, ISO, and other styles
21

Streiter, Oliver, and Leonid L. Iomdin. "Learning Lessons from Bilingual Corpora: Benefits for Machine Translation." International Journal of Corpus Linguistics 5, no. 2 (December 31, 2000): 199–230. http://dx.doi.org/10.1075/ijcl.5.2.06str.

Full text
Abstract:
The research described in this paper is rooted in the endeavors to combine the advantages of corpus-based and rule-based MT approaches in order to improve the performance of MT systems—most importantly, the quality of translation. The authors review the ongoing activities in the field and present a case study, which shows how translation knowledge can be drawn from parallel corpora and compiled into the lexicon of a rule-based MT system. These data are obtained with the help of three procedures: (1) identification of hitherto unknown one-word translations, (2) statistical rating of the known one-word translations, and (3) extraction of new translations of multiword expressions (MWEs), followed by compilation steps which create new rules for the MT engine. As a result, the lexicon is enriched with translation equivalents attested for different subject domains, which facilitates the tuning of the MT system to a specific subject domain and improves the quality and adequacy of translation.
APA, Harvard, Vancouver, ISO, and other styles
22

Lai, Tzu-Yun. "Reliability and Validity of a Scale-based Assessment for Translation Tests." Meta 56, no. 3 (March 6, 2012): 713–22. http://dx.doi.org/10.7202/1008341ar.

Full text
Abstract:
Are assessment tools for machine-generated translations applicable to human translations? To address this question, the present study compares two assessments used in translation tests: the first is the error-analysis-based method applied by most schools and institutions, the other a scale-based method proposed by Liu, Chang et al. (2005). They have adapted Carroll’s scales developed for quality assessment of machine-generated translations. In the present study, twelve graders were invited to re-grade the test papers in Liu, Chang et al. (2005)’s experiment by different methods. Based on the results and graders’ feedback, a number of modifications of the measuring procedure as well as the scales were provided. The study showed that the scale method mostly used to assess machine-generated translations is also a reliable and valid tool to assess human translations. The measurement was accepted by the Ministry of Education in Taiwan and applied in the 2007 public translation proficiency test.
APA, Harvard, Vancouver, ISO, and other styles
23

Xiu, Peng, and Liming Xeauyin. "Human translation vs machine translation: The practitioner phenomenology." Linguistics and Culture Review 2, no. 1 (May 9, 2018): 13–23. http://dx.doi.org/10.21744/lingcure.v2n1.8.

Full text
Abstract:
The paper aims to explore the current phenomenon regarding human translation versus machine translation. Human translation (HT), by definition, is when a human translator—rather than a machine—translates text. It is the oldest form of translation, relying on pure human intelligence to convert one way of saying things into another. Translation is necessary for the spread of information, knowledge, and ideas, and it is absolutely necessary for effective and empathetic communication between different cultures. Translation, therefore, is critical for social harmony and peace. Only a human translator can convey the nuances of meaning in a text, because a machine translator will just produce a direct word-for-word translation. This is a hindrance for machines, which are not advanced enough to render these nuances accurately and can only do word-for-word translations. There are different translation techniques, diverse theories about translation, and eight different types of translation services, including technical translation, judicial translation, and certified translation.
APA, Harvard, Vancouver, ISO, and other styles
24

Mohar, Tjaša, Sara Orthaber, and Tomaž Onič. "Machine Translated Atwood: Utopia or Dystopia?" ELOPE: English Language Overseas Perspectives and Enquiries 17, no. 1 (May 26, 2020): 125–41. http://dx.doi.org/10.4312/elope.17.1.125-141.

Full text
Abstract:
Margaret Atwood’s masterful linguistic creativity exceeds the limits of ordinary discourse. Her elliptical language contributes to interpretative gaps, while the ambiguity and openness of her texts intentionally deceive the reader. The translator of Atwood’s texts therefore faces the challenge of identifying the rich interpretative potential of the original, as well as of preserving it in the target language. Witnessing the rise of artificial intelligence, a natural question arises whether a human translator could ever be replaced by a machine in translating such challenging texts. This article aims to contribute to the ongoing debate on literary machine translation by examining the translations of Atwood’s “Life Stories” generated by two neural machine translation (NMT) systems and comparing them to those produced by translation students. We deliberately chose a literary text where the aesthetic value depends mostly on the author’s personal style, and which we had presumed would be problematic to translate.
APA, Harvard, Vancouver, ISO, and other styles
25

Zhang, Chun Xiang, Long Deng, Xue Yao Gao, and Li Li Guo. "Word Sense Disambiguation for Improving the Quality of Machine Translation." Advanced Materials Research 981 (July 2014): 153–56. http://dx.doi.org/10.4028/www.scientific.net/amr.981.153.

Full text
Abstract:
Word sense disambiguation is key to many application problems in natural language processing. In this paper, a specific word sense disambiguation classifier is introduced into a machine translation system in order to improve the quality of the output translation. Firstly, the translation of the ambiguous word is deleted from the machine translation of the Chinese sentence. Secondly, the ambiguous word is disambiguated, the classification labels being the candidate translations of the ambiguous word. Thirdly, these two translations are combined. Fifty Chinese sentences containing ambiguous words were collected for test experiments. Experimental results show that translation quality is improved after the proposed method is applied.
APA, Harvard, Vancouver, ISO, and other styles
26

Sipayung, Kammer Tuahman, Novdin Manoktong Sianturi, I. Made Dwipa Arta, Yeti Rohayati, and Diani Indah. "Comparison of Translation Techniques by Google Translate and U-Dictionary: How Differently Does Both Machine Translation Tools Perform in Translating?" Elsya : Journal of English Language Studies 3, no. 3 (October 20, 2021): 236–45. http://dx.doi.org/10.31849/elsya.v3i3.7517.

Full text
Abstract:
Better translations produced by computational linguistics should be evaluated through linguistic theory. This research aims to describe the translation techniques used by Google Translate and U-Dictionary. The study used a qualitative research method with a descriptive design, which was used to describe the occurrences of translation techniques in both machine translation tools, with the researchers serving as the instrument comparing the techniques each machine produced. The data come from an expository text entitled “Importance of Good Manners in Every Day Life” and consist of 122 word/phrase translation pairs, with English as the source language and Indonesian as the target language. The results show that Google Translate applies five of Molina and Albir’s (2002) eighteen translation techniques, while U-Dictionary applies seven. Google Translate predominantly applies the literal translation technique (86.8%), followed by the reduction technique (4.9%). U-Dictionary also predominantly applies the literal translation technique (75.4%), followed by the variation technique (13.1%). This study shows that the two machines produced different target texts for the same source text due to different applications of techniques, with U-Dictionary applying a greater variety of translation techniques than Google Translate. The researchers hope this study can be used as an evaluation for improving the performance of machine translation.
APA, Harvard, Vancouver, ISO, and other styles
27

Bertallot, A. "Machine language translations - Letters." Computer 37, no. 5 (2004): 7. http://dx.doi.org/10.1109/mc.2004.1297223.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Tezcan, Arda, and Bram Bulté. "Evaluating the Impact of Integrating Similar Translations into Neural Machine Translation." Information 13, no. 1 (January 4, 2022): 19. http://dx.doi.org/10.3390/info13010019.

Full text
Abstract:
Previous research has shown that simple methods of augmenting machine translation training data and input sentences with translations of similar sentences (or fuzzy matches), retrieved from a translation memory or bilingual corpus, lead to considerable improvements in translation quality, as assessed by a limited set of automatic evaluation metrics. In this study, we extend this evaluation by calculating a wider range of automated quality metrics that tap into different aspects of translation quality and by performing manual MT error analysis. Moreover, we investigate in more detail how fuzzy matches influence translations and where potential quality improvements could still be made by carrying out a series of quantitative analyses that focus on different characteristics of the retrieved fuzzy matches. The automated evaluation shows that the quality of NFR translations is higher than the NMT baseline in terms of all metrics. However, the manual error analysis did not reveal a difference between the two systems in terms of total number of translation errors; yet, different profiles emerged when considering the types of errors made. Finally, in our analysis of how fuzzy matches influence NFR translations, we identified a number of features that could be used to improve the selection of fuzzy matches for NFR data augmentation.
APA, Harvard, Vancouver, ISO, and other styles
29

Mariño, José B., Rafael E. Banchs, Josep M. Crego, Adrià de Gispert, Patrik Lambert, José A. R. Fonollosa, and Marta R. Costa-jussà. "N-gram-based Machine Translation." Computational Linguistics 32, no. 4 (December 2006): 527–49. http://dx.doi.org/10.1162/coli.2006.32.4.527.

Full text
Abstract:
This article describes in detail an n-gram approach to statistical machine translation. This approach consists of a log-linear combination of a translation model based on n-grams of bilingual units, which are referred to as tuples, along with four specific feature functions. Translation performance, which happens to be in the state of the art, is demonstrated with Spanish-to-English and English-to-Spanish translations of the European Parliament Plenary Sessions (EPPS).
APA, Harvard, Vancouver, ISO, and other styles
30

Qizi, Yunusova Nilufar Maxmudjon. "Productivity of Machine Translation." International Journal of Teaching, Learning and Education 2, no. 1 (2023): 01–02. http://dx.doi.org/10.22161/ijtle.2.1.1.

Full text
Abstract:
This article is about the branch of technical translation called machine translation (MT), or machine-assisted translation. This method of translation uses various types of computer software to generate translations from a source language to a target language without the assistance of a human. There are different methods of machine translation, and a plethora of machine translators in the form of free online engines are available. However, within the field of technical communication, there are two basic types of machine translators able to translate massive amounts of text at a time: transfer-based and data-driven machine translators. Transfer-based machine translation systems, which are quite costly to develop, are built by linguists who determine the grammar rules for the source and target languages. The machine works within the rules and guidelines developed by the linguist. Because rules must be developed for the system, this can be very time-consuming and requires an extensive knowledge base about the structures of the languages on the part of the linguist; nonetheless, the majority of commercial machine translators are transfer-based machines.
APA, Harvard, Vancouver, ISO, and other styles
31

Sokolova, Natalia. "Machine vs Human Translation in the Synergetic Translation Space." Vestnik Volgogradskogo gosudarstvennogo universiteta. Serija 2. Jazykoznanije, no. 6 (February 2021): 89–98. http://dx.doi.org/10.15688/jvolsu2.2021.6.8.

Full text
Abstract:
The paper focuses on English-to-Russian translations of patent applications on the website of the World Intellectual Property Organization (WIPO). A comparative analysis of patent applications is performed using translations made with the help of the WIPO Translate tool and by human translators, within the framework of the synergetic translation space concept encompassing the domains of the author's intentions, text content and composition, energy, translator, and recipient, and the notion of translation acceptability. The translation erratology aspects were considered from the point of view of semantic, referential, and syntactic ambiguity within the domains of content-composition and energy space. In the domain of the author, the intention to convey some technical information is revealed, while its rendering in the content-composition and energy domains depends on whether the translation is made by a person or a machine. Genre- and composition-related specifics have been rendered in both cases, while machine translation errors have been shown to result from semantic, referential, or syntactic ambiguity, which is when the translated output is generally considered unacceptable by the recipient. The results obtained can be used for editing machine translations of patent documentation and for assessing the quality of translations of technical documentation governed by other specific genre conventions.
APA, Harvard, Vancouver, ISO, and other styles
32

Ulitkin, Ilya, Irina Filippova, Natalia Ivanova, and Alexey Poroykov. "Automatic evaluation of the quality of machine translation of a scientific text: the results of a five-year-long experiment." E3S Web of Conferences 284 (2021): 08001. http://dx.doi.org/10.1051/e3sconf/202128408001.

Full text
Abstract:
We report on various approaches to automatic evaluation of machine translation quality and describe three widely used methods. These methods, i.e. methods based on string matching and n-gram models, make it possible to compare the quality of machine translation to a reference translation. We employ modern metrics for automatic evaluation of machine translation quality, such as BLEU, F-measure, and TER, to compare translations made by the Google and PROMT neural machine translation systems with translations obtained 5 years ago, when statistical machine translation and rule-based machine translation algorithms were employed by Google and PROMT, respectively, as the main translation algorithms [6]. The evaluation of the translation quality of candidate texts generated by Google and PROMT against reference translations using an automatic translation evaluation program reveals significant qualitative changes as compared with the results obtained 5 years ago, which indicates a dramatic improvement in the work of the above-mentioned online translation systems. Ways to improve the quality of machine translation are discussed. It is shown that modern systems of automatic evaluation of translation quality allow errors made by machine translation systems to be identified and systematized, which will enable the improvement of the quality of translation by these systems in the future.
APA, Harvard, Vancouver, ISO, and other styles
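As a side note on the reference-based metrics named in the abstract above (BLEU, F-measure, TER), the sketch below shows one common way of scoring candidate machine translations against reference translations with the sacrebleu library. The example sentences are invented placeholders, and this is not the evaluation program used in the study.

```python
# Minimal sketch of reference-based MT evaluation with BLEU and TER (sacrebleu assumed installed).
from sacrebleu.metrics import BLEU, TER

# Invented example data: candidate translations from an MT system and one reference stream.
candidates = [
    "The experiment was repeated five years later.",
    "Translation quality improved dramatically.",
]
references = [[
    "The experiment was repeated after five years.",
    "The quality of translation improved dramatically.",
]]  # sacrebleu expects a list of reference streams aligned with the candidates

print(BLEU().corpus_score(candidates, references))  # n-gram precision based; higher is better
print(TER().corpus_score(candidates, references))   # edit-rate based; lower is better
```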
33

Humblé, Philippe. "Tradução automática e poesia: o caso do inglês e do português" [Machine translation and poetry: the case of English and Portuguese]. Ilha do Desterro A Journal of English Language, Literatures in English and Cultural Studies 72, no. 2 (May 31, 2019): 41–52. http://dx.doi.org/10.5007/2175-8026.2019v72n2p41.

Full text
Abstract:
This article sets out to analyse the translations into Portuguese of three poems written in English: “A Letter is a Joy of Earth” by Emily Dickinson, “To a Stranger” by Walt Whitman, and “Sandra” by Charles Bukowski. Three translations by three human translators, Geir Campos, Pedro Gonzaga and Jorge de Sena, are compared to the translations made by Google Translate in order to evaluate machine translation quality. This research shows that machine translations are less “ludicrous” than some would think and are in fact quite acceptable. In the cases investigated, machine translations are sometimes as acceptable as the ones made by the professional translators, and they could even help them avoid mistakes made through a lack of attention or by overlooking some of the possible meanings of polysemous words. The Google translations are obviously plainer, and there are a number of mistakes in them of the kind one expects: wrong agreement, wrong interpretations of polysemous words, wrong interpretation of gendered words. However, overall the results are far more satisfying than forecast. Google Translate, or similar programmes, may help translators with different, albeit sound, alternatives. Additionally, machine translations provide a useful tool for analysing the idiosyncrasies of translators.
APA, Harvard, Vancouver, ISO, and other styles
34

Omar, Abdulfattah, and Yasser A. Gomaa. "The Machine Translation of Literature: Implications for Translation Pedagogy." International Journal of Emerging Technologies in Learning (iJET) 15, no. 11 (June 12, 2020): 228. http://dx.doi.org/10.3991/ijet.v15i11.13275.

Full text
Abstract:
The recent years have witnessed an increasing importance of machine translation systems due to the prolific production of online texts in different disciplines and, furthermore, the inability of traditional translation methods to address translation needs all over the world. It is even argued that training on translation tools should be integrated into translation pedagogies and, ultimately, that courses should be provided for students and professionals. In spite of the effectiveness of translation tools and systems in providing solutions in relation to different disciplines and text genres, the usability and reliability of such systems for literary texts, however, is still highly controversial. Many critics and educators still underestimate the usefulness of machine translation systems in literature, which could be partially attributed to the unique nature of the language of literary texts. The issue has pedagogical implications for translation instruction due to the need to integrate emerging technologies in teaching and learning practices. For the proper use of translation technologies in educational contexts, these need to be well evaluated. For this purpose, this study evaluates the usefulness of applying machine translation systems to literature with the purpose of identifying the challenges that may have negative impacts on their reliability. To do this, two translation systems are selected, namely Google Translate and Q Translate. By way of illustration, the study is based on a corpus of two English prose fiction texts: J. K. Rowling’s novel Harry Potter and the Philosopher’s Stone and Edgar Allan Poe’s short story The Black Cat. Automatic translations generated by the two machine translation systems were compared to human-made Arabic translations with the purpose of identifying the problems within these translations. Results indicate that users encounter different lexical, structural, and pragmatic errors which negatively impact the reliability of these translations. Educators and translation instructors need to reflect on the challenges of machine translation systems in relation to literature. Software developers also need to address the problems faced by users and students in translation from and into the Arabic language.
APA, Harvard, Vancouver, ISO, and other styles
35

Webster, Rebecca, Margot Fonteyne, Arda Tezcan, Lieve Macken, and Joke Daems. "Gutenberg Goes Neural: Comparing Features of Dutch Human Translations with Raw Neural Machine Translation Outputs in a Corpus of English Literary Classics." Informatics 7, no. 3 (August 28, 2020): 32. http://dx.doi.org/10.3390/informatics7030032.

Full text
Abstract:
Due to the growing success of neural machine translation (NMT), many have started to question its applicability within the field of literary translation. In order to grasp the possibilities of NMT, we studied the output of the neural machine translation systems Google Translate (GNMT) and DeepL when applied to four classic novels translated from English into Dutch. The quality of the NMT systems is discussed by focusing on manual annotations, and we also employed various metrics in order to get an insight into lexical richness, local cohesion, and syntactic and stylistic differences. Firstly, we discovered that a large proportion of the translated sentences contained errors. We also observed a lower level of lexical richness and local cohesion in the NMTs compared to the human translations. In addition, NMTs are more likely to follow the syntactic structure of a source sentence, whereas human translations can differ. Lastly, the human translations deviate from the machine translations in style.
APA, Harvard, Vancouver, ISO, and other styles
36

Baihaqi, Akhmad. "THE TRANSLATION RESULTS OF CHILDREN BILINGUAL STORY BOOK BETWEEN HUMAN AND MACHINE TRANSLATION: A COMPARATIVE MODEL." Cakrawala Pedagogik 5, no. 2 (October 1, 2021): 149–59. http://dx.doi.org/10.51499/cp.v5i2.260.

Full text
Abstract:
The purpose of this study is to analyze the differences between human and machine translations of a children's story book, especially in terms of accuracy, readability, and understandability. The method used was qualitative content analysis. The children's story book Cindelaras served as the source of data. The original book was written in Indonesian and published in 2001 by Gramedia Widiasarana Indonesia. The results show that human and machine translations deliver different lexical, grammatical, semantic, and stylistic versions. These differences occur because machine translation has not been able to adequately recognize the context of situation and culture, which is a weakness and limitation of machine translation. Such machines cannot replace human translation. Nevertheless, a machine can provide a pre-translation that helps human translators work faster and produce more accurate, readable, and understandable versions.
APA, Harvard, Vancouver, ISO, and other styles
37

Srivastava, Ankit, Georg Rehm, and Felix Sasaki. "Improving Machine Translation through Linked Data." Prague Bulletin of Mathematical Linguistics 108, no. 1 (June 1, 2017): 355–66. http://dx.doi.org/10.1515/pralin-2017-0033.

Full text
Abstract:
With the ever-increasing availability of linked multilingual lexical resources, there is a renewed interest in extending Natural Language Processing (NLP) applications so that they can make use of the vast set of lexical knowledge bases available in the Semantic Web. In the case of Machine Translation, MT systems can potentially benefit from such a resource. Unknown words and ambiguous translations are among the most common sources of error. In this paper, we attempt to minimise these types of errors by interfacing Statistical Machine Translation (SMT) models with Linked Open Data (LOD) resources such as DBpedia and BabelNet. We perform several experiments based on the SMT system Moses and evaluate multiple strategies for exploiting knowledge from multilingual linked data in automatically translating named entities. We conclude with an analysis of best practices for multilingual linked data sets in order to optimise their benefit to multilingual and cross-lingual applications.
APA, Harvard, Vancouver, ISO, and other styles
38

Eria, Kamya, and Manoj Jayabalan. "Neural Machine Translation: A Review of the Approaches." Journal of Computational and Theoretical Nanoscience 16, no. 8 (August 1, 2019): 3596–602. http://dx.doi.org/10.1166/jctn.2019.8331.

Full text
Abstract:
Neural Machine Translation (NMT) has presented promising results in machine translation, convincingly replacing traditional Statistical Machine Translation (SMT). This success of NMT in machine translation tasks means that more and more translation tasks use NMT. This paper systematically reviews the NMT systems proposed since 2014; 86 NMT papers have been selected and reviewed. The largest number of NMT systems was proposed in 2016, which was also the case for the machine translation workshops that provided datasets for NMT tasks. Most of the proposed systems covered English, German, French, and Chinese translation tasks. The BLEU score, accompanied by significance tests, has been found to be the best metric for evaluating NMT systems, and human judgement of fluency and adequacy is also important to support the metrics. There is still room for further improvement in translations with regard to rich source translations and rare words. There is also a need for extensive NMT work in other languages to maximize the apparent capabilities of NMT systems. RNN Search and Moses are typically used to develop baselines for model comparisons. The results provide forward-looking and directional insights into further translation tasks.
APA, Harvard, Vancouver, ISO, and other styles
39

Lalrempuii, Candy, Badal Soni, and Partha Pakray. "An Improved English-to-Mizo Neural Machine Translation." ACM Transactions on Asian and Low-Resource Language Information Processing 20, no. 4 (May 26, 2021): 1–21. http://dx.doi.org/10.1145/3445974.

Full text
Abstract:
Machine Translation is an effort to bridge language barriers and misinterpretations, making communication more convenient through the automatic translation of languages. The quality of translations produced by corpus-based approaches predominantly depends on the availability of a large parallel corpus. Although machine translation of many Indian languages has progressively gained attention, there is very limited research on machine translation and the challenges of using various machine translation techniques for a low-resource language such as Mizo. In this article, we have implemented and compared statistical-based approaches with modern neural-based approaches for the English–Mizo language pair. We have experimented with different tokenization methods, architectures, and configurations. The performance of translations predicted by the trained models has been evaluated using automatic and human evaluation measures. Furthermore, we have analyzed the prediction errors of the models and the quality of predictions based on variations in sentence length and compared the model performance with the existing baselines.
APA, Harvard, Vancouver, ISO, and other styles
40

Aiken, Milam, Jamison Posey, Bart Garner, and Brian Reithel. "Predicting Machine Translation Comprehension with a Neural Network." INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 15, no. 2 (December 8, 2015): 6546–54. http://dx.doi.org/10.24297/ijct.v15i2.3980.

Full text
Abstract:
Comprehension of natural language translations is dependent upon several factors including textual variables (grammatical, spelling, and word choice errors, sentence complexity, etc.) and human variables (language fluency, topic knowledge, motivation, dyslexia, etc.). An individual reader’s understanding of machine-generated translations can vary widely because of the lower accuracy usually associated with this technology. Prior studies have had mixed results in predicting which variables have the greatest influence on translation comprehension. In the current study, we employ an artificial neural network to analyze survey responses and reading test scores, resulting in a significantly correlated forecast of reading comprehension. Thus, we are able to offer better predictions to identify which readers might have a better grasp of content from garbled translations.
APA, Harvard, Vancouver, ISO, and other styles
41

Munkova, Dasa, Michal Munk, Ľubomír Benko, and Petr Hajek. "The role of automated evaluation techniques in online professional translator training." PeerJ Computer Science 7 (October 4, 2021): e706. http://dx.doi.org/10.7717/peerj-cs.706.

Full text
Abstract:
The rapid technologisation of translation has pushed the translation industry towards machine translation, post-editing, subtitling services and video content translation. In addition, the pandemic situation associated with COVID-19 has rapidly increased the transfer of business and education to the virtual world. This situation has motivated us not only to look for new approaches to online translator training, which requires a different method than learning foreign languages, but in particular to look for new approaches to assessing translator performance within online educational environments. Translation quality assessment is a key task, as the concept of quality is closely linked to the concept of optimization. Automatic metrics are very good indicators of quality, but they do not provide sufficient and detailed linguistic information about translations or post-edited machine translations. However, using their residuals, we can identify the segments with the largest distances between the post-edited machine translations and the machine translations, which allows us to focus a more detailed textual analysis on suspicious segments. We introduce a unique online teaching and learning system, specifically “tailored” for online translator training, and subsequently focus on a new approach to assessing translators’ competences using evaluation techniques: the metrics of automatic evaluation and their residuals. We show that the residuals of the metrics of accuracy (BLEU_n) and error rate (PER, WER, TER, CDER, and HTER) for machine translation post-editing are valid for translator assessment. Using these residuals, we can identify errors in post-editing (critical, major, and minor) and subsequently utilize them in a more detailed linguistic analysis.
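The residual idea can be sketched in a simplified form: compute a per-segment error rate between the machine translation and its post-edited version, fit a trivial model (here, the corpus mean), and flag segments whose residuals are unusually large. The study itself uses several metrics and a more elaborate statistical model; the example below is only a toy approximation with invented segments.

```python
# Toy sketch: per-segment WER between MT output and its post-edited version,
# residuals against the corpus mean, and flagging of suspicious segments.
import numpy as np

def wer(hyp: str, ref: str) -> float:
    """Word error rate: Levenshtein distance over reference length."""
    h, r = hyp.split(), ref.split()
    d = np.zeros((len(r) + 1, len(h) + 1), dtype=int)
    d[:, 0] = np.arange(len(r) + 1)
    d[0, :] = np.arange(len(h) + 1)
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1, d[i - 1, j - 1] + cost)
    return d[len(r), len(h)] / max(len(r), 1)

mt = ["the contract is void", "he signed the the paper", "court rules tomorrow"]
pe = ["the contract is void", "he signed the papers", "the court will rule tomorrow"]

scores = np.array([wer(m, p) for m, p in zip(mt, pe)])
residuals = scores - scores.mean()
# Segments whose residuals exceed one standard deviation warrant closer linguistic analysis.
suspicious = np.where(residuals > residuals.std())[0]
print(list(zip(scores.round(2), residuals.round(2))), "suspicious:", suspicious)
```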
APA, Harvard, Vancouver, ISO, and other styles
42

Knap-Dlouhá, Pavlína. "Automatische vertaling: een levensvatbare oplossing voor het recht?" Brünner Beiträge zur Germanistik und Nordistik, no. 1 (2022): 35–46. http://dx.doi.org/10.5817/bbgn2022-1-4.

Full text
Abstract:
The translation of legal information constitutes a special branch of translation practice, particularly because of the systemic nature of legal terminology, complicated syntax, and the use of fixed word associations and language templates. Professional human translators have always been the preferred way to translate legal documents because they have the linguistic expertise, a good understanding of the cultural context, and the capacity to make decisions when translating. As more and more international companies enter new markets, the need to translate legal texts increases, often involving large amounts of data that must be translated quickly. The possibilities and limits of machine translation (MT) are therefore being frequently tested. Since the legal sector traditionally requires high-quality translations, the question arises whether machine translation can also work well in this field.
APA, Harvard, Vancouver, ISO, and other styles
43

Park, Sue Jin. "L2 Writing in a Machine Translation-based Korean Writing Class -Learner Perceptions and Characteristics of Translated Texts." Korean Association of General Education 17, no. 3 (June 30, 2023): 139–53. http://dx.doi.org/10.46392/kjge.2023.17.3.139.

Full text
Abstract:
This study examined how Korean learners perceive machine translation in a Korean writing class, what the characteristics of machine-translated texts are, and what patterns appear depending on the level of Korean proficiency. Based on these results, the study aimed to suggest how machine translation in a Korean writing class could help both instructors and students. According to a survey of 77 Korean learners, 96% use machine translation and about 90% find it convenient. Beginners mostly used machine translation when translating from their native language into Korean, while intermediate and advanced learners used it when translating from Korean into their native language. Machine translation was mainly used for learning written language. In a second survey of the same population, more than 98% of learners recognized that machine translation was convenient but inaccurate, and 97% asked for in-class activities on using machine translation that would also provide feedback. In general, advanced learners reviewed and modified machine-translated results more carefully than beginner and intermediate learners, while beginners reviewed and modified them less carefully than intermediate and advanced learners. Based on these findings, teaching and learning methods for using machine translation in the writing class are suggested: 1) finding problems and correcting one’s own language knowledge through self-correction after using machine translation, 2) discovering the differences between one’s mother tongue and Korean through back-translation activities, 3) discovering and using ways to reduce machine translation errors, and 4) discovering various translations according to translation purpose and intention through cooperative activities, with the instructor guiding learners to discover cultural elements and providing explicit feedback.
APA, Harvard, Vancouver, ISO, and other styles
44

Macketanz, Vivien, Eleftherios Avramidis, Aljoscha Burchardt, Jindrich Helcl, and Ankit Srivastava. "Machine Translation: Phrase-Based, Rule-Based and Neural Approaches with Linguistic Evaluation." Cybernetics and Information Technologies 17, no. 2 (June 1, 2017): 28–43. http://dx.doi.org/10.1515/cait-2017-0014.

Full text
Abstract:
In this article we present a novel linguistically driven evaluation method and apply it to the main approaches of Machine Translation (Rule-based, Phrase-based, Neural) to gain insights into their strengths and weaknesses in much more detail than provided by current evaluation schemes. Translating between two languages requires substantial modelling of knowledge about the two languages, about translation, and about the world. Using English-German IT-domain translation as a case-study, we also enhance the Phrase-based system by exploiting parallel treebanks for syntax-aware phrase extraction and by interfacing with Linked Open Data (LOD) for extracting named entity translations in a post decoding framework.
APA, Harvard, Vancouver, ISO, and other styles
45

Tsai, Yvonne. "Linguistic evaluation of translation errors in Chinese–English machine translations of patent titles." FORUM / Revue internationale d’interprétation et de traduction / International Journal of Interpretation and Translation 15, no. 1 (August 19, 2017): 142–56. http://dx.doi.org/10.1075/forum.15.1.08tsa.

Full text
Abstract:
The title of an invention allows the reader to understand the significance of a patent claim, and the wording of the title recurs throughout the subsequent patent documentation. If the translation of the title is erroneous, the quality of the translation in other parts of the patent documentation also suffers. This research involved using linguistic evaluation to identify common translation errors in Chinese–English machine translations of patent titles and examine the quality of machine-translated patent titles. Special focus was placed on orthographic, morphological, lexical, semantic, and syntactic errors found in patent titles. We sought to answer the following questions: (1) What are the trends in the application of machine translation in the Taiwan Intellectual Property Office (TIPO)? (2) How is the quality of machine translation controlled at TIPO? (3) What are common errors in machine-translated patent titles? Through analysis of our findings, it is possible to estimate the level of effort required from a posteditor following translation, and to suggest methods of improving machine translations of patent titles. This study also generates information applicable to the training of patent translators and posteditors.
APA, Harvard, Vancouver, ISO, and other styles
46

Marie, Benjamin, and Atsushi Fujita. "Synthesizing Parallel Data of User-Generated Texts with Zero-Shot Neural Machine Translation." Transactions of the Association for Computational Linguistics 8 (November 2020): 710–25. http://dx.doi.org/10.1162/tacl_a_00341.

Full text
Abstract:
Neural machine translation (NMT) systems are usually trained on clean parallel data. They can perform very well for translating clean in-domain texts. However, as demonstrated by previous work, the translation quality significantly worsens when translating noisy texts, such as user-generated texts (UGT) from online social media. Given the lack of parallel data of UGT that can be used to train or adapt NMT systems, we synthesize parallel data of UGT, exploiting monolingual data of UGT through crosslingual language model pre-training and zero-shot NMT systems. This paper presents two different but complementary approaches: One alters given clean parallel data into UGT-like parallel data whereas the other generates translations from monolingual data of UGT. On the MTNT translation tasks, we show that our synthesized parallel data can lead to better NMT systems for UGT while making them more robust in translating texts from various domains and styles.
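The general flavour of synthesising parallel data from monolingual user-generated text can be illustrated with an off-the-shelf translation model: translate the noisy text and pair each output with its source. Note that this is not the authors' setup, which relies on cross-lingual language-model pre-training and zero-shot NMT; the model name and sentences below are assumptions for illustration.

```python
# Hedged illustration: build synthetic (noisy source, generated target) pairs
# from monolingual user-generated text with a pre-trained MarianMT model.
# Requires the `transformers` package and downloads the model on first use.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-fr"   # any available direction works for the sketch
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

ugt = ["omg this phone is sooo slow :(", "best. pizza. ever. #noregrets"]
batch = tokenizer(ugt, return_tensors="pt", padding=True, truncation=True)
outputs = model.generate(**batch, max_new_tokens=64)
synthetic_targets = tokenizer.batch_decode(outputs, skip_special_tokens=True)

# These pairs could then serve as synthetic training data for a UGT-robust NMT system.
print(list(zip(ugt, synthetic_targets)))
```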
APA, Harvard, Vancouver, ISO, and other styles
47

Toral, Antonio, and Andy Way. "Machine-assisted translation of literary text." Culture & Society issue 4, no. 2 (December 31, 2015): 240–67. http://dx.doi.org/10.1075/ts.4.2.04tor.

Full text
Abstract:
Contrary to perceived wisdom, we explore the role of machine translation (MT) in assisting with the translation of literary texts, considering both its limitations and its potential. Our motivations to explore this subject are twofold, arising from: (1) recent research advances in MT, and (2) the recent emergence of the ebook, which together allow us for the first time to build literature-specific MT systems by training statistical MT models on novels and their professional translations. A key challenge in literary translation is that one needs to preserve not only the meaning (as in other domains such as technical translation) but also the reading experience, so a literary translator needs to carefully select from the possible translation options. We explore the role of translation options in literary translation, especially in the context of the relatedness of the languages involved. We take Camus’ L’Étranger in the original French and provide qualitative and quantitative analyses of its translations into English (a less-related language) and Italian (more closely related). Unsurprisingly, the MT output for Italian seems more straightforward to post-edit. We also show that the performance of MT has improved over the last two years for this particular book, and that the applicability of MT depends not only on the text to be translated but also on the type of translation that we are trying to produce. We then translate a novel from Spanish to Catalan with a literature-specific MT system. We assess the potential of this approach by discussing the translation quality of several representative passages.
APA, Harvard, Vancouver, ISO, and other styles
48

Sánchez-Cartagena, Víctor M., Juan Antonio Pérez-Ortiz, and Felipe Sánchez-Martínez. "Integrating Rules and Dictionaries from Shallow-Transfer Machine Translation into Phrase-Based Statistical Machine Translation." Journal of Artificial Intelligence Research 55 (January 13, 2016): 17–61. http://dx.doi.org/10.1613/jair.4761.

Full text
Abstract:
We describe a hybridisation strategy whose objective is to integrate linguistic resources from shallow-transfer rule-based machine translation (RBMT) into phrase-based statistical machine translation (PBSMT). It basically consists of enriching the phrase table of a PBSMT system with bilingual phrase pairs matching transfer rules and dictionary entries from a shallow-transfer RBMT system. This new strategy takes advantage of how the linguistic resources are used by the RBMT system to segment the source-language sentences to be translated, and overcomes the limitations of existing hybrid approaches that treat the RBMT systems as a black box. Experimental results confirm that our approach delivers translations of higher quality than existing ones, and that it is especially useful when the parallel corpus available for training the SMT system is small or when translating out-of-domain texts that are well covered by the RBMT dictionaries. A combination of this approach with a recently proposed unsupervised shallow-transfer rule inference algorithm results in a significantly greater translation quality than that of a baseline PBSMT; in this case, the only hand-crafted resources used are the dictionaries commonly used in RBMT. Moreover, the translation quality achieved by the hybrid system built with automatically inferred rules is similar to that obtained by those built with hand-crafted rules.
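A minimal sketch of the hybridisation idea is to append bilingual entries taken from an RBMT dictionary to a Moses-style phrase table so that the PBSMT decoder can use them. The dictionary, file name, uniform scores and alignment fields below are illustrative assumptions; real systems derive the translation-model scores from the training data and from the matched transfer rules.

```python
# Hedged sketch: write RBMT dictionary entries as extra Moses-style phrase-table lines.
# Scores and alignments are placeholders; exact field semantics depend on the Moses setup.
rbmt_dictionary = {
    "ordenador": "computer",
    "impresora": "printer",
    "sistema operativo": "operating system",
}

with open("phrase-table.extra", "w", encoding="utf-8") as out:
    for src, tgt in rbmt_dictionary.items():
        scores = "1.0 1.0 1.0 1.0"  # placeholder translation probabilities / lexical weights
        alignment = " ".join(f"{i}-{i}" for i in range(len(src.split())))
        # Assumed line layout: source ||| target ||| scores ||| word alignment
        out.write(f"{src} ||| {tgt} ||| {scores} ||| {alignment}\n")

print(open("phrase-table.extra", encoding="utf-8").read())
```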
APA, Harvard, Vancouver, ISO, and other styles
49

Nakagawa, Hiroshi. "Disambiguation of single noun translations extracted from bilingual comparable corpora." Terminology 7, no. 1 (December 7, 2001): 63–83. http://dx.doi.org/10.1075/term.7.1.06nak.

Full text
Abstract:
Bilingual machine-readable dictionaries are important and indispensable information resources for cross-language information retrieval and machine translation. Recently, these cross-language information activities have begun to focus on specific academic or technological domains. In this paper, we describe a bilingual dictionary acquisition system which extracts translations from non-parallel but comparable corpora of a specific academic domain and disambiguates the extracted translations. The proposed method is two-fold. In the first stage, candidate terms are extracted from a Japanese and an English corpus, respectively, and ranked according to their importance as terms. In the second stage, ambiguous translations are resolved by selecting the target-language translation whose rank is nearest to that of the source-language term. Finally, we evaluate the proposed method in an experiment.
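The second stage described above can be sketched very compactly: when a source term has several candidate translations, select the target term whose importance rank is closest to that of the source term. The term lists and ranks below are toy values, not the paper's data.

```python
# Toy sketch of rank-based disambiguation of candidate translations.
source_ranks = {"形態素解析": 3}                       # rank of the Japanese term by termhood
target_ranks = {"morphological analysis": 4,
                "morpheme": 120,
                "analysis": 57}                        # ranks of the English candidates

def disambiguate(term, candidates, src_ranks, tgt_ranks):
    """Pick the candidate translation whose rank is nearest to the source term's rank."""
    src_rank = src_ranks[term]
    return min(candidates, key=lambda c: abs(tgt_ranks[c] - src_rank))

best = disambiguate("形態素解析",
                    ["morphological analysis", "morpheme", "analysis"],
                    source_ranks, target_ranks)
print(best)   # -> "morphological analysis", the candidate nearest in rank
```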
APA, Harvard, Vancouver, ISO, and other styles
50

Screen, Benjamin. "Productivity and quality when editing machine translation and translation memory outputs: an empirical analysis of English to Welsh translation." Studia Celtica Posnaniensia 2, no. 119 (September 26, 2017): 142–24. http://dx.doi.org/10.1515/scp-2017-0007.

Full text
Abstract:
This article reports on a controlled study carried out to examine the possible benefits of editing Machine Translation and Translation Memory outputs when translating from English to Welsh. Using software capable of timing the translation process per segment, 8 professional translators each translated 75 sentences of differing match percentage, and post-edited a further 25 segments of Machine Translation. Basing the final analysis on 800 sentences and 17,440 words, the use of Fuzzy Matches in the 70-99% match range, Exact Matches and Statistical Machine Translation was found to significantly speed up the translation process. Significant correlations were also found between the processing time data of Exact Matches and Machine Translation post-editing, rather than between Fuzzy Matches and Machine Translation as expected. Two experienced translators were then asked to rate all translations for fidelity, grammaticality and style, whereby it was found that the use of translation technology either did not negatively affect translation quality compared to manual translation, or its use actually improved final quality in some cases. As well as confirming the findings of research in relation to translation technology, these findings also contradict supposed similarities between translation quality in terms of style and post-editing Machine Translation.
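For readers unfamiliar with the match bands mentioned above, the sketch below approximates a translation-memory fuzzy match percentage with character-level similarity. Real CAT tools use more elaborate, often proprietary, scoring, so this is only an assumption-laden illustration with invented segments.

```python
# Rough sketch: approximate a TM fuzzy match percentage with difflib similarity.
from difflib import SequenceMatcher

def fuzzy_match(new_segment: str, tm_segment: str) -> float:
    """Return an approximate match percentage between a new source segment and a TM entry."""
    return round(100 * SequenceMatcher(None, new_segment, tm_segment).ratio(), 1)

tm_entry = "Click the button to save the document."
new_src = "Click the button to save your document."
print(fuzzy_match(new_src, tm_entry))   # e.g. roughly 90%+ -> falls in the 70-99% fuzzy band
```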
APA, Harvard, Vancouver, ISO, and other styles