Academic literature on the topic 'Machine translations'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Machine translations.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Machine translations"

1

Ardi, Havid, Muhd Al Hafizh, Iftahur Rezqi, and Raihana Tuzzikriah. "CAN MACHINE TRANSLATIONS TRANSLATE HUMOROUS TEXTS?" Humanus 21, no. 1 (May 11, 2022): 99. http://dx.doi.org/10.24036/humanus.v21i1.115698.

Abstract:
Machine translation (MT) has attracted many researchers' attention in various ways. Although advances in technology have improved MT results, their quality is still criticized. One type of text that poses great challenges and translation problems is the humorous text. Humorous texts that trigger a smile or a laugh should have the same effect in another language. Humor draws on linguistic, cultural, and universal aspects to create a joke. This raises the question: how do machines translate humorous texts from English into Indonesian? This article compares the translation results and the errors made by three prominent machine translation systems (Google Translate, Yandex Translate, and Bing Microsoft Translator) in translating humorous texts. The research applied a qualitative descriptive method. The data were obtained by comparing the translation results produced by the three online MT systems on four humorous texts. The findings show that Google Translate produced the best translation results. There are some lexical, syntactic, semantic, and pragmatic errors in the output. The implication is that machine translation still needs human post-editing to preserve the humor and produce a similar effect.
2

Jiang, Yue, and Jiang Niu. "A corpus-based search for machine translationese in terms of discourse coherence." Across Languages and Cultures 23, no. 2 (November 7, 2022): 148–66. http://dx.doi.org/10.1556/084.2022.00182.

Abstract:
Earlier studies have corroborated that human translation exhibits unique linguistic features, usually referred to as translationese. However, research on machine translationese, in spite of some sparse efforts, is still in its infancy. By comparing machine translation with human translation and original target language texts, this study aims to investigate if machine translation has unique linguistic features of its own too, to what extent machine translations are different from human translations and target-language originals, and what characteristics are typical of machine translations. To this end, we collected a corpus containing English translations of modern Chinese literary texts produced by neural machine translation systems and human professional translators and comparable original texts in the target language. Based on the corpus, a quantitative study of discourse coherence was conducted by observing metrics in three dimensions borrowed from Coh-Metrix, including connectives, latent semantic analysis and the situation/mental model. The results support the existence of translationese in both human and machine translations when they are compared with original texts. However, machine translationese is not the same as human translationese in some metrics of discourse coherence. Additionally, machine translation systems, such as Google and DeepL, when compared with each other, show unique features in some coherence metrics, although on the whole they are not significantly different from each other in those coherence metrics.
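The discourse-coherence comparison described above can be illustrated with one Coh-Metrix-style metric, connective incidence (connectives per 1,000 word tokens). This is a minimal sketch: the connective list and the sample sentences are invented for illustration, not Coh-Metrix's actual inventory or the study's data.

```python
# Connective incidence: connectives per 1,000 word tokens, one of the
# Coh-Metrix-style discourse-coherence signals the study draws on.
# The connective set is a small illustrative sample.

CONNECTIVES = {"and", "but", "because", "however", "therefore",
               "although", "moreover", "so", "then", "while"}

def connective_incidence(text: str) -> float:
    """Return connectives per 1,000 word tokens."""
    tokens = [t.strip(".,;:!?").lower() for t in text.split()]
    tokens = [t for t in tokens if t]
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in CONNECTIVES)
    return 1000.0 * hits / len(tokens)

human = "The plan failed because funding stopped. However, the team continued."
machine = "The plan failed. Funding stopped. The team continued."
print(connective_incidence(human) > connective_incidence(machine))  # True
```

Comparing such rates across machine-translated, human-translated, and original corpora is the shape of the analysis the abstract reports.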
3

Halimah, Halimah. "COMPARISON OF HUMAN TRANSLATION WITH GOOGLE TRANSLATION OF IMPERATIVE SENTENCES IN PROCEDURES TEXT." BAHTERA : Jurnal Pendidikan Bahasa dan Sastra 17, no. 1 (January 31, 2018): 11–29. http://dx.doi.org/10.21009/bahtera.171.2.

Abstract:
This study aims to analyze the similarity between human translation and machine translation of a procedural text. The research uses a content analysis approach. The analysis was performed on an English procedural text for a "VIXAL Lebih Wangi" cleaning product translated into Indonesian by Nia Kurniawati (representing human translation), while Google Translate was used to represent machine translation. The similarities compared in this study concern the phrases and the meaning of whole sentences in the two translations. The discussion shows that the similarity between human translation and machine translation of the procedural text is low, i.e., 29%. Machine translation still requires human effort to produce better translations. Keywords: similarity aspect, human translation, machine translation, procedural text
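A crude way to picture a sentence-level similarity percentage between two translations is word-set overlap (Jaccard). This is an illustrative proxy, not the metric used in the study, and the sample sentences are invented.

```python
# Word-level Jaccard overlap between a human and a machine translation
# of the same sentence, reported as a percentage. Illustrative only.

def jaccard_similarity(a: str, b: str) -> float:
    wa = set(a.lower().split())
    wb = set(b.lower().split())
    if not wa and not wb:
        return 1.0
    return len(wa & wb) / len(wa | wb)

human = "pour the liquid on the surface and wipe with a clean cloth"
machine = "pour liquid onto the surface then wipe using a cloth"
print(round(100 * jaccard_similarity(human, machine)))  # percentage overlap
```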
4

Wang, Lan. "The Impacts and Challenges of Artificial Intelligence Translation Tool on Translation Professionals." SHS Web of Conferences 163 (2023): 02021. http://dx.doi.org/10.1051/shsconf/202316302021.

Abstract:
Machine translation, especially translation based on neural network technology, has made major breakthroughs and is increasingly accepted and widely used. The development of artificial intelligence (AI) translation has had a definite impact on translation jobs. People, even professional translators, are relying on AI translation. However, there is no research on whether machine translation software is superior to professional translators in translating various types of documents. In this study, we design an experiment to determine the advantages and disadvantages of AI translations versus human translations. The result shows the impact of the development of AI on the translation industry. Human-AI partnerships will contribute to achieving better translation results and producing high-quality translations in the era of rapid AI development.
5

Persaud, Ajax, and Steven O'Brien. "Quality and Acceptance of Crowdsourced Translation of Web Content." International Journal of Technology and Human Interaction 13, no. 1 (January 2017): 100–115. http://dx.doi.org/10.4018/ijthi.2017010106.

Abstract:
Organizations make extensive use of websites to communicate with people. Often, visitors to their sites speak many different languages and expect to be served in their native language. Translation of web content is a major challenge for many organizations because of high costs and frequent changes in the content. Currently, organizations rely on professional translators or machines to translate their content. The challenge is that professional translation is costly and too slow, while machine translation does not produce high-quality or accurate translations even though it may be faster and less expensive. Crowdsourcing has emerged as a technique with many applications. The purpose of this research is to test whether crowdsourcing can produce translations of equivalent or better quality than professional or machine translators. A crowdsourcing study was undertaken, and the results indicate that the quality of crowdsourced translations was equivalent to professional translations and far better than machine translations. The research and managerial implications are discussed.
6

Kovács, Tímea. "A Comparative Analysis of the Use of ‘Thereof’ in an English Non-translated Text and the English Machine- and Human-translated Versions of the Hungarian Criminal Code." International Journal of Law, Language & Discourse 10, no. 2 (October 14, 2022): 43–54. http://dx.doi.org/10.56498/1022022411.

Abstract:
Owing to the recent rise of neural machine translation, a paradigm shift has been witnessed regarding the role of translators and reviewers. As neural machine translation is increasingly capable of modelling how natural languages work, the traditional tasks of translators are gradually being replaced by new challenges. More emphasis is placed on pre- and post-editing (revision) skills and competences, presumably enabling the production of higher-quality, near human-made translations. In my paper, I attempt to demonstrate, through a qualitative and quantitative comparison of machine-translated legal texts (acts) with human-translated ones, the relevant challenges and dynamic contrasts arising in the process of translating. Through a qualitative and quantitative analysis of the original Hungarian (source language) Criminal Code and its English (target language) machine and human translations, I aim to highlight the peculiar challenges emerging in the process of translation. I also aim to demonstrate what patterns can be observed in translations produced by human and non-human translators.
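The frequency comparison underlying such a study can be sketched as a normalized rate, occurrences per 10,000 tokens, so corpora of different sizes remain comparable. The counts below are invented for illustration, not figures from the paper.

```python
# Normalized frequency of a legal-English marker such as "thereof"
# across text versions of different sizes. All counts are invented.

def per_10k(count: int, total_tokens: int) -> float:
    return 10000.0 * count / total_tokens

corpora = {
    "non-translated English": per_10k(48, 120000),
    "human translation":      per_10k(31, 115000),
    "machine translation":    per_10k(9,  118000),
}
for name, rate in sorted(corpora.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {rate:.2f} per 10k tokens")
```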
7

Luo, Jinru, and Dechao Li. "Universals in machine translation?" International Journal of Corpus Linguistics 27, no. 1 (February 14, 2022): 31–58. http://dx.doi.org/10.1075/ijcl.19127.luo.

Abstract:
By examining and comparing the linguistic patterns in a self-built corpus of Chinese-English translations produced by WeChat Translate, the latest online machine translation app from the most popular social media platform (WeChat) in China, this study explores such questions as whether or not and to what extent simplification and normalization (hypothesized Translation Universals) exhibit themselves in these translations. The results show that, whereas simplification cannot be substantiated, the tendency of normalization to occur in the WeChat translations can be confirmed. The research finds that these results are caused by the operating mechanism of machine translation (MT) systems. Certain salient words tend to prime WeChat’s MT system to repetitively resort to typical language patterns, which leads to a significant overuse of lexical chunks. It is hoped that the present study can shed new light on the development of MT systems and encourage more corpus-based product-oriented research on MT.
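The two hypothesized universals can be approximated with quick corpus signals: a lower type-token ratio is a common simplification proxy, and recurring lexical chunks (repeated n-grams) echo the pattern overuse the study reports. A minimal sketch with invented sample text; the study's actual corpus measures are more elaborate.

```python
# Two quick corpus signals: type-token ratio (simplification proxy)
# and repeated n-gram chunks (lexical-chunk overuse). Illustrative only.

from collections import Counter

def type_token_ratio(tokens):
    return len(set(tokens)) / len(tokens) if tokens else 0.0

def repeated_ngrams(tokens, n=3):
    grams = Counter(tuple(tokens[i:i+n]) for i in range(len(tokens) - n + 1))
    return {g: c for g, c in grams.items() if c > 1}

text = "as a result of the meeting as a result of the vote".split()
print(type_token_ratio(text))
print(repeated_ngrams(text))
```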
8

Al-Shalabi, Riyad, Ghassan Kanaan, Huda Al-Sarhan, Alaa Drabsh, and Islam Al-Husban. "Evaluating Machine Translations from Arabic into English and Vice Versa." International Research Journal of Electronics and Computer Engineering 3, no. 2 (June 24, 2017): 1. http://dx.doi.org/10.24178/irjece.2017.3.2.01.

Abstract:
Machine translation (MT) allows direct communication between two persons without the need for a third party or a pocket dictionary, which could bring significant and performative improvements. Since most traditional translation approaches are word-sensitive, it is very important to consider word order in addition to word selection in the evaluation of any machine translation. To evaluate MT performance, it is necessary to dynamically observe the translation produced by the machine translation tool with respect to word order, word selection, and, furthermore, sentence length. However, a good evaluation with respect to all the previous points is a very challenging issue. In this paper, we first summarize various approaches to evaluating machine translation. We then propose a practical solution by selecting an appropriate, powerful tool called iBLEU to evaluate the degree of accuracy of well-known MT tools (i.e., Google, Bing, Systranet, and Babylon). Based on this solution, we further discuss the performance order of these tools in both directions, Arabic to English and English to Arabic, and determine which direction gives more accurate results for the selected MT systems. After extensive testing, we found Google to be the best-performing system and Systranet the worst. Index Terms: Machine Translation, MTs, Evaluation for Machine Translation, Google, Bing, Systranet and Babylon, Machine Translation tools, BLEU, iBLEU.
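The BLEU family of metrics mentioned above reduces to a didactic core: clipped n-gram precision combined with a brevity penalty. The sketch below is a simplified sentence-level BLEU, not the iBLEU tool itself (which adds, among other things, a penalty against copying the source), and it omits the smoothing that real evaluations use.

```python
# Minimal sentence-level BLEU: clipped n-gram precision up to max_n,
# geometric mean, brevity penalty. A teaching sketch, not iBLEU.

import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i+n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=2):
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cg, rg = ngrams(cand, n), ngrams(ref, n)
        overlap = sum(min(c, rg[g]) for g, c in cg.items())  # clipped counts
        total = max(sum(cg.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

ref = "the cat sat on the mat"
print(bleu("the cat sat on the mat", ref))  # 1.0 for an exact match
print(bleu("a cat sat on mat", ref) < 1.0)  # worse candidate scores lower
```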
9

Pathak, Amarnath, and Partha Pakray. "Neural Machine Translation for Indian Languages." Journal of Intelligent Systems 28, no. 3 (July 26, 2019): 465–77. http://dx.doi.org/10.1515/jisys-2018-0065.

Abstract:
Machine translation bridges communication barriers and eases interaction among people with different linguistic backgrounds. Machine translation mechanisms exploit a range of techniques and linguistic resources for translation prediction. Neural machine translation (NMT), in particular, seeks optimality in translation through the training of a neural network, using a parallel corpus with a considerable number of instances in the form of parallel running source and target sentences. The easy availability of parallel corpora for major Indian languages and the ability of NMT systems to better analyze context and produce fluent translations make NMT a prominent choice for the translation of Indian languages. We have trained, tested, and analyzed NMT systems for English to Tamil, English to Hindi, and English to Punjabi translations. Predicted translations have been evaluated using the Bilingual Evaluation Understudy (BLEU) metric and by human evaluators to assess the quality of translation in terms of its adequacy, fluency, and correspondence with human-predicted translation.
10

Wang, Yiren, Fei Tian, Di He, Tao Qin, ChengXiang Zhai, and Tie-Yan Liu. "Non-Autoregressive Machine Translation with Auxiliary Regularization." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5377–84. http://dx.doi.org/10.1609/aaai.v33i01.33015377.

Abstract:
As a new neural machine translation approach, Non-Autoregressive machine Translation (NAT) has attracted attention recently due to its high efficiency in inference. However, the high efficiency has come at the cost of not capturing the sequential dependency on the target side of translation, which causes NAT to suffer from two kinds of translation errors: 1) repeated translations (due to indistinguishable adjacent decoder hidden states), and 2) incomplete translations (due to incomplete transfer of source side information via the decoder hidden states). In this paper, we propose to address these two problems by improving the quality of decoder hidden representations via two auxiliary regularization terms in the training process of an NAT model. First, to make the hidden states more distinguishable, we regularize the similarity between consecutive hidden states based on the corresponding target tokens. Second, to force the hidden states to contain all the information in the source sentence, we leverage the dual nature of translation tasks (e.g., English to German and German to English) and minimize a backward reconstruction error to ensure that the hidden states of the NAT decoder are able to recover the source side sentence. Extensive experiments conducted on several benchmark datasets show that both regularization strategies are effective and can alleviate the issues of repeated translations and incomplete translations in NAT models. The accuracy of NAT models is therefore improved significantly over the state-of-the-art NAT models with even better efficiency for inference.
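The first regularization term can be pictured as penalizing adjacent decoder states that are too similar when they must emit different target tokens, which discourages the repeated-translation error. The sketch below is a toy illustration with invented vectors and tokens, not the paper's training code, which computes this loss on learned hidden states during training.

```python
# Toy version of the similarity regularizer: sum the cosine similarity
# of adjacent decoder states whose target tokens differ. Vectors and
# tokens are invented for illustration.

import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def similarity_penalty(states, tokens):
    """Penalize near-identical adjacent states that emit different tokens."""
    penalty = 0.0
    for i in range(len(states) - 1):
        if tokens[i] != tokens[i + 1]:
            penalty += max(0.0, cosine(states[i], states[i + 1]))
    return penalty

states = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
tokens = ["danke", "schön", "sehr"]
print(similarity_penalty(states, tokens))  # large first term: states 0 and 1 nearly collinear
```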

Dissertations / Theses on the topic "Machine translations"

1

Ilisei, Iustina-Narcisa. "A machine learning approach to the identification of translational language : an inquiry into translationese learning models." Thesis, University of Wolverhampton, 2012. http://hdl.handle.net/2436/299371.

Abstract:
In the world of Descriptive Translation Studies, translationese refers to the specific traits that characterise the language used in translations. While translationese has been often investigated to illustrate that translational language is different from non-translational language, scholars have also proposed a set of hypotheses which may characterise such differences. In the quest for the validation of these hypotheses, embracing corpus-based techniques had a well-known impact in the domain, leading to several advances in the past twenty years. Despite extensive research, however, there are no universally recognised characteristics of translational language, nor universally recognised patterns likely to occur within translational language. This thesis addresses these issues, with a less used approach in the field of Descriptive Translation Studies, by investigating the nature of translational language from a machine learning perspective. While the main focus is on analysing translationese, this thesis investigates two related sub-hypotheses: simplification and explicitation. To this end, a multilingual learning framework is designed and implemented for the identification of translational language. The framework is modelled as a categorisation task, the learning techniques having the major goal to automatically learn to distinguish between translated and non-translated texts. The second and third major goals of this research are the retrieval of the recurring patterns that are revealed in the process of solving the task of categorisation, as well as the ranking of the most influential characteristics used to accomplish the learning task. These aims are fulfilled by implementing a system that adopts the machine learning methodology proposed in this research.
The learning framework proves to be an adaptable multilingual framework for the investigation of the nature of translational language, its adaptability being illustrated in this thesis by applying it to the investigation of two languages: Spanish and Romanian. In this thesis, different research scenarios and learning models are experimented with in order to assess to what extent translated texts can be differentiated from non-translated texts in certain contexts. The findings show that machine learning algorithms, aggregating a large set of potentially discriminative characteristics for translational language, are able to differentiate translated texts from non-translated ones with high scores. The evaluation experiments report performance values such as accuracy, precision, recall, and F-measure on two datasets. The present research is situated at the confluence of three areas, more precisely: Descriptive Translation Studies, Machine Learning and Natural Language Processing, justifying the need to combine these fields for the investigation of translationese and translational hypotheses.
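The kind of feature vector such a learning framework might feed to its classifiers can be sketched as follows. The three features shown (average sentence length, type-token ratio, function-word ratio) are common simplification proxies from the literature; the thesis aggregates a much larger, ranked feature set, and the function-word list here is a small invented sample.

```python
# Sketch of a translationese feature extractor: a few simplification
# proxies computed from raw text. Illustrative, not the thesis's
# actual (much larger) feature set.

FUNCTION_WORDS = {"the", "a", "an", "of", "in", "on", "to", "and", "that", "is"}

def translationese_features(text: str):
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    tokens = text.lower().split()
    return {
        "avg_sentence_len": len(tokens) / max(len(sentences), 1),
        "type_token_ratio": len(set(tokens)) / max(len(tokens), 1),
        "function_word_ratio": sum(t in FUNCTION_WORDS for t in tokens) / max(len(tokens), 1),
    }

print(translationese_features("The report is short. It covers the main points."))
```

A classifier (the thesis experiments with several learning models) would then be trained on such vectors labelled translated vs. non-translated.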
2

Tirnauca, Catalin Ionut. "Syntax-directed translations, tree transformations and bimorphisms." Doctoral thesis, Universitat Rovira i Virgili, 2016. http://hdl.handle.net/10803/381246.

Abstract:
Syntax-based machine translation was established by the demanding need for systems used in practical translation between natural languages. Such systems should, among other things, model tree transformations, re-order parts of sentences, be symmetric, and possess composability or forward and backward application. There are several formal ways to define tree transformations: synchronous grammars, tree transducers and tree bimorphisms. Synchronous grammars do all kinds of rotations, but their mathematical properties are harder to prove. Tree transducers are operational and easy to implement, but closure under composition does not hold for the main types. Tree bimorphisms are difficult to implement, but they provide a natural tool for proving composability or symmetry. To improve the translation process, synchronous grammars have been related to tree bimorphisms and tree transducers. Following this lead, we give a comprehensive study of the theory and properties of syntax-directed translation systems seen from these three very different perspectives, which perfectly complement each other: as generating devices (synchronous grammars), as acceptors (transducer machines) and as algebraic structures (bimorphisms). They are investigated and compared both as tree transformation and as translation defining devices. The focus is on bimorphisms, as they have only recently returned to the spotlight, especially given their applications to natural language processing. Moreover, we propose a complete and up-to-date overview of the tree transformation classes defined by bimorphisms, linking them with well-known types of synchronous grammars and tree transducers. We prove or recall all the interesting properties such classes possess, thus improving the mathematical knowledge on synchronous grammars and/or tree transducers.
Also, inclusion relations between the main classes of bimorphisms both as translation devices and as tree transformation mechanisms are given for the first time through a Hasse diagram. Directions for future work are suggested by exhibiting how to extend previous results to more general classes of bimorphisms and synchronous grammars.
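The sentence-part reordering that synchronous grammars and tree transducers model can be pictured with a toy rule that swaps the children of chosen nonterminals, e.g. adjective-noun order flipping between languages. Trees are (label, children) tuples; the labels and the rule are invented for illustration and are far simpler than the bimorphism formalism studied in the thesis.

```python
# Toy tree transformation: reverse the children of selected
# nonterminals, a minimal stand-in for the "rotations" that
# syntax-directed translation devices perform.

SWAP_LABELS = {"NP"}  # nonterminals whose children are reordered

def transform(tree):
    label, children = tree
    if not children:                      # leaf node
        return tree
    kids = [transform(c) for c in children]
    if label in SWAP_LABELS:
        kids = kids[::-1]                 # apply the reordering rule
    return (label, kids)

english = ("NP", [("ADJ", [("red", [])]), ("N", [("house", [])])])
print(transform(english))  # children of NP reversed: noun before adjective
```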
3

Al, Batineh Mohammed S. "Latent Semantic Analysis, Corpus stylistics and Machine Learning Stylometry for Translational and Authorial Style Analysis: The Case of Denys Johnson-Davies’ Translations into English." Kent State University / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=kent1429300641.

4

Tebbifakhr, Amirhossein. "Machine Translation For Machines." Doctoral thesis, Università degli studi di Trento, 2021. http://hdl.handle.net/11572/320504.

Abstract:
Traditionally, Machine Translation (MT) systems are developed by targeting fluency (i.e. output grammaticality) and adequacy (i.e. semantic equivalence with the source text) criteria that reflect the needs of human end-users. However, recent advancements in Natural Language Processing (NLP) and the introduction of NLP tools in commercial services have opened new opportunities for MT. A particularly relevant one is related to the application of NLP technologies in low-resource language settings, for which the paucity of training data reduces the possibility to train reliable services. In this specific condition, MT can come into play by enabling the so-called “translation-based” workarounds. The idea is simple: first, input texts in the low-resource language are translated into a resource-rich target language; then, the machine-translated text is processed by well-trained NLP tools in the target language; finally, the output of these downstream components is projected back to the source language. This results in a new scenario, in which the end-user of MT technology is no longer a human but another machine. We hypothesize that current MT training approaches are not the optimal ones for this setting, in which the objective is to maximize the performance of a downstream tool fed with machine-translated text rather than human comprehension. Under this hypothesis, this thesis introduces a new research paradigm, which we named “MT for machines”, addressing a number of questions that raise from this novel view of the MT problem. Are there different quality criteria for humans and machines? What makes a good translation from the machine standpoint? What are the trade-offs between the two notions of quality? How to pursue machine-oriented objectives? How to serve different downstream components with a single MT system? How to exploit knowledge transfer to operate in different language settings with a single MT system? 
Elaborating on these questions, this thesis: i) introduces a novel and challenging MT paradigm, ii) proposes an effective method based on Reinforcement Learning analysing its possible variants, iii) extends the proposed method to multitask and multilingual settings so as to serve different downstream applications and languages with a single MT system, iv) studies the trade-off between machine-oriented and human-oriented criteria, and v) discusses the successful application of the approach in two real-world scenarios.
5

Tiedemann, Jörg. "Recycling Translations : Extraction of Lexical Data from Parallel Corpora and their Application in Natural Language Processing." Doctoral thesis, Uppsala University, Department of Linguistics, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-3791.

Abstract:

The focus of this thesis is on re-using translations in natural language processing. It involves the collection of documents and their translations in an appropriate format, the automatic extraction of translation data, and the application of the extracted data to different tasks in natural language processing.

Five parallel corpora containing more than 35 million words in 60 languages have been collected within co-operative projects. All corpora are sentence aligned and parts of them have been analyzed automatically and annotated with linguistic markup.

Lexical data are extracted from the corpora by means of word alignment. Two automatic word alignment systems have been developed, the Uppsala Word Aligner (UWA) and the Clue Aligner. UWA implements an iterative "knowledge-poor" word alignment approach using association measures and alignment heuristics. The Clue Aligner provides an innovative framework for the combination of statistical and linguistic resources in aligning single words and multi-word units. Both aligners have been applied to several corpora. Detailed evaluations of the alignment results have been carried out for three of them using fine-grained evaluation techniques.

A corpus processing toolbox, Uplug, has been developed. It includes the implementation of UWA and is freely available for research purposes. A new version, Uplug II, includes the Clue Aligner. It can be used via an experimental web interface (UplugWeb).

Lexical data extracted by the word aligners have been applied to different tasks in computational lexicography and machine translation. The use of word alignment in monolingual lexicography has been investigated in two studies. In a third study, the feasibility of using the extracted data in interactive machine translation has been demonstrated. Finally, extracted lexical data have been used for enhancing the lexical components of two machine translation systems.
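The "knowledge-poor" association-measure idea behind UWA can be sketched with the Dice coefficient over sentence-level co-occurrence counts. The three-sentence bitext below is invented; the thesis's aligners add alignment heuristics, multi-word units, and linguistic clues on top of measures like this.

```python
# Dice-coefficient word association from sentence-aligned text: a
# minimal example of the association measures used for word alignment.
# The tiny English-German "corpus" is invented.

from collections import Counter

bitext = [
    ("the house is red", "das haus ist rot"),
    ("the house is big", "das haus ist gross"),
    ("the car is red", "das auto ist rot"),
]

src_count, tgt_count, pair_count = Counter(), Counter(), Counter()
for src, tgt in bitext:
    s_words, t_words = set(src.split()), set(tgt.split())
    src_count.update(s_words)
    tgt_count.update(t_words)
    pair_count.update((s, t) for s in s_words for t in t_words)

def dice(s, t):
    """Dice association between a source and a target word."""
    return 2 * pair_count[(s, t)] / (src_count[s] + tgt_count[t])

print(dice("house", "haus"))  # high: the words always co-occur
print(dice("house", "rot"))   # lower: they co-occur only sometimes
```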

6

Joelsson, Jakob. "Translationese and Swedish-English Statistical Machine Translation." Thesis, Uppsala universitet, Institutionen för lingvistik och filologi, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-305199.

Abstract:
This thesis investigates how well machine-learned classifiers can identify translated text, and the effect translationese may have in Statistical Machine Translation -- all in a Swedish-to-English, and reverse, context. Translationese is a term used to describe the dialect of a target language that is produced when a source text is translated. The systems trained for this thesis are SVM-based classifiers for identifying translationese, as well as translation and language models for Statistical Machine Translation. The classifiers successfully identified translationese in relation to non-translated text, and to some extent, also what source language the texts were translated from. In the SMT experiments, variation of the translation model was what affected the results the most in the BLEU evaluation. Systems configured with non-translated source text and translationese target text performed better than their reversed counterparts. The language model experiments showed that models trained on known translationese and classified translationese performed better than those trained on known non-translated text, though classified translationese did not perform as well as known translationese. Ultimately, the thesis shows that translationese can be identified by machine-learned classifiers and may affect the results of SMT systems.
APA, Harvard, Vancouver, ISO, and other styles
7

Karlbom, Hannes. "Hybrid Machine Translation : Choosing the best translation with Support Vector Machines." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-304257.

Full text
Abstract:
In the field of machine translation there are various systems available, which have different strengths and weaknesses. This thesis investigates the combination of two systems, a rule-based one and a statistical one, to see if such a hybrid system can provide higher-quality translations. A classification approach was taken, where a support vector machine is used to choose which sentences from each of the two systems result in the best translation. To label the sentences from the collected data, a new method based on simulated annealing was applied and compared to previously tried heuristics. The results show that the hybrid system has an increased average BLEU score of 6.10%, or 1.86 points, over the single best system, and that using the labels created through simulated annealing, rather than heuristic rules, gives a significant improvement in classifier performance.
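The labeling step described above can be sketched as a simulated-annealing search over binary labels (which system's sentence to keep). This is a hypothetical illustration, not the thesis's actual objective: the toy per-sentence scores below are separable, whereas the thesis optimizes a corpus-level metric for which annealing is genuinely needed.

```python
# Hypothetical sketch of simulated-annealing labeling: search binary labels
# (0 = take system A's sentence, 1 = take system B's) maximising a total score.
# The separable toy objective is an assumption for clarity; the real target
# (document-level BLEU) is not separable per sentence.
import math
import random

def anneal_labels(scores_a, scores_b, steps=2000, seed=0):
    rng = random.Random(seed)
    n = len(scores_a)
    labels = [rng.randint(0, 1) for _ in range(n)]

    def total(lab):
        return sum(scores_b[i] if lab[i] else scores_a[i] for i in range(n))

    current = total(labels)
    temp = 1.0
    for _ in range(steps):
        i = rng.randrange(n)
        labels[i] ^= 1                       # propose flipping one label
        candidate = total(labels)
        if candidate >= current or rng.random() < math.exp((candidate - current) / temp):
            current = candidate              # accept the move
        else:
            labels[i] ^= 1                   # revert
        temp = max(temp * 0.995, 1e-3)       # cool down
    return labels
```

As the temperature drops, worse moves are rejected and the search settles near the best labeling.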
APA, Harvard, Vancouver, ISO, and other styles
8

Ahmadniaye, Bosari Benyamin. "Reliable training scenarios for dealing with minimal parallel-resource language pairs in statistical machine translation." Doctoral thesis, Universitat Autònoma de Barcelona, 2017. http://hdl.handle.net/10803/461204.

Full text
Abstract:
The thesis addresses high-quality Statistical Machine Translation (SMT) systems for minimal parallel-resource language pairs, and is entitled “Reliable Training Scenarios for Dealing with Minimal Parallel-Resource Language Pairs in Statistical Machine Translation”. The main challenge targeted in our approaches is parallel data scarcity, and this challenge is faced in different solution scenarios. SMT is one of the preferred approaches to Machine Translation (MT), and various improvements can be observed in this approach, specifically in the output quality of a number of systems for language pairs, since advances in computational power have been made together with the exploration of new methods and algorithms. When we consider the development of SMT systems for many language pairs, the major bottleneck we find is the lack of parallel training data. Because much time and effort is required to create these corpora, they are available in limited quantity, genre, and language. SMT models learn how to translate by examining a bilingual parallel corpus that contains sentences aligned with their human-produced translations. However, the output quality of SMT systems is heavily dependent on the availability of massive amounts of parallel text in the source and target languages. Hence, parallel resources play an important role in improving the quality of SMT systems. We define minimal parallel-resource SMT settings as those possessing only small amounts of parallel data, a situation that can be seen for various pairs of languages. The performance achieved by current state-of-the-art minimal parallel-resource SMT is appreciable, but such systems usually rely on monolingual text and do not fundamentally address the shortage of parallel training text.
Enlarging the parallel training data without providing any guarantee on the quality of the newly generated bilingual sentence pairs also raises concerns. The limitations that emerge during the training of minimal parallel-resource SMT show that current systems are incapable of producing high-quality translation output. In this thesis, we have proposed the “direct-bridge combination” scenario as well as the “round-trip training” scenario for dealing with minimal parallel-resource SMT systems: the former is based on the bridge-language technique, while the latter is based on a retraining approach. Our main aim in putting forward the direct-bridge combination scenario is to bring its performance closer to the state of the art. This scenario has been proposed to maximize the information gain by choosing the appropriate portions of the bridge-based translation system that do not interfere with the direct translation system, which is trusted more. Furthermore, the round-trip training scenario has been proposed to take advantage of readily available generated bilingual sentence pairs to build a high-quality SMT system iteratively: selecting a high-quality subset of the generated sentence pairs on the target side, preparing their suitable corresponding source sentences, and using them together with the original sentence pairs to retrain the SMT system. The proposed methods are evaluated intrinsically, and they are compared against baseline translation systems. We have also conducted experiments on the aforementioned scenarios with minimal initial bilingual data, and we have demonstrated the performance improvement obtained through the proposed methods when building high-quality SMT systems over the baseline in each scenario.
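The selection step of the round-trip idea above can be sketched as a filter over generated sentence pairs. Everything concrete here is an illustrative assumption: the toy lexicon, the `translate_back` stand-in, and the coverage-based `quality` score replace the real SMT engine and its confidence measures.

```python
# Illustrative sketch of round-trip pair selection (assumptions only):
# generate a source sentence for each monolingual target sentence, keep
# the pair only if a quality score clears a threshold.
def round_trip_select(monolingual_targets, translate_back, quality, threshold):
    selected = []
    for tgt in monolingual_targets:
        src = translate_back(tgt)          # generate the source-side sentence
        if quality(src, tgt) >= threshold: # keep only high-quality pairs
            selected.append((src, tgt))
    return selected

# toy stand-ins for the real SMT system and its confidence score
lexicon = {"hola": "hello", "mundo": "world", "adios": "goodbye"}

def translate_back(tgt):
    return " ".join(lexicon.get(w, "<unk>") for w in tgt.split())

def quality(src, tgt):
    # fraction of target words the toy model could translate
    return 1.0 - src.split().count("<unk>") / len(tgt.split())

targets = ["hola mundo", "hola xyz"]
pairs = round_trip_select(targets, translate_back, quality, threshold=0.9)
```

In the thesis's iterative setting, the surviving pairs would be added to the original training data and the SMT system retrained before the next round.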
APA, Harvard, Vancouver, ISO, and other styles
9

Davis, Paul C. "Stone Soup Translation: The Linked Automata Model." Connect to this title online, 2002. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1023806593.

Full text
Abstract:
Thesis (Ph. D.)--Ohio State University, 2002.
Title from first page of PDF file. Document formatted into pages; contains xvi, 306 p.; includes graphics. Includes abstract and vita. Advisor: Chris Brew, Dept. of Linguistics. Includes indexes. Includes bibliographical references (p. 284-293).
APA, Harvard, Vancouver, ISO, and other styles
10

Martínez, Garcia Eva. "Document-level machine translation : ensuring translational consistency of non-local phenomena." Doctoral thesis, Universitat Politècnica de Catalunya, 2019. http://hdl.handle.net/10803/668473.

Full text
Abstract:
In this thesis, we study the automatic translation of documents by taking into account cross-sentence phenomena. This document-level information is typically ignored by most of the standard state-of-the-art Machine Translation (MT) systems, which focus on translating texts processing each of their sentences in isolation. Translating each sentence without looking at its surrounding context can lead to certain types of translation errors, such as inconsistent translations for the same word or for elements in a coreference chain. We introduce methods to attend to document-level phenomena in order to avoid those errors, and thus, reach translations that properly convey the original meaning. Our research starts by identifying the translation errors related to such document-level phenomena that commonly appear in the output of state-of-the-art Statistical Machine Translation (SMT) systems. For two of those errors, namely inconsistent word translations as well as gender and number disagreements among words, we design simple and yet effective post-processing techniques to tackle and correct them. Since these techniques are applied a posteriori, they can access the whole source and target documents, and hence, they are able to perform a global analysis and improve the coherence and consistency of the translation. Nevertheless, since following such a two-pass decoding strategy is not optimal in terms of efficiency, we also focus on introducing the context-awareness during the decoding process itself. To this end, we enhance a document-oriented SMT system with distributional semantic information in the form of bilingual and monolingual word embeddings. In particular, these embeddings are used as Semantic Space Language Models (SSLMs) and as a novel feature function. 
The goal of the former is to promote word translations that are semantically close to their preceding context, whereas the latter promotes the lexical choice that is closest to its surrounding context, for those words that have varying translations throughout the document. In both cases, the context extends beyond sentence boundaries. Recently, the MT community has transitioned to the neural paradigm. The final step of our research proposes an extension of the decoding process for a Neural Machine Translation (NMT) framework, independent of the model architecture, by shallow-fusing the information from a neural translation model and the context semantics enclosed in the previously studied SSLMs. The aim of this modification is to introduce the benefits of context information also into the decoding process of NMT systems, as well as to obtain an additional validation for the techniques we explored. The automatic evaluation of our approaches does not reflect significant variations. This is expected since most automatic metrics are neither context- nor semantic-aware and because the phenomena we tackle are rare, leading to few modifications with respect to the baseline translations. On the other hand, manual evaluations demonstrate the positive impact of our approaches, since human evaluators tend to prefer the translations produced by our document-aware systems. Therefore, the changes introduced by our enhanced systems are important since they are related to how humans perceive translation quality for long texts.
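The shallow-fusion step described above can be reduced to a one-line scoring rule: at each decoding step, combine the translation model's log-probability with a weighted context language-model log-probability. The toy distributions and the weight below are assumptions for illustration, not the thesis's actual models.

```python
# Minimal sketch of shallow fusion at one decoding step. The next token is the
# one maximising log P_nmt(w) + beta * log P_lm(w), where P_lm comes from a
# context-aware language model (an SSLM in the thesis). Distributions are toy.
import math

def shallow_fusion_step(nmt_logprobs, lm_logprobs, beta):
    fused = {w: nmt_logprobs[w] + beta * lm_logprobs.get(w, math.log(1e-9))
             for w in nmt_logprobs}
    return max(fused, key=fused.get)

# invented example: the NMT model slightly prefers "bank", but the document
# context (say, a text about beaches) makes the LM favour "shore"
nmt = {"bank": math.log(0.5), "shore": math.log(0.45), "chair": math.log(0.05)}
lm = {"shore": math.log(0.7), "bank": math.log(0.2)}

best = shallow_fusion_step(nmt, lm, beta=0.5)
```

The fusion weight `beta` controls how strongly document context can override the per-sentence translation model, which is exactly the lever for fixing inconsistent lexical choices.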
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Machine translations"

1

The naked machine: Selected poems. Reykjavík: Almenna Bókafélagiđ, 1988.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Johannessen, Matthías. The naked machine: Selected poems. Reykjavík: Almenna Bókafélagiđ, 1988.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Johannessen, Matthías. The naked machine: Selected poems of Matthías Johannessen. Reykjavík: Almenna Bókafélagiđ, 1988.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Christa, Hauenschild, and Heizmann Susanne 1963-, eds. Machine translation and translation theory. Berlin: Mouton de Gruyter, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

The Ghost in the Shell 2: Man-Machine Interface. New York, USA: Kodansha America, Incorporated, 2016.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Su, Jinsong, and Rico Sennrich, eds. Machine Translation. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-7512-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Shi, Xiaodong, and Yidong Chen, eds. Machine Translation. Berlin, Heidelberg: Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-662-45701-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Chen, Jiajun, and Jiajun Zhang, eds. Machine Translation. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-3083-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Yang, Muyun, and Shujie Liu, eds. Machine Translation. Singapore: Springer Singapore, 2016. http://dx.doi.org/10.1007/978-981-10-3635-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Huang, Shujian, and Kevin Knight, eds. Machine Translation. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-15-1721-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Machine translations"

1

Daems, Joke, and Lieve Macken. "Post-Editing Human Translations and Revising Machine Translations." In Translation Revision and Post-Editing, 50–70. London; New York: Routledge, 2020. http://dx.doi.org/10.4324/9781003096962-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Kumar, Ritesh. "Making Machine Translations Polite: The Problematic Speech Acts." In Information Systems for Indian Languages, 185–90. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-19403-0_29.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Greiner-Petter, André. "From LaTeX to Computer Algebra System." In Making Presentation Math Computable, 95–112. Wiesbaden: Springer Fachmedien Wiesbaden, 2023. http://dx.doi.org/10.1007/978-3-658-40473-4_4.

Full text
Abstract:
This chapter addresses research tasks III and IV, i.e., implementing a system for automated semantification and translation of mathematical expressions to CAS syntax. In the previous chapter, we laid the foundation for a novel context-sensitive semantification approach that extracts semantic information from a textual context and semantically enriches a formula with semantic LaTeX macros. In this chapter, we realize this proposed semantification approach on 104 English Wikipedia articles with 6,337 mathematical expressions. However, before we continue with this main track, we first apply a novel context-agnostic machine translation approach for translations from LaTeX to Mathematica.
APA, Harvard, Vancouver, ISO, and other styles
4

Sun, Juan, Zhi Lu, Isabel Lacruz, Lijun Ma, Lin Fan, Xiuhua Huang, and Bo Zhou. "Chapter 4. An eye-tracking study of productivity and effort in Chinese-to-English translation and post-editing." In American Translators Association Scholarly Monograph Series, 57–82. Amsterdam: John Benjamins Publishing Company, 2023. http://dx.doi.org/10.1075/ata.xx.04sun.

Full text
Abstract:
For several language pairs, an emerging consensus finds that post-editing of machine translations is faster and less cognitively effortful than from-scratch human translation, resulting in increased translator productivity and decreased translator fatigue. These benefits have yet to be robustly established in some language pairs that are linguistically and culturally remote with very different writing systems. We carry out a systematic Chinese-to-English study using keystroke logger timing measures and eye-tracking measures of cognitive effort, taking into account translator education levels, different source text domains, and quality of the translation product. We observe significant post-editing productivity gains for more highly educated participants and for more straightforward and less technical texts. Measures of cognitive effort show significantly reduced cognitive effort in post-editing.
APA, Harvard, Vancouver, ISO, and other styles
5

Carter, Dave, and Diana Inkpen. "Searching for Poor Quality Machine Translated Text: Learning the Difference between Human Writing and Machine Translations." In Advances in Artificial Intelligence, 49–60. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-30353-1_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Song, Yuting, Biligsaikhan Batjargal, and Akira Maeda. "A Preliminary Attempt to Evaluate Machine Translations of Ukiyo-e Metadata Records." In Digital Libraries at Times of Massive Societal Transition, 262–68. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-64452-9_24.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Weber, Jutta. "Black-Boxing Organisms, Exploiting the Unpredictable: Control Paradigms in Human–Machine Translations." In Science in the Context of Application, 409–29. Dordrecht: Springer Netherlands, 2010. http://dx.doi.org/10.1007/978-90-481-9051-5_24.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Domingo, Miguel, and Francisco Casacuberta. "A Comparison of Character-Based Neural Machine Translations Techniques Applied to Spelling Normalization." In Pattern Recognition. ICPR International Workshops and Challenges, 326–38. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-68787-8_24.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

El-Haj, Mahmoud, Paul Rayson, and David Hall. "Language Independent Evaluation of Translation Style and Consistency: Comparing Human and Machine Translations of Camus’ Novel “The Stranger”." In Text, Speech and Dialogue, 116–24. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-10816-2_15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Chiang, David. "Machine Translation." In Grammars for Language and Genes, 51–67. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-20444-9_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Machine translations"

1

XU, Jitao, Josep Crego, and Jean Senellart. "Boosting Neural Machine Translation with Similar Translations." In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA, USA: Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.acl-main.144.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Zhang, Wu, Tung Yeung Lam, and Mee Yee Chan. "Using Translation Memory to Improve Neural Machine Translations." In ICDLT 2022: 2022 6th International Conference on Deep Learning Technologies. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3556677.3556691.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Meng, Fandong, Zhaopeng Tu, Yong Cheng, Haiyang Wu, Junjie Zhai, Yuekui Yang, and Di Wang. "Neural Machine Translation with Key-Value Memory-Augmented Attention." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/357.

Full text
Abstract:
Although attention-based Neural Machine Translation (NMT) has achieved remarkable progress in recent years, it still suffers from issues of repeating and dropping translations. To alleviate these issues, we propose a novel key-value memory-augmented attention model for NMT, called KVMEMATT. Specifically, we maintain a timely updated key-memory to keep track of attention history and a fixed value-memory to store the representation of the source sentence throughout the whole translation process. Via nontrivial transformations and iterative interactions between the two memories, the decoder focuses on more appropriate source word(s) for predicting the next target word at each decoding step, and therefore can improve the adequacy of translations. Experimental results on Chinese-English and WMT17 German-English translation tasks demonstrate the superiority of the proposed model.
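The core mechanism in the abstract above, attention computed against a key memory but read out from a separate value memory, can be sketched in plain Python. The vectors and dimensions below are illustrative assumptions; KVMEMATT additionally updates its key memory across decoding steps, which this static sketch omits.

```python
# Toy sketch of key-value attention: softmax over query-key scores, then a
# weighted sum of the value vectors. In KVMEMATT the keys would track the
# attention history while the values hold the fixed source representation.
import math

def attend(query, keys, values):
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    m = max(scores)                          # stabilise the softmax
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(values[0])
    # context vector: attention-weighted combination of the values
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]

keys = [[1.0, 0.0], [0.0, 1.0]]              # invented 2-d key memory
values = [[10.0, 0.0], [0.0, 10.0]]          # invented value memory
ctx = attend([2.0, 0.0], keys, values)       # query aligned with the first key
```

Because the query matches the first key, the context vector is dominated by the first value; separating keys from values is what lets the model steer attention with history without distorting the stored source representation.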
APA, Harvard, Vancouver, ISO, and other styles
4

Marie, Benjamin, and Atsushi Fujita. "Unsupervised Extraction of Partial Translations for Neural Machine Translation." In Proceedings of the 2019 Conference of the North. Stroudsburg, PA, USA: Association for Computational Linguistics, 2019. http://dx.doi.org/10.18653/v1/n19-1384.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Inkova, O., and V. Nuriev. "Divergent translation of connectives in human and machine translations." In Computational Linguistics and Intellectual Technologies. Russian State University for the Humanities, 2021. http://dx.doi.org/10.28995/2075-7182-2021-20-339-348.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Chen, Shizhe, Qin Jin, and Jianlong Fu. "From Words to Sentences: A Progressive Learning Approach for Zero-resource Machine Translation with Visual Pivots." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/685.

Full text
Abstract:
The neural machine translation model has suffered from the lack of large-scale parallel corpora. In contrast, we humans can learn multi-lingual translations even without parallel texts by referring our languages to the external world. To mimic such human learning behavior, we employ images as pivots to enable zero-resource translation learning. However, a picture tells a thousand words, which makes multi-lingual sentences pivoted by the same image noisy as mutual translations and thus hinders the translation model learning. In this work, we propose a progressive learning approach for image-pivoted zero-resource machine translation. Since words are less diverse when grounded in the image, we first learn word-level translation with image pivots, and then progress to learn the sentence-level translation by utilizing the learned word translation to suppress noises in image-pivoted multi-lingual sentences. Experimental results on two widely used image-pivot translation datasets, IAPR-TC12 and Multi30k, show that the proposed approach significantly outperforms other state-of-the-art methods.
APA, Harvard, Vancouver, ISO, and other styles
7

Eriguchi, Akiko, Shufang Xie, Tao Qin, and Hany Hassan. "Building Multilingual Machine Translation Systems That Serve Arbitrary X-Y Translations." In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg, PA, USA: Association for Computational Linguistics, 2022. http://dx.doi.org/10.18653/v1/2022.naacl-main.44.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Navlea, Mirabela. "IMPACT OF ONLINE MACHINE TRANSLATION SYSTEMS ON LIFELONG LEARNERS." In eLSE 2015. Carol I National Defence University Publishing House, 2015. http://dx.doi.org/10.12753/2066-026x-15-082.

Full text
Abstract:
Economic growth, together with the provision and storage of multilingual texts in considerable quantity enabled by the development of the Web and powerful computers, has increased the demand for automatic translation between different pairs of languages within companies and administrations worldwide. Machine translation is an entirely automatic process that translates a text in a source language into a text in a target language. No human intervention is required. However, current machine translation systems generate many errors, and human post-editing of the output is obviously still necessary in order to obtain high quality for a given translation. If the purpose of the translation is not high quality but rather comprehension of the transmitted message, online machine translation systems are nevertheless very effective for quick translations, in real time, within companies and administrations. Thus, these systems can help professionals to improve their knowledge in their various areas of activity from multilingual Web content, but also to learn foreign languages or to write technical documents in several languages. Machine translation systems are based on linguistic approaches or on purely statistical methods. The latest systems are hybrid: they combine statistical techniques and linguistic information. While linguistic systems provide effective results, though requiring a major human and material effort, hybrid systems can provide similar results using less expensive resources. In this paper, we present the state of the art in the machine translation field. The advantages and disadvantages of these methods are also presented. The particular case of Romanian, a language less rich in electronic resources, is also discussed.
APA, Harvard, Vancouver, ISO, and other styles
9

Bizzoni, Yuri, Tom S. Juzek, Cristina España-Bonet, Koel Dutta Chowdhury, Josef van Genabith, and Elke Teich. "How Human is Machine Translationese? Comparing Human and Machine Translations of Text and Speech." In Proceedings of the 17th International Conference on Spoken Language Translation. Stroudsburg, PA, USA: Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.iwslt-1.34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Sun, Liqun, and Zhi Quan Zhou. "Metamorphic Testing for Machine Translations: MT4MT." In 2018 25th Australasian Software Engineering Conference (ASWEC). IEEE, 2018. http://dx.doi.org/10.1109/aswec.2018.00021.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Machine translations"

1

Walrath, James D. Evidence for Increased Discriminability in Judging the Acceptability of Machine Translations: The Case for Magnitude Estimation. Fort Belvoir, VA: Defense Technical Information Center, May 2009. http://dx.doi.org/10.21236/ada499858.

2

Morgan, John J. Project-specific Machine Translation. Fort Belvoir, VA: Defense Technical Information Center, December 2011. http://dx.doi.org/10.21236/ada554967.

3

Hobbs, Jerry R., and Megumi Kameyama. Machine Translation Using Abductive Inference. Fort Belvoir, VA: Defense Technical Information Center, January 1990. http://dx.doi.org/10.21236/ada259458.

4

Dorr, Bonnie J. Principle-Based Parsing for Machine Translation. Fort Belvoir, VA: Defense Technical Information Center, December 1987. http://dx.doi.org/10.21236/ada199183.

5

Church, Kenneth W., and Eduard H. Hovy. Good Applications for Crummy Machine Translation. Fort Belvoir, VA: Defense Technical Information Center, January 1993. http://dx.doi.org/10.21236/ada278689.

6

Lee, Young-Suk. Morphological Analysis for Statistical Machine Translation. Fort Belvoir, VA: Defense Technical Information Center, January 2004. http://dx.doi.org/10.21236/ada460276.

7

Lopez, Adam. A Survey of Statistical Machine Translation. Fort Belvoir, VA: Defense Technical Information Center, April 2007. http://dx.doi.org/10.21236/ada466330.

8

Turian, Joseph P., Luke Shea, and I. D. Melamed. Evaluation of Machine Translation and its Evaluation. Fort Belvoir, VA: Defense Technical Information Center, January 2006. http://dx.doi.org/10.21236/ada453509.

9

Russo-Lassner, Grazia, Jimmy Lin, and Philip Resnik. A Paraphrase-Based Approach to Machine Translation Evaluation. Fort Belvoir, VA: Defense Technical Information Center, August 2005. http://dx.doi.org/10.21236/ada448032.

10

Germann, Ulrich, Michael Jahr, Kevin Knight, Daniel Marcu, and Kenji Yamada. Fast Decoding and Optimal Decoding for Machine Translation. Fort Belvoir, VA: Defense Technical Information Center, January 2001. http://dx.doi.org/10.21236/ada459945.
