A selection of scholarly literature on the topic "Non-autoregressive Machine Translation"

Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Non-autoregressive Machine Translation".

Next to every work in the bibliography, the option "Add to bibliography" is available. Use it, and the bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read an online abstract of the work, provided the relevant parameters are available in its metadata.

Journal articles on the topic "Non-autoregressive Machine Translation"

1

Wang, Yiren, Fei Tian, Di He, Tao Qin, ChengXiang Zhai, and Tie-Yan Liu. "Non-Autoregressive Machine Translation with Auxiliary Regularization." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5377–84. http://dx.doi.org/10.1609/aaai.v33i01.33015377.

Abstract:
As a new neural machine translation approach, Non-Autoregressive machine Translation (NAT) has attracted attention recently due to its high efficiency in inference. However, the high efficiency has come at the cost of not capturing the sequential dependency on the target side of translation, which causes NAT to suffer from two kinds of translation errors: 1) repeated translations (due to indistinguishable adjacent decoder hidden states), and 2) incomplete translations (due to incomplete transfer of source side information via the decoder hidden states). In this paper, we propose to address these two problems by improving the quality of decoder hidden representations via two auxiliary regularization terms in the training process of an NAT model. First, to make the hidden states more distinguishable, we regularize the similarity between consecutive hidden states based on the corresponding target tokens. Second, to force the hidden states to contain all the information in the source sentence, we leverage the dual nature of translation tasks (e.g., English to German and German to English) and minimize a backward reconstruction error to ensure that the hidden states of the NAT decoder are able to recover the source side sentence. Extensive experiments conducted on several benchmark datasets show that both regularization strategies are effective and can alleviate the issues of repeated translations and incomplete translations in NAT models. The accuracy of NAT models is therefore improved significantly over the state-of-the-art NAT models with even better efficiency for inference.
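The first regularization term lends itself to a compact illustration. The following PyTorch sketch shows one way to penalize indistinguishable adjacent decoder states; the function name, tensor layout, and the exact form of the penalty are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def similarity_regularizer(hidden: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """Auxiliary penalty on consecutive decoder states (illustrative sketch).

    hidden:  (batch, seq_len, dim) decoder hidden states
    targets: (batch, seq_len) reference token ids
    """
    # Cosine similarity between each pair of adjacent hidden states.
    sim = F.cosine_similarity(hidden[:, :-1, :], hidden[:, 1:, :], dim=-1)
    # 1 where adjacent reference tokens are identical, 0 otherwise.
    same_token = (targets[:, :-1] == targets[:, 1:]).float()
    # Pull states together for genuinely repeated tokens, push them apart otherwise,
    # so distinct target words get distinguishable representations.
    loss = same_token * (1.0 - sim) + (1.0 - same_token) * torch.clamp(sim, min=0.0)
    return loss.mean()
```

In training, such a term would simply be added to the usual cross-entropy loss with a small weight.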
2

Wang, Shuheng, Shumin Shi, Heyan Huang, and Wei Zhang. "Improving Non-Autoregressive Machine Translation via Autoregressive Training." Journal of Physics: Conference Series 2031, no. 1 (September 1, 2021): 012045. http://dx.doi.org/10.1088/1742-6596/2031/1/012045.

Abstract:
In recent years, non-autoregressive machine translation has attracted many researchers' attention. Non-autoregressive translation (NAT) achieves faster decoding at the cost of translation accuracy compared with autoregressive translation (AT). Since NAT and AT models have a similar architecture, a natural idea is to use the AT task to assist the NAT task. Previous works use curriculum learning or distillation to improve the performance of the NAT model; however, they are complex to follow and difficult to integrate into new work. In this paper, we therefore introduce a simple multi-task framework to improve the performance of the NAT task. Specifically, we use a fully shared encoder-decoder network to train the NAT task and the AT task simultaneously. To evaluate the performance of our model, we conduct experiments on several benchmark tasks, including WMT14 EN-DE, WMT16 EN-RO and IWSLT14 DE-EN. The experimental results demonstrate that our model achieves improvements while remaining simple.
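The shared-parameter multi-task objective can be pictured as a weighted sum of an AT loss and an NAT loss computed from the same network. The snippet below is a hedged sketch under assumed interfaces: `model`, its keyword arguments, the padding id 0, and the handling of target-length prediction are illustrative, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def multitask_step(model, src, tgt, alpha: float = 0.5) -> torch.Tensor:
    """One training step mixing AT and NAT losses on a shared encoder-decoder (sketch)."""
    # Autoregressive pass: the decoder sees the shifted reference under a causal mask.
    at_logits = model(src, tgt_inputs=tgt[:, :-1], causal=True)
    at_loss = F.cross_entropy(at_logits.transpose(1, 2), tgt[:, 1:], ignore_index=0)

    # Non-autoregressive pass: the decoder predicts every target token in parallel
    # (target-length prediction is abstracted into the hypothetical model interface).
    nat_logits = model(src, tgt_inputs=None, causal=False)
    nat_loss = F.cross_entropy(nat_logits.transpose(1, 2), tgt, ignore_index=0)

    return alpha * at_loss + (1.0 - alpha) * nat_loss
```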
3

Shao, Chenze, Jinchao Zhang, Jie Zhou, and Yang Feng. "Rephrasing the Reference for Non-autoregressive Machine Translation." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 11 (June 26, 2023): 13538–46. http://dx.doi.org/10.1609/aaai.v37i11.26587.

Abstract:
Non-autoregressive neural machine translation (NAT) models suffer from the multi-modality problem: a source sentence may have multiple possible translations, so the reference sentence may be an inappropriate training target when the NAT output is closer to another translation. In response to this problem, we introduce a rephraser to provide a better training target for NAT by rephrasing the reference sentence according to the NAT output. As we train NAT on the rephraser output rather than the reference sentence, the rephraser output should fit well with the NAT output without deviating too far from the reference, which can be quantified as reward functions and optimized by reinforcement learning. Experiments on major WMT benchmarks and NAT baselines show that our approach consistently improves the translation quality of NAT. Specifically, our best variant achieves comparable performance to the autoregressive Transformer while being 14.7 times more efficient in inference.
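The two requirements on the rephraser output (fit the NAT hypothesis, stay close to the reference) suggest a simple two-part reward. The sketch below uses NLTK's sentence-level BLEU as a stand-in similarity measure; the paper's actual reward functions may be defined differently, and all names here are illustrative.

```python
from nltk.translate.bleu_score import SmoothingFunction, sentence_bleu

def rephraser_reward(rephrased, nat_output, reference, beta: float = 0.5) -> float:
    """Reward a rephrased target that matches the NAT output yet stays near the reference.

    All three arguments are token lists; beta trades off the two terms.
    """
    smooth = SmoothingFunction().method1
    fit_to_nat = sentence_bleu([nat_output], rephrased, smoothing_function=smooth)
    fit_to_ref = sentence_bleu([reference], rephrased, smoothing_function=smooth)
    return beta * fit_to_nat + (1.0 - beta) * fit_to_ref
```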
4

Ran, Qiu, Yankai Lin, Peng Li, and Jie Zhou. "Guiding Non-Autoregressive Neural Machine Translation Decoding with Reordering Information." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 15 (May 18, 2021): 13727–35. http://dx.doi.org/10.1609/aaai.v35i15.17618.

Abstract:
Non-autoregressive neural machine translation (NAT) generates each target word in parallel and has achieved promising inference acceleration. However, existing NAT models still show a large gap in translation quality compared to autoregressive neural machine translation models due to the multimodality problem: the target words may come from multiple feasible translations. To address this problem, we propose a novel NAT framework, ReorderNAT, which explicitly models reordering information to guide the decoding of NAT. Specifically, ReorderNAT utilizes deterministic and non-deterministic decoding strategies that leverage reordering information as a proxy for the final translation, encouraging the decoder to choose words belonging to the same translation. Experimental results on various widely used datasets show that our proposed model achieves better performance than most existing NAT models, and even achieves comparable translation quality to autoregressive translation models with a significant speedup.
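The guidance mechanism can be read as a two-stage pipeline: first predict a reordering of the source, then decode non-autoregressively conditioned on that pseudo-translation. The sketch below only fixes the control flow; `reorder_model` and `nat_decoder` are hypothetical interfaces, not components of the released system.

```python
def reorder_then_decode(src_tokens, reorder_model, nat_decoder):
    """Decode with reordering information as a proxy target (illustrative sketch)."""
    # Stage 1 (deterministic strategy): pick one source position per target slot,
    # yielding the source words rearranged into target-language order.
    permutation = reorder_model.predict_order(src_tokens)
    pseudo_translation = [src_tokens[i] for i in permutation]

    # Stage 2: run the parallel NAT decoder, guided by the pseudo-translation so that
    # all positions commit to the same underlying translation.
    return nat_decoder.generate(src_tokens, guide=pseudo_translation)
```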
5

Wang, Shuheng, Shumin Shi, and Heyan Huang. "Enhanced encoder for non-autoregressive machine translation." Machine Translation 35, no. 4 (November 16, 2021): 595–609. http://dx.doi.org/10.1007/s10590-021-09285-x.

6

Shao, Chenze, Jinchao Zhang, Yang Feng, Fandong Meng, and Jie Zhou. "Minimizing the Bag-of-Ngrams Difference for Non-Autoregressive Neural Machine Translation." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (April 3, 2020): 198–205. http://dx.doi.org/10.1609/aaai.v34i01.5351.

Abstract:
Non-Autoregressive Neural Machine Translation (NAT) achieves significant decoding speedup by generating target words independently and simultaneously. However, in the context of non-autoregressive translation, the word-level cross-entropy loss cannot model the target-side sequential dependency properly, leading to its weak correlation with translation quality. As a result, NAT tends to generate disfluent translations with over-translation and under-translation errors. In this paper, we propose to train NAT to minimize the Bag-of-Ngrams (BoN) difference between the model output and the reference sentence. The bag-of-ngrams training objective is differentiable and can be efficiently calculated, encourages NAT to capture the target-side sequential dependency, and correlates well with translation quality. We validate our approach on three translation tasks and show that it largely outperforms the NAT baseline by about 5.0 BLEU scores on WMT14 En↔De and about 2.5 BLEU scores on WMT16 En↔Ro.
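The training signal described above compares n-gram counts rather than per-position tokens. A minimal, non-differentiable sketch of that comparison is given below; the paper instead computes the expected n-gram counts in closed form from the predicted token distributions, so this snippet only illustrates the quantity being minimized.

```python
from collections import Counter

def bag_of_ngrams(tokens, n: int = 2) -> Counter:
    """Multiset of n-grams occurring in a token sequence."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bon_difference(hypothesis, reference, n: int = 2) -> int:
    """L1 distance between the bag-of-n-grams of a hypothesis and of a reference."""
    hyp_bag, ref_bag = bag_of_ngrams(hypothesis, n), bag_of_ngrams(reference, n)
    return sum(abs(hyp_bag[g] - ref_bag[g]) for g in set(hyp_bag) | set(ref_bag))
```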
7

Li, Feng, Jingxian Chen, and Xuejun Zhang. "A Survey of Non-Autoregressive Neural Machine Translation." Electronics 12, no. 13 (July 6, 2023): 2980. http://dx.doi.org/10.3390/electronics12132980.

Abstract:
Non-autoregressive neural machine translation (NAMT) has recently received increasing attention by virtue of its promising acceleration paradigm for fast decoding. However, these speedup gains come at the cost of accuracy compared with the autoregressive counterpart. To close this performance gap, many studies have sought a better quality-speed trade-off. In this paper, we survey the NAMT domain from two new perspectives, namely target dependency management and the arrangement of training strategies. The proposed approaches are elaborated at length, covering five model categories. We then collect extensive experimental data and present graphs for quantitative evaluation and qualitative comparison according to the reported translation performance, on which a comprehensive performance analysis is based. Two salient problems are inspected further: target sentence length prediction and sequence-level knowledge distillation. A cumulative re-examination of translation quality and speedup shows that non-autoregressive decoding may not run as fast as it seems and still has not genuinely surpassed autoregressive models in accuracy. We finally outline potential future work from both inner and outer facets and call for more practical and verifiable studies.
8

Liu, Min, Yu Bao, Chengqi Zhao, and Shujian Huang. "Selective Knowledge Distillation for Non-Autoregressive Neural Machine Translation." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 11 (June 26, 2023): 13246–54. http://dx.doi.org/10.1609/aaai.v37i11.26555.

Abstract:
Benefiting from sequence-level knowledge distillation, the Non-Autoregressive Transformer (NAT) achieves great success in neural machine translation tasks. However, existing knowledge distillation has side effects, such as propagating errors from the teacher to NAT students, which may limit further improvements of NAT models and are rarely discussed in existing research. In this paper, we propose selective knowledge distillation, which uses an NAT evaluator to select NAT-friendly targets that are of high quality and easy to learn. In addition, we introduce a simple yet effective progressive distillation method to boost NAT performance. Experimental results on multiple WMT language directions and several representative NAT models show that our approach can realize a flexible trade-off between the quality and complexity of training data for NAT models, achieving strong performance. Further analysis shows that distilling only 5% of the raw translations can help an NAT model outperform its counterpart trained on raw data by about 2.4 BLEU.
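The selection step can be summarized as ranking distilled sentence pairs by an evaluator score and keeping only the most NAT-friendly fraction. The snippet below is a hedged sketch of that filtering; `nat_evaluator.score` is a hypothetical interface, and the 5% default merely echoes the figure quoted in the abstract.

```python
def select_nat_friendly(pairs, nat_evaluator, keep_ratio: float = 0.05):
    """Keep the distilled (source, target) pairs the NAT evaluator rates highest."""
    scored = sorted(pairs, key=lambda pair: nat_evaluator.score(*pair), reverse=True)
    keep = max(1, int(len(scored) * keep_ratio))
    return scored[:keep]
```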
9

Du, Quan, Kai Feng, Chen Xu, Tong Xiao, and Jingbo Zhu. "Non-autoregressive neural machine translation with auxiliary representation fusion." Journal of Intelligent & Fuzzy Systems 41, no. 6 (December 16, 2021): 7229–39. http://dx.doi.org/10.3233/jifs-211105.

Abstract:
Recently, many efforts have been devoted to speeding up neural machine translation models. Among them, the non-autoregressive translation (NAT) model is promising because it removes the sequential dependence on the previously generated tokens and parallelizes the generation process of the entire sequence. On the other hand, the autoregressive translation (AT) model in general achieves a higher translation accuracy than the NAT counterpart. Therefore, a natural idea is to fuse the AT and NAT models to seek a trade-off between inference speed and translation quality. This paper proposes an ARF-NAT model (NAT with auxiliary representation fusion) to introduce the merit of a shallow AT model to an NAT model. Three functions are designed to fuse the auxiliary representation into the decoder of the NAT model. Experimental results show that ARF-NAT outperforms the NAT baseline by 5.26 BLEU scores on the WMT’14 German-English task with a significant speedup (7.58 times) over several strong AT baselines.
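One concrete way to fuse an auxiliary autoregressive representation into the NAT decoder is a learned gate over the two states. The module below is an illustrative sketch in that spirit; the paper designs three fusion functions, and the shapes, names, and gating form here are assumptions rather than its exact architecture.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Mix an NAT decoder state with an auxiliary AT decoder state via a learned gate."""

    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, nat_state: torch.Tensor, at_state: torch.Tensor) -> torch.Tensor:
        # Both inputs: (batch, seq_len, dim). The gate decides, per position and per
        # feature, how much of the auxiliary representation to let through.
        g = torch.sigmoid(self.gate(torch.cat([nat_state, at_state], dim=-1)))
        return g * nat_state + (1.0 - g) * at_state
```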
10

Xinlu, Zhang, Wu Hongguan, Ma Beijiao, and Zhai Zhengang. "Research on Low Resource Neural Machine Translation Based on Non-autoregressive Model." Journal of Physics: Conference Series 2171, no. 1 (January 1, 2022): 012045. http://dx.doi.org/10.1088/1742-6596/2171/1/012045.

Abstract:
The autoregressive model cannot make full use of context information because of its single direction of generation, and it cannot perform parallel computation in decoding, which limits the efficiency of translation generation. We therefore explore a non-autoregressive translation generation method based on insertion and deletion for low-resource languages, which decomposes translation generation into three steps: deletion, insertion, and generation. In this way, the translation can be edited dynamically during the iterative updating process, and each step can be computed in parallel, which improves decoding efficiency. To reduce the complexity of the data sets used for non-autoregressive model training, we apply sequence-level knowledge distillation to the Uyghur-Chinese training data. Experiments on Uyghur-Chinese and English-Romanian distilled and standard data sets verify the effectiveness of the non-autoregressive method.
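The deletion-insertion-generation cycle is easiest to picture as an iterative refinement loop over a partial hypothesis, in the style of edit-based NAT decoders. The sketch below only fixes the control flow; the three model calls are hypothetical interfaces, and the convergence test is an assumption.

```python
def iterative_refine(model, src, max_iterations: int = 10):
    """Edit-based non-autoregressive decoding: delete, insert placeholders, fill in parallel."""
    hypothesis = model.initialize(src)  # e.g. an empty target sequence
    for _ in range(max_iterations):
        pruned = model.delete_tokens(src, hypothesis)          # drop tokens judged wrong
        with_slots = model.insert_placeholders(src, pruned)    # choose where to add tokens
        refined = model.fill_placeholders(src, with_slots)     # predict all slots in parallel
        if refined == hypothesis:                              # no edits left: converged
            break
        hypothesis = refined
    return hypothesis
```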

Dissertations on the topic "Non-autoregressive Machine Translation"

1

Xu, Jitao. "Writing in two languages: Neural machine translation as an assistive bilingual writing tool." PhD diss., Université Paris-Saclay, 2022. http://www.theses.fr/2022UPASG078.

Abstract:
In an increasingly global world, more situations appear where people need to express themselves in a foreign language or multiple languages. However, for many people, writing in a foreign language is not an easy task. Machine translation tools can help generate texts in multiple languages. With the tangible progress in neural machine translation (NMT), translation technologies are delivering usable translations in a growing number of contexts. However, it is not yet realistic for NMT systems to produce error-free translations. Therefore, users with a good command of a given foreign language may find assistance from computer-aided translation technologies. In case of difficulties, users writing in a foreign language can access external resources such as dictionaries, terminologies, or bilingual concordancers. However, consulting these resources causes an interruption in the writing process and starts another cognitive activity. To make the process smoother, it is possible to extend writing assistant systems to support bilingual text composition. However, existing studies mainly focused on generating texts in a foreign language. We suggest that showing corresponding texts in the user's mother tongue can also help users to verify the composed texts with synchronized bitexts. In this thesis, we study techniques to build bilingual writing assistant systems that allow free composition in both languages and display synchronized monolingual texts in the two languages. We introduce two types of simulated interactive systems. The first solution allows users to compose mixed-language texts, which are then translated into their monolingual counterparts. We propose a dual decoder Transformer model comprising a shared encoder and two decoders to simultaneously produce texts in two languages. We also explore the dual decoder model for various other tasks, such as multi-target translation, bidirectional translation, generating translation variants, and multilingual subtitling. The second design aims to extend commercial online translation systems by letting users freely alternate between the two languages, switching the text input box at will. In this scenario, the technical challenge is to keep the two input texts synchronized while taking the users' inputs into account, again with the goal of authoring two equally good versions of the text. For this, we introduce a general bilingual synchronization task and implement and experiment with autoregressive and non-autoregressive synchronization systems. We also investigate bilingual synchronization models on specific downstream tasks, such as parallel corpus cleaning and NMT with translation memories, to study the generalization ability of the proposed models.
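The dual-decoder architecture mentioned in the abstract (one shared encoder, two decoders emitting the two language sides of a bitext) can be sketched in a few lines of PyTorch. The module below is a minimal illustration that assumes a shared vocabulary and omits positional encodings and attention masks; it is not the thesis's actual implementation.

```python
import torch.nn as nn

class DualDecoderTransformer(nn.Module):
    """Shared encoder, two decoders: one per output language (illustrative sketch)."""

    def __init__(self, vocab_size: int, dim: int = 512, heads: int = 8, layers: int = 6):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)  # shared vocabulary is an assumption
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, heads, batch_first=True), layers)
        self.decoder_l1 = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(dim, heads, batch_first=True), layers)
        self.decoder_l2 = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(dim, heads, batch_first=True), layers)
        self.out_l1 = nn.Linear(dim, vocab_size)
        self.out_l2 = nn.Linear(dim, vocab_size)

    def forward(self, src, tgt_l1, tgt_l2):
        # Positional encodings and causal masks are omitted for brevity.
        memory = self.encoder(self.embed(src))
        h1 = self.decoder_l1(self.embed(tgt_l1), memory)
        h2 = self.decoder_l2(self.embed(tgt_l2), memory)
        return self.out_l1(h1), self.out_l2(h2)
```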

Book chapters on the topic "Non-autoregressive Machine Translation"

1

Zhou, Long, Jiajun Zhang, Yang Zhao, and Chengqing Zong. "Non-autoregressive Neural Machine Translation with Distortion Model." In Natural Language Processing and Chinese Computing, 403–15. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-60450-9_32.

2

Wang, Shuheng, Shumin Shi, and Heyan Huang. "Improving Non-autoregressive Machine Translation with Soft-Masking." In Natural Language Processing and Chinese Computing, 141–52. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-88480-2_12.

3

Guo, Ziyue, Hongxu Hou, Nier Wu, and Shuo Sun. "Word-Level Error Correction in Non-autoregressive Neural Machine Translation." In Communications in Computer and Information Science, 726–33. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-63820-7_83.

4

Wang, Yisong, Hongxu Hou, Shuo Sun, Nier Wu, Weichen Jian, Zongheng Yang, and Pengcong Wang. "Dynamic Mask Curriculum Learning for Non-Autoregressive Neural Machine Translation." In Communications in Computer and Information Science, 72–81. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-7960-6_8.


Conference papers on the topic "Non-autoregressive Machine Translation"

1

Bao, Guangsheng, Zhiyang Teng, Hao Zhou, Jianhao Yan, and Yue Zhang. "Non-Autoregressive Document-Level Machine Translation." In Findings of the Association for Computational Linguistics: EMNLP 2023. Stroudsburg, PA, USA: Association for Computational Linguistics, 2023. http://dx.doi.org/10.18653/v1/2023.findings-emnlp.986.

2

Xu, Jitao, Josep Crego, and François Yvon. "Integrating Translation Memories into Non-Autoregressive Machine Translation." In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics. Stroudsburg, PA, USA: Association for Computational Linguistics, 2023. http://dx.doi.org/10.18653/v1/2023.eacl-main.96.

3

Saharia, Chitwan, William Chan, Saurabh Saxena, and Mohammad Norouzi. "Non-Autoregressive Machine Translation with Latent Alignments." In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). Stroudsburg, PA, USA: Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.emnlp-main.83.

4

Wei, Bingzhen, Mingxuan Wang, Hao Zhou, Junyang Lin, and Xu Sun. "Imitation Learning for Non-Autoregressive Neural Machine Translation." In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA, USA: Association for Computational Linguistics, 2019. http://dx.doi.org/10.18653/v1/p19-1125.

5

Qian, Lihua, Hao Zhou, Yu Bao, Mingxuan Wang, Lin Qiu, Weinan Zhang, Yong Yu, and Lei Li. "Glancing Transformer for Non-Autoregressive Neural Machine Translation." In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Stroudsburg, PA, USA: Association for Computational Linguistics, 2021. http://dx.doi.org/10.18653/v1/2021.acl-long.155.

6

Shan, Yong, Yang Feng, and Chenze Shao. "Modeling Coverage for Non-Autoregressive Neural Machine Translation." In 2021 International Joint Conference on Neural Networks (IJCNN). IEEE, 2021. http://dx.doi.org/10.1109/ijcnn52387.2021.9533529.

7

Li, Zhuohan, Zi Lin, Di He, Fei Tian, Tao Qin, Liwei Wang, and Tie-Yan Liu. "Hint-Based Training for Non-Autoregressive Machine Translation." In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Stroudsburg, PA, USA: Association for Computational Linguistics, 2019. http://dx.doi.org/10.18653/v1/d19-1573.

8

Cheng, Hao, and Zhihua Zhang. "Con-NAT: Contrastive Non-autoregressive Neural Machine Translation." In Findings of the Association for Computational Linguistics: EMNLP 2022. Stroudsburg, PA, USA: Association for Computational Linguistics, 2022. http://dx.doi.org/10.18653/v1/2022.findings-emnlp.463.

9

Huang, Chenyang, Fei Huang, Zaixiang Zheng, Osmar Zaïane, Hao Zhou, and Lili Mou. "Multilingual Non-Autoregressive Machine Translation without Knowledge Distillation." In Findings of the Association for Computational Linguistics: IJCNLP-AACL 2023 (Findings). Stroudsburg, PA, USA: Association for Computational Linguistics, 2023. http://dx.doi.org/10.18653/v1/2023.findings-ijcnlp.14.

10

Shao, Chenze, Yang Feng, Jinchao Zhang, Fandong Meng, Xilin Chen, and Jie Zhou. "Retrieving Sequential Information for Non-Autoregressive Neural Machine Translation." In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA, USA: Association for Computational Linguistics, 2019. http://dx.doi.org/10.18653/v1/p19-1288.
