Follow this link to see other types of publications on the topic: Non-autoregressive Machine Translation.

Journal articles on the topic "Non-autoregressive Machine Translation"

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles


See the 34 best journal articles for your research on the topic "Non-autoregressive Machine Translation".

Next to each source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read its abstract online, if it is included in the metadata.

Browse journal articles from a wide range of scientific fields and compile an accurate bibliography.

1

Wang, Yiren, Fei Tian, Di He, Tao Qin, ChengXiang Zhai, and Tie-Yan Liu. "Non-Autoregressive Machine Translation with Auxiliary Regularization." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5377–84. http://dx.doi.org/10.1609/aaai.v33i01.33015377.

Abstract:
As a new neural machine translation approach, Non-Autoregressive Machine Translation (NAT) has attracted attention recently due to its high efficiency in inference. However, the high efficiency has come at the cost of not capturing the sequential dependency on the target side of translation, which causes NAT to suffer from two kinds of translation errors: 1) repeated translations (due to indistinguishable adjacent decoder hidden states), and 2) incomplete translations (due to incomplete transfer of source side information via the decoder hidden states). In this paper, we propose to address thes
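
The repeated-translation error described above is attributed to adjacent decoder hidden states becoming indistinguishable. As a rough illustration only (not the paper's exact formulation), one could add an auxiliary penalty on the cosine similarity of neighbouring decoder states; the tensor shapes and the weighting factor below are assumptions.

```python
# Hypothetical auxiliary regularizer: discourage adjacent NAT decoder states
# from collapsing onto each other (one plausible reading of the idea above,
# not necessarily the exact loss used in the paper).
import torch
import torch.nn.functional as F

def adjacent_similarity_penalty(hidden: torch.Tensor) -> torch.Tensor:
    """hidden: (batch, tgt_len, dim) decoder hidden states of a NAT model."""
    left, right = hidden[:, :-1, :], hidden[:, 1:, :]
    cos = F.cosine_similarity(left, right, dim=-1)  # (batch, tgt_len - 1)
    return cos.clamp(min=0.0).mean()                # only penalize positive similarity

# Assumed usage: loss = cross_entropy + reg_weight * adjacent_similarity_penalty(states)
```
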
2

Wang, Shuheng, Shumin Shi, Heyan Huang, and Wei Zhang. "Improving Non-Autoregressive Machine Translation via Autoregressive Training." Journal of Physics: Conference Series 2031, no. 1 (2021): 012045. http://dx.doi.org/10.1088/1742-6596/2031/1/012045.

Abstract:
In recent years, non-autoregressive machine translation has attracted many researchers' attention. Non-autoregressive translation (NAT) achieves faster decoding speed at the cost of translation accuracy compared with autoregressive translation (AT). Since NAT and AT models have similar architectures, a natural idea is to use the AT task to assist the NAT task. Previous works use curriculum learning or distillation to improve the performance of the NAT model. However, they are complex to follow and difficult to integrate into new work. So in this paper, to make it easy, we introduce a mu
3

Ran, Qiu, Yankai Lin, Peng Li, and Jie Zhou. "Guiding Non-Autoregressive Neural Machine Translation Decoding with Reordering Information." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 15 (2021): 13727–35. http://dx.doi.org/10.1609/aaai.v35i15.17618.

Abstract:
Non-autoregressive neural machine translation (NAT) generates each target word in parallel and has achieved promising inference acceleration. However, existing NAT models still have a big gap in translation quality compared to autoregressive neural machine translation models due to the multimodality problem: the target words may come from multiple feasible translations. To address this problem, we propose a novel NAT framework ReorderNAT which explicitly models the reordering information to guide the decoding of NAT. Specifically, ReorderNAT utilizes deterministic and non-deterministic decoding s
4

Shao, Chenze, Jinchao Zhang, Jie Zhou, and Yang Feng. "Rephrasing the Reference for Non-autoregressive Machine Translation." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 11 (2023): 13538–46. http://dx.doi.org/10.1609/aaai.v37i11.26587.

Abstract:
Non-autoregressive neural machine translation (NAT) models suffer from the multi-modality problem that there may exist multiple possible translations of a source sentence, so the reference sentence may be inappropriate for the training when the NAT output is closer to other translations. In response to this problem, we introduce a rephraser to provide a better training target for NAT by rephrasing the reference sentence according to the NAT output. As we train NAT based on the rephraser output rather than the reference sentence, the rephraser output should fit well with the NAT output and not
5

Wang, Shuheng, Shumin Shi, and Heyan Huang. "Enhanced encoder for non-autoregressive machine translation." Machine Translation 35, no. 4 (2021): 595–609. http://dx.doi.org/10.1007/s10590-021-09285-x.

6

Shao, Chenze, Jinchao Zhang, Yang Feng, Fandong Meng, and Jie Zhou. "Minimizing the Bag-of-Ngrams Difference for Non-Autoregressive Neural Machine Translation." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (2020): 198–205. http://dx.doi.org/10.1609/aaai.v34i01.5351.

Abstract:
Non-Autoregressive Neural Machine Translation (NAT) achieves significant decoding speedup through generating target words independently and simultaneously. However, in the context of non-autoregressive translation, the word-level cross-entropy loss cannot model the target-side sequential dependency properly, leading to its weak correlation with the translation quality. As a result, NAT tends to generate disfluent translations with over-translation and under-translation errors. In this paper, we propose to train NAT to minimize the Bag-of-Ngrams (BoN) difference between the model output and the
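
To make the bag-of-n-grams objective above concrete, here is a minimal sketch of the hard (non-differentiable) BoN distance between a hypothesis and a reference; the paper itself minimizes a differentiable expectation of this quantity under the model distribution, which this toy version does not implement.

```python
# Toy illustration of the bag-of-n-grams (BoN) difference that the training
# objective above approximates. Repetitions and omissions both increase it.
from collections import Counter

def bag_of_ngrams(tokens, n=2):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bon_difference(hypothesis, reference, n=2):
    hyp, ref = bag_of_ngrams(hypothesis, n), bag_of_ngrams(reference, n)
    return sum(abs(hyp[g] - ref[g]) for g in set(hyp) | set(ref))

# A repeated word ("the the") shows up as an extra bigram in the hypothesis bag:
print(bon_difference("the cat sat on the the mat".split(),
                     "the cat sat on the mat".split()))  # -> 1
```
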
7

Li, Feng, Jingxian Chen, and Xuejun Zhang. "A Survey of Non-Autoregressive Neural Machine Translation." Electronics 12, no. 13 (2023): 2980. http://dx.doi.org/10.3390/electronics12132980.

Abstract:
Non-autoregressive neural machine translation (NAMT) has received increasing attention recently in virtue of its promising acceleration paradigm for fast decoding. However, these splendid speedup gains are at the cost of accuracy, in comparison to its autoregressive counterpart. To close this performance gap, many studies have been conducted for achieving a better quality and speed trade-off. In this paper, we survey the NAMT domain from two new perspectives, i.e., target dependency management and training strategies arrangement. Proposed approaches are elaborated at length, involving five mod
8

Liu, Min, Yu Bao, Chengqi Zhao, and Shujian Huang. "Selective Knowledge Distillation for Non-Autoregressive Neural Machine Translation." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 11 (2023): 13246–54. http://dx.doi.org/10.1609/aaai.v37i11.26555.

Abstract:
Benefiting from the sequence-level knowledge distillation, the Non-Autoregressive Transformer (NAT) achieves great success in neural machine translation tasks. However, existing knowledge distillation has side effects, such as propagating errors from the teacher to NAT students, which may limit further improvements of NAT models and are rarely discussed in existing research. In this paper, we introduce selective knowledge distillation by introducing an NAT evaluator to select NAT-friendly targets that are of high quality and easy to learn. In addition, we introduce a simple yet effective progr
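
The selection idea described above can be pictured as a small filtering step on top of ordinary sequence-level distillation. The sketch below is only illustrative: `teacher_translate` (an autoregressive teacher) and `nat_score` (the NAT evaluator) are hypothetical stand-ins, and the fixed keep ratio is an assumption.

```python
# Illustrative pipeline: distill targets from an AR teacher, then keep only the
# pairs an NAT evaluator judges to be high quality and easy to learn.
def build_selective_kd_corpus(sources, teacher_translate, nat_score, keep_ratio=0.7):
    distilled = [(src, teacher_translate(src)) for src in sources]
    ranked = sorted(distilled,
                    key=lambda pair: nat_score(pair[0], pair[1]),
                    reverse=True)                    # most NAT-friendly first
    return ranked[:int(len(ranked) * keep_ratio)]    # train the NAT model on these
```
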
9

Du, Quan, Kai Feng, Chen Xu, Tong Xiao, and Jingbo Zhu. "Non-autoregressive neural machine translation with auxiliary representation fusion." Journal of Intelligent & Fuzzy Systems 41, no. 6 (2021): 7229–39. http://dx.doi.org/10.3233/jifs-211105.

Abstract:
Recently, many efforts have been devoted to speeding up neural machine translation models. Among them, the non-autoregressive translation (NAT) model is promising because it removes the sequential dependence on the previously generated tokens and parallelizes the generation process of the entire sequence. On the other hand, the autoregressive translation (AT) model in general achieves a higher translation accuracy than the NAT counterpart. Therefore, a natural idea is to fuse the AT and NAT models to seek a trade-off between inference speed and translation quality. This paper proposes an ARF-N
10

Xinlu, Zhang, Wu Hongguan, Ma Beijiao, and Zhai Zhengang. "Research on Low Resource Neural Machine Translation Based on Non-autoregressive Model." Journal of Physics: Conference Series 2171, no. 1 (2022): 012045. http://dx.doi.org/10.1088/1742-6596/2171/1/012045.

Abstract:
The autoregressive model can't make full use of context information because of its single direction of generation, and the autoregressive method can't perform parallel computation in decoding, which affects the efficiency of translation generation. Therefore, we explore a non-autoregressive translation generation method based on insertion and deletion in low-resource languages, which decomposes translation generation into three steps: deletion-insertion-generation. In this way, the dynamic editing of the translation can be realized in the iterative updating process. At the same time, ea
11

Huang, Chenyang, Hao Zhou, Osmar R. Zaïane, Lili Mou, and Lei Li. "Non-autoregressive Translation with Layer-Wise Prediction and Deep Supervision." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (2022): 10776–84. http://dx.doi.org/10.1609/aaai.v36i10.21323.

Abstract:
How do we perform efficient inference while retaining high translation quality? Existing neural machine translation models, such as Transformer, achieve high performance, but they decode words one by one, which is inefficient. Recent non-autoregressive translation models speed up the inference, but their quality is still inferior. In this work, we propose DSLP, a highly efficient and high-performance model for machine translation. The key insight is to train a non-autoregressive Transformer with Deep Supervision and feed additional Layer-wise Predictions. We conducted extensive experiments on
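
The "deep supervision" half of DSLP is easy to picture: the token-level loss is applied to every decoder layer's predictions rather than only the last one. A minimal sketch, assuming the model exposes a list of per-layer logits; the layer-wise feeding of intermediate predictions into the next layer is omitted.

```python
# Minimal sketch of deep supervision: average the cross-entropy loss over the
# predictions made at every decoder layer (shapes assumed, not from the paper's code).
import torch
import torch.nn.functional as F

def deeply_supervised_loss(layer_logits, target, pad_id=0):
    """layer_logits: list of (batch, tgt_len, vocab) tensors, one per decoder layer."""
    losses = [
        F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                        target.reshape(-1),
                        ignore_index=pad_id)
        for logits in layer_logits
    ]
    return torch.stack(losses).mean()
```
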
12

Guo, Junliang, Xu Tan, Linli Xu, Tao Qin, Enhong Chen, and Tie-Yan Liu. "Fine-Tuning by Curriculum Learning for Non-Autoregressive Neural Machine Translation." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (2020): 7839–46. http://dx.doi.org/10.1609/aaai.v34i05.6289.

Abstract:
Non-autoregressive translation (NAT) models remove the dependence on previous target tokens and generate all target tokens in parallel, resulting in significant inference speedup but at the cost of inferior translation accuracy compared to autoregressive translation (AT) models. Considering that AT models have higher accuracy and are easier to train than NAT models, and both of them share the same model configurations, a natural idea to improve the accuracy of NAT models is to transfer a well-trained AT model to an NAT model through fine-tuning. However, since AT and NAT models differ greatly
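
The speed/accuracy trade-off that this and several other entries describe comes from how many decoder passes are needed. A schematic contrast, with `ar_step` and `nat_forward` as hypothetical model interfaces: the autoregressive decoder runs once per generated token, while the non-autoregressive decoder fills all positions in a single pass.

```python
# Schematic only: where the NAT inference speedup comes from.
import torch

def decode_autoregressive(ar_step, src, max_len, bos_id, eos_id):
    tokens = [bos_id]
    for _ in range(max_len):                       # one forward pass per token
        logits = ar_step(src, torch.tensor(tokens))
        next_token = int(logits[-1].argmax())
        tokens.append(next_token)
        if next_token == eos_id:
            break
    return tokens[1:]

def decode_non_autoregressive(nat_forward, src, tgt_len):
    logits = nat_forward(src, tgt_len)             # a single forward pass
    return logits.argmax(dim=-1).tolist()          # all target positions at once
```
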
13

Guo, Junliang, Xu Tan, Di He, Tao Qin, Linli Xu, and Tie-Yan Liu. "Non-Autoregressive Neural Machine Translation with Enhanced Decoder Input." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3723–30. http://dx.doi.org/10.1609/aaai.v33i01.33013723.

Abstract:
Non-autoregressive translation (NAT) models, which remove the dependence on previous target tokens from the inputs of the decoder, achieve significant inference speedup but at the cost of inferior accuracy compared to autoregressive translation (AT) models. Previous work shows that the quality of the inputs of the decoder is important and largely impacts the model accuracy. In this paper, we propose two methods to enhance the decoder inputs so as to improve NAT models. The first one directly leverages a phrase table generated by conventional SMT approaches to translate source tokens to targe
14

Shu, Raphael, Jason Lee, Hideki Nakayama, and Kyunghyun Cho. "Latent-Variable Non-Autoregressive Neural Machine Translation with Deterministic Inference Using a Delta Posterior." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (2020): 8846–53. http://dx.doi.org/10.1609/aaai.v34i05.6413.

Abstract:
Although neural machine translation models have reached high translation quality, the autoregressive nature makes inference difficult to parallelize and leads to high translation latency. Inspired by recent refinement-based approaches, we propose LaNMT, a latent-variable non-autoregressive model with continuous latent variables and a deterministic inference procedure. In contrast to existing approaches, we use a deterministic inference algorithm to find the target sequence that maximizes the lower bound to the log-probability. During inference, the length of translation automatically adapts itself. Ou
15

Wang, Shuheng, Heyan Huang, and Shumin Shi. "Improving Non-Autoregressive Machine Translation Using Sentence-Level Semantic Agreement." Applied Sciences 12, no. 10 (2022): 5003. http://dx.doi.org/10.3390/app12105003.

Abstract:
The inference stage can be accelerated significantly using a Non-Autoregressive Transformer (NAT). However, the training objective used in the NAT model also aims to minimize the loss between the generated words and the golden words in the reference. Since the dependencies between the target words are lacking, this training objective computed at word level can easily cause semantic inconsistency between the generated and source sentences. To alleviate this issue, we propose a new method, Sentence-Level Semantic Agreement (SLSA), to obtain consistency between the source and generated sentences.
16

Guo, Pei, Yisheng Xiao, Juntao Li, and Min Zhang. "RenewNAT: Renewing Potential Translation for Non-autoregressive Transformer." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 11 (2023): 12854–62. http://dx.doi.org/10.1609/aaai.v37i11.26511.

Abstract:
Non-autoregressive neural machine translation (NAT) models are proposed to accelerate the inference process while maintaining relatively high performance. However, it is difficult for existing NAT models to achieve the desired efficiency-quality trade-off. For one thing, fully NAT models with efficient inference are inferior to their autoregressive counterparts. For another, iterative NAT models can achieve comparable performance, though at the cost of the speed advantage. In this paper, we propose RenewNAT, a flexible framework with high efficiency and effectiveness, to incorporate the m
17

Weng, Rongxiang, Heng Yu, Weihua Luo, and Min Zhang. "Deep Fusing Pre-trained Models into Neural Machine Translation." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (2022): 11468–76. http://dx.doi.org/10.1609/aaai.v36i10.21399.

Abstract:
Pre-training and fine-tuning have become the de facto paradigm in many natural language processing (NLP) tasks. However, compared to other NLP tasks, neural machine translation (NMT) aims to generate target language sentences through the contextual representation from the source language counterparts. This characteristic means the optimization objective of NMT is far from that of the universal pre-trained models (PTMs), so that the standard procedure of pre-training and fine-tuning does not work well in NMT. In this paper, we propose a novel framework to deep fuse the pre-trained representa
18

Huang, Fei, Pei Ke, and Minlie Huang. "Directed Acyclic Transformer Pre-training for High-quality Non-autoregressive Text Generation." Transactions of the Association for Computational Linguistics 11 (2023): 941–59. http://dx.doi.org/10.1162/tacl_a_00582.

Abstract:
Non-AutoRegressive (NAR) text generation models have drawn much attention because of their significantly faster decoding speed and good generation quality in machine translation. However, in a wider range of text generation tasks, existing NAR models lack proper pre-training, making them still far behind the pre-trained autoregressive models. In this paper, we propose Pre-trained Directed Acyclic Transformer (PreDAT) and a novel pre-training task to promote prediction consistency in NAR generation. Experiments on five text generation tasks show that our PreDAT remarkably outperforms e
19

Xiao, Yisheng, Ruiyang Xu, Lijun Wu, et al. "AMOM: Adaptive Masking over Masking for Conditional Masked Language Model." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 11 (2023): 13789–97. http://dx.doi.org/10.1609/aaai.v37i11.26615.

Abstract:
Transformer-based autoregressive (AR) methods have achieved appealing performance for varied sequence-to-sequence generation tasks, e.g., neural machine translation, summarization, and code generation, but suffer from low inference efficiency. To speed up the inference stage, many non-autoregressive (NAR) strategies have been proposed in the past few years. Among them, the conditional masked language model (CMLM) is one of the most versatile frameworks, as it can support many different sequence generation scenarios and achieve very competitive performance on these tasks. In this paper, we furt
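
AMOM builds on the conditional masked language model (CMLM) and its mask-predict decoding, in which the target starts fully masked, all positions are predicted in parallel, and the least confident tokens are re-masked and re-predicted for a few iterations. Below is a sketch of that baseline loop, assuming a hypothetical `model(src, tgt)` that returns per-position token probabilities; AMOM's adaptive masking ratios are not shown.

```python
# Sketch of standard CMLM mask-predict inference (the baseline that AMOM refines).
import torch

def mask_predict(model, src, tgt_len, mask_id, iterations=4):
    tgt = torch.full((tgt_len,), mask_id, dtype=torch.long)
    probs = model(src, tgt)                            # (tgt_len, vocab), all masked
    scores, tgt = probs.max(dim=-1)
    for t in range(1, iterations):
        n_mask = int(tgt_len * (1 - t / iterations))   # linearly decaying mask count
        if n_mask == 0:
            break
        remask = scores.topk(n_mask, largest=False).indices  # least confident tokens
        tgt[remask] = mask_id
        probs = model(src, tgt)
        new_scores, new_tokens = probs.max(dim=-1)
        tgt[remask] = new_tokens[remask]
        scores[remask] = new_scores[remask]
    return tgt
```
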
20

Dong, Qianqian, Feng Wang, Zhen Yang, Wei Chen, Shuang Xu, and Bo Xu. "Adapting Translation Models for Transcript Disfluency Detection." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 6351–58. http://dx.doi.org/10.1609/aaai.v33i01.33016351.

Abstract:
Transcript disfluency detection (TDD) is an important component of real-time speech translation systems and has attracted more and more interest in recent years. This paper presents our study on adapting neural machine translation (NMT) models for TDD. We propose a general training framework for adapting NMT models to the TDD task rapidly. In this framework, the main structure of the model is implemented similarly to the NMT model. Additionally, several extended modules and training techniques which are independent of the NMT model are proposed to improve the performance, such as the constrained de
21

Xu, Weijia, and Marine Carpuat. "EDITOR: An Edit-Based Transformer with Repositioning for Neural Machine Translation with Soft Lexical Constraints." Transactions of the Association for Computational Linguistics 9 (2021): 311–28. http://dx.doi.org/10.1162/tacl_a_00368.

Abstract:
We introduce an Edit-Based TransfOrmer with Repositioning (EDITOR), which makes sequence generation flexible by seamlessly allowing users to specify preferences in output lexical choice. Building on recent models for non-autoregressive sequence generation (Gu et al., 2019), EDITOR generates new sequences by iteratively editing hypotheses. It relies on a novel reposition operation designed to disentangle lexical choice from word positioning decisions, while enabling efficient oracles for imitation learning and parallel edits at decoding time. Empirically, EDITOR uses soft lexical const
22

S, Tarun. "Bridging Languages through Images: A Multilingual Text-to-Image Synthesis Approach." INTERANTIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 05 (2024): 1–5. http://dx.doi.org/10.55041/ijsrem33773.

Abstract:
This research investigates the challenges posed by the predominant focus on English language text-to-image generation (TTI) because of the lack of annotated image caption data in other languages. The resulting inequitable access to TTI technology in non-English-speaking regions motivates the research of multilingual TTI (mTTI) and the potential of neural machine translation (NMT) to facilitate its development. The study presents two main contributions. Firstly, a systematic empirical study employing a multilingual multi-modal encoder evaluates standard cross-lingual NLP methods applied to mTTI
23

Welleck, Sean, and Kyunghyun Cho. "MLE-Guided Parameter Search for Task Loss Minimization in Neural Sequence Modeling." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 16 (2021): 14032–40. http://dx.doi.org/10.1609/aaai.v35i16.17652.

Abstract:
Neural autoregressive sequence models are used to generate sequences in a variety of natural language processing (NLP) tasks, where they are evaluated according to sequence-level task losses. These models are typically trained with maximum likelihood estimation, which ignores the task loss, yet empirically performs well as a surrogate objective. Typical approaches to directly optimizing the task loss such as policy gradient and minimum risk training are based around sampling in the sequence space to obtain candidate update directions that are scored based on the loss of a single sequence. In t
24

Liu, Chuanming, and Jingqi Yu. "Uncertainty-aware non-autoregressive neural machine translation." Computer Speech & Language, August 2022, 101444. http://dx.doi.org/10.1016/j.csl.2022.101444.

25

Ju, Fang, and Weihui Wang. "Non‐Autoregressive Translation Algorithm Based on LLM Knowledge Distillation in English Corpus." Engineering Reports, December 8, 2024. https://doi.org/10.1002/eng2.13077.

Abstract:
Although significant advancements have been made in the quality of machine translation by large-scale language models, their high computational costs and resource consumption have hindered their widespread adoption in practical applications. So this research introduces an English corpus-based machine translation algorithm that leverages knowledge distillation from a large language model, with the goal of enhancing translation quality and reducing the computational demands of the model. Initially, we conducted a thorough analysis of the English corpus to identify prevalent language patter
26

Shao, Chenze, Yang Feng, Jinchao Zhang, Fandong Meng, and Jie Zhou. "Sequence-Level Training for Non-Autoregressive Neural Machine Translation." Computational Linguistics, September 6, 2021, 1–36. http://dx.doi.org/10.1162/coli_a_00421.

Abstract:
In recent years, Neural Machine Translation (NMT) has achieved notable results in various translation tasks. However, the word-by-word generation manner determined by the autoregressive mechanism leads to high translation latency of the NMT and restricts its low-latency applications. Non-Autoregressive Neural Machine Translation (NAT) removes the autoregressive mechanism and achieves significant decoding speedup through generating target words independently and simultaneously. Nevertheless, NAT still takes the word-level cross-entropy loss as the training objective, which is not optim
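
One widely used form of sequence-level training (and only one of the objectives a paper like this might consider) scores whole sampled translations with a sentence-level metric and weights their log-likelihood by that score, REINFORCE-style. The sketch below assumes per-position independent sampling from a NAT decoder and a scalar reward already computed by some metric; it shows the general shape of such an objective, not the paper's exact estimator.

```python
# Hedged sketch of a reward-weighted (REINFORCE-style) sequence-level loss.
import torch

def sequence_level_loss(log_probs, sampled_tokens, reward):
    """log_probs: (tgt_len, vocab) log-probabilities from the NAT decoder,
    sampled_tokens: (tgt_len,) tokens sampled independently per position,
    reward: scalar sentence-level score (e.g., a BLEU-like metric) of the sample."""
    token_logp = log_probs.gather(-1, sampled_tokens.unsqueeze(-1)).squeeze(-1)
    return -(reward * token_logp.sum())   # pushes probability toward high-reward samples
```
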
27

Wang, Shuheng, Heyan Huang, and Shumin Shi. "Incorporating history and future into non-autoregressive machine translation." Computer Speech & Language, July 2022, 101439. http://dx.doi.org/10.1016/j.csl.2022.101439.

28

Sheshadri, Shailashree K., and Deepa Gupta. "KasNAT: Non-autoregressive machine translation for Kashmiri to English using knowledge distillation." Journal of Intelligent & Fuzzy Systems, April 26, 2024, 1–15. http://dx.doi.org/10.3233/jifs-219383.

Abstract:
Non-Autoregressive Machine Translation (NAT) represents a groundbreaking advancement in Machine Translation, enabling the simultaneous prediction of output tokens and significantly boosting translation speeds compared to traditional auto-regressive (AR) models. Recent NAT models have adeptly balanced translation quality and speed, surpassing their AR counterparts. The widely employed Knowledge Distillation (KD) technique in NAT involves generating training data from pre-trained AR models, enhancing NAT model performance. While KD has consistently proven its empirical effectiveness and substant
29

Xie, Pan, Zexian Li, Zheng Zhao, Jiaqi Liu, and Xiaohui Hu. "MvSR-NAT: Multi-view Subset Regularization for Non-Autoregressive Machine Translation." IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2022, 1–10. http://dx.doi.org/10.1109/taslp.2022.3221043.

30

Wang, Shuheng, Shumin Shi, and Heyan Huang. "Alleviating repetitive tokens in non-autoregressive machine translation with unlikelihood training." Soft Computing, January 3, 2024. http://dx.doi.org/10.1007/s00500-023-09490-1.

31

Xiao, Yisheng, Lijun Wu, Junliang Guo, et al. "A Survey on Non-Autoregressive Generation for Neural Machine Translation and Beyond." IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 1–20. http://dx.doi.org/10.1109/tpami.2023.3277122.

32

Qu, Xiangyu, Guojing Liu, and Liang Li. "Multi-scale Joint Learning with Negative Sample Mining for Non-autoregressive Machine Translation." Knowledge-Based Systems, May 2025, 113610. https://doi.org/10.1016/j.knosys.2025.113610.

33

Lim, Yeon-Soo, Eun-Ju Park, Hyun-Je Song, and Seong-Bae Park. "A Non-Autoregressive Neural Machine Translation Model with Iterative Length Update of Target Sentence." IEEE Access, 2022, 1. http://dx.doi.org/10.1109/access.2022.3169419.

34

Sheshadri, Shailashree K., Deepa Gupta, Biswajit Paul, and J. Siva Bhavani. "A Novel Approach to Continual Knowledge Transfer in Multilingual Neural Machine Translation Using Autoregressive and Non-Autoregressive Models for Indic Languages." IEEE Access, 2025, 1. https://doi.org/10.1109/access.2025.3570699.
