Journal articles on the topic "Non-Autoregressive"

To see the other types of publications on this topic, follow the link: Non-Autoregressive.

Create a reference in APA, MLA, Chicago, Harvard, and other citation styles.

Consult the top 50 journal articles for your research on the topic "Non-Autoregressive".

Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a ".pdf" file and read its abstract online, whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Wang, Shuheng, Shumin Shi, Heyan Huang, and Wei Zhang. "Improving Non-Autoregressive Machine Translation via Autoregressive Training." Journal of Physics: Conference Series 2031, no. 1 (September 1, 2021): 012045. http://dx.doi.org/10.1088/1742-6596/2031/1/012045.

Abstract:
In recent years, non-autoregressive machine translation has attracted many researchers' attention. Non-autoregressive translation (NAT) achieves faster decoding than autoregressive translation (AT) at the cost of translation accuracy. Since NAT and AT models have similar architectures, a natural idea is to use the AT task to assist the NAT task. Previous works use curriculum learning or distillation to improve the performance of the NAT model, but they are complex to follow and difficult to integrate into new work. In this paper we therefore introduce a simple multi-task framework to improve the performance of the NAT task. Specifically, we use a fully shared encoder-decoder network to train the NAT task and the AT task simultaneously. To evaluate our model, we conduct experiments on several benchmark tasks, including WMT14 EN-DE, WMT16 EN-RO and IWSLT14 DE-EN. The experimental results demonstrate that our model achieves improvements while remaining simple.
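The training recipe this abstract outlines is compact enough to sketch. Below is a minimal, hypothetical illustration (assumed toy sizes, vocabulary and mask-token id; not the authors' released code) of joint AT/NAT training on one fully shared encoder-decoder: the AT pass uses shifted targets with a causal mask, the NAT pass feeds all-[MASK] decoder inputs, and the two losses are simply added.

```python
import torch
import torch.nn as nn

VOCAB, D, PAD, MASK = 1000, 64, 0, 1   # toy sizes; PAD/MASK ids are assumptions

class SharedATNAT(nn.Module):
    """One encoder-decoder whose weights serve both the AT and the NAT task."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D)
        self.transformer = nn.Transformer(
            d_model=D, nhead=4, num_encoder_layers=2,
            num_decoder_layers=2, batch_first=True)
        self.proj = nn.Linear(D, VOCAB)

    def forward(self, src, tgt_in, causal):
        # AT pass: causal mask so position t sees only targets < t; NAT pass: no mask
        mask = (self.transformer.generate_square_subsequent_mask(tgt_in.size(1))
                if causal else None)
        h = self.transformer(self.embed(src), self.embed(tgt_in), tgt_mask=mask)
        return self.proj(h)

model = SharedATNAT()
loss_fn = nn.CrossEntropyLoss(ignore_index=PAD)
src = torch.randint(2, VOCAB, (8, 10))            # random toy batch
tgt = torch.randint(2, VOCAB, (8, 9))

at_logits = model(src, tgt[:, :-1], causal=True)  # AT: shifted targets as input
at_loss = loss_fn(at_logits.reshape(-1, VOCAB), tgt[:, 1:].reshape(-1))

nat_in = torch.full_like(tgt, MASK)               # NAT: all-[MASK] decoder input
nat_logits = model(src, nat_in, causal=False)
nat_loss = loss_fn(nat_logits.reshape(-1, VOCAB), tgt.reshape(-1))

(at_loss + nat_loss).backward()                   # one step trains both tasks
```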
2

Anděl, Jiří. "NON-NEGATIVE AUTOREGRESSIVE PROCESSES." Journal of Time Series Analysis 10, no. 1 (January 1989): 1–11. http://dx.doi.org/10.1111/j.1467-9892.1989.tb00011.x.

3

Hong-zhi, An. "NON-NEGATIVE AUTOREGRESSIVE MODELS." Journal of Time Series Analysis 13, no. 4 (July 1992): 283–95. http://dx.doi.org/10.1111/j.1467-9892.1992.tb00108.x.

4

Weber, Daniel, and Clemens Gühmann. "Non-Autoregressive vs Autoregressive Neural Networks for System Identification." IFAC-PapersOnLine 54, no. 20 (2021): 692–98. http://dx.doi.org/10.1016/j.ifacol.2021.11.252.

5

Tian, Zhengkun, Jiangyan Yi, Jianhua Tao, Shuai Zhang, and Zhengqi Wen. "Hybrid Autoregressive and Non-Autoregressive Transformer Models for Speech Recognition." IEEE Signal Processing Letters 29 (2022): 762–66. http://dx.doi.org/10.1109/lsp.2022.3152128.

6

Fei, Zhengcong. "Partially Non-Autoregressive Image Captioning." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (May 18, 2021): 1309–16. http://dx.doi.org/10.1609/aaai.v35i2.16219.

Abstract:
Current state-of-the-art image captioning systems usually generate descriptions autoregressively, i.e., every forward step conditions on the given image and the previously produced words. This sequential nature causes unavoidable decoding latency. Non-autoregressive image captioning, on the other hand, predicts the entire sentence simultaneously and accelerates inference significantly. However, it removes the dependencies within a caption and commonly suffers from repetition or omission issues. To strike a better trade-off between speed and quality, we introduce a partially non-autoregressive model, named PNAIC, which treats a caption as a series of concatenated word groups. The groups are generated in parallel globally, while each word within a group is predicted from left to right, so the captioner can produce multiple discontinuous words concurrently at each time step. More importantly, by incorporating curriculum-learning-based training tasks of group length prediction and invalid group deletion, our model is capable of generating accurate captions as well as preventing common incoherent errors. Extensive experiments on the MS COCO benchmark demonstrate that our proposed method achieves more than 3.5× speedup while maintaining competitive performance.
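The speedup comes purely from the emission schedule, which can be pictured without any model. A toy sketch of the idea (group size and caption length are arbitrary here; this illustrates the schedule, not PNAIC itself):

```python
# Toy illustration of group-parallel decoding: all groups advance in parallel,
# while words inside each group are emitted left to right.
def group_parallel_schedule(length: int, group_size: int):
    groups = [list(range(s, min(s + group_size, length)))
              for s in range(0, length, group_size)]
    steps = []
    for offset in range(group_size):        # one word per group per time step
        step = [g[offset] for g in groups if offset < len(g)]
        steps.append(step)
    return steps

# A 10-word caption in groups of 3 finishes in 3 steps instead of 10:
for t, positions in enumerate(group_parallel_schedule(10, 3), start=1):
    print(f"step {t}: emit positions {positions}")
# step 1: emit positions [0, 3, 6, 9]
# step 2: emit positions [1, 4, 7]
# step 3: emit positions [2, 5, 8]
```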
7

Grigoletto, Matteo. "Bootstrap prediction intervals for autoregressive models fitted to non-autoregressive processes." Journal of the Italian Statistical Society 7, no. 3 (December 1998): 285–95. http://dx.doi.org/10.1007/bf03178936.

8

Liu, Weidong, Shiqing Ling, and Qi-Man Shao. "On non-stationary threshold autoregressive models." Bernoulli 17, no. 3 (August 2011): 969–86. http://dx.doi.org/10.3150/10-bej306.

9

Bell, C. B., and E. P. Smith. "Inference for non-negative autoregressive schemes." Communications in Statistics - Theory and Methods 15, no. 8 (January 1986): 2267–93. http://dx.doi.org/10.1080/03610928608829248.

10

Lii, K. S., and M. Rosenblatt. "Non-Gaussian autoregressive moving average processes." Proceedings of the National Academy of Sciences 90, no. 19 (October 1, 1993): 9168–70. http://dx.doi.org/10.1073/pnas.90.19.9168.

11

Breidt, F. J., R. A. Davis, K. S. Lii, and M. Rosenblatt. "Nonminimum phase non-Gaussian autoregressive processes." Proceedings of the National Academy of Sciences 87, no. 1 (January 1, 1990): 179–81. http://dx.doi.org/10.1073/pnas.87.1.179.

12

Wang, Yiren, Fei Tian, Di He, Tao Qin, ChengXiang Zhai, and Tie-Yan Liu. "Non-Autoregressive Machine Translation with Auxiliary Regularization." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5377–84. http://dx.doi.org/10.1609/aaai.v33i01.33015377.

Abstract:
As a new neural machine translation approach, Non-Autoregressive machine Translation (NAT) has attracted attention recently due to its high efficiency in inference. However, the high efficiency has come at the cost of not capturing the sequential dependency on the target side of translation, which causes NAT to suffer from two kinds of translation errors: 1) repeated translations (due to indistinguishable adjacent decoder hidden states), and 2) incomplete translations (due to incomplete transfer of source side information via the decoder hidden states). In this paper, we propose to address these two problems by improving the quality of decoder hidden representations via two auxiliary regularization terms in the training process of an NAT model. First, to make the hidden states more distinguishable, we regularize the similarity between consecutive hidden states based on the corresponding target tokens. Second, to force the hidden states to contain all the information in the source sentence, we leverage the dual nature of translation tasks (e.g., English to German and German to English) and minimize a backward reconstruction error to ensure that the hidden states of the NAT decoder are able to recover the source side sentence. Extensive experiments conducted on several benchmark datasets show that both regularization strategies are effective and can alleviate the issues of repeated translations and incomplete translations in NAT models. The accuracy of NAT models is therefore improved significantly over the state-of-the-art NAT models with even better efficiency for inference.
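The first regularizer lends itself to a compact sketch. Below is a hedged illustration (the paper's exact formulation and weighting may differ): adjacent decoder hidden states are pushed apart when their target tokens differ, and pulled together only when the target genuinely repeats a token.

```python
import torch
import torch.nn.functional as F

def similarity_regularizer(hidden, targets):
    """hidden: (batch, len, dim) decoder states; targets: (batch, len) token ids."""
    sim = F.cosine_similarity(hidden[:, :-1], hidden[:, 1:], dim=-1)
    same = (targets[:, :-1] == targets[:, 1:]).float()
    # minimize sim where tokens differ (anti-repetition); maximize it where they match
    return ((1.0 - same) * sim - same * sim).mean()

h = torch.randn(4, 12, 64, requires_grad=True)    # stand-in decoder states
y = torch.randint(0, 100, (4, 12))                # stand-in target tokens
similarity_regularizer(h, y).backward()
```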
13

Yang, Bang, Yuexian Zou, Fenglin Liu, and Can Zhang. "Non-Autoregressive Coarse-to-Fine Video Captioning." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 4 (May 18, 2021): 3119–27. http://dx.doi.org/10.1609/aaai.v35i4.16421.

Abstract:
It is encouraging to see that progress has been made in bridging videos and natural language. However, mainstream video captioning methods suffer from slow inference speed due to the sequential manner of autoregressive decoding, and they tend to generate generic descriptions due to insufficient training of visual words (e.g., nouns and verbs) and an inadequate decoding paradigm. In this paper, we propose a non-autoregressive decoding based model with a coarse-to-fine captioning procedure to alleviate these defects. In implementation, we employ a bidirectional self-attention based network as our language model to achieve an inference speedup, and on this basis we decompose the captioning procedure into two stages with different focuses. Specifically, given that visual words determine the semantic correctness of captions, we design a mechanism for generating visual words that not only promotes the training of scene-related words but also captures relevant details from videos to construct a coarse-grained sentence "template". Thereafter, we devise dedicated decoding algorithms that fill in the "template" with suitable words and modify inappropriate phrasing via iterative refinement to obtain a fine-grained description. Extensive experiments on two mainstream video captioning benchmarks, i.e., MSVD and MSR-VTT, demonstrate that our approach achieves state-of-the-art performance, generates diverse descriptions, and obtains high inference efficiency.
14

Wang, Shuheng, Shumin Shi, and Heyan Huang. "Enhanced encoder for non-autoregressive machine translation." Machine Translation 35, no. 4 (November 16, 2021): 595–609. http://dx.doi.org/10.1007/s10590-021-09285-x.

15

Högnäs, Göran. "COMPARISON OF SOME NON-LINEAR AUTOREGRESSIVE PROCESSES." Journal of Time Series Analysis 7, no. 3 (May 1986): 205–11. http://dx.doi.org/10.1111/j.1467-9892.1986.tb00503.x.

16

Chan, Ngai Hang, and Rongmao Zhang. "Non-stationary autoregressive processes with infinite variance." Journal of Time Series Analysis 33, no. 6 (July 10, 2012): 916–34. http://dx.doi.org/10.1111/j.1467-9892.2012.00807.x.

17

Huang, Jianhua Z., and Lijian Yang. "Identification of non-linear additive autoregressive models." Journal of the Royal Statistical Society: Series B (Statistical Methodology) 66, no. 2 (May 2004): 463–77. http://dx.doi.org/10.1111/j.1369-7412.2004.05500.x.

18

Abraham, B., and N. Balakrishna. "Product autoregressive models for non-negative variables." Statistics & Probability Letters 82, no. 8 (August 2012): 1530–37. http://dx.doi.org/10.1016/j.spl.2012.04.022.

19

Rosenblatt, M. "Prediction for some non-Gaussian autoregressive schemes." Advances in Applied Mathematics 7, no. 2 (June 1986): 182–98. http://dx.doi.org/10.1016/0196-8858(86)90030-8.

20

Póczos, Barnabás, and András Lőrincz. "Non-combinatorial estimation of independent autoregressive sources." Neurocomputing 69, no. 16-18 (October 2006): 2416–19. http://dx.doi.org/10.1016/j.neucom.2006.02.008.

21

Rosenblatt, Murray. "Prediction and Non-Gaussian Autoregressive Stationary Sequences." Annals of Applied Probability 5, no. 1 (February 1995): 239–47. http://dx.doi.org/10.1214/aoap/1177004838.

22

Kim, Hee-Young, and Yousung Park. "A non-stationary integer-valued autoregressive model." Statistical Papers 49, no. 3 (November 17, 2006): 485–502. http://dx.doi.org/10.1007/s00362-006-0028-1.

23

Niranjana Murthy, H. S. "Comparison Between Non-Linear Autoregressive and Non-Linear Autoregressive with Exogenous Inputs Models for Predicting Cardiac Ischemic Beats." Journal of Computational and Theoretical Nanoscience 17, no. 9 (July 1, 2020): 3974–78. http://dx.doi.org/10.1166/jctn.2020.9001.

Abstract:
The prediction accuracy and generalization ability of neural models for forecasting myocardial ischemic beats depend on the type and architecture of the employed network model. This paper presents a comparative analysis of recurrent neural network (RNN) architectures with embedded memory: Non-linear Autoregressive (NAR) and Non-linear Autoregressive with Exogenous inputs (NARX) models for forecasting ischemic beats in ECG. Numerous architectures of the NAR and NARX models are verified for prediction, and their performance is evaluated in terms of mean square error (MSE). The performance of the NAR and NARX models is validated using ECG signals acquired from the MIT-BIH database. The results show that the NARX architecture with 2 neurons in the hidden layer and 1 delay line performed best, with the least MSE of 0.0001 for detecting ischemic beats in ECG.
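For reference, the standard defining equations of the two model families compared here (textbook forms, not copied from the paper), where f is the network, p and q are the lag orders, and x_t is the exogenous input series:

```latex
\begin{align}
  \text{NAR:}  \quad y_t &= f\bigl(y_{t-1}, \dots, y_{t-p}\bigr) + \varepsilon_t \\
  \text{NARX:} \quad y_t &= f\bigl(y_{t-1}, \dots, y_{t-p},\,
                                   x_{t-1}, \dots, x_{t-q}\bigr) + \varepsilon_t
\end{align}
```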
24

Xinlu, Zhang, Wu Hongguan, Ma Beijiao, and Zhai Zhengang. "Research on Low Resource Neural Machine Translation Based on Non-autoregressive Model." Journal of Physics: Conference Series 2171, no. 1 (January 1, 2022): 012045. http://dx.doi.org/10.1088/1742-6596/2171/1/012045.

Abstract:
The autoregressive model cannot make full use of context information because of its single direction of generation, and autoregressive decoding cannot be parallelized, which limits the efficiency of translation generation. We therefore explore a non-autoregressive translation generation method based on insertion and deletion for low-resource languages, which decomposes translation generation into three steps: deletion, insertion and generation. In this way, the translation can be edited dynamically during iterative updating, and every step can be computed in parallel, which improves decoding efficiency. To reduce the complexity of the data sets in non-autoregressive model training, we process the Uyghur-Chinese training data with sequence-level knowledge distillation. Experiments on Uyghur-Chinese and English-Romanian distilled and standard data sets verify the effectiveness of the non-autoregressive method.
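The three-step loop is easy to picture in isolation. The toy below only illustrates the control flow of edit-based iterative refinement; the hand-written dummy policies stand in for the learned deletion, insertion and generation heads:

```python
from typing import Callable, List

PLH = "<plh>"  # placeholder token

def refine(draft: List[str],
           delete: Callable[[List[str]], List[bool]],
           insert: Callable[[List[str]], List[int]],
           fill: Callable[[List[str]], List[str]],
           steps: int = 1) -> List[str]:
    for _ in range(steps):
        # 1) deletion: drop tokens the policy marks
        draft = [w for w, d in zip(draft, delete(draft)) if not d]
        # 2) insertion: add placeholder slots after each position
        out: List[str] = []
        for w, n in zip(draft, insert(draft)):
            out.append(w)
            out.extend([PLH] * n)
        # 3) generation: fill every placeholder in parallel
        filled = iter(fill(out))
        draft = [next(filled) if w == PLH else w for w in out]
    return draft

# Dummy policies: delete adjacent repeats, open one slot after "new", fill with "york".
dedup = lambda ws: [i > 0 and w == ws[i - 1] for i, w in enumerate(ws)]
slots = lambda ws: [1 if w == "new" else 0 for w in ws]
fills = lambda ws: ["york" for w in ws if w == PLH]
print(refine(["a", "a", "new", "city"], dedup, slots, fills, steps=1))
# ['a', 'new', 'york', 'city']
```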
25

Salem, Abdulghafoor Jasem, and Abeer Abdulkhaleq Ahmad. "Stability of a Non-Linear Exponential Autoregressive Model." OALib 05, no. 04 (2018): 1–15. http://dx.doi.org/10.4236/oalib.1104482.

26

Salim Youns, Anas, and Abdul Ghafoor Jasim Salim. "Studying Stability of a Non-linear Autoregressive Model." JOURNAL OF EDUCATION AND SCIENCE 27, no. 3 (June 1, 2018): 163–77. http://dx.doi.org/10.33899/edusj.2018.159315.

27

Kunst, Robert M. "TESTING FOR CYCLICAL NON‐STATIONARITY IN AUTOREGRESSIVE PROCESSES." Journal of Time Series Analysis 18, no. 2 (March 1997): 123–35. http://dx.doi.org/10.1111/1467-9892.00042.

28

Grunwald, Gary K., and Rob J. Hyndman. "Smoothing non-Gaussian time series with autoregressive structure." Computational Statistics & Data Analysis 28, no. 2 (August 1998): 171–91. http://dx.doi.org/10.1016/s0167-9473(98)00034-6.

29

Rital, S., A. Meziane, M. Rziza, and D. Aboutajdine. "Two-dimensional non-Gaussian autoregressive model order determination." IEEE Signal Processing Letters 9, no. 12 (December 2002): 426–28. http://dx.doi.org/10.1109/lsp.2002.806061.

30

Sim, C. H. "Modelling non-normal first-order autoregressive time series." Journal of Forecasting 13, no. 4 (August 1994): 369–81. http://dx.doi.org/10.1002/for.3980130403.

31

Cui, Zhenchao, Ziang Chen, Zhaoxin Li, and Zhaoqi Wang. "A Pyramid Semi-Autoregressive Transformer with Rich Semantics for Sign Language Production." Sensors 22, no. 24 (December 8, 2022): 9606. http://dx.doi.org/10.3390/s22249606.

Abstract:
As a typical sequence-to-sequence task, sign language production (SLP) aims to automatically translate spoken language sentences into the corresponding sign language sequences. Existing SLP methods can be classified into two categories: autoregressive and non-autoregressive SLP. Autoregressive methods suffer from high latency and error accumulation caused by the long-term dependence between the current output and the previous poses, while non-autoregressive methods suffer from repetition and omission during the parallel decoding process. To remedy these issues in SLP, we propose a novel method named Pyramid Semi-Autoregressive Transformer with Rich Semantics (PSAT-RS). In PSAT-RS, we first introduce a pyramid semi-autoregressive mechanism that divides the target sequence into groups in a coarse-to-fine manner, which globally keeps the autoregressive property while locally generating target frames. Meanwhile, a relaxed masked attention mechanism is adopted so that the decoder not only captures the pose sequences in the previous groups but also attends to the current group. Finally, considering the importance of spatial-temporal information, we also design a Rich Semantics embedding (RS) module to encode the sequential information along both the time dimension and spatial displacement into the same high-dimensional space. This significantly improves the coordination of joint motion, making the generated sign language videos more natural. Results of our experiments on the RWTH-PHOENIX-Weather-2014T and CSL datasets show that the proposed PSAT-RS is competitive with state-of-the-art autoregressive and non-autoregressive SLP models, achieving a better trade-off between speed and accuracy.
32

Adedotun, A. F., T. O. Olatayo, and G. O. Odekina. "On Non-Linear Non-Gaussian Autoregressive Model with Application to Daily Exchange Rate." Journal of Physics: Conference Series 2199, no. 1 (February 1, 2022): 012031. http://dx.doi.org/10.1088/1742-6596/2199/1/012031.

Abstract:
The distribution most often used in statistical modeling is the Gaussian. But many real-life time series do not satisfy the normality assumptions, and inference from such a model could be misleading. Thus, a reparameterized non-Gaussian Autoregressive (NGAR) model capable of handling non-Gaussian time series was proposed, with Anderson-Darling statistics used to identify the distribution embedded in the time series. To determine the performance of the proposed model, the Nigerian monthly exchange rate (Dollar-Naira selling rate) was analyzed using the proposed and classical autoregressive models. The proposed model was used to determine the joint distribution of the observed series by separating the marginal distribution from the serial dependence. The maximum likelihood estimation (MLE) method was used to obtain an optimal solution in estimating the generalized gamma distribution of the proposed model. The selection criterion used in this study was the Akaike Information Criterion (AIC). The values of the Anderson-Darling statistics revealed that the data were not normally distributed, and the best model was selected using the minimum AIC value. The study concluded that the non-Gaussian Autoregressive model is a very good alternative for analyzing time series data that deviate from the assumptions of normality, in particular for the estimation of its parameters.
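The two statistical steps named in the abstract map onto a short SciPy workflow. A sketch on synthetic data (a plain gamma stands in for the paper's generalized gamma, and the series is simulated, not the Dollar-Naira data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rates = rng.gamma(shape=2.0, scale=150.0, size=300)   # stand-in for exchange rates

# Step 1: Anderson-Darling test against the normal distribution
ad = stats.anderson(rates, dist="norm")
print(f"A^2 = {ad.statistic:.2f}, 5% critical value = {ad.critical_values[2]:.2f}")
# statistic above the critical value -> normality rejected

# Step 2: maximum-likelihood fit of a gamma family to the positive-valued series
shape, loc, scale = stats.gamma.fit(rates, floc=0.0)  # location fixed at 0
print(f"fitted gamma: shape={shape:.2f}, scale={scale:.1f}")
```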
33

Zhang and Li. "An Experiment on Autoregressive and Threshold Autoregressive Models with Non-Gaussian Error with Application to Realized Volatility." Economies 7, no. 2 (June 17, 2019): 58. http://dx.doi.org/10.3390/economies7020058.

Abstract:
This article explores the fitting of Autoregressive (AR) and Threshold AR (TAR) models with a non-Gaussian error structure. This is motivated by the problem of finding a possible probabilistic model for the realized volatility. A Gamma random error is proposed to cater for the non-negativity of the realized volatility. With many good properties, such as consistency even for non-Gaussian errors, the maximum likelihood estimate is applied. Furthermore, a non-gradient numerical Nelder–Mead method for optimization and a penalty method, introduced for the non-negative constraint imposed by the Gamma distribution, are used. In the simulation experiments, the proposed fitting method found the true model with a rather insignificant bias and mean square error (MSE), given the true AR or TAR model. The AR and TAR models with Gamma random error are then tested on empirical realized volatility data of 30 stocks, where one third of the cases are fitted quite well, suggesting that the model may have potential as a supplement for current Gaussian random error models with proper adaptation.
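The fitting recipe (Gamma likelihood, Nelder–Mead, penalty for the constraints) can be reproduced in miniature. A hedged sketch on simulated data; the parameterization and penalty value are assumptions, not the authors' exact setup:

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(1)
phi_true, k_true, theta_true = 0.5, 2.0, 0.3
x = np.zeros(500)
for t in range(1, 500):                      # simulate a Gamma-error AR(1)
    x[t] = phi_true * x[t - 1] + rng.gamma(k_true, theta_true)

def neg_log_lik(params):
    phi, k, theta = params
    if k <= 0 or theta <= 0 or not (0 <= phi < 1):
        return 1e10                          # penalty outside the feasible region
    eps = x[1:] - phi * x[:-1]               # implied innovations
    if np.any(eps <= 0):
        return 1e10                          # Gamma support requires eps > 0
    return -stats.gamma.logpdf(eps, a=k, scale=theta).sum()

# gradient-free Nelder-Mead, as in the abstract
res = optimize.minimize(neg_log_lik, x0=[0.3, 1.0, 1.0], method="Nelder-Mead")
print(res.x)                                 # estimates for (phi, k, theta)
```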
34

Shu, Raphael, Jason Lee, Hideki Nakayama, and Kyunghyun Cho. "Latent-Variable Non-Autoregressive Neural Machine Translation with Deterministic Inference Using a Delta Posterior." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 8846–53. http://dx.doi.org/10.1609/aaai.v34i05.6413.

Abstract:
Although neural machine translation models have reached high translation quality, their autoregressive nature makes inference difficult to parallelize and leads to high translation latency. Inspired by recent refinement-based approaches, we propose LaNMT, a latent-variable non-autoregressive model with continuous latent variables and a deterministic inference procedure. In contrast to existing approaches, we use a deterministic inference algorithm to find the target sequence that maximizes the lower bound to the log-probability. During inference, the length of the translation automatically adapts itself. Our experiments show that the lower bound can be greatly increased by running the inference algorithm, resulting in significantly improved translation quality. Our proposed model closes the performance gap between non-autoregressive and autoregressive approaches on the ASPEC Ja-En dataset with 8.6x faster decoding. On the WMT'14 En-De dataset, our model narrows the gap with the autoregressive baseline to 2.0 BLEU points with 12.5x speedup. By decoding multiple initial latent variables in parallel and rescoring them with a teacher model, the proposed model further brings the gap down to 1.0 BLEU point on the WMT'14 En-De task with 6.8x speedup.
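Written generically (notation assumed, not quoted from the paper), the deterministic inference alternates two maximizations from an initial draft y^(0), each pass raising the lower bound the abstract refers to:

```latex
\begin{align}
  z^{(k+1)} &= \operatorname*{arg\,max}_{z}\; q\bigl(z \mid x,\, y^{(k)}\bigr), \\
  y^{(k+1)} &= \operatorname*{arg\,max}_{y}\; p\bigl(y \mid x,\, z^{(k+1)}\bigr).
\end{align}
```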
35

Guo, Junliang, Xu Tan, Di He, Tao Qin, Linli Xu, and Tie-Yan Liu. "Non-Autoregressive Neural Machine Translation with Enhanced Decoder Input." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3723–30. http://dx.doi.org/10.1609/aaai.v33i01.33013723.

Abstract:
Non-autoregressive translation (NAT) models, which remove the dependence on previous target tokens from the inputs of the decoder, achieve significant inference speedup but at the cost of inferior accuracy compared to autoregressive translation (AT) models. Previous work shows that the quality of the decoder's inputs is important and largely impacts model accuracy. In this paper, we propose two methods to enhance the decoder inputs so as to improve NAT models. The first one directly leverages a phrase table generated by conventional SMT approaches to translate source tokens to target tokens, which are then fed into the decoder as inputs. The second one transforms source-side word embeddings to target-side word embeddings through sentence-level alignment and word-level adversary learning, and then feeds the transformed word embeddings into the decoder as inputs. Experimental results show our method largely outperforms the NAT baseline (Gu et al. 2017) by 5.11 BLEU points on the WMT14 English-German task and 4.72 BLEU points on the WMT16 English-Romanian task.
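The first method amounts to a table lookup before decoding. A toy sketch (hypothetical four-entry phrase table and fallback rule, purely illustrative):

```python
# Source tokens are mapped through an SMT-style phrase table, and the resulting
# target-language tokens serve as the NAT decoder's input.
phrase_table = {"das": "the", "haus": "house", "ist": "is", "klein": "small"}

def decoder_input(source_tokens):
    # unknown words fall back to the source token itself (an assumption here)
    return [phrase_table.get(tok, tok) for tok in source_tokens]

print(decoder_input(["das", "haus", "ist", "klein"]))
# ['the', 'house', 'is', 'small'] -> fed to the NAT decoder as its input
```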
36

Du, Quan, Kai Feng, Chen Xu, Tong Xiao, and Jingbo Zhu. "Non-autoregressive neural machine translation with auxiliary representation fusion." Journal of Intelligent & Fuzzy Systems 41, no. 6 (December 16, 2021): 7229–39. http://dx.doi.org/10.3233/jifs-211105.

Abstract:
Recently, many efforts have been devoted to speeding up neural machine translation models. Among them, the non-autoregressive translation (NAT) model is promising because it removes the sequential dependence on the previously generated tokens and parallelizes the generation process of the entire sequence. On the other hand, the autoregressive translation (AT) model in general achieves a higher translation accuracy than the NAT counterpart. Therefore, a natural idea is to fuse the AT and NAT models to seek a trade-off between inference speed and translation quality. This paper proposes an ARF-NAT model (NAT with auxiliary representation fusion) to introduce the merit of a shallow AT model to an NAT model. Three functions are designed to fuse the auxiliary representation into the decoder of the NAT model. Experimental results show that ARF-NAT outperforms the NAT baseline by 5.26 BLEU scores on the WMT’14 German-English task with a significant speedup (7.58 times) over several strong AT baselines.
37

Yusuf, Marwan, Emmeric Tanghe, Frederic Challita, Pierre Laly, Luc Martens, Davy P. Gaillot, Martine Lienard, and Wout Joseph. "Autoregressive Modeling Approach for Non-Stationary Vehicular Channel Simulation." IEEE Transactions on Vehicular Technology 71, no. 2 (February 2022): 1124–31. http://dx.doi.org/10.1109/tvt.2021.3132859.

38

Kim, Wiback, and Hosung Nam. "End-to-end non-autoregressive fast text-to-speech." Phonetics and Speech Sciences 13, no. 4 (December 2021): 47–53. http://dx.doi.org/10.13064/ksss.2021.13.4.047.

39

Lee, Moa, Junmo Lee, and Joon-Hyuk Chang. "Non-Autoregressive Fully Parallel Deep Convolutional Neural Speech Synthesis." IEEE/ACM Transactions on Audio, Speech, and Language Processing 30 (2022): 1150–59. http://dx.doi.org/10.1109/taslp.2022.3156797.

40

Silva, Maria Eduarda, Isabel Pereira, and Brendan McCabe. "Bayesian Outlier Detection in Non‐Gaussian Autoregressive Time Series." Journal of Time Series Analysis 40, no. 5 (December 2, 2018): 631–48. http://dx.doi.org/10.1111/jtsa.12439.

41

Nassiuma, Dankit. "NON-STATIONARY AUTOREGRESSIVE MOVING-AVERAGE PROCESSES WITH INFINITE VARIANCE." Journal of Time Series Analysis 14, no. 3 (May 1993): 297–304. http://dx.doi.org/10.1111/j.1467-9892.1993.tb00146.x.

42

Zamani Mehryan, S., and A. Sayyareh. "Statistical Inference in Autoregressive Models with Non-negative Residuals." Journal of Statistical Research of Iran 12, no. 1 (September 1, 2015): 83–104. http://dx.doi.org/10.18869/acadpub.jsri.12.1.83.

43

Franses, Philip Hans, and Michael McAleer. "Testing nested and non-nested periodically integrated autoregressive models." Communications in Statistics - Theory and Methods 26, no. 6 (January 1997): 1461–75. http://dx.doi.org/10.1080/03610929708831993.

44

Sengupta, D., and S. Kay. "Efficient estimation of parameters for non-Gaussian autoregressive processes." IEEE Transactions on Acoustics, Speech, and Signal Processing 37, no. 6 (June 1989): 785–94. http://dx.doi.org/10.1109/assp.1989.28052.

45

Kadaba, S. R., S. B. Gelfand, and R. L. Kashyap. "Recursive estimation of images using non-Gaussian autoregressive models." IEEE Transactions on Image Processing 7, no. 10 (1998): 1439–52. http://dx.doi.org/10.1109/83.718484.

46

Di Iorio, Francesca, and Umberto Triacca. "Testing for Granger non-causality using the autoregressive metric." Economic Modelling 33 (July 2013): 120–25. http://dx.doi.org/10.1016/j.econmod.2013.03.023.

47

Braglia, M. "A study of non-linear autoregressive processes: Integral theory." Il Nuovo Cimento D 18, no. 12 (December 1996): 1415–24. http://dx.doi.org/10.1007/bf02453783.

48

Braglia, M. "A study of non-linear autoregressive processes: Differential theory." Il Nuovo Cimento D 18, no. 6 (June 1996): 705–25. http://dx.doi.org/10.1007/bf02457358.

49

Ran, Qiu, Yankai Lin, Peng Li, and Jie Zhou. "Guiding Non-Autoregressive Neural Machine Translation Decoding with Reordering Information." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 15 (May 18, 2021): 13727–35. http://dx.doi.org/10.1609/aaai.v35i15.17618.

Abstract:
Non-autoregressive neural machine translation (NAT) generates each target word in parallel and has achieved promising inference acceleration. However, existing NAT models still have a big gap in translation quality compared to autoregressive neural machine translation models due to the multimodality problem: the target words may come from multiple feasible translations. To address this problem, we propose a novel NAT framework, ReorderNAT, which explicitly models reordering information to guide the decoding of NAT. Specifically, ReorderNAT utilizes deterministic and non-deterministic decoding strategies that leverage reordering information as a proxy for the final translation to encourage the decoder to choose words belonging to the same translation. Experimental results on various widely used datasets show that our proposed model achieves better performance than most existing NAT models, and even achieves translation quality comparable to autoregressive translation models with a significant speedup.
50

Huang, Chenyang, Hao Zhou, Osmar R. Zaïane, Lili Mou, and Lei Li. "Non-autoregressive Translation with Layer-Wise Prediction and Deep Supervision." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (June 28, 2022): 10776–84. http://dx.doi.org/10.1609/aaai.v36i10.21323.

Abstract:
How do we perform efficient inference while retaining high translation quality? Existing neural machine translation models, such as Transformer, achieve high performance, but they decode words one by one, which is inefficient. Recent non-autoregressive translation models speed up the inference, but their quality is still inferior. In this work, we propose DSLP, a highly efficient and high-performance model for machine translation. The key insight is to train a non-autoregressive Transformer with Deep Supervision and feed additional Layer-wise Predictions. We conducted extensive experiments on four translation tasks (both directions of WMT'14 EN-DE and WMT'16 EN-RO). Results show that our approach consistently improves the BLEU scores compared with respective base models. Specifically, our best variant outperforms the autoregressive model on three translation tasks, while being 14.8 times more efficient in inference.
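The core of DSLP as the abstract states it, deep supervision plus layer-wise prediction, fits in a few lines. A minimal sketch (encoder-style layers stand in for the NAT decoder, the output head is shared, and the embedding-mixing rule for feeding predictions forward is an assumption):

```python
import torch
import torch.nn as nn

VOCAB, D, LAYERS = 1000, 64, 4

embed = nn.Embedding(VOCAB, D)
layers = nn.ModuleList(
    nn.TransformerEncoderLayer(d_model=D, nhead=4, batch_first=True)
    for _ in range(LAYERS))
head = nn.Linear(D, VOCAB)                   # shared prediction head
loss_fn = nn.CrossEntropyLoss()

h = torch.randn(8, 10, D)                    # stand-in decoder states for a NAT input
tgt = torch.randint(0, VOCAB, (8, 10))

loss = 0.0
for layer in layers:
    h = layer(h)
    logits = head(h)                         # layer-wise prediction
    loss = loss + loss_fn(logits.reshape(-1, VOCAB), tgt.reshape(-1))  # deep supervision
    h = h + embed(logits.argmax(-1))         # feed this layer's prediction forward
loss.backward()
```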