A selection of scholarly literature on the topic "Subtitling, Speech Translation, Neural Machine Translation, Evaluation"

Format your reference in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the lists of relevant articles, books, dissertations, conference papers, and other scholarly sources on the topic "Subtitling, Speech Translation, Neural Machine Translation, Evaluation".

Next to every work in the reference list you will find an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, provided these details are available in the source's metadata.

Journal articles on the topic "Subtitling, Speech Translation, Neural Machine Translation, Evaluation"

1

Yan, Li. "Real-Time Automatic Translation Algorithm for Chinese Subtitles in Media Playback Using Knowledge Base." Mobile Information Systems 2022 (June 18, 2022): 1–11. http://dx.doi.org/10.1155/2022/5245035.

Full text of the source
Abstract:
Currently, speech technology allows for simultaneous subtitling of live television programs using speech recognition and the respeaking approach. Although many previous studies on the quality of live subtitling utilizing voice recognition have been proposed, little attention has been paid to the quantitative elements of subtitles. Due to the high performance of neural machine translation (NMT), it has become the standard machine translation method. A data-driven translation approach requires high-quality, large-scale training data and powerful computing resources to achieve good performance. However, data-driven translation will face challenges when translating languages with limited resources. This paper’s research work focuses on how to integrate linguistic knowledge into the NMT model to improve the translation performance and quality of the NMT system. A method of integrating semantic concept information in the NMT system is proposed to address the problem of out-of-set words and low-frequency terms in the NMT system. This research also provides an NMT-centered read modeling and decoding approach integrating an external knowledge base. The experimental results show that the proposed strategy can effectively increase the MT system’s translation performance.
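The abstract does not give implementation details, but one common way to inject semantic-concept knowledge into an NMT pipeline is to replace rare or out-of-vocabulary source words with a broader class tag taken from an external knowledge base before training and decoding. The sketch below illustrates only that general idea; the concept lexicon, the vocabulary, and the example sentence are hypothetical and are not taken from the cited system.

def tag_rare_words(sentence, vocab, concept_lexicon):
    """Replace tokens outside the training vocabulary with a semantic class tag."""
    out = []
    for tok in sentence.split():
        if tok in vocab:
            out.append(tok)
        else:
            # Fall back to a coarse concept tag from the knowledge base, e.g. <CITY>, <DRUG>.
            out.append(concept_lexicon.get(tok, "<UNK>"))
    return " ".join(out)

# Hypothetical vocabulary and knowledge-base lookup; not taken from the paper.
vocab = {"the", "patient", "was", "given", "in"}
lexicon = {"metformin": "<DRUG>", "beijing": "<CITY>"}
print(tag_rare_words("the patient was given metformin in beijing", vocab, lexicon))
# -> "the patient was given <DRUG> in <CITY>"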
APA, Harvard, Vancouver, ISO, and other styles
2

Narayan, Ravi, V. P. Singh, and S. Chakraverty. "Quantum Neural Network Based Machine Translator for Hindi to English." Scientific World Journal 2014 (2014): 1–8. http://dx.doi.org/10.1155/2014/485737.

Full text of the source
Abstract:
This paper presents the machine learning based machine translation system for Hindi to English, which learns the semantically correct corpus. The quantum neural based pattern recognizer is used to recognize and learn the pattern of corpus, using the information of part of speech of individual word in the corpus, like a human. The system performs the machine translation using its knowledge gained during the learning by inputting the pair of sentences of Devnagri-Hindi and English. To analyze the effectiveness of the proposed approach, 2600 sentences have been evaluated during simulation and evaluation. The accuracy achieved on BLEU score is 0.7502, on NIST score is 6.5773, on ROUGE-L score is 0.9233, and on METEOR score is 0.5456, which is significantly higher in comparison with Google Translation and Bing Translation for Hindi to English Machine Translation.
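For readers who want to reproduce this style of automatic evaluation, the sketch below shows how a corpus-level BLEU score can be computed with the sacrebleu library. It is a generic illustration with an invented hypothesis/reference pair, not the authors' evaluation code; note that sacrebleu reports BLEU on a 0-100 scale, while the scores quoted above are on a 0-1 scale, and that NIST, ROUGE-L, and METEOR have separate implementations (e.g. in NLTK) not shown here.

# pip install sacrebleu  (assumption: sacrebleu is available; the sentences are invented)
import sacrebleu

hypotheses = ["the boy is playing in the garden"]   # system translations
references = ["the boy plays in the garden"]        # one reference per hypothesis

# corpus_bleu takes a list of hypothesis strings and a list of reference streams.
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"BLEU: {bleu.score:.2f}")                    # reported on a 0-100 scale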
APA, Harvard, Vancouver, ISO, and other styles
3

Ganesh, Preetham, Bharat S. Rawal, Alexander Peter, and Andi Giri. "POS-Tagging based Neural Machine Translation System for European Languages using Transformers." WSEAS TRANSACTIONS ON INFORMATION SCIENCE AND APPLICATIONS 18 (May 24, 2021): 26–33. http://dx.doi.org/10.37394/23209.2021.18.5.

Full text of the source
Abstract:
The interaction between human beings has always faced different kinds of difficulties. One of those difficulties is the language barrier. It would be a tedious task for someone to learn all the syllables in a new language in a short period and converse with a native speaker without grammatical errors. Moreover, having a language translator at all times would be intrusive and expensive. We propose a novel approach to Neural Machine Translation (NMT) system using interlanguage word similarity-based model training and Part-Of-Speech (POS) Tagging based model testing. We compare these approaches using two classical architectures: Luong Attention-based Sequence-to-Sequence architecture and Transformer based model. The sentences for the Luong Attention-based Sequence-to-Sequence were tokenized using SentencePiece tokenizer. The sentences for the Transformer model were tokenized using Subword Text Encoder. Three European languages were selected for modeling, namely, Spanish, French, and German. The datasets were downloaded from multiple sources such as Europarl Corpus, Paracrawl Corpus, and Tatoeba Project Corpus. Sparse Categorical Cross-Entropy was the evaluation metric during the training stage, and during the testing stage, the Bilingual Evaluation Understudy (BLEU) Score, Precision Score, and Metric for Evaluation of Translation with Explicit Ordering (METEOR) score were the evaluation metrics.
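As a point of reference for the subword tokenization step mentioned above, the sketch below trains a small SentencePiece model and applies it to a sentence. The file names, vocabulary size, and example sentence are illustrative assumptions and do not reproduce the authors' setup.

# pip install sentencepiece  (assumption: the sentencepiece package is installed)
import sentencepiece as spm

# Train a small subword model on a plain-text corpus file, one sentence per line.
spm.SentencePieceTrainer.train(
    input="train.es",            # hypothetical corpus file
    model_prefix="es_subword",
    vocab_size=8000,
    model_type="unigram",
)

sp = spm.SentencePieceProcessor(model_file="es_subword.model")
pieces = sp.encode("El gato duerme en el sofá.", out_type=str)
ids = sp.encode("El gato duerme en el sofá.", out_type=int)
print(pieces)  # subword pieces fed to the encoder
print(ids)     # integer ids used for embedding lookup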
APA, Harvard, Vancouver, ISO, and other styles
4

Belinkov, Yonatan, Nadir Durrani, Fahim Dalvi, Hassan Sajjad, and James Glass. "On the Linguistic Representational Power of Neural Machine Translation Models." Computational Linguistics 46, no. 1 (March 2020): 1–52. http://dx.doi.org/10.1162/coli_a_00367.

Full text of the source
Abstract:
Despite the recent success of deep neural networks in natural language processing and other spheres of artificial intelligence, their interpretability remains a challenge. We analyze the representations learned by neural machine translation (NMT) models at various levels of granularity and evaluate their quality through relevant extrinsic properties. In particular, we seek answers to the following questions: (i) How accurately is word structure captured within the learned representations, which is an important aspect in translating morphologically rich languages? (ii) Do the representations capture long-range dependencies, and effectively handle syntactically divergent languages? (iii) Do the representations capture lexical semantics? We conduct a thorough investigation along several parameters: (i) Which layers in the architecture capture each of these linguistic phenomena; (ii) How does the choice of translation unit (word, character, or subword unit) impact the linguistic properties captured by the underlying representations? (iii) Do the encoder and decoder learn differently and independently? (iv) Do the representations learned by multilingual NMT models capture the same amount of linguistic information as their bilingual counterparts? Our data-driven, quantitative evaluation illuminates important aspects in NMT models and their ability to capture various linguistic phenomena. We show that deep NMT models trained in an end-to-end fashion, without being provided any direct supervision during the training process, learn a non-trivial amount of linguistic information. Notable findings include the following observations: (i) Word morphology and part-of-speech information are captured at the lower layers of the model; (ii) In contrast, lexical semantics or non-local syntactic and semantic dependencies are better represented at the higher layers of the model; (iii) Representations learned using characters are more informed about word-morphology compared to those learned using subword units; and (iv) Representations learned by multilingual models are richer compared to bilingual models.
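The probing methodology behind these findings can be illustrated with a minimal sketch: train a simple classifier on frozen encoder activations to predict a linguistic label such as a part-of-speech tag, and read its accuracy as a measure of how much of that property the representation encodes. The arrays below are random stand-ins for real NMT hidden states and labels; this is a schematic example, not the authors' code.

# A minimal probing-classifier sketch (assumes numpy and scikit-learn are available).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
hidden_states = rng.normal(size=(2000, 512))   # stand-in for frozen NMT encoder states
pos_tags = rng.integers(0, 12, size=2000)      # stand-in for word-level POS labels

X_train, X_test, y_train, y_test = train_test_split(
    hidden_states, pos_tags, test_size=0.2, random_state=0
)

probe = LogisticRegression(max_iter=1000)       # the probe is kept deliberately simple
probe.fit(X_train, y_train)
print("probing accuracy:", probe.score(X_test, y_test))
# With random features this stays near chance; with real layer activations,
# higher accuracy suggests the layer encodes more POS information.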
APA, Harvard, Vancouver, ISO, and other styles
5

Arora, Karunesh Kumar, and Shyam Sunder Agrawal. "Source-side Reordering to Improve Machine Translation between Languages with Distinct Word Orders." ACM Transactions on Asian and Low-Resource Language Information Processing 20, no. 4 (July 31, 2021): 1–18. http://dx.doi.org/10.1145/3448252.

Full text of the source
Abstract:
English and Hindi have significantly different word orders. English follows the subject-verb-object (SVO) order, while Hindi primarily follows the subject-object-verb (SOV) order. This difference poses challenges to modeling this pair of languages for translation. In phrase-based translation systems, word reordering is governed by the language model, the phrase table, and reordering models. Reordering in such systems is generally achieved during decoding by transposing words within a defined window. These systems can handle local reorderings, and while some phrase-level reorderings are carried out during the formation of phrases, they are weak in learning long-distance reorderings. To overcome this weakness, researchers have used reordering as a step in pre-processing to render the reordered source sentence closer to the target language in terms of word order. Such approaches focus on using parts-of-speech (POS) tag sequences and reordering the syntax tree by using grammatical rules, or through head finalization. This study shows that mere head finalization is not sufficient for the reordering of sentences in the English-Hindi language pair. It describes various grammatical constructs and presents a comparative evaluation of reorderings with the original and the head-finalized representations. The impact of the reordering on the quality of translation is measured through the BLEU score in phrase-based statistical systems and neural machine translation systems. A significant gain in BLEU score was noted for reorderings in different grammatical constructs.
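To make the idea of source-side pre-ordering concrete, here is a toy rule that moves the verb group after the rest of the clause in a POS-tagged English sentence, pushing SVO towards the SOV order used by Hindi. Real systems reorder parse trees with grammatical rules or head finalization; this fragment is only a schematic illustration, not the reordering procedure evaluated in the paper.

def reorder_svo_to_sov(tagged):
    """Toy pre-ordering: move the verb group after the rest of the clause.

    `tagged` is a list of (word, POS) pairs; anything tagged 'V' is treated as the verb group.
    """
    verbs = [word for word, pos in tagged if pos == "V"]
    others = [word for word, pos in tagged if pos != "V"]
    return others + verbs

sentence = [("Ram", "N"), ("ate", "V"), ("an", "DET"), ("apple", "N")]
print(" ".join(reorder_svo_to_sov(sentence)))   # -> "Ram an apple ate", mirroring Hindi word order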
APA, Harvard, Vancouver, ISO, and other styles
6

P., Dr Karrupusamy. "Analysis of Neural Network Based Language Modeling." March 2020 2, no. 1 (March 30, 2020): 53–63. http://dx.doi.org/10.36548/jaicn.2020.1.006.

Full text of the source
Abstract:
Language modelling, usually referred to as statistical language modelling, is a fundamental and core process of natural language processing. Language modelling is also vital to other natural language processing tasks, such as sentence completion, automatic speech recognition, statistical machine translation, and text generation. The success of practical natural language processing relies heavily on the quality of the language model. Over previous decades, language modelling has been studied in research fields such as linguistics, psychology, speech recognition, data compression, neuroscience, and machine translation. Since neural networks are a very good choice for high-quality language modelling, the paper presents an analysis of neural networks for language modelling. Using datasets such as the Penn Treebank, the Billion Word Benchmark, and the Wiki Test, the neural network models are evaluated on word error rate, perplexity, and bilingual evaluation understudy (BLEU) scores to identify the optimal model.
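Perplexity, one of the evaluation measures mentioned above, is simply the exponentiated average negative log-probability that the model assigns to the test tokens. A minimal calculation under that definition looks like this (the per-token probabilities are invented for illustration):

import math

def perplexity(token_probs):
    """Perplexity = exp of the mean negative log-probability over the test tokens."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# Hypothetical per-token probabilities produced by a language model on a test sentence.
probs = [0.20, 0.05, 0.40, 0.10, 0.25]
print(f"perplexity: {perplexity(probs):.2f}")   # lower is better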
APA, Harvard, Vancouver, ISO, and other styles
7

P., Dr Karrupusamy. "Analysis of Neural Network Based Language Modeling." March 2020 2, no. 1 (March 30, 2020): 53–63. http://dx.doi.org/10.36548/jaicn.2020.3.006.

Full text of the source
Abstract:
Language modelling, usually referred to as statistical language modelling, is a fundamental and core process of natural language processing. Language modelling is also vital to other natural language processing tasks, such as sentence completion, automatic speech recognition, statistical machine translation, and text generation. The success of practical natural language processing relies heavily on the quality of the language model. Over previous decades, language modelling has been studied in research fields such as linguistics, psychology, speech recognition, data compression, neuroscience, and machine translation. Since neural networks are a very good choice for high-quality language modelling, the paper presents an analysis of neural networks for language modelling. Using datasets such as the Penn Treebank, the Billion Word Benchmark, and the Wiki Test, the neural network models are evaluated on word error rate, perplexity, and bilingual evaluation understudy (BLEU) scores to identify the optimal model.
APA, Harvard, Vancouver, ISO, and other styles
8

Xue, Nan. "Analysis Model of Spoken English Evaluation Algorithm Based on Intelligent Algorithm of Internet of Things." Computational Intelligence and Neuroscience 2022 (March 27, 2022): 1–8. http://dx.doi.org/10.1155/2022/8469945.

Full text of the source
Abstract:
With the in-depth promotion of the national strategy for integrating artificial intelligence technology with industrial development, speech recognition, as an important medium of human-computer interaction, has received extensive attention and motivated research in both industry and academia. However, existing accurate speech recognition products are built on massive data platforms, which suffer from slow response times and security risks; this makes it difficult for them to serve applications that require timely speech translation under strict response-time and network-security requirements when the network is unstable or insecure. On this basis, this paper studies an analysis model for a spoken English evaluation algorithm based on an intelligent Internet of Things algorithm within speech recognition technology. Firstly, based on automatic machine learning and a lightweight learning strategy, a lightweight automatic speech recognition deep neural network adapted to edge computing power is proposed. Secondly, the quantitative evaluation of the intelligent Internet of Things classification algorithm and big data analysis in this system is described. The evaluation adopts a method based on spoken English features, and the intelligent Internet of Things classification algorithm and big data analysis strategy are used to assess the accuracy of spoken English. Finally, the experimental results show that the spoken English feature recognition system based on the intelligent Internet of Things classification algorithm and big data analysis offers good reliability, high intelligence, and strong resistance to subjective factors, which demonstrates the advantages of the intelligent Internet of Things classification algorithm and big data analysis in English feature recognition.
APA, Harvard, Vancouver, ISO, and other styles
9

Wu, Long, Ta Li, Li Wang, and Yonghong Yan. "Improving Hybrid CTC/Attention Architecture with Time-Restricted Self-Attention CTC for End-to-End Speech Recognition." Applied Sciences 9, no. 21 (October 31, 2019): 4639. http://dx.doi.org/10.3390/app9214639.

Full text of the source
Abstract:
As demonstrated in hybrid connectionist temporal classification (CTC)/Attention architecture, joint training with a CTC objective is very effective to solve the misalignment problem existing in the attention-based end-to-end automatic speech recognition (ASR) framework. However, the CTC output relies only on the current input, which leads to the hard alignment issue. To address this problem, this paper proposes the time-restricted attention CTC/Attention architecture, which integrates an attention mechanism with the CTC branch. “Time-restricted” means that the attention mechanism is conducted on a limited window of frames to the left and right. In this study, we first explore time-restricted location-aware attention CTC/Attention, establishing the proper time-restricted attention window size. Inspired by the success of self-attention in machine translation, we further introduce the time-restricted self-attention CTC/Attention that can better model the long-range dependencies among the frames. Experiments with wall street journal (WSJ), augmented multiparty interaction (AMI), and switchboard (SWBD) tasks demonstrate the effectiveness of the proposed time-restricted self-attention CTC/Attention. Finally, to explore the robustness of this method to noise and reverberation, we join a trained neural beamformer frontend with the time-restricted attention CTC/Attention ASR backend in the CHIME-4 dataset. The reduction of word error rate (WER) and the increase of perceptual evaluation of speech quality (PESQ) confirm the effectiveness of this framework.
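The "time-restricted" idea described above amounts to masking self-attention so that each frame attends only to a fixed window of neighbouring frames. Below is a bare-bones numpy sketch of such a banded attention mask; it omits the learned query/key/value projections and multiple heads, and is a generic illustration of the mechanism rather than the authors' implementation.

import numpy as np

def time_restricted_self_attention(x, window=3):
    """Self-attention over frames x of shape (T, d), where frame i only attends to |i - j| <= window."""
    T, d = x.shape
    scores = x @ x.T / np.sqrt(d)                        # (T, T) dot-product scores
    idx = np.arange(T)
    mask = np.abs(idx[:, None] - idx[None, :]) <= window
    scores = np.where(mask, scores, -np.inf)             # block frames outside the window
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)        # row-wise softmax
    return weights @ x                                   # context vector per frame

frames = np.random.default_rng(0).normal(size=(10, 8))   # stand-in for acoustic frame features
print(time_restricted_self_attention(frames, window=2).shape)   # -> (10, 8)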
APA, Harvard, Vancouver, ISO, and other styles
10

Karakanta, Alina. "Experimental research in automatic subtitling." Translation Spaces, June 14, 2022. http://dx.doi.org/10.1075/ts.21021.kar.

Full text of the source
Abstract:
Recent developments in neural machine translation, and especially speech translation, are gradually but firmly entering the field of audiovisual translation (AVT). Automation in subtitling is extending from a machine translation (MT) component to fully automatic subtitling, which comprises MT, auto-spotting and automatic segmentation. The rise of this new paradigm renders MT-oriented experimental designs inadequate for the evaluation and investigation of automatic subtitling, since they fail to encompass the multimodal nature and technical requirements of subtitling. This paper highlights the methodological gaps to be addressed by multidisciplinary efforts in order to overcome these inadequacies and obtain metrics and methods that lead to rigorous experimental research in automatic subtitling. It presents a review of previous experimental designs in MT for subtitling, identifies their limitations for conducting research under the new paradigm and proposes a set of recommendations towards achieving replicability and reproducibility in experimental research at the crossroads between AVT and MT.
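One of the "technical requirements" the paper argues MT-style evaluation ignores can be checked mechanically: whether a subtitle respects typical length and reading-speed constraints. The sketch below tests two commonly cited limits (42 characters per line and 21 characters per second); the exact thresholds vary between guidelines and are assumptions here, not values taken from the article.

def check_subtitle(lines, start_sec, end_sec, max_cpl=42, max_cps=21.0, max_lines=2):
    """Return pass/fail checks for one subtitle block against typical formal constraints."""
    duration = end_sec - start_sec
    chars = sum(len(line) for line in lines)
    return {
        "line_count_ok": len(lines) <= max_lines,
        "cpl_ok": all(len(line) <= max_cpl for line in lines),
        "cps_ok": duration > 0 and (chars / duration) <= max_cps,
    }

# Hypothetical subtitle block: two lines displayed for 1.8 seconds.
print(check_subtitle(["I never thought it would", "come to this."], 12.0, 13.8))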
APA, Harvard, Vancouver, ISO, and other styles

Dissertations on the topic "Subtitling, Speech Translation, Neural Machine Translation, Evaluation"

1

Karakanta, Alina. "Automatic subtitling: A new paradigm." Doctoral thesis, Università degli studi di Trento, 2022. https://hdl.handle.net/11572/356701.

Full text of the source
Abstract:
Audiovisual Translation (AVT) is a field where Machine Translation (MT) has long found limited success mainly due to the multimodal nature of the source and the formal requirements of the target text. Subtitling is the predominant AVT type, quickly and easily providing access to the vast amounts of audiovisual content becoming available daily. Automation in subtitling has so far focused on MT systems which translate source language subtitles, already transcribed and timed by humans. With recent developments in speech translation (ST), the time is ripe for extended automation in subtitling, with end-to-end solutions for obtaining target language subtitles directly from the source speech. In this thesis, we address the key steps for accomplishing the new paradigm of automatic subtitling: data, models and evaluation. First, we address the lack of representative data by compiling MuST-Cinema, a speech-to-subtitles corpus. Segmenter models trained on MuST-Cinema accurately split sentences into subtitles, and enable automatic data augmentation techniques. Having representative data at hand, we move to developing direct ST models for three scenarios: offline subtitling, dual subtitling, live subtitling. Lastly, we propose methods for evaluating subtitle-specific aspects, such as metrics for subtitle segmentation, a product- and process-based exploration of the effect of spotting changes in the subtitle post-editing process, and finally, a comprehensive survey on subtitlers' user experience and views on automatic subtitling. Our findings show the potential of speech technologies for extending automation in subtitling to provide multilingual access to information and communication.
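MuST-Cinema, the corpus compiled in this thesis, marks subtitle boundaries with special break symbols for line breaks and subtitle-block breaks. Assuming the commonly used <eol> and <eob> markers, the sketch below turns a segmenter's tagged output back into subtitle blocks; the example sentence is invented and the marker names should be checked against the corpus documentation.

def tags_to_subtitles(tagged_text, eol="<eol>", eob="<eob>"):
    """Convert a break-annotated sentence into a list of subtitle blocks (lists of lines)."""
    blocks, lines, current = [], [], []
    for token in tagged_text.split():
        if token == eol:                    # end of a line within the same subtitle
            lines.append(" ".join(current))
            current = []
        elif token == eob:                  # end of the subtitle block
            lines.append(" ".join(current))
            blocks.append(lines)
            lines, current = [], []
        else:
            current.append(token)
    if current:                             # flush any trailing text without a closing tag
        lines.append(" ".join(current))
    if lines:
        blocks.append(lines)
    return blocks

tagged = "I never thought <eol> it would come to this. <eob> But here we are. <eob>"
print(tags_to_subtitles(tagged))
# -> [['I never thought', 'it would come to this.'], ['But here we are.']]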
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Subtitling, Speech Translation, Neural Machine Translation, Evaluation"

1

Primandhika, Restu Bias, Muhammad Nadzeri Munawar, and Aceng Ruhendi Saifullah. "Experiment on a Transformer Model Indonesian-to-Sundanese Neural Machine Translation with Sundanese Speech Level Evaluation." In Thirteenth Conference on Applied Linguistics (CONAPLIN 2020). Paris, France: Atlantis Press, 2021. http://dx.doi.org/10.2991/assehr.k.210427.069.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles