
Journal articles on the topic "Tamil speech recognition"

Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles


Consult the 47 best journal articles on the topic "Tamil speech recognition".

Next to every work in the bibliography there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a ".pdf" file and read its abstract online, whenever such details are available in the metadata.

Browse journal articles from a wide variety of disciplines and compile an appropriate bibliography.

1

Rojathai, S., and M. Venkatesulu. "Investigation of ANFIS and FFBNN Recognition Methods Performance in Tamil Speech Word Recognition". International Journal of Software Innovation 2, no. 2 (April 2014): 43–53. http://dx.doi.org/10.4018/ijsi.2014040103.

Full text source
Abstract:
In speech word recognition systems, feature extraction and recognition play the most significant roles, and many feature extraction and recognition methods are available in existing systems. The most recent Tamil speech word recognition system achieved high word recognition performance with PAC-ANFIS compared to earlier Tamil speech word recognition systems, so an investigation of speech word recognition with various recognition methods is needed to establish their performance. This paper presents that investigation using two well-known artificial intelligence methods: the Feed Forward Back Propagation Neural Network (FFBNN) and the Adaptive Neuro Fuzzy Inference System (ANFIS). The Tamil speech word recognition system with PAC-FFBNN is analyzed in terms of statistical measures and Word Recognition Rate (WRR) and compared with PAC-ANFIS and other existing Tamil speech word recognition systems.
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Rojathai, S., and M. Venkatesulu. "Tamil Speech Word Recognition System with Aid of ANFIS and Dynamic Time Warping (DTW)". Journal of Computational and Theoretical Nanoscience 13, no. 10 (October 1, 2016): 6719–27. http://dx.doi.org/10.1166/jctn.2016.5619.

Full text source
Abstract:
Existing Tamil speech word recognition techniques succeed in detecting speech words from a speech word database using MFCC (Mel Frequency Cepstral Coefficient) features and an FFBNN (Feed Forward Back Propagation Neural Network), but their recognition accuracy falls short of expectations. This paper therefore presents a Tamil speech word recognition technique that addresses the problem with novel features and an ANFIS (Adaptive Neuro Fuzzy Inference System) based recognition method. Preprocessing is first performed to reduce the noise in the input speech signals. Feature vectors are then extracted from the preprocessed signals and supplied to the ANFIS. Three novel features, namely Energy Entropy, Short Time Energy and Zero Crossing Rate, are extracted from the Tamil speech word signals and used in the word recognition procedure, in which the ANFIS is guided by the features from the feature extraction task and the recognition efficiency is validated on a set of test speech words. During the testing stage, dynamic time warping between the training and test word feature values is computed to obtain exact results. The performance results show that the proposed technique recognizes the input Tamil speech words efficiently and with high accuracy, and its performance is assessed and compared with recent Tamil speech word recognition techniques.
Styles: APA, Harvard, Vancouver, ISO, etc.
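The three features named in the abstract above (energy entropy, short-time energy, zero-crossing rate) and the DTW comparison step can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the 16 kHz sampling rate, the 25 ms/10 ms framing, and the word arrays are assumptions.

```python
import numpy as np

def frame_signal(x, frame_len=400, hop=160):
    """Split a 1-D signal into overlapping frames (25 ms / 10 ms at 16 kHz assumed)."""
    n = 1 + max(0, (len(x) - frame_len) // hop)
    return np.stack([x[i * hop:i * hop + frame_len] for i in range(n)])

def short_time_energy(frames):
    return np.sum(frames ** 2, axis=1)

def zero_crossing_rate(frames):
    return np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)

def energy_entropy(frames, n_blocks=10):
    """Entropy of the energy distribution over sub-blocks of each frame."""
    ent = []
    for f in frames:
        blocks = np.array_split(f, n_blocks)
        e = np.array([np.sum(b ** 2) for b in blocks]) + 1e-12
        p = e / e.sum()
        ent.append(-np.sum(p * np.log2(p)))
    return np.array(ent)

def dtw_distance(a, b):
    """Plain dynamic time warping between two feature sequences (frames x dims)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def word_features(x):
    """Per-frame feature vectors [STE, ZCR, energy entropy] for one spoken word."""
    frames = frame_signal(x)
    return np.column_stack([short_time_energy(frames),
                            zero_crossing_rate(frames),
                            energy_entropy(frames)])

# Hypothetical usage: test_word and reference are 1-D numpy arrays of 16 kHz samples.
# score = dtw_distance(word_features(test_word), word_features(reference))
```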
3

Nelapati, Ratna Kanth, and Saraswathi Selvarajan. "Affect Recognition in Human Emotional Speech using Probabilistic Support Vector Machines". International Journal on Recent and Innovation Trends in Computing and Communication 10, no. 2s (December 31, 2022): 166–73. http://dx.doi.org/10.17762/ijritcc.v10i2s.5924.

Full text source
Abstract:
The problem of automatically inferring human emotional state from speech has become one of the central problems in Man Machine Interaction (MMI). Though Support Vector Machines (SVMs) were used in several works for emotion recognition from speech, the potential of probabilistic SVMs for this task has not been explored. The emphasis of the current work is on how to use probabilistic SVMs for the efficient recognition of emotions from speech. Emotional speech corpora for two Dravidian languages, Telugu and Tamil, were constructed for assessing the recognition accuracy of probabilistic SVMs. The recognition accuracy of the proposed model is analyzed using both the Telugu and Tamil emotional speech corpora and compared with three existing works. Experimental results indicate that the proposed model is significantly better than the existing methods.
Styles: APA, Harvard, Vancouver, ISO, etc.
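As background to the entry above: scikit-learn's SVC exposes Platt-scaled class probabilities via probability=True, which is one standard way to build a probabilistic SVM classifier. The feature matrix, labels, and hyperparameters below are placeholders, not the paper's Telugu/Tamil corpora or settings.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: utterance-level acoustic features (e.g., MFCC statistics), y: emotion labels.
# Random placeholders stand in for the paper's private corpora.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 39))
y = rng.integers(0, 4, size=200)          # assumed 4 emotion classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# probability=True enables Platt-scaled class probabilities on top of the SVM margins.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, probability=True))
model.fit(X_tr, y_tr)

proba = model.predict_proba(X_te)          # per-class probabilities
pred = proba.argmax(axis=1)
print("accuracy:", (pred == y_te).mean())
```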
4

Hashim Changrampadi, Mohamed, A. Shahina, M. Badri Narayanan, and A. Nayeemulla Khan. "End-to-End Speech Recognition of Tamil Language". Intelligent Automation & Soft Computing 32, no. 2 (2022): 1309–23. http://dx.doi.org/10.32604/iasc.2022.022021.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Thangarajan, R., A. M. Natarajan, and M. Selvam. "Syllable modeling in continuous speech recognition for Tamil language". International Journal of Speech Technology 12, no. 1 (March 2009): 47–57. http://dx.doi.org/10.1007/s10772-009-9058-0.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
6

Suriya, Dr S., S. Nivetha, P. Pavithran, Ajay Venkat S., Sashwath K. G., and Elakkiya G. "Effective Tamil Character Recognition Using Supervised Machine Learning Algorithms". EAI Endorsed Transactions on e-Learning 8, no. 2 (February 8, 2023): e1. http://dx.doi.org/10.4108/eetel.v8i2.3025.

Full text source
Abstract:
Computational linguistics is the branch of linguistics in which the techniques of computer science are applied to the analysis and synthesis of language and speech. Its main goals include text-to-speech conversion, speech-to-text conversion, and translating from one language to another. Character recognition is one part of computational linguistics and has been an active and challenging research area in image processing and pattern recognition. Character recognition methodology mainly focuses on recognizing characters irrespective of the difficulties that arise from variations in writing style. The aim of this project is to perform character recognition for Tamil, a south Indian language with a complex script, using a supervised algorithm that increases recognition accuracy. The novelty of this system is that it recognizes the characters of the predominant Tamil language. The proposed approach is capable of recognizing text where traditional character recognition systems fail, notably in the presence of blur, low contrast, low resolution, high image noise, and other distortions. The system uses a Convolutional Neural Network, which extracts local features more accurately because it restricts the receptive fields of the hidden layers to be local. Convolutional Neural Networks are multi-layer neural networks trained with the back-propagation algorithm and are used to recognize visual patterns directly from pixel images with minimal preprocessing. The trained network is used for recognition and classification. The results show that the proposed system yields good recognition rates.
Styles: APA, Harvard, Vancouver, ISO, etc.
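A minimal convolutional classifier of the kind the abstract above describes can be sketched in Keras. The image size (64x64 grayscale), the number of character classes, and the commented training call are assumptions for illustration only.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 156        # assumed number of Tamil character classes; adjust to the dataset
IMG_SHAPE = (64, 64, 1)  # assumed grayscale glyph images

def build_char_cnn():
    """Small CNN with local receptive fields, as the abstract describes."""
    return models.Sequential([
        layers.Conv2D(32, 3, activation="relu", padding="same", input_shape=IMG_SHAPE),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.3),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

model = build_char_cnn()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, validation_split=0.1, epochs=20)
```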
7

Sarkar, Swagata, Sanjana R, Rajalakshmi S, and Harini T J. "Simulation and detection of tamil speech accent using modified mel frequency cepstral coefficient algorithm". International Journal of Engineering & Technology 7, no. 3.3 (June 8, 2018): 426. http://dx.doi.org/10.14419/ijet.v7i2.33.14202.

Full text source
Abstract:
Automatic speech recognition is a topic of interest for many researchers. Since many online courses have come into the picture, recent research has concentrated on speech accent recognition, and much work has been done in this field. In this paper, speech accent recognition of Tamil speech from different zones of Tamil Nadu is addressed. Hidden Markov Model (HMM) and Viterbi algorithms are very popular, and researchers have worked with Mel Frequency Cepstral Coefficients (MFCC) to identify speech as well as speech accent. In this paper, speech accent features are identified by a modified MFCC algorithm, and the features are classified with a back-propagation algorithm.
Styles: APA, Harvard, Vancouver, ISO, etc.
8

Geetha, K., and R. Vadivel. "Phoneme Segmentation of Tamil Speech Signals Using Spectral Transition Measure". Oriental journal of computer science and technology 10, no. 1 (March 4, 2017): 114–19. http://dx.doi.org/10.13005/ojcst/10.01.15.

Full text source
Abstract:
The process of identifying the end points of the acoustic units of a speech signal is called speech segmentation. Speech recognition systems can be designed using sub-word units such as the phoneme, the smallest unit of the language; phonemes are context dependent, and their boundaries are tedious to find. Automated phoneme segmentation has been carried out in research using short-term energy, convex hull, formants, the Spectral Transition Measure (STM), group delay functions, the Bayesian Information Criterion, etc. In this work, STM is used to find the phoneme boundaries of Tamil speech utterances. A Tamil spoken word dataset was prepared with 30 words uttered by 4 native speakers recorded with a high-quality microphone. The performance of the segmentation is analysed and the results are presented.
Styles: APA, Harvard, Vancouver, ISO, etc.
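The Spectral Transition Measure used in the entry above is commonly computed as the mean squared slope of a linear regression fitted to each cepstral coefficient over a short window, with phoneme-boundary candidates at the local maxima of that contour. The sketch below follows that common definition; the MFCC settings, window size, peak-picking parameters, and file name are assumptions, not the paper's values.

```python
import numpy as np
import librosa
from scipy.signal import find_peaks

def spectral_transition_measure(y, sr, n_mfcc=13, win=5):
    """STM per frame: mean squared regression slope of each MFCC over +/- win frames."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc, hop_length=160).T  # (frames, coeffs)
    t = np.arange(-win, win + 1)          # centered time index for the regression
    denom = np.sum(t ** 2)
    stm = np.zeros(mfcc.shape[0])
    for i in range(win, mfcc.shape[0] - win):
        segment = mfcc[i - win:i + win + 1]        # (2*win+1, coeffs)
        slopes = (t @ segment) / denom             # least-squares slope per coefficient
        stm[i] = np.mean(slopes ** 2)
    return stm

# Hypothetical usage: peaks of the STM contour are candidate phoneme boundaries.
y, sr = librosa.load("tamil_word.wav", sr=16000)   # placeholder file name
stm = spectral_transition_measure(y, sr)
peaks, _ = find_peaks(stm, distance=5, prominence=np.std(stm))
boundary_times = peaks * 160 / sr                  # frame index -> seconds
print(boundary_times)
```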
9

A, Akila, and Chandra E. "WORD BASED TAMIL SPEECH RECOGNITION USING TEMPORAL FEATURE BASED SEGMENTATION". ICTACT Journal on Image and Video Processing 5, no. 4 (May 1, 2015): 1037–43. http://dx.doi.org/10.21917/ijivp.2015.0152.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
10

Kalamani, M., M. Krishnamoorthi, and R. S. Valarmathi. "Continuous Tamil Speech Recognition technique under non stationary noisy environments". International Journal of Speech Technology 22, no. 1 (November 30, 2018): 47–58. http://dx.doi.org/10.1007/s10772-018-09580-8.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
11

Lyakso, Elena, Nersisson Ruban, Olga Frolova, and Mary A. Mekala. "The children’s emotional speech recognition by adults: Cross-cultural study on Russian and Tamil language". PLOS ONE 18, no. 2 (February 15, 2023): e0272837. http://dx.doi.org/10.1371/journal.pone.0272837.

Full text source
Abstract:
The current study investigated the features of cross-cultural recognition of four basic emotions “joy–neutral (calm state)–sad–anger” in the spontaneous and acting speech of Indian and Russian children aged 8–12 years across Russian and Tamil languages. The research tasks were to examine the ability of Russian and Indian experts to recognize the state of Russian and Indian children by their speech, determine the acoustic features of correctly recognized speech samples, and specify the influence of the expert’s language on the cross-cultural recognition of the emotional states of children. The study includes a perceptual auditory study by listeners and instrumental spectrographic analysis of child speech. Different accuracy and agreement between Russian and Indian experts were shown in recognizing the emotional states of Indian and Russian children by their speech, with more accurate recognition of the emotional state of children in their native language, in acting speech vs spontaneous speech. Both groups of experts recognize the state of anger via acting speech with the high agreement. The difference between the groups of experts was in the definition of joy, sadness, and neutral states depending on the test material with a different agreement. Speech signals with emphasized differences in acoustic patterns were more accurately classified by experts as belonging to emotions of different activation. The data showed that, despite the universality of basic emotions, on the one hand, the cultural environment affects their expression and perception, on the other hand, there are universal non-linguistic acoustic features of the voice that allow us to identify emotions via speech.
Styles: APA, Harvard, Vancouver, ISO, etc.
12

C., Ms Vimala, and V. Radha. "Speaker Independent Isolated Speech Recognition System for Tamil Language using HMM". Procedia Engineering 30 (2012): 1097–102. http://dx.doi.org/10.1016/j.proeng.2012.01.968.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
13

Rajendran, Sukumar, Kiruba Thangam Raja, Nagarajan G., Stephen Dass A., Sandeep Kumar M., and Prabhu Jayagopal. "Deep Learning Speech Synthesis Model for Word/Character-Level Recognition in the Tamil Language". International Journal of e-Collaboration 19, no. 4 (January 20, 2023): 1–14. http://dx.doi.org/10.4018/ijec.316824.

Full text source
Abstract:
As electronics and the increasing popularity of social media are widely used, a large amount of text data is created at unprecedented rates. All data created cannot be read by humans, and what they discuss in their sphere of interest may be found. Modeling of themes is a way to identify subjects in a vast number of texts. There has been a lot of study on subject-modeling in English. At the same time, millions of people worldwide speak Tamil; there is no great development in resource-scarce languages such as Tamil being spoken by millions of people worldwide. The consequences of specific deep learning models are usually difficult to interpret for the typical user. They are utilizing various visualization techniques to represent the outcomes of deep learning in a meaningful way. Then, they use metrics like similarity, correlation, perplexity, and coherence to evaluate the deep learning models.
Styles: APA, Harvard, Vancouver, ISO, etc.
14

SREE BASKARAN RAGURAM, Laxmi, and Vijaya MADHAYA SHANMUGAM. "Deep belief networks for phoneme recognition in continuous Tamil speech–an analysis". Traitement du signal 34, no. 3-4 (October 28, 2017): 137–51. http://dx.doi.org/10.3166/ts.34.137-151.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
15

Jeyalakshmi, C., and A. Revathi. "Efficient speech recognition system for hearing impaired children in classical Tamil language". International Journal of Biomedical Engineering and Technology 26, no. 1 (2017): 84. http://dx.doi.org/10.1504/ijbet.2017.10010081.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
16

Jeyalakshmi, C., and A. Revathi. "Efficient speech recognition system for hearing impaired children in classical Tamil language". International Journal of Biomedical Engineering and Technology 26, no. 1 (2018): 84. http://dx.doi.org/10.1504/ijbet.2018.089261.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
17

Akila, A., and E. Chandra. "Performance enhancement of syllable based Tamil speech recognition system using time normalization and rate of speech". CSI Transactions on ICT 2, no. 2 (June 2014): 77–84. http://dx.doi.org/10.1007/s40012-014-0044-6.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
18

C, Vimala, and V. Radha. "Efficient Acoustic Front-End Processing for Tamil Speech Recognition using Modified GFCC Features". International Journal of Image, Graphics and Signal Processing 8, no. 7 (July 8, 2016): 22–31. http://dx.doi.org/10.5815/ijigsp.2016.07.03.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
19

Milton, A., and K. Anish Monsely. "Tamil and English speech database for heartbeat estimation". International Journal of Speech Technology 21, no. 4 (September 25, 2018): 967–73. http://dx.doi.org/10.1007/s10772-018-9557-y.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
20

Putta, Venkata Subbaiah, A. Selwin Mich Priyadharson, and Venkatesa Prabhu Sundramurthy. "Regional Language Speech Recognition from Bone-Conducted Speech Signals through Different Deep Learning Architectures". Computational Intelligence and Neuroscience 2022 (August 25, 2022): 1–10. http://dx.doi.org/10.1155/2022/4473952.

Full text source
Abstract:
Bone-conducted microphone (BCM) senses vibrations from bones in the skull during speech to electrical audio signal. When transmitting speech signals, bone-conduction microphones (BCMs) capture speech signals based on the vibrations of the speaker’s skull and have better noise-resistance capabilities than standard air-conduction microphones (ACMs). BCMs have a different frequency response than ACMs because they only capture the low-frequency portion of speech signals. When we replace an ACM with a BCM, we may get satisfactory noise suppression results, but the speech quality and intelligibility may suffer due to the nature of the solid vibration. Mismatched BCM and ACM characteristics can also have an impact on ASR performance, and it is impossible to recreate a new ASR system using voice data from BCMs. The speech intelligibility of a BCM-conducted speech signal is determined by the location of the bone used to acquire the signal and accurately model phonemes of words. Deep learning techniques such as neural network have traditionally been used for speech recognition. However, neural networks have a high computational cost and are unable to model phonemes in signals. In this paper, the intelligibility of BCM signal speech was evaluated for different bone locations, namely the right ramus, larynx, and right mastoid. Listener and deep learning architectures such as CapsuleNet, UNet, and S-Net were used to acquire the BCM signal for Tamil words and evaluate speech intelligibility. As validated by the listener and deep learning architectures, the Larynx bone location improves speech intelligibility.
Styles: APA, Harvard, Vancouver, ISO, etc.
21

Priya, C. Bharathi, S. P. Siddique Ibrahim, D. Yamuna Thangam, X. Francis Jency, and P. Parthasarathi. "Tamil Sign Language Translation and Recognition System for Deaf-mute People Using Image Processing Techniques". Applied and Computational Engineering 8, no. 1 (August 1, 2023): 398–404. http://dx.doi.org/10.54254/2755-2721/8/20230193.

Full text source
Abstract:
This work concentrates on a device that serves as a translation system for converting sign gestures into text. Disabled people, in particular hearing- and speech-impaired people, face difficulties in society: communication becomes harder because the majority of ordinary people do not understand sign language, which creates a communication gap in which impaired people cannot share their views and skills with others. To solve this problem, we built a Tamil sign language translator; gestures are translated into the Tamil language to provide a localized solution. It processes 31 Tamil letters (12 vowels, 18 consonants, and 1 Aayudha Ezhuthu) as 32 combinations of five finger positions, each either up or down, mapped to decimal numbers. Edge detection, which must be accurate, is performed with the Canny edge detector. In addition, two gesture recognition methods are used, and the input system is trained with the scale-invariant feature transform algorithm. The developed system is useful for deaf and mute people for essential communication.
Styles: APA, Harvard, Vancouver, ISO, etc.
22

Kalith, Ibralebbe Mohamed, David Asirvatham, Ali Khatibi, and Samantha Thelijjagoda. "Comparison of Syllable and Phoneme Modelling of Agglutinative Tamil Isolated Words in Speech Recognition". Current Journal of Applied Science and Technology 29, no. 4 (October 10, 2018): 1–10. http://dx.doi.org/10.9734/cjast/2018/40568.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
23

Praveena, Jayakumar, and Vincent Churchill Soundaraj Priya. "Evaluating the Benefit of Hearing Aid Using Paired Words in Tamil". Audiology and Speech Research 19, no. 3 (July 31, 2023): 190–200. http://dx.doi.org/10.21848/asr.230089.

Full text source
Abstract:
Purpose: Subjective measurements, such as speech audiometry, are essential to determine the perception of speech as it provides insight regarding perceptual abilities. The present study aimed to develop paired word test stimuli in Tamil and evaluate their utility for assessing the benefits of a hearing aid. Methods: The stimuli were 30 paired words which were paired to rhythm containing almost all vowels and consonants of the Tamil language differing in one or more distinctive features, such as place, manner, voicing features of consonants, and height, duration, and rounding features of vowels. The paired words test was administered to 60 participants with normal hearing and 60 participants with hearing impairment. The correct identification scores and their percentage were computed to notice the benefit provided by their hearing aids. Results: The overall performance of individuals with normal hearing on the paired identification was high, suggesting that these paired word test materials could be used for individuals with hearing impairment to assess hearing aid benefit. A greater improvement in recognition scores for paired words was obtained after being fitted with a hearing aid in individuals with hearing impairment. It was noticed that due to hearing loss, the audibility of perception reduces, yielding lower scores in paired word identification. Conclusion: When the proper fitting was done, the percentage of identification scores was increased. Therefore, the present study concludes that speech perception abilities can be evaluated and quantified using these paired word tests.
Styles: APA, Harvard, Vancouver, ISO, etc.
24

Kalith, Ibralebbe Mohamed, David Asirvatham, and Ismail Raisal. "Context-dependent Syllable Modeling of Sentence-based Semi-continuous Speech Recognition for the Tamil Language". Information Technology Journal 16, no. 3 (August 15, 2017): 125–33. http://dx.doi.org/10.3923/itj.2017.125.133.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
25

Lokesh, S., Priyan Malarvizhi Kumar, M. Ramya Devi, P. Parthasarathy, and C. Gokulnath. "An Automatic Tamil Speech Recognition system by using Bidirectional Recurrent Neural Network with Self-Organizing Map". Neural Computing and Applications 31, no. 5 (April 9, 2018): 1521–31. http://dx.doi.org/10.1007/s00521-018-3466-5.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
26

Fernandes, J. Bennilo, and Kasiprasad Mannepalli. "Enhanced Deep Hierarchal GRU & BILSTM using Data Augmentation and Spatial Features for Tamil Emotional Speech Recognition". International Journal of Modern Education and Computer Science 14, no. 3 (June 8, 2022): 45–63. http://dx.doi.org/10.5815/ijmecs.2022.03.03.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
27

Saraswathi, S., and T. V. Geetha. "Time scale modification and vocal tract length normalization for improving the performance of Tamil speech recognition system implemented using language independent segmentation algorithm". International Journal of Speech Technology 9, no. 3-4 (December 2006): 151–63. http://dx.doi.org/10.1007/s10772-007-9004-y.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
28

Saraswathi, S., and T. V. Geetha. "Comparison of performance of enhanced morpheme-based language model with different word-based language models for improving the performance of Tamil speech recognition system". ACM Transactions on Asian Language Information Processing 6, no. 3 (November 2007): 9. http://dx.doi.org/10.1145/1290002.1290003.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
29

K K A, Abdullah, Robert A B C, and Adeyemo A B. "Semantic Indexing Techniques on Information Retrieval of Web Content". IJARCCE 5, no. 8 (August 30, 2016): 347–52. http://dx.doi.org/10.17148/ijarcce.2016.5869.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
30

"ANFIS for Tamil Phoneme Classification". International Journal of Engineering and Advanced Technology 8, nr 6 (30.08.2019): 2907–14. http://dx.doi.org/10.35940/ijeat.f8804.088619.

Pełny tekst źródła
Streszczenie:
Phoneme recognition is an intricate, non-linear problem. Most research in this area revolves around modelling the pattern of features observed in the speech spectra with Hidden Markov Models (HMM) or various types of neural networks, such as deep recurrent neural networks and time delay neural networks, for efficient phoneme recognition. In this paper, we study the effectiveness of a hybrid architecture, the Adaptive Neuro-Fuzzy Inference System (ANFIS), in capturing the spectral features of the speech signal for the phoneme recognition problem. Despite the wide range of research in this field, we examine the power of ANFIS for the little-explored Tamil phoneme recognition problem. The experimental results show the ability of the model to learn the patterns associated with various phonetic classes, indicated by improved recognition accuracy relative to its counterparts.
Styles: APA, Harvard, Vancouver, ISO, etc.
31

Prabhu, V., and G. Gunasekaran. "Fuzzy Logic based Nam Speech Recognition for Tamil Syllables". Indian Journal of Science and Technology 9, no. 1 (January 8, 2016). http://dx.doi.org/10.17485/ijst/2016/v9i1/85763.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
32

"A Pilot Research on Android Based Voice Recognition Application". International Journal of Recent Technology and Engineering 8, nr 4 (30.11.2019): 7272–77. http://dx.doi.org/10.35940/ijrte.d5284.118419.

Pełny tekst źródła
Streszczenie:
In recent trend, Speech recognition has become extensively used in customer service based organization. It has acquired great deal of research in pattern matching employed machine learning (learning speech by experience) and neural networks based speech endorsement domains. Speech recognition is the technology of capturing and perceiving human voice, interpreting it, producing text from it, managing digital devices and assisting visually impaired and older adults using unequivocal digital signal processing. In this paper we have presented a comprehensive study of different methodologies in android enabled speech recognition system that focused at analysis of the operability and reliability of voice note app. Subsequently we have suggested and experimented an android based speech recognizer app viz. Annotate which predominately focus on voice dictation in five different languages (English, Hindi, Tamil, Malayalam and Telugu) and extracting text from image using Automatic Speech Recognition (ASR) and Optical Character Recognition (OCR) algorithm. Finally, we identified opportunities for future enhancements in this realm.
Styles: APA, Harvard, Vancouver, ISO, etc.
33

Fernandes, Bennilo, and Kasiprasad Mannepalli. "Speech Emotion Recognition Using Deep Learning LSTM for Tamil Language". Pertanika Journal of Science and Technology 29, no. 3 (July 31, 2021). http://dx.doi.org/10.47836/pjst.29.3.33.

Full text source
Abstract:
Deep Neural Networks (DNNs) are neural networks with several hidden layers that give better results with classification algorithms in automated voice recognition tasks. Traditional feedforward networks do not properly capture the temporal structure of the speech signal, so recurrent neural networks (RNNs) were introduced. Long Short-Term Memory (LSTM) networks are a special case of RNNs for speech processing that handle long-term dependencies; accordingly, deep hierarchical LSTM and BiLSTM models are designed with dropout layers to reduce the gradient and long-term learning error in emotional speech analysis. Four combinations of the deep hierarchical learning architecture are designed with dropout layers to improve the networks: Deep Hierarchical LSTM and LSTM (DHLL), Deep Hierarchical LSTM and BiLSTM (DHLB), Deep Hierarchical BiLSTM and LSTM (DHBL), and Deep Hierarchical dual BiLSTM (DHBB). The performance of all four models is compared in this paper, and good classification efficiency is attained with a minimal Tamil language dataset. The experimental results show that DHLB reaches the best precision of about 84% in recognizing emotions for the Tamil database, while DHBL gives 83% efficiency. The other designs show comparable but lower performance: DHLL and DHBB give 81% efficiency with a smaller dataset and minimal execution and training time.
Styles: APA, Harvard, Vancouver, ISO, etc.
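One of the stacked architectures described above (a BiLSTM layer followed by an LSTM layer with dropout, in the spirit of DHBL) can be sketched in Keras as follows. The input shape, layer sizes, and the five emotion classes are illustrative assumptions, not the paper's configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

N_FRAMES, N_FEATURES, N_EMOTIONS = 300, 39, 5   # assumed: 300 frames of 39-dim features, 5 emotions

def build_dhbl_model():
    """Stacked BiLSTM -> LSTM with dropout, in the spirit of the hierarchical models above."""
    return models.Sequential([
        layers.Bidirectional(layers.LSTM(128, return_sequences=True),
                             input_shape=(N_FRAMES, N_FEATURES)),
        layers.Dropout(0.3),
        layers.LSTM(64),
        layers.Dropout(0.3),
        layers.Dense(64, activation="relu"),
        layers.Dense(N_EMOTIONS, activation="softmax"),
    ])

model = build_dhbl_model()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(x_train, y_train, validation_split=0.1, epochs=30)
```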
34

"Analysis of Emotional Speech Recognition for Tamil Language Using SVM". Journal of critical reviews 7, nr 11 (2.07.2020). http://dx.doi.org/10.31838/jcr.07.11.542.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
35

"Speech Recognition System for Isolated Tamil Words using Random Forest Algorithm". International Journal of Recent Technology and Engineering 9, nr 1 (30.05.2020): 2431–35. http://dx.doi.org/10.35940/ijrte.a1467.059120.

Pełny tekst źródła
Streszczenie:
ASR is the use of software- and hardware-based techniques to identify and process the human voice. In this research, Tamil words are analyzed and segmented into syllables, followed by feature extraction and recognition. Syllables are segmented using short-term energy (STE), and segmentation is done in order to minimize the corpus size; the syllable segmentation algorithm works by computing the STE function of the continuous speech signal. The proposed approach for speech recognition uses a combination of Mel-Frequency Cepstral Coefficients (MFCC) and Linear Predictive Coding (LPC). MFCC features are used to extract a feature vector containing all information about the linguistic message, while LPC affords a robust, dependable and accurate technique for estimating the parameters that characterize the vocal tract system. LPC features can also reduce the bit rate of speech (i.e., reduce the size of the transmitted signal), so the combined feature extraction technique minimizes the size of the transmitted signal. The proposed feature extraction algorithm is then evaluated on the speech corpus using the Random Forest approach. Random Forest is an effective algorithm that can build a reliable training model with low training time, because each classifier works on a subset of the features.
Styles: APA, Harvard, Vancouver, ISO, etc.
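The MFCC-plus-LPC feature combination and Random Forest classifier described above can be sketched with librosa and scikit-learn. The file paths, the LPC order, and the mean-pooling of MFCC frames are assumptions made for illustration, not the paper's settings.

```python
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def word_feature_vector(path, sr=16000, n_mfcc=13, lpc_order=12):
    """Combine averaged MFCCs with LPC coefficients for one isolated-word recording."""
    y, _ = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # (n_mfcc, frames)
    lpc = librosa.lpc(y, order=lpc_order)                    # (lpc_order + 1,)
    return np.concatenate([mfcc.mean(axis=1), lpc])

# Hypothetical corpus layout: (wav path, word label) pairs.
corpus = [("data/word_001.wav", "vanakkam"), ("data/word_002.wav", "nandri")]
X = np.array([word_feature_vector(p) for p, _ in corpus])
y = np.array([label for _, label in corpus])

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X, y)
# print(clf.predict([word_feature_vector("data/test.wav")]))
```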
36

"Speech Query Recognition for Tamil Language Using Wavelet and Wavelet Packets". Journal of Information Processing Systems, 2015. http://dx.doi.org/10.3745/jips.02.0033.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
37

Saraswathi, S., and T. V. Geetha. "Design of language models at various phases of Tamil speech recognition system". International Journal of Engineering, Science and Technology 2, no. 5 (September 28, 2010). http://dx.doi.org/10.4314/ijest.v2i5.60157.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
38

Chelliah, Jeyalakshmi, KiranBala Benny, Revathi Arunachalam, and Viswanathan Balasubramanian. "Robust Hearing-Impaired Speaker Recognition from Speech using Deep Learning Networks in Native Language". International Arab Journal of Information Technology 20, no. 1 (2022). http://dx.doi.org/10.34028/iajit/20/1/11.

Full text source
Abstract:
Research in speaker recognition has grown recently due to its tremendous applications in security, criminal investigations and other major fields. A speaker is identified by the way they speak, not by the spoken words. The identification of hearing-impaired speakers from their speech is therefore a challenging task, since their speech is highly distorted. In this paper, a new task is introduced: recognizing Hearing Impaired (HI) speakers using speech as a biometric in the native language, Tamil. Though their speech is very hard to recognize even for their parents and teachers, the proposed system accurately identifies them by applying speech enhancement. Because of the huge variety in their utterances, instead of applying the spectrogram of raw speech, Mel Frequency Cepstral Coefficient features are derived from the speech and applied as a spectrogram to a Convolutional Neural Network (CNN), which is not necessary for ordinary speakers. In the proposed system for recognizing HI speakers, the CNN is used as the modelling technique to assess the performance of the system; this deep learning network provides 80% accuracy and the system is less complex. An Auto Associative Neural Network (AANN) is also used as a modelling technique; its performance is only 9% accurate, and CNN is found to perform better than AANN for recognizing HI speakers. Hence this system is very useful for biometric and other security-related applications for hearing-impaired speakers.
Styles: APA, Harvard, Vancouver, ISO, etc.
39

Magrina, M. Merline. "Recognition of Ancient Tamil Characters from Epigraphical inscriptions using Raspberry Pi based Tesseract OCR". International Journal of Scientific Research in Computer Science, Engineering and Information Technology, March 25, 2021, 118–27. http://dx.doi.org/10.32628/cseit217230.

Full text source
Abstract:
Optical Character Recognition (OCR) is the process of identifying printed text using photoelectric devices and computer software; here it converts text inscribed on stones into a machine-encoded format. OCR is widely used in machine learning processes such as cognitive computing, machine translation, text-to-speech conversion and text mining, and is mainly applied in research fields such as character recognition, artificial intelligence and computer vision. In this research, the recognition process is done using OCR: the inscribed character is processed on a Raspberry Pi device, which recognizes characters using an Artificial Neural Network. The work focuses on recognizing ancient Tamil characters inscribed on stones, belonging to the 9th to 12th centuries, and mapping them to modern Tamil characters. The input image is converted to grayscale and enhanced using adaptive thresholding. The resulting image undergoes a thinning process to reduce the pixel width of the strokes. The characters are then classified using an Artificial Neural Network and the classified characters are mapped to modern Tamil characters using Unicode. The Artificial Neural Network has an input layer, a hidden layer of 15 neurons and an output layer of 1 neuron to classify the characters. The accuracy of the constructed system for the recognition of epigraphical inscriptions is calculated. The whole process is carried out in the Raspbian environment using Python.
Styles: APA, Harvard, Vancouver, ISO, etc.
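The preprocessing pipeline described above (grayscale conversion, adaptive thresholding, thinning) can be sketched with OpenCV. The thinning step relies on the opencv-contrib ximgproc module, and the file name and threshold parameters are placeholders; the neural-network classification stage is omitted here.

```python
import cv2

# Placeholder path to a photograph of an inscription.
image = cv2.imread("inscription.jpg")

# 1. Grayscale conversion.
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# 2. Adaptive thresholding to separate strokes from the stone background
#    (block size 31 and constant 10 are illustrative values).
binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY_INV, 31, 10)

# 3. Thinning (skeletonisation) to reduce strokes to one-pixel width.
#    Requires the opencv-contrib-python package for cv2.ximgproc.
thinned = cv2.ximgproc.thinning(binary)

# The thinned glyphs would then be segmented and fed to the neural-network classifier.
cv2.imwrite("thinned.png", thinned)
```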
40

Madan, Chetan, Harshita Diddee, Deepika Kumar, and Mamta Mittal. "CodeFed: Federated Speech Recognition for Low-Resource Code-Switching Detection". ACM Transactions on Asian and Low-Resource Language Information Processing, November 17, 2022. http://dx.doi.org/10.1145/3571732.

Full text source
Abstract:
One common constraint in the practical application of speech recognition is code-switching. The issue of code-switched languages is especially aggravated in the context of Indian languages, since most massively multilingual models are trained on corpora that are not representative of the diverse set of Indian languages. An associated constraint with such systems is the privacy-intrusive nature of the applications that aim to collect such representative data. To mitigate both problems, this work presents CodeFed: a federated learning-based code-switching detection model that can be collaboratively trained by leveraging private data from multiple users without compromising their privacy. Using a representative low-resource Indic dataset, we demonstrate the superior performance of a collaboratively trained global model trained with federated learning on three low-resource Indic languages, Gujarati, Tamil and Telugu, and compare the model with the most recent work in the field. Finally, to evaluate the practical realizability of the proposed system, the paper also discusses the label generation architecture that may accompany CodeFed's possible real-time deployment.
Styles: APA, Harvard, Vancouver, ISO, etc.
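Federated learning of the kind used by CodeFed rests on an aggregation step in which a server averages locally trained client weights instead of collecting client data. Below is a minimal, generic FedAvg sketch (not CodeFed's actual architecture); the toy two-client models exist only to show the weighted average.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg aggregation: average each parameter tensor across clients,
    weighted by the number of local training examples per client."""
    total = float(sum(client_sizes))
    n_layers = len(client_weights[0])
    return [
        sum((size / total) * client[i] for client, size in zip(client_weights, client_sizes))
        for i in range(n_layers)
    ]

# Toy demonstration with two "clients" holding two-layer models.
client_a = [np.ones((3, 3)), np.zeros(3)]
client_b = [3 * np.ones((3, 3)), np.ones(3)]
global_weights = federated_average([client_a, client_b], client_sizes=[100, 300])
print(global_weights[0][0, 0])   # 2.5 = (100*1 + 300*3) / 400
```

In a full round, each client would first fine-tune the current global weights on its private data and send only the updated weights back for this aggregation step.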
41

Paliwal, Mrinal, and Pankaj Saraswat. "Automated Waves Files Splitting". International Journal of Innovative Research in Computer Science & Technology, January 11, 2021, 18–21. http://dx.doi.org/10.55524/ijircst.2021.9.6.4.

Full text source
Abstract:
The ASS (Automatic Speech Segmentation) Technique is used in this article to segment spontaneous speech into syllable-like units. The segmentation of the acoustic signal into syllabic units is an essential step in the construction of a syllable-centric ASS system. The purpose of this article is to determine the smallest unit of speech that should be regarded when writing. Any voice recognition system may be trained. In a few Indian cities, technologies for continuous voice recognition have been created. Hindi and Tamil are examples of such languages. This article examines the statistical characteristics of Punjabi syllables and how they may be used to reduce the number of syllables in sentence. During voice recognition, the search area is expanded. We explain how to perform the majority of the segmentation in this article automatically. The frequency of syllables and the number of syllables in each word are shown. We suggest the following: For objective evaluation of stuttering disfluencies, an automated segmentation technique for syllable repetition in read speech was developed. It employs a novel method and consists of three stages: feature extraction, rule matching, and segmentation.
Styles: APA, Harvard, Vancouver, ISO, etc.
42

Sakuntharaj, Ratnasingam. "Suggestion Generation for Erroneous Words in Tamil Documents Using N-gram Technique". Asian Journal of Research in Computer Science, May 5, 2022, 45–54. http://dx.doi.org/10.9734/ajrcos/2022/v13i330317.

Full text source
Abstract:
A spell checker is a tool that finds and corrects erroneous words and grammatical mistakes in a text document. Spelling error detection and correction techniques are widely used by text editing systems, search engines, text to speech and speech to text conversion systems, machine translation systems and optical character recognition systems. The spell checkers for European languages and some Indic languages are well developed. However, perhaps, owing to Tamil being a morphologically rich and agglutinative language this has been a challenging task. Erroneous words and grammatical mistakes can occur in sentences due to various reasons. Erroneous words can be classified into two categories, namely non-word errors and real-word errors. This work aims to correct non-word errors in Tamil documents by suggesting alternatives. The proposed approach uses letter-level and word-level n-gram, stemming and hash table techniques. Test results show that the suggestions generated by the system are with 95% accuracy.
Styles: APA, Harvard, Vancouver, ISO, etc.
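The letter-level n-gram and hash-table idea described above can be illustrated by indexing a lexicon on character bigrams and ranking candidate corrections by bigram overlap (Dice similarity). The tiny lexicon and the misspelling below are placeholders; a real system would load a full Tamil word list and add word-level n-gram context and stemming, as the paper describes.

```python
from collections import defaultdict

def char_bigrams(word):
    """Letter-level bigrams of a word (works on Unicode Tamil strings)."""
    padded = f"#{word}#"
    return {padded[i:i + 2] for i in range(len(padded) - 1)}

def build_index(lexicon):
    """Hash table from bigram -> set of lexicon words containing it."""
    index = defaultdict(set)
    for word in lexicon:
        for bg in char_bigrams(word):
            index[bg].add(word)
    return index

def suggest(erroneous_word, index, top_k=5):
    """Rank candidate corrections by Dice similarity of their bigram sets."""
    query = char_bigrams(erroneous_word)
    candidates = set()
    for bg in query:
        candidates |= index.get(bg, set())
    scored = []
    for cand in candidates:
        cand_bg = char_bigrams(cand)
        dice = 2 * len(query & cand_bg) / (len(query) + len(cand_bg))
        scored.append((dice, cand))
    return [w for _, w in sorted(scored, reverse=True)[:top_k]]

# Placeholder lexicon and a hypothetical misspelling.
lexicon = ["தமிழ்", "மொழி", "பேச்சு", "அகராதி"]
index = build_index(lexicon)
print(suggest("தமிள்", index))
```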
43

Fernandes, Bennilo, and Kasiprasad Mannepalli. "An Analysis of Emotional Speech Recognition for Tamil Language Using Deep Learning Gate Recurrent Unit". Pertanika Journal of Science and Technology 29, no. 3 (July 31, 2021). http://dx.doi.org/10.47836/pjst.29.3.37.

Full text source
Abstract:
Designing the interaction between human language and a recorded emotional database lets us explore how such systems perform and offers multiple approaches for emotion detection in patient services. Until now, clustering techniques have primarily been used in many prominent areas, including emotional speech recognition; although they show good results, this paper presents a new design focused on Long Short-Term Memory (LSTM), Bi-Directional LSTM (BiLSTM) and Gated Recurrent Unit (GRU) networks as an estimation method for emotional Tamil datasets. A deep hierarchical LSTM/BiLSTM/GRU layer design is proposed to obtain the best result for long-term learning on the voice dataset. Different combinations of the deep hierarchical architecture, namely LSTM & GRU (DHLG), BiLSTM & GRU (DHBG), GRU & LSTM (DHGL), GRU & BiLSTM (DHGB) and dual GRU (DHGG), are designed with dropout layers introduced to overcome the learning problem and vanishing-gradient issues in emotional speech recognition. Moreover, to improve the outcome for each emotional speech signal, various feature extraction combinations are utilized. From the analysis, the proposed DHGB model gives an average classification validity of 82.86%, which is slightly higher than the other models, DHGL (82.58%), DHBG (82%), DHLG (81.14%) and DHGG (80%). Comparing all the models, DHGB thus gives a prominent outcome of about 5% more than the other four models, with minimum training time and a small dataset.
Styles: APA, Harvard, Vancouver, ISO, etc.
44

Vijayakumar, Kavitha, Saranyaa Gunalan, and Ranjith Rajeshwaran. "Development of Minimal Pair Test in Tamil (MPT-T)". JOURNAL OF CLINICAL AND DIAGNOSTIC RESEARCH, 2021. http://dx.doi.org/10.7860/jcdr/2021/46807.15357.

Full text source
Abstract:
Introduction: Speech perception testing provides an accurate measurement of the child’s ability to perceive and distinguish the various phonetic segments and patterns of the sounds. From among the many types of speech stimuli used, minimal pairs can also be used to assess the phoneme recognition skills. Thus, the study focused on developing Minimal Pair Test in Tamil (MPT-T). Aim: The aim of the present study was to develop and validate the MPT in Tamil on Normal Hearing (NH) children and paediatric cochlear implantees (CI). Materials and Methods: It was an experimental study which included school going children in the age range of six to eight years and the duration of the study was 12 months. The test was developed in two phases. The first phase focussed on the construction of the word list, recording of the word pairs and the preparation of the test. The second phase was administration of the test on NH children and paediatric cochlear implantees. The test scores were analysed using Mann Whitney U test, Kruskal Wallis and Wilcoxon signed-rank test. The results showed a statistical significance between the NH group and the paediatric cochlear implantees. Results: The present study included 40 NH children and 15 paediatric cochlear implantees through purposive sampling method. The specific speech feature analysis of the paediatric cochlear implantees revealed that there was difficulty identifying the word pairs differing in Vowel Length (VL) and the best performed feature was Place of Articulation (POA). The results showed statistical significance between the NH group and the paediatric cochlear implantees. Conclusion: The developed test can be effectively used in clinic for assessing speech perception abilities of pediatric Cochlear Implantees and also in planning the rehabilitative goals.
Styles: APA, Harvard, Vancouver, ISO, etc.
45

"Design and Control of Dual-Arm Cooperative Manipulator using Speech Commands". International Journal of Engineering and Advanced Technology 9, nr 1S3 (31.12.2019): 122–29. http://dx.doi.org/10.35940/ijeat.a1025.1291s319.

Pełny tekst źródła
Streszczenie:
Cooperative manipulators are among the subject of interest in the scientific community for the last few years. Here an overview of the design and control of such cooperative manipulators using Speech Commands in English, Hindi, and Tamil is discussed. Here we choose two identical Robot arms from lynxmotion, and both manipulators move in conjunction with one another to achieve more payload while grasping or handling the object by the end effector. The simultaneous control of identical robot manipulators could be performed by pronouncing simple speech commands by the end user using a smartphone, which then is converted into text format using a speech recognition engine and this text fed to servo controller helps in actuating the joints of identical robot arms. Cooperative manipulators are used for handling radioactive elements and also in the field of medicine as rehabilitation aid and also in surgeries. An Android app specifically built for this purpose communicates through Bluetooth technology makes the interface for end-user simple to control both identical robot arms simultaneously.
Styles: APA, Harvard, Vancouver, ISO, etc.
46

Lokesh, S., Priyan Malarvizhi Kumar, M. Ramya Devi, P. Parthasarathy, and C. Gokulnath. "Retraction Note: An Automatic Tamil Speech Recognition system by using Bidirectional Recurrent Neural Network with Self-Organizing Map". Neural Computing and Applications, December 19, 2022. http://dx.doi.org/10.1007/s00521-022-08144-x.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
47

Fernandes, Bennilo, and Kasiprasad Mannepalli. "Enhanced Deep Hierarchical Long Short-Term Memory and Bidirectional Long Short-Term Memory for Tamil Emotional Speech Recognition using Data Augmentation and Spatial Features". Pertanika Journal of Science and Technology 29, no. 4 (October 29, 2021). http://dx.doi.org/10.47836/pjst.29.4.39.

Full text source
Abstract:
Neural networks have become increasingly popular for language modelling and within these large and deep models, overfitting, and gradient remains an important problem that heavily influences the model performance. As long short-term memory (LSTM) and bidirectional long short-term memory (BILSTM) individually solve long-term dependencies in sequential data, the combination of both LSTM and BILSTM in hierarchical gives added reliability to minimise the gradient, overfitting, and long learning issues. Hence, this paper presents four different architectures such as the Enhanced Deep Hierarchal LSTM & BILSTM (EDHLB), EDHBL, EDHLL & EDHBB has been developed. The experimental evaluation of a deep hierarchical network with spatial and temporal features selects good results for four different models. The average accuracy of EDHLB is 92.12%, EDHBL is 93.13, EDHLL is 94.14% & EDHBB is 93.19% and the accuracy level obtained for the basic models such as the LSTM, which is 74% and BILSTM, which is 77%. By evaluating all the models, EDHBL performs better than other models, with an average efficiency of 94.14% and a good accuracy rate of 95.7%. Moreover, the accuracy for the collected Tamil emotional dataset, such as happiness, fear, anger, sadness, and neutral emotions indicates 100% accuracy in a cross-fold matrix. Emotions such as disgust show around 80% efficiency. Lastly, boredom shows 75% accuracy. Moreover, the training time and evaluation time utilised by EDHBL is less when compared with the other models. Therefore, the experimental analysis shows EDHBL as superior to the other models on the collected Tamil emotional dataset. When compared with the basic models, it has attained 20% more efficiency.
Styles: APA, Harvard, Vancouver, ISO, etc.