To see the other types of publications on this topic, follow the link: Real-time Text Recognition.

Journal articles on the topic "Real-time Text Recognition"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Check out the top 50 journal articles for research on the topic "Real-time Text Recognition".

Next to every work in the list of references there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scientific publication in PDF format and read its abstract online, where these are available in the metadata.

Browse journal articles from a wide range of disciplines and compile your bibliography correctly.

1

Neumann, Lukas, and Jiri Matas. "Real-Time Lexicon-Free Scene Text Localization and Recognition". IEEE Transactions on Pattern Analysis and Machine Intelligence 38, no. 9 (September 1, 2016): 1872–85. http://dx.doi.org/10.1109/tpami.2015.2496234.

2

Merino-Gracia, Carlos, and Majid Mirmehdi. "Real-time text tracking in natural scenes". IET Computer Vision 8, no. 6 (December 2014): 670–81. http://dx.doi.org/10.1049/iet-cvi.2013.0217.

3

Thakur, Amrita, Pujan Budhathoki, Sarmila Upreti, Shirish Shrestha, and Subarna Shakya. "Real Time Sign Language Recognition and Speech Generation". Journal of Innovative Image Processing 2, no. 2 (June 3, 2020): 65–76. http://dx.doi.org/10.36548/jiip.2020.2.001.

Abstract:
Sign language is the method of communication used by deaf and mute people all over the world. However, communication between a speech-impaired person and a hearing person has always been difficult. Sign language recognition is a breakthrough in helping deaf-mute people communicate with others. The commercialization of an economical and accurate recognition system is a current concern of researchers worldwide. Sign language recognition systems based on image processing and neural networks are therefore preferred over gadget-based systems, as they are more accurate and easier to build. The aim of this paper is to build a user-friendly and accurate sign language recognition system, trained by a neural network, that generates text and speech for the input gesture. The paper also presents a text-to-sign-language generation model that enables two-way communication without the need for a translator.
4

Taha, Mohamed, Noha Abd-ElKareem, and Mazen Selim. "Real-Time Arabic Text-Reading for Visually Impaired People". International Journal of Sociotechnology and Knowledge Development 13, no. 2 (April 2021): 168–85. http://dx.doi.org/10.4018/ijskd.2021040110.

Abstract:
Visually impaired (VI) people suffer from many difficulties when accessing printed material using existing technologies. These problems may include text alignment, focus, accuracy, software processing speed, mobility, and efficiency. Current technologies such as flatbed scanners and OCR programs need to scan an entire page. Recently, VI people prefer mobile devices because of their handiness and accessibility, but they have problems with focusing the mobile camera on the printed material. In this paper, a real-time Arabic text-reading prototype for VI people is proposed. It is based on using a wearable device for a hand finger. It is designed as a wearable ring attached to a tiny webcam device. The attached camera captures the printed Arabic text and passes it to the Arabic OCR system. Finally, the recognized characters are translated into speech using the text-to-speech (TTS) technology. Experimental results demonstrate the feasibility of the proposed prototype. It achieved an accuracy of 95.86% for Arabic character recognition and 98.5% for English character recognition.
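The pipeline this abstract describes (capture, OCR, then speech) is easy to prototype. The sketch below is a minimal illustration, not the authors' system: it assumes OpenCV, pytesseract with the Arabic ("ara") language pack, and the pyttsx3 speech engine are installed.

```python
# Minimal capture -> OCR -> speech loop (sketch, not the authors' code).
# Assumes: OpenCV, pytesseract with the 'ara' language data, and pyttsx3.
import cv2
import pytesseract
import pyttsx3

engine = pyttsx3.init()          # system text-to-speech engine
cam = cv2.VideoCapture(0)        # the wearable webcam

for _ in range(100):             # process a bounded number of frames in this sketch
    ok, frame = cam.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Otsu binarization helps OCR on printed text
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    text = pytesseract.image_to_string(binary, lang="ara").strip()
    if text:
        engine.say(text)         # speak the recognized characters
        engine.runAndWait()
```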
5

Mafla, Andrés, Rubèn Tito, Sounak Dey, Lluís Gómez, Marçal Rusiñol, Ernest Valveny, and Dimosthenis Karatzas. "Real-time Lexicon-free Scene Text Retrieval". Pattern Recognition 110 (February 2021): 107656. http://dx.doi.org/10.1016/j.patcog.2020.107656.

6

Al-Jumaily, Harith, Paloma Martínez, José L. Martínez-Fernández, and Erik Van der Goot. "A real time Named Entity Recognition system for Arabic text mining". Language Resources and Evaluation 46, no. 4 (May 1, 2011): 543–63. http://dx.doi.org/10.1007/s10579-011-9146-z.

7

Choi, Yong-Sik, Jin-Gu Kang, Jong Wha J. Joo, and Jin-Woo Jung. "Real-time Informatized caption enhancement based on speaker pronunciation time database". Multimedia Tools and Applications 79, no. 47-48 (September 5, 2020): 35667–88. http://dx.doi.org/10.1007/s11042-020-09590-2.

Abstract:
IBM Watson is one of the representative tools for speech recognition, able to automatically generate not only speech-to-text information but also speaker IDs and timing information, a combination called an Informatized Caption. However, if there is noise in the voice signal sent to the IBM Watson API, recognition performance decreases significantly. This is easily observed in movies with background music and special sound effects. This paper aims to improve the inaccuracy of current Informatized Captions in noisy environments. A method for modifying incorrectly recognized words and a method for enhancing timing accuracy while updating the database in real time are suggested, based on the original caption and the Informatized Caption information. Experimental results show that the proposed method achieves 81.09% timing accuracy on 10 representative animation, horror, and action movies.
8

Lu, Zhiyuan, Xiang Chen, Xu Zhang, Kay-Yu Tong, and Ping Zhou. "Real-Time Control of an Exoskeleton Hand Robot with Myoelectric Pattern Recognition". International Journal of Neural Systems 27, no. 05 (May 3, 2017): 1750009. http://dx.doi.org/10.1142/s0129065717500095.

Abstract:
Robot-assisted training provides an effective approach to neurological injury rehabilitation. To meet the challenge of hand rehabilitation after neurological injuries, this study presents an advanced myoelectric pattern recognition scheme for real-time intention-driven control of a hand exoskeleton. The developed scheme detects and recognizes the user's intention for six different hand motions using four channels of surface electromyography (EMG) signals acquired from the forearm and hand muscles, and then drives the exoskeleton to assist the user in accomplishing the intended motion. The system was tested with eight neurologically intact subjects and two individuals with spinal cord injury (SCI). The overall control accuracy was [Formula: see text] for the neurologically intact subjects and [Formula: see text] for the SCI subjects. The total lag of the system was approximately 250 ms, including data acquisition, transmission, and processing. One SCI subject also participated in training sessions in his second and third visits, and both control accuracy and efficiency tended to improve. These results show great potential for applying advanced myoelectric pattern recognition control of the wearable robotic hand system toward improving hand function after neurological injuries.
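The windowed-EMG-features-into-classifier scheme described here can be outlined as follows. The window length, feature set, and LDA classifier are common choices assumed for illustration, not the paper's exact design, and the data is a random placeholder.

```python
# Sketch of windowed myoelectric pattern recognition (a common scheme;
# the paper's exact features and classifier are not reproduced here).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

WIN, STEP = 200, 50   # 200 ms windows with 50 ms increments at 1 kHz (assumed)

def features(window):
    # window: (samples, channels) segment of 4-channel surface EMG
    rms = np.sqrt(np.mean(window ** 2, axis=0))                 # root mean square
    mav = np.mean(np.abs(window), axis=0)                       # mean absolute value
    zc = np.sum(np.diff(np.sign(window), axis=0) != 0, axis=0)  # zero crossings
    return np.concatenate([rms, mav, zc])

emg = np.random.randn(10000, 4)     # placeholder 4-channel recording
X = np.array([features(emg[s:s + WIN])
              for s in range(0, len(emg) - WIN + 1, STEP)])
y = np.random.randint(0, 6, len(X))  # placeholder labels for six motions

clf = LinearDiscriminantAnalysis().fit(X, y)
print(clf.predict(X[:5]))            # intended-motion class per window
```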
9

Zhan, Ce, Wanqing Li, Philip Ogunbona, and Farzad Safaei. "A Real-Time Facial Expression Recognition System for Online Games". International Journal of Computer Games Technology 2008 (2008): 1–7. http://dx.doi.org/10.1155/2008/542918.

Abstract:
Multiplayer online games (MOGs) have become increasingly popular because of the opportunity they provide for collaboration, communication, and interaction. However, compared with ordinary human communication, MOGs still have several limitations, especially in communication using facial expressions. Although detailed facial animation has already been achieved in a number of MOGs, players have to use text commands to control the expressions of avatars. In this paper, we propose an automatic expression recognition system that can be integrated into an MOG to control the facial expressions of avatars. To meet the specific requirements of such a system, a number of algorithms are studied, improved, and extended. In particular, the Viola-Jones face-detection method is extended to detect small-scale key facial components, and fixed facial landmarks are used to reduce the computational load with little degradation of recognition accuracy.
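For reference, the Viola-Jones detection step that the paper extends is available through OpenCV's pretrained Haar cascades. This sketch shows the basic idea of detecting a face and then searching for smaller components inside it; the image file name is hypothetical, and the stock eye cascade merely stands in for the paper's custom small-scale component detectors.

```python
# Haar-cascade detection of a face and of smaller components inside it
# (illustrating the Viola-Jones idea the paper extends; not the paper's code).
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

img = cv2.imread("player.jpg")            # hypothetical input frame
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
    face = gray[y:y + h, x:x + w]
    # search for small-scale components only inside the detected face
    for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(face):
        cv2.rectangle(img, (x + ex, y + ey), (x + ex + ew, y + ey + eh),
                      (0, 255, 0), 2)

cv2.imwrite("detected.png", img)
```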
10

Oliveira-Neto, Francisco Moraes, Lee D. Han, and Myong K. Jeong. "Tracking Large Trucks in Real Time with License Plate Recognition and Text-Mining Techniques". Transportation Research Record: Journal of the Transportation Research Board 2121, no. 1 (January 2009): 121–27. http://dx.doi.org/10.3141/2121-13.

11

Manoharan, Sudha, and Palani Sankaran. "An Audio Based Real Time Text Detection and Recognition Approach for Visually Impaired People". Journal of Computational and Theoretical Nanoscience 13, no. 8 (August 1, 2016): 4895–905. http://dx.doi.org/10.1166/jctn.2016.5363.

12

Mahendru, Mansi, Sanjay Kumar Dubey, and Divya Gaur. "Deep Convolutional Sequence Approach Towards Real-Time Intelligent Optical Scanning". International Journal of Computer Vision and Image Processing 11, no. 4 (October 2021): 63–76. http://dx.doi.org/10.4018/ijcvip.2021100105.

Abstract:
Visual text recognition is a highly dynamic computer vision application due to rising demand in applications like crime scene detection, assisting blind people, digitizing, book scanning, etc. However, most past work targeted static visuals with organized text or captured video frames. The key objective of this study is to develop a real-time intelligent optical scanner that extracts every sequence of text from high-speed video, noisy visual input, and offline handwritten script. The work combines multiple deep learning approaches, namely EAST, CNN, and Bi-LSTM with CTC. The system is trained and tested on four public datasets (ICDAR 2015, SVT, Synth-Text, IAM-3.0) and measured in terms of recall, precision, and f-measure. Performance has been examined under three different categories of challenge, and the outcomes are optimistic and encouraging for future advancement.
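The EAST detection stage of such a pipeline can be run with OpenCV's dnn module. The sketch below assumes the publicly available frozen_east_text_detection.pb model file and a hypothetical input frame; it shows only detection, not the paper's CNN and Bi-LSTM-with-CTC recognition stages.

```python
# Text-region detection with the EAST detector via OpenCV's dnn module
# (one stage of the kind of pipeline the paper combines with CNN + Bi-LSTM + CTC).
import cv2
import numpy as np

net = cv2.dnn.readNet("frozen_east_text_detection.pb")  # pretrained EAST model
image = cv2.imread("frame.jpg")                         # hypothetical video frame
blob = cv2.dnn.blobFromImage(image, 1.0, (320, 320),
                             (123.68, 116.78, 103.94), swapRB=True, crop=False)
net.setInput(blob)
scores, geometry = net.forward(["feature_fusion/Conv_7/Sigmoid",
                                "feature_fusion/concat_3"])

# Keep grid cells whose text-confidence score passes a threshold
conf = scores[0, 0]                                     # (80, 80) score map
ys, xs = np.where(conf > 0.5)
print(f"{len(xs)} candidate text cells on the stride-4 grid")
```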
13

Božilović, Boško, Branislav M. Todorović, and Miroslav Obradović. "Text-Independent Speaker Recognition Using Two-Dimensional Information Entropy". Journal of Electrical Engineering 66, no. 3 (May 1, 2015): 169–73. http://dx.doi.org/10.2478/jee-2015-0027.

Abstract:
Speaker recognition is the process of automatically recognizing who is speaking on the basis of speaker-specific characteristics included in the speech signal. These speaker-specific characteristics are called features. Over the past decades, extensive research has been carried out on various possible speech signal features obtained from the signal in the time or frequency domain. The objective of this paper is to introduce two-dimensional information entropy as a new text-independent speaker recognition feature. Computations are performed in the time domain with real numbers exclusively. Experimental results show that the two-dimensional information entropy is a speaker-specific characteristic, useful for speaker recognition.
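The paper's precise feature definition is not given here, but a two-dimensional entropy in this spirit can be estimated by quantizing the amplitude and computing the entropy of the joint distribution of successive sample pairs. The sketch below is an assumption-laden illustration, not the authors' formula.

```python
# Sketch of a two-dimensional information entropy estimate: quantize the
# signal amplitude, then measure the entropy of successive-sample pairs.
# (Illustrative only; the paper's exact definition may differ.)
import numpy as np

def entropy_2d(signal, bins=16):
    # quantize amplitudes into discrete levels
    levels = np.digitize(signal, np.histogram_bin_edges(signal, bins))
    # joint histogram of pairs (x[n], x[n+1])
    joint, _, _ = np.histogram2d(levels[:-1], levels[1:], bins=bins)
    p = joint / joint.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))      # entropy in bits

x = np.random.randn(8000)               # placeholder speech samples
print(f"2D entropy: {entropy_2d(x):.2f} bits")
```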
14

Li, Lian Huan. "Research on Character Segmentation Method in Image Text Recognition". Advanced Materials Research 546-547 (July 2012): 1345–50. http://dx.doi.org/10.4028/www.scientific.net/amr.546-547.1345.

Abstract:
Character segmentation is the key step in image text recognition. This paper presents a text tilt correction algorithm that extracts the skew angle from the tracked characteristic rectangle contour, together with a line-scan method based on the number of transitions to locate the character baseline. To meet real-time and reliability requirements, it adopts an improved two-pass single-character segmentation algorithm based on the vertical projection method.
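A baseline version of the vertical projection method that the paper improves on can be written in a few lines; the input file name is hypothetical.

```python
# Character segmentation by vertical projection: binarize, sum ink per
# column, and cut where the projection falls to zero.
# (A standard baseline of the method the paper improves on.)
import cv2
import numpy as np

img = cv2.imread("text_line.png", cv2.IMREAD_GRAYSCALE)  # hypothetical line image
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

profile = (binary > 0).sum(axis=0)       # ink pixels in each column
in_char, start, boxes = False, 0, []
for col, count in enumerate(profile):
    if count > 0 and not in_char:        # entering a character
        in_char, start = True, col
    elif count == 0 and in_char:         # leaving a character
        in_char = False
        boxes.append((start, col))
if in_char:
    boxes.append((start, len(profile)))

print("character column ranges:", boxes)
```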
15

Damatraseta, Febri, Rani Novariany, and Muhammad Adlan Ridhani. "Real-time BISINDO Hand Gesture Detection and Recognition with Deep Learning CNN". Jurnal Informatika Kesatuan 1, no. 1 (July 13, 2021): 71–80. http://dx.doi.org/10.37641/jikes.v1i1.774.

Abstract:
BISINDO is one of Indonesia's sign languages, and few tools support it, which can make daily life difficult for deaf people. This research therefore offers a recognition (translation) system from the BISINDO alphabet into text, which is expected to help deaf people communicate in two directions. The problem encountered in this study is a small dataset. The research therefore tests hand-gesture recognition by comparing two CNN architectures, LeNet-5 and AlexNet, to determine which classification technique performs better when each class has fewer than 1,000 images. After testing, the results show that the AlexNet architecture is the better choice: when tested on still images withheld from the training process, the AlexNet model achieved a prediction accuracy of 76%, while the LeNet model reached only 19%. When the AlexNet model was used in the proposed system, it predicted correctly 60% of the time. Keywords: Sign language, BISINDO, Computer Vision, Hand Gesture Recognition, Skin Segmentation, CIELab, Deep Learning, CNN.
16

Singh, Ananta, and Dishant Khosla. "A Robust and Real Time Approach for Scene Text Localisation and Recognition in Image Processing". International Journal of Signal Processing, Image Processing and Pattern Recognition 9, no. 1 (January 31, 2016): 43–50. http://dx.doi.org/10.14257/ijsip.2016.9.1.05.

17

Jasim, Mahmood, Tao Zhang, and Md Hasanuzzaman. "A Real-Time Computer Vision-Based Static and Dynamic Hand Gesture Recognition System". International Journal of Image and Graphics 14, no. 01n02 (January 2014): 1450006. http://dx.doi.org/10.1142/s0219467814500065.

Abstract:
This paper presents a novel method for computer vision-based static and dynamic hand gesture recognition. Haar-like feature-based cascaded classifier is used for hand area segmentation. Static hand gestures are recognized using linear discriminant analysis (LDA) and local binary pattern (LBP)-based feature extraction methods. Static hand gestures are classified using nearest neighbor (NN) algorithm. Dynamic hand gestures are recognized using the novel text-based principal directional features (PDFs), which are generated from the segmented image sequences. Longest common subsequence (LCS) algorithm is used to classify the dynamic gestures. For testing, the Chinese numeral gesture dataset containing static hand poses and directional gesture dataset containing complex dynamic gestures are prepared. The mean accuracy of LDA-based static hand gesture recognition on the Chinese numeral gesture dataset is 92.42%. The mean accuracy of LBP-based static hand gesture recognition on the Chinese numeral gesture dataset is 87.23%. The mean accuracy of the novel dynamic hand gesture recognition method using PDF on directional gesture dataset is 94%.
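The longest common subsequence matching used for the dynamic gestures is classic dynamic programming. The direction codes and gesture templates in this sketch are hypothetical; only the LCS recurrence itself is standard.

```python
# Longest common subsequence (LCS), the matcher the paper applies to
# sequences of directional features (sketch with hypothetical codes).
def lcs_length(a, b):
    # classic O(len(a)*len(b)) dynamic-programming table
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if x == y else max(dp[i-1][j], dp[i][j-1])
    return dp[-1][-1]

# classify a gesture as the template sharing the longest subsequence;
# direction codes (U/D/L/R) and templates here are made up for illustration
templates = {"circle": "RDLU", "zigzag": "RDRD", "swipe": "RRRR"}
observed = "RDLLU"
best = max(templates, key=lambda k: lcs_length(observed, templates[k]))
print(best)   # -> 'circle'
```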
18

Gong, Cheng, Dongfang Xu, Zhihao Zhou, Nicola Vitiello, and Qining Wang. "BPNN-Based Real-Time Recognition of Locomotion Modes for an Active Pelvis Orthosis with Different Assistive Strategies". International Journal of Humanoid Robotics 17, no. 01 (January 30, 2020): 2050004. http://dx.doi.org/10.1142/s0219843620500048.

Abstract:
Real-time human intent recognition is important for controlling lower-limb wearable robots. In this paper, to achieve continuous and precise recognition results on different terrains, we propose a real-time training and recognition method for six locomotion modes: standing, level-ground walking, ramp ascending, ramp descending, stair ascending, and stair descending. A locomotion recognition system with an embedded BPNN-based algorithm is designed for real-time recognition. A wearable powered orthosis integrated with this system and two inertial measurement units is used as the experimental setup to evaluate the performance of the designed method while providing hip assistance. Experiments including on-board training and real-time recognition are carried out on three able-bodied subjects. The overall recognition accuracies for the six locomotion modes, based on subject-dependent models, are 98.43% and 98.03% with the wearable orthosis in two different assistance strategies. The time cost of delivering a recognition decision to the orthosis is about 0.9 ms. Experimental results show an effective and promising performance of the proposed method for real-time training and recognition in future control of lower-limb wearable robots assisting users on different terrains.
19

Abbas, Zain, Wei Chao, Chanyoung Park, Vivek Soni, and Sang Hoon Hong. "Augmented Reality-Based Real-Time Accurate Artifact Management System for Museums". PRESENCE: Virtual and Augmented Reality 27, no. 1 (March 2019): 136–50. http://dx.doi.org/10.1162/pres_a_00314.

Abstract:
In this article, we present an accurate and easy-to-use augmented reality (AR) application for mobile devices. In addition, we show how museum employees can better organize and track artifacts using augmented reality, with both the mobile device and a 3D graphic model of the museum on a PC server. The AR mobile application can connect to the server, which maintains the status of artifacts, including their 3D locations and respective room locations. The system relies on 3D measurements of the rooms in the museum as well as coordinates of the artifacts and reference markers in the respective rooms. The coordinates of the artifacts measured through the AR mobile application are stored on the server and displayed at the corresponding location of the 3D rendered representation of the room. The mobile application allows museum managers to add, remove, or modify artifacts' locations simply by touching the desired location on the touch screen showing live video with AR overlay. Therefore, the accuracy of the touch screen-based artifact positioning is very important. The accuracy of the proposed technique is validated by evaluating angular error measurements with respect to the horizontal and vertical fields of view, which are 60° and 47°, respectively. The worst-case angular errors in our test environment were 0.60° horizontally and 0.29° vertically, which is calculated to be well within the error due to touch screen sensing accuracy.
20

Salunkhe, Akilesh, Manthan Raut, Shayantan Santra, and Sumedha Bhagwat. "Android-based object recognition application for visually impaired". ITM Web of Conferences 40 (2021): 03001. http://dx.doi.org/10.1051/itmconf/20214003001.

Abstract:
Detecting objects in real time and converting them into audio output is a challenging task. Recent advances in computer vision have allowed the development of various real-time object detection applications. This paper describes a simple Android app that helps visually impaired people understand their surroundings. Information about the surrounding environment is captured through a phone's camera, and real-time object recognition is performed with TensorFlow's object detection API. The detected objects are then converted into audio output using Android's text-to-speech library. TensorFlow Lite makes offline processing of complex algorithms simple. The overall accuracy of the proposed system was found to be approximately 90%.
21

K, Sagar G., and Shreekanth T. "Real Time Implementation of Optical Character Recognition Based TTS System using Raspberry pi". International Journal of Advanced Research in Computer Science and Software Engineering 7, no. 7 (July 30, 2017): 149. http://dx.doi.org/10.23956/ijarcsse/v7i7/0117.

Abstract:
The text-to-speech (TTS) conversion technology is proposed to help blind people and people with poor vision. According to a survey done by the World Health Organization (WHO), there are about 286 million blind people in this world, and about 91% of them reside in developing countries. There is therefore a need for a portable, affordable TTS converter to help the blind. To help the blind community, a smart reader is proposed in this paper. It includes a webcam to capture the input text page, which is then processed by the TTS unit installed on a Raspberry Pi; the output is amplified and played through a speaker.
22

Cherkas, P. S., and V. A. Tsarev. "Method of Automatic Adaptive Control of Image Acquisition Process in Real Time Text Label Recognition Systems". Computer Optics 37, no. 3 (January 1, 2013): 376–84. http://dx.doi.org/10.18287/0134-2452-2013-37-3-376-384.

23

Pan, Xunyu, Colin Crowe, Toby Myers, and Emily Jetton. "Real-time Whiteboard Coding on Mobile Devices". Electronic Imaging 2020, no. 8 (January 26, 2020): 309-1. http://dx.doi.org/10.2352/issn.2470-1173.2020.8.imawm-309.

Abstract:
Mobile devices typically support input from virtual keyboards or pen-based technologies, allowing handwriting to be a potentially viable text input solution for programming on touchscreen devices. The major problem, however, is that handwriting recognition systems are built to take advantage of the rules of natural languages rather than programming languages. In addition, mobile devices are also inherently restricted by the limitation of screen size and the inconvenient use of a virtual keyboard. In this work, we create a novel handwriting-to-code transformation system on a mobile platform to recognize and analyze source code written directly on a whiteboard or a piece of paper. First, the system recognizes and further compiles the handwritten source code into an executable program. Second, a friendly graphical user interface (GUI) is provided to visualize how manipulating different sections of code impacts the program output. Finally, the coding system supports an automatic error detection and correction mechanism to help address the common syntax and spelling errors during the process of whiteboard coding. The mobile application provides a flexible and user-friendly solution for real-time handwriting-based programming for learners in various environments where keyboard or touchscreen input is not preferred.
24

David, Jiří, Pavel Švec, Vít Pasker, and Romana Garzinová. "Usage of Real Time Machine Vision in Rolling Mill". Sustainability 13, no. 7 (March 31, 2021): 3851. http://dx.doi.org/10.3390/su13073851.

Abstract:
This article deals with the issue of computer vision on a rolling mill. The main goal of this article is to describe the designed and implemented algorithm for the automatic identification of the character string of billets on the rolling mill. The algorithm allows the conversion of image information from the front of the billet, which enters the rolling process, into a string of characters, which is further used to control the technological process. The purpose of this identification is to prevent the input pieces from being confused because different parameters of the rolling process are set for different pieces. In solving this task, it was necessary to design the optimal technical equipment for image capture, choose the appropriate lighting, search for text and recognize individual symbols, and insert them into the control system. The research methodology is based on the empirical-quantitative principle, the basis of which is the analysis of experimentally obtained data (photographs of billet faces) in real operating conditions leading to their interpretation (transformation into the shape of a digital chain). The first part of the article briefly describes the billet identification system from the point of view of technology and hardware resources. The next parts are devoted to the main parts of the algorithm of automatic identification—optical recognition of strings and recognition of individual characters of the chain using artificial intelligence. The method of optical character recognition using artificial neural networks is the basic algorithm of the system of automatic identification of billets and eliminates ambiguities during their further processing. Successful implementation of the automatic inspection system will increase the share of operation automation and lead to ensuring automatic inspection of steel billets according to the production plan. This issue is related to the trend of digitization of individual technological processes in metallurgy and also to the social sustainability of processes, which means the elimination of human errors in the management of the billet rolling process.
25

Alzubaidi, Mohammad A., Mwaffaq Otoom, and Nouran S. Ahmad. "Real-time Assistive Reader Pen for Arabic Language". ACM Transactions on Asian and Low-Resource Language Information Processing 20, no. 1 (April 2021): 1–30. http://dx.doi.org/10.1145/3423133.

Abstract:
Disability is an impairment affecting an individual's livelihood and independence. Assistive technology enables the disabled cohort of the community to break the barriers to learning, access information, contribute to the community, and live independently. This article proposes an assistive device to enable people with visual disabilities and learning disabilities to access printed Arabic material in real-time, and to help them participate in the education system and the professional workforce. This proposed assistive device employs Optical Character Recognition (OCR) and Text To Speech (TTS) conversion, using concatenation synthesis. OCR is achieved using image processing, character extraction, and classification, while Arabic speech synthesis is achieved through concatenation synthesis, followed by Multi Band Re-synthesis Overlap-Add (MBROLA). Waveform generation in the second phase produces vocal output for the disabled user to hear. OCR character and word accuracy tests were conducted for nine Arabic fonts. The results show that six fonts were recognized with over 60% character accuracy and two fonts were recognized with over 88% accuracy. A Mean Opinion Score (MOS) test for speech quality was conducted. The results showed an overall MOS score of 3.53/5 and indicated that users were able to understand the speech. A real-time usability testing was conducted with 10 subjects. The results showed an overall average of agreements scores of 3.9/5 and indicated that the proposed Arabic reader pen meets the real-time constraints and is pleasant and satisfying to use and can contribute to make printed Arabic material accessible to visually impaired persons and people with learning disabilities.
26

Krstajić, Miloš, Mohammad Najm-Araghi, Florian Mansmann, and Daniel A. Keim. "Story Tracker: Incremental visual text analytics of news story development". Information Visualization 12, no. 3-4 (July 2013): 308–23. http://dx.doi.org/10.1177/1473871613493996.

Abstract:
Online news sources produce thousands of news articles every day, reporting on local and global real-world events. New information quickly replaces the old, making it difficult for readers to put current events in the context of the past. The stories about these events have complex relationships and characteristics that are difficult to model: they can be weakly or strongly related or they can merge or split over time. In this article, we present a visual analytics system for temporal analysis of news stories in dynamic information streams, which combines interactive visualization and text mining techniques to facilitate the analysis of similar topics that split and merge over time. Text clustering algorithms extract stories from online news streams in consecutive time windows and identify similar stories from the past. The stories are displayed in a visualization, which (1) sorts the stories by minimizing clutter and overlap from edge crossings, (2) shows their temporal characteristics in different time frames with different levels of detail, and (3) allows incremental updates of the display without recalculating the past data. Stories can be interactively filtered by their duration and connectivity in order to be explored in full detail. To demonstrate the system’s capabilities for detailed dynamic text stream exploration, we present a use case with real news data about the Arabic Uprising in 2011.
27

Mahmood, Raghad Raied, et al. "Currency Detection for Visually Impaired Iraqi Banknote as a Study Case". Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 6 (April 5, 2021): 2940–48. http://dx.doi.org/10.17762/turcomat.v12i6.6078.

Abstract:
It is relatively simple for a sighted person to interpret and understand every banknote, but one of the major problems for visually impaired people is money recognition, especially for paper currency. Since money plays such an important role in our everyday lives and is required for every business transaction, real-time detection and recognition of banknotes is a necessity for blind or visually impaired people. For that purpose, we propose a real-time object detection system to help visually impaired people in their daily business transactions. Images of Iraqi banknotes are first collected under different conditions and then augmented with different geometric transformations to make the system robust. These augmented images are annotated manually using the "LabelImg" program, from which training and validation image sets are prepared. We use the YOLOv3 real-time object detection algorithm, trained on the custom Iraqi banknote dataset, for detection and recognition of banknotes. The banknote's label is then converted into audio using Google Text-to-Speech (gTTS), which is the expected output. The performance of the trained model is evaluated on a test dataset and real-time live video. The test results demonstrate that the proposed method can detect and recognize Iraqi paper money with a high mAP of 97.405% in a short time.
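A detection-plus-speech loop of the kind described can be assembled from OpenCV's Darknet loader and gTTS. The model files, denomination labels, and thresholds below are placeholders, not the authors' trained model.

```python
# Banknote detection with a custom-trained YOLOv3 via OpenCV, announced
# through Google Text-to-Speech (sketch; file names and labels are placeholders).
import cv2
from gtts import gTTS

net = cv2.dnn.readNetFromDarknet("yolov3_banknote.cfg", "yolov3_banknote.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

# hypothetical class list for the custom dataset
classes = ["250 IQD", "500 IQD", "1000 IQD", "5000 IQD",
           "10000 IQD", "25000 IQD", "50000 IQD"]

frame = cv2.imread("banknote.jpg")
class_ids, scores, boxes = model.detect(frame, confThreshold=0.5, nmsThreshold=0.4)

for cid, score in zip(class_ids, scores):
    label = classes[int(cid)]
    print(label, float(score))
    gTTS(text=label, lang="en").save("label.mp3")   # audible output file
```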
28

Myint, Myo, Kenta Yonemori, Akira Yanou, Khin Nwe Lwin, Mamoru Minami, and Shintaro Ishiyama. "Visual Servoing for Underwater Vehicle Using Dual-Eyes Evolutionary Real-Time Pose Tracking". Journal of Robotics and Mechatronics 28, no. 4 (August 19, 2016): 543–58. http://dx.doi.org/10.20965/jrm.2016.p0543.

Abstract:
[Figure: ROV with dual-eyes cameras and 3D marker]
Recently, a number of studies related to underwater vehicles have been conducted worldwide, driven by huge demand in different applications. In this paper, we propose visual servoing for an underwater vehicle using dual-eyes cameras. A new pose estimation scheme based on 3D model-based recognition is proposed for real-time pose tracking to be applied in Autonomous Underwater Vehicles (AUVs). In this method, we use a 3D marker as a passive target that is simple but sufficiently rich in information. A 1-step Genetic Algorithm (GA) is utilized in the pose search process, treated as an optimization, because of its effectiveness, simplicity, and the promising real-time performance of recursive evaluation. The proposed system is implemented in software, and a Remotely Operated Vehicle (ROV) is used as a test-bed. In the experiment, the ROV recognizes the target, estimates the relative pose of the vehicle with respect to the target, and controls the vehicle to be regulated in the desired pose. The PID control concept is adopted for the regulation function. Finally, the robustness of the proposed system is verified in the presence of physical disturbance and when the target object is partially occluded. Experiments are conducted in an indoor pool. Experimental results show recognition accuracy and regulation performance with errors kept at the centimeter level.
29

Sorinas, Jennifer, Maria Dolores Grima, Jose Manuel Ferrandez, and Eduardo Fernandez. "Identifying Suitable Brain Regions and Trial Size Segmentation for Positive/Negative Emotion Recognition". International Journal of Neural Systems 29, no. 02 (February 21, 2019): 1850044. http://dx.doi.org/10.1142/s0129065718500442.

Abstract:
The development of suitable EEG-based emotion recognition systems has become a main target in the last decades for Brain Computer Interface applications (BCI). However, there are scarce algorithms and procedures for real-time classification of emotions. The present study aims to investigate the feasibility of real-time emotion recognition implementation by the selection of parameters such as the appropriate time window segmentation and target bandwidths and cortical regions. We recorded the EEG-neural activity of 24 participants while they were looking and listening to an audiovisual database composed of positive and negative emotional video clips. We tested 12 different temporal window sizes, 6 ranges of frequency bands and 60 electrodes located along the entire scalp. Our results showed a correct classification of 86.96% for positive stimuli. The correct classification for negative stimuli was a little lower (80.88%). The best time window size, from the tested 1 s to 12 s segments, was 12 s. Although more studies are still needed, these preliminary results provide a reliable way to develop accurate EEG-based emotion classification.
30

Murugan, S., and R. Karthika. "A Survey on Traffic Sign Detection Techniques Using Text Mining". Asian Journal of Computer Science and Technology 8, S1 (February 5, 2019): 21–24. http://dx.doi.org/10.51983/ajcst-2019.8.s1.1975.

Abstract:
Traffic Sign Detection and Recognition (TSDR) technique is a critical step for ensuring vehicle safety. This paper provides a comprehensive survey on traffic sign detection and recognition system based on image and video data. The main focus is to present the current trends and challenges in the field of developing an efficient TSDR system. The ultimate aim of this survey is to analyze the various techniques for detecting traffic signs in real time applications. Image processing is a prominent research area, where multiple technologies are associated to convert an image into digital form and perform some functions on it, in order to get an enhanced image or to extract some useful information from it.
31

Rosalina, Rosalina, Johanes Parlindungan Hutagalung, and Genta Sahuri. "Hiragana Handwriting Recognition Using Deep Neural Network Search". International Journal of Interactive Mobile Technologies (iJIM) 14, no. 01 (January 20, 2020): 161. http://dx.doi.org/10.3991/ijim.v14i01.11593.

Abstract:
These days there is a huge demand for storing the information available in paper documents on a computer storage disk. Digitizing manually filled forms leads to handwriting recognition, the process of translating handwriting into machine-editable text. The main objective of this research is to create an Android application able to recognize and predict the output of handwritten characters by training a neural network model. This research implements a deep neural network for handwritten text recognition, in particular of digits, the Latin alphabet, and Hiragana; captures an image or chooses one from the gallery to scan for handwritten text; uses the live camera to detect handwritten text in real time without capturing an image; and can copy the output of the offline recognition and share it to other platforms such as notes, email, and social media.
32

Rai, Laxmisha, and Hong Li. "MyOcrTool: Visualization System for Generating Associative Images of Chinese Characters in Smart Devices". Complexity 2021 (May 7, 2021): 1–14. http://dx.doi.org/10.1155/2021/5583287.

Abstract:
The majority of Chinese characters are pictographic characters with strong associative ability, and when a character appears, Chinese readers usually immediately associate it with the objects or actions related to it. Against this background, we propose a system to visualize simplified Chinese characters, so that developing reading or writing skills for Chinese characters is not necessary. Considering the extensive use of mobile devices, automatic identification of Chinese characters and display of associative images are made possible in smart devices to facilitate a quick overview of a Chinese text. This work is of practical significance for the research and development of real-time Chinese text recognition and display of associative images, and for users who would like to visualize text with images alone. The proposed Chinese character recognition system and visualization tool, named MyOcrTool, is developed for the Android platform. The application recognizes Chinese characters through an OCR engine, uses the internal voice playback interface to realize audio functions, and displays the visual images of Chinese characters in real time.
33

Wei, Qiang, Yukun Chen, Mandana Salimi, Joshua C. Denny, Qiaozhu Mei, Thomas A. Lasko, Qingxia Chen, et al. "Cost-aware active learning for named entity recognition in clinical text". Journal of the American Medical Informatics Association 26, no. 11 (July 11, 2019): 1314–22. http://dx.doi.org/10.1093/jamia/ocz102.

Abstract:
Objective: Active Learning (AL) attempts to reduce annotation cost (ie, time) by selecting the most informative examples for annotation. Most approaches tacitly (and unrealistically) assume that the cost for annotating each sample is identical. This study introduces a cost-aware AL method, which simultaneously models both the annotation cost and the informativeness of the samples and evaluates both via simulation and user studies.
Materials and Methods: We designed a novel, cost-aware AL algorithm (Cost-CAUSE) for annotating clinical named entities; we first utilized lexical and syntactic features to estimate annotation cost, then we incorporated this cost measure into an existing AL algorithm. Using the 2010 i2b2/VA data set, we then conducted a simulation study comparing Cost-CAUSE with noncost-aware AL methods, and a user study comparing Cost-CAUSE with passive learning.
Results: Our cost model fit empirical annotation data well, and Cost-CAUSE increased the simulation area under the learning curve (ALC) scores by up to 5.6% and 4.9%, compared with random sampling and alternate AL methods. Moreover, in a user annotation task, Cost-CAUSE outperformed passive learning on the ALC score and reduced annotation time by 20.5%–30.2%.
Discussion: Although AL has proven effective in simulations, our user study shows that a real-world environment is far more complex. Other factors have a noticeable effect on the AL method, such as the annotation accuracy of users, the tiredness of users, and even the physical and mental condition of users.
Conclusion: Cost-CAUSE saves significant annotation cost compared to random sampling.
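The core cost-aware idea, ranking unlabeled samples by informativeness per unit of predicted annotation cost, can be sketched as follows; this is a simplified illustration, not the Cost-CAUSE algorithm itself.

```python
# Sketch of cost-aware sample selection: informativeness (entropy) divided
# by estimated annotation cost (a simplification of the paper's idea).
import numpy as np

def select_batch(probs, est_cost, k=10):
    # probs: (n, n_classes) model probabilities for unlabeled samples
    # est_cost: (n,) predicted annotation time per sample (e.g., estimated
    # from lexical/syntactic features, as the paper does)
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)  # informativeness
    utility = entropy / est_cost            # information gained per second
    return np.argsort(utility)[::-1][:k]    # top-k samples to annotate next

probs = np.random.dirichlet(np.ones(5), size=100)   # placeholder predictions
cost = np.random.uniform(5, 60, size=100)           # assumed seconds per sample
print(select_batch(probs, cost))
```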
34

Demner-Fushman, Dina, Willie J. Rogers, and Alan R. Aronson. "MetaMap Lite: an evaluation of a new Java implementation of MetaMap". Journal of the American Medical Informatics Association 24, no. 4 (January 27, 2017): 841–44. http://dx.doi.org/10.1093/jamia/ocw177.

Abstract:
MetaMap is a widely used named entity recognition tool that identifies concepts from the Unified Medical Language System Metathesaurus in text. This study presents MetaMap Lite, an implementation of some of the basic MetaMap functions in Java. On several collections of biomedical literature and clinical text, MetaMap Lite demonstrated real-time speed and precision, recall, and F1 scores comparable to or exceeding those of MetaMap and other popular biomedical text processing tools, clinical Text Analysis and Knowledge Extraction System (cTAKES) and DNorm.
35

Gobron, Stephane, Junghyun Ahn, Georgios Paltoglou, Michael Thelwall, and Daniel Thalmann. "From sentence to emotion: a real-time three-dimensional graphics metaphor of emotions extracted from text". Visual Computer 26, no. 6-8 (April 7, 2010): 505–19. http://dx.doi.org/10.1007/s00371-010-0446-x.

36

Wijaya, Novan. "Capital Letter Pattern Recognition in Text to Speech by Way of Perceptron Algorithm". Knowledge Engineering and Data Science 1, no. 1 (December 31, 2017): 26. http://dx.doi.org/10.17977/um018v1i12018p26-32.

Abstract:
Computer vision transforms data retrieved or generated from a webcam into another form in order to support decisions; all such transformations are carried out to attain specific aims. One of the supporting techniques for implementing computer vision in a system is digital image processing, whose objective is to transform a picture into a digital format so it can be processed by a computer. Computer vision and digital image processing are implemented here in a system for recognizing capital letters and reading handwriting on a whiteboard in real time, supported by an artificial neural network model, the perceptron algorithm, used as the learning technique by which the system learns to recognize the letters. The system captures the letter pattern using a webcam, generating a continuous image stream that is transformed into digital images and processed with techniques such as grayscale conversion, thresholding, and cropping.
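A sketch of the described scheme, grayscale conversion, thresholding, and cropping followed by a single-layer perceptron over pixel features; the training image files are hypothetical.

```python
# Preprocessing (grayscale, threshold, crop) plus a single-layer perceptron
# over pixel features (sketch of the described scheme, not the paper's code).
import numpy as np
import cv2
from sklearn.linear_model import Perceptron

def preprocess(path, size=(20, 20)):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    ys, xs = np.where(bw > 0)
    crop = bw[ys.min():ys.max() + 1, xs.min():xs.max() + 1]  # crop to the letter
    return cv2.resize(crop, size).flatten() / 255.0

# hypothetical training images of capital letters
X = np.array([preprocess(p) for p in ["A1.png", "B1.png", "C1.png"]])
y = np.array(["A", "B", "C"])
clf = Perceptron(max_iter=1000).fit(X, y)
print(clf.predict([preprocess("unknown.png")]))
```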
37

Petrovic, Vladimir, and Jelena Popovic-Bozovic. "A method for real-time memory efficient implementation of blob detection in large images". Serbian Journal of Electrical Engineering 14, no. 1 (2017): 67–84. http://dx.doi.org/10.2298/sjee1701067p.

Abstract:
In this paper we propose a method for real-time blob detection in large images with low memory cost. The method is suitable for implementation on the specialized parallel hardware such as multi-core platforms, FPGA and ASIC. It uses parallelism to speed-up the blob detection. The input image is divided into blocks of equal sizes to which the maximally stable extremal regions (MSER) blob detector is applied in parallel. We propose the usage of multiresolution analysis for detection of large blobs which are not detected by processing the small blocks. This method can find its place in many applications such as medical imaging, text recognition, as well as video surveillance or wide area motion imagery (WAMI). We explored the possibilities of usage of detected blobs in the feature-based image alignment as well. When large images are processed, our approach is 10 to over 20 times more memory efficient than the state of the art hardware implementation of the MSER.
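The block-wise MSER idea can be illustrated with OpenCV. The sketch below processes the tiles sequentially, whereas the paper's point is that each tile is an independent, parallelizable task; the block size and file name are assumptions.

```python
# Block-wise MSER blob detection over equal-sized tiles, as the paper
# proposes (sequential here; each tile could be a parallel task).
import cv2

img = cv2.imread("large_image.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
mser = cv2.MSER_create()
tile = 512                                     # block size (assumed)

blobs = []
for y in range(0, img.shape[0], tile):
    for x in range(0, img.shape[1], tile):
        block = img[y:y + tile, x:x + tile]
        regions, _ = mser.detectRegions(block)
        # shift region coordinates back into the full-image frame
        blobs += [r + (x, y) for r in regions]

print(f"{len(blobs)} MSER regions detected across all blocks")
```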
38

Sani, Dian Ahkam, and Muchammad Saifulloh. "Speech to Text Processing for Interactive Agent of Virtual Tour Navigation". International Journal of Artificial Intelligence & Robotics (IJAIR) 1, no. 1 (October 31, 2019): 31. http://dx.doi.org/10.25139/ijair.v1i1.2030.

Abstract:
Advances in science and technology are changing how humans interact with computers; one such method is voice input. Conversion of sound into text with the backpropagation method is realized through feature extraction, in particular with Linear Predictive Coding (LPC). LPC is one way to represent the signal and obtain the features of each sound pattern. In brief, this speech recognition system works by taking human voice input through a microphone (an analog signal), which is then sampled at 8000 Hz with the assistance of the computer's sound card so that it becomes a digital signal. The digital samples then enter initial processing using LPC, yielding several LPC coefficients. The LPC outputs are trained using the backpropagation learning method, and each learned result is associated with a word and stored in a database. The resulting recognition program can display the voice plots; in real-time testing, speech recognition accuracy for respondents in the database was 80% across the 100 test samples.
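The LPC-features-into-backpropagation pipeline can be sketched with librosa and scikit-learn. Frame sizes, LPC order, and file names are assumptions, and MLPClassifier stands in for the paper's backpropagation network.

```python
# LPC feature extraction per frame, fed to a backpropagation-trained network
# (sketch of the described pipeline; sizes and files are assumed).
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

def lpc_features(y, frame=240, hop=120, order=12):
    feats = []
    for start in range(0, len(y) - frame, hop):
        seg = y[start:start + frame] * np.hamming(frame)   # windowed frame
        feats.append(librosa.lpc(seg, order=order)[1:])    # drop the leading 1.0
    return np.array(feats)

# hypothetical recordings of two spoken words, sampled at 8000 Hz
word_a, _ = librosa.load("word_a.wav", sr=8000)
word_b, _ = librosa.load("word_b.wav", sr=8000)
fa, fb = lpc_features(word_a), lpc_features(word_b)
X = np.vstack([fa, fb])
y = np.array([0] * len(fa) + [1] * len(fb))    # word labels per frame

net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000).fit(X, y)
print(net.score(X, y))
```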
39

Zou, Yunlong, Xiangyu Liu, Hongyan Xu, Yingzhe Hou, and Jialiang Qi. "Design of Intelligent Customer Service Report System Based on Automatic Speech Recognition and Text Classification". E3S Web of Conferences 295 (2021): 01064. http://dx.doi.org/10.1051/e3sconf/202129501064.

Abstract:
In combination with features such as intensive labor and speech in the customer service report field, this paper discusses the design of a customer service report system based on artificial intelligence automatic speech recognition technology and big data text classification technology. The proposed system realizes functions like a flat IVR menu, quick transcription and input of work orders, dynamic tracking of failure hotspots, automatic classification and accumulation of the knowledge base, speech emotion detection and real-time supervision of service quality, and it can improve the user experience and reduce the labor strengths of customer service staff. The automatically accumulated knowledge base can further assist with feedback to resolve the difficult problem that the emerging intelligent network Q&A and intelligent robots rely on a manually summarized knowledge base.
40

Fiorentino, Michele, Saverio Debernardis, Antonio E. Uva, and Giuseppe Monno. "Augmented Reality Text Style Readability with See-Through Head-Mounted Displays in Industrial Context". Presence: Teleoperators and Virtual Environments 22, no. 2 (August 1, 2013): 171–90. http://dx.doi.org/10.1162/pres_a_00146.

Abstract:
The application of augmented reality in industrial environments requires an effective visualization of text on a see-through head-mounted display (HMD). The main contribution of this work is an empirical study of text styles as viewed through a monocular optical see-through display on three real workshop backgrounds, examining four colors and four different text styles. We ran 2,520 test trials with 14 participants using a mixed design and evaluated completion time and error rates. We found that both presentation mode and background influence the readability of text, but there is no interaction effect between these two variables. Another interesting aspect is that the presentation mode differentially influences completion time and error rate. The present study allows us to draw some guidelines for an effective use of AR text visualization in industrial environments. We suggest maximum contrast when reading time is important, and the use of colors to reduce errors. We also recommend a colored billboard with transparent text where colors have a specific meaning.
41

Xing, Xuemin, Debao Wen, Hsing-Chung Chang, Li Fu Chen, and Zhi Hui Yuan. "Highway Deformation Monitoring Based on an Integrated CRInSAR Algorithm — Simulation and Real Data Validation". International Journal of Pattern Recognition and Artificial Intelligence 32, no. 11 (July 24, 2018): 1850036. http://dx.doi.org/10.1142/s0218001418500362.

Abstract:
Long-term surface deformation monitoring of highways is crucial to prevent potential hazards and ensure the safety of a sustainable transportation system. The DInSAR technique shows great advantages for ground movement monitoring compared with traditional geodetic survey methods. However, the unavoidable influences of temporal and spatial decorrelation restrict traditional DInSAR in monitoring the deformation of ribbon infrastructure. In addition, PS and SBAS techniques are not suitable for areas where enough natural high-coherence points cannot be detected. We therefore designed an integrated highway deformation monitoring algorithm based on the CRInSAR technique, with a processing flow that includes Corner Reflector (CR) identification, CR baseline network establishment, phase unwrapping, and time series highway deformation estimation. Both simulated and real data experiments are conducted to assess and validate the algorithm. In the simulated scenario, 10 different noise levels are added to test performance under different circumstances; the RMSE of the linear deformation velocities at each noise level is obtained and analyzed to investigate how accuracy varies with noise. In the real data experiment, part of a highway in Henan, China is chosen as the test area. Six PALSAR images acquired from 22 December 2008 to 09 February 2010 were collected, and 12 CR points were installed along the highway. The estimated time series deformation shows that all the CR points are stable. CR04 is undergoing the most serious subsidence, with a maximum magnitude of 13.71 mm over 14 months. Field leveling measurements are used to assess the external deformation accuracy; the final RMSE is estimated to be [Formula: see text] mm, which indicates good accordance with the leveling results.
42

Islam, Kh, Sudanthi Wijewickrema, Ram Raj, and Stephen O'Leary. "Street Sign Recognition Using Histogram of Oriented Gradients and Artificial Neural Networks". Journal of Imaging 5, no. 4 (April 3, 2019): 44. http://dx.doi.org/10.3390/jimaging5040044.

Abstract:
Street sign identification is an important problem in applications such as autonomous vehicle navigation and aids for individuals with vision impairments. It can be especially useful in instances where navigation techniques such as global positioning system (GPS) are not available. In this paper, we present a method of detection and interpretation of Malaysian street signs using image processing and machine learning techniques. First, we eliminate the background from an image to segment the region of interest (i.e., the street sign). Then, we extract the text from the segmented image and classify it. Finally, we present the identified text to the user as a voice notification. We also show through experimental results that the system performs well in real-time with a high level of accuracy. To this end, we use a database of Malaysian street sign images captured through an on-board camera.
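The HOG-plus-network combination is straightforward to reproduce in outline. The sketch below uses random placeholder data, and its HOG parameters are common defaults rather than the paper's tuned values.

```python
# HOG descriptors plus a small neural-network classifier, the combination
# the paper applies to segmented street signs (sketch with placeholder data).
import numpy as np
from skimage.feature import hog
from sklearn.neural_network import MLPClassifier

def describe(image):
    # 9-bin HOG over 8x8 cells with 2x2 block normalization (common defaults)
    return hog(image, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))

# placeholder 64x64 grayscale crops of segmented signs
images = np.random.rand(20, 64, 64)
labels = np.random.randint(0, 4, 20)      # four hypothetical sign classes

X = np.array([describe(im) for im in images])
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000).fit(X, labels)
print(clf.predict(X[:3]))
```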
43

Lin, Xu, and Gao Wen. "Human-Computer Chinese Sign Language Interaction System". International Journal of Virtual Reality 4, no. 3 (January 1, 2000): 82–92. http://dx.doi.org/10.20870/ijvr.2000.4.3.2651.

Abstract:
The generation and recognition of body language are key technologies of VR. Sign language is a visual-gestural language mainly used by hearing-impaired people. In this paper, gesture and facial expression models are created using computer graphics and used to synthesize Chinese Sign Language (CSL), and from them a human-computer CSL interaction system is implemented. Using a system combining CSL synthesis and CSL recognition subsystems, hearing-impaired people wearing data gloves can pantomime CSL, which is then displayed on the computer screen in real time and translated into Chinese text. Hearing people can also use the system by entering Chinese text, which is translated into CSL and displayed on the computer screen. In this way, hearing-impaired and hearing people can communicate with each other conveniently.
44

Bongao, Melchiezhedhieck J., Arvin F. Almadin, Christian L. Falla, Juan Carlo F. Greganda, Steven Valentino E. Arellano, and Phillip Amir M. Esguerra. "SBC Based Object and Text Recognition Wearable System using Convolutional Neural Network with Deep Learning Algorithm". International Journal of Recent Technology and Engineering (IJRTE) 10, no. 3 (September 30, 2021): 198–205. http://dx.doi.org/10.35940/ijrte.c6474.0910321.

Abstract:
This Raspberry Pi single-board-computer-based wearable device for real-time object and text recognition uses a convolutional neural network built with TensorFlow deep learning, the Python and C++ programming languages, and an SQLite database. It detects stationary objects, road signs, and Philippine peso (PHP) banknotes, recognizes text through a camera, and translates the results into audible output in English and Filipino. Moreover, the system has a battery status notification implemented with an Arduino microcontroller unit, and a switch for object detection mode, text recognition mode, and battery status report mode. This compensates for the inability of visually impaired users to identify objects and read print, and reduces the assistance they need. Descriptive quantitative research, the Waterfall System Development Life Cycle, and Evolutionary Prototyping models were used as the methodologies of this study. Visually impaired persons and the Persons with Disability Affairs Office of the City Government of Biñan, Laguna, Philippines served as the main respondents of the survey conducted. The results showed that the object detection, text recognition, and related functions were accurate and reliable, distinguishing the system from current approaches to object detection and printed-text recognition for visually impaired people.
45

Hu, Guohua, Pascal Feldhaus, Yuwu Feng, Shengjie Wang, Juan Zheng, Huimin Duan and Juanjuan Gu. "Accuracy Improvement of Indoor Real-Time Location Tracking Algorithm for Smart Supermarket Based on Ultra-Wideband". International Journal of Pattern Recognition and Artificial Intelligence 33, no. 12 (November 2019): 2058004. http://dx.doi.org/10.1142/s0218001420580045.

Annotation:
Collecting data such as location information is an essential part of concepts like the Internet of Things (IoT) and Industry 4.0. With the development of precise localization and integrated navigation systems, indoor location technology has received increasing attention and become a hot research topic. Common indoor location techniques are mainly based on wireless local area networks, radio frequency tags, ZigBee, Bluetooth, infrared, and ultra-wideband (UWB). However, these techniques are vulnerable to various noise signals, and their positioning accuracy is easily degraded by complicated indoor environments. In this paper, we study real-time location tracking based on UWB in an indoor environment. We propose a combinational filtering algorithm and an improved two-way ranging (ITWR) method for indoor real-time location tracking. Simulation results show that the presented algorithm is real-time capable and improves location accuracy. Experiments applying the combinational algorithm and the ITWR method to the positioning and navigation of a smart supermarket achieved quite good positioning accuracy: the average positioning error is less than 10 cm, and some of the improvements raise the positioning accuracy by 17.5%. UWB is thus a suitable method for indoor real-time location tracking and has important theoretical value and practical significance.
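The paper's improved two-way ranging (ITWR) method is not spelled out in the abstract, but it builds on standard single-sided two-way ranging, in which the distance follows from the round-trip and reply times as sketched below.

    # Standard single-sided two-way ranging (baseline, not the ITWR variant).
    C = 299_792_458.0  # speed of light in m/s

    def twr_distance(t_round, t_reply):
        # t_round: initiator's poll-to-response round-trip time (s)
        # t_reply: responder's internal processing delay (s)
        time_of_flight = (t_round - t_reply) / 2.0
        return C * time_of_flight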
46

Vantuch, Tomáš, Michal Prílepok, Jan Fulneček, Roman Hrbáč and Stanislav Mišák. "Towards the Text Compression Based Feature Extraction in High Impedance Fault Detection". Energies 12, no. 11 (05.06.2019): 2148. http://dx.doi.org/10.3390/en12112148.

Annotation:
High-impedance faults on medium-voltage overhead lines with covered conductors can be identified by the presence of partial discharges. Although this has been a subject of research for more than 60 years, online partial discharge detection remains a challenge, especially in environments with heavy background noise. In this paper, a new approach for partial discharge pattern recognition is presented. All results were obtained on data acquired from a real 22 kV medium-voltage overhead power line with covered conductors. The proposed method is based on a text compression algorithm and serves as a signal similarity estimate, applied for the first time to partial discharge patterns. Its relevance is examined with three different variations of a classification model. The improvement gained over an already deployed model demonstrates its quality.
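A common way to turn a compressor into a similarity estimate, as the abstract describes, is the normalized compression distance (NCD); the sketch below uses zlib as an illustrative stand-in for the paper's compression algorithm.

    # Normalized compression distance: near 0 for very similar inputs,
    # approaching 1 for unrelated ones. zlib is an illustrative choice.
    import zlib

    def csize(data: bytes) -> int:
        return len(zlib.compress(data, 9))

    def ncd(x: bytes, y: bytes) -> float:
        cx, cy, cxy = csize(x), csize(y), csize(x + y)
        return (cxy - min(cx, cy)) / max(cx, cy)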
47

Gini, Giuseppina, Lisa Mazzon, Simone Pontiggia and Paolo Belluco. "A Classifier of Shoulder Movements for a Wearable EMG-Based Device". Journal of Medical Robotics Research 02, no. 02 (17.03.2017): 1740003. http://dx.doi.org/10.1142/s2424905x17400037.

Annotation:
Prostheses and exoskeletons need a control system able to rapidly understand user intentions; a noninvasive approach is to deploy a myoelectric system and a pattern recognition method that classifies the intended movement as input to the controller. Here we focus on the classification phase. Our first aim is to recognize nine movements of the shoulder, a body part seldom considered in the literature and difficult to treat, since the muscles involved are deep. We show that our novel sEMG two-phase classifier, working on a signal window of 500 ms with a 62 ms increment, has 97.7% accuracy on nine movements and about 100% accuracy on five movements. After developing the classifier using professionally collected sEMG data from eight channels, our second aim is to implement it on a wearable device composed of the Intel Edison board and a three-channel experimental portable acquisition board. Our final aim is to develop a complete classifier for dynamic situations, considering the transitions between movements and the real-time constraints. The performance of the classifier using three channels is about 96.9%, the classification frequency is 62 Hz, and the computation time is 16 ms, far below the real-time constraint of 300 ms.
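The windowing scheme reported above (500 ms windows advanced every 62 ms) can be sketched as follows; the RMS-per-channel feature is an illustrative choice, not necessarily the paper's actual feature set.

    # Slide a 500 ms window in 62 ms steps over a multi-channel sEMG
    # recording and emit one feature vector (RMS per channel) per window.
    import numpy as np

    def emg_windows(signal, fs, win_ms=500, step_ms=62):
        # signal: (n_samples, n_channels) array; fs: sampling rate in Hz
        win = int(fs * win_ms / 1000)
        step = int(fs * step_ms / 1000)
        for start in range(0, signal.shape[0] - win + 1, step):
            window = signal[start:start + win]
            yield np.sqrt(np.mean(window ** 2, axis=0))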
48

Chou, Jui-Sheng, and Chia-Hsuan Liu. "Automated Sensing System for Real-Time Recognition of Trucks in River Dredging Areas Using Computer Vision and Convolutional Deep Learning". Sensors 21, no. 2 (14.01.2021): 555. http://dx.doi.org/10.3390/s21020555.

Annotation:
Sand theft and illegal mining in river dredging areas have been a problem in recent decades. Increasing the use of artificial intelligence in dredging areas, building automated monitoring systems, and reducing human involvement can therefore effectively deter crime and lighten the workload of security guards. In this investigation, a smart dredging construction site system was developed using automated techniques arranged to suit various areas. The initial aim was to automate the audit work at the control point that manages trucks in river dredging areas. Images of dump trucks entering the control point were captured using monitoring equipment in the construction area. The obtained images and the deep learning technique YOLOv3 were used to detect the positions of the vehicle license plates. Cropped images of the license plates were then used as input to an image classification model, C-CNN-L3, to identify the number of characters on the plate. Based on the classification results, the plate images were passed to a text recognition model, R-CNN-L3, corresponding to that character count. Finally, the models of each stage were integrated into a real-time truck license plate recognition (TLPR) system; the single-character recognition rate was 97.59%, the overall recognition rate was 93.73%, and the speed was 0.3271 s/image. The TLPR system reduces the labor and time spent identifying license plates, effectively reducing the probability of crime and increasing the transparency, automation, and efficiency of frontline personnel's work. TLPR is the first step toward an automated operation for managing trucks at the control point, and the ongoing development of system functions can advance dredging operations toward the goal of a smart construction site. By providing a vehicle LPR system intended to support intelligent and highly efficient management by dredging-related departments, this paper contributes an objective approach to the current body of knowledge.
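The three-stage structure of the TLPR system amounts to the wiring sketched below; the three callables stand in for the YOLOv3, C-CNN-L3, and R-CNN-L3 models, whose internals are not given in the abstract.

    # Staged pipeline sketch; the three model callables are placeholders.
    def tlpr(frame, detect_plate, count_characters, recognize_characters):
        plate = detect_plate(frame)                  # YOLOv3: locate the plate
        if plate is None:
            return None
        n_chars = count_characters(plate)            # C-CNN-L3: character count
        return recognize_characters(plate, n_chars)  # R-CNN-L3: read the plate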
49

Zhou, Chaoran, Hang Yang, Jianping Zhao and Xin Zhang. "POI Classification Method Based on Feature Extension and Deep Learning". Journal of Advanced Computational Intelligence and Intelligent Informatics 24, no. 7 (20.12.2020): 944–52. http://dx.doi.org/10.20965/jaciii.2020.p0944.

Annotation:
The automatic classification of point-of-interest (POI) function types based on POI name texts and intelligent computing can support travel recommendation, map information queries, urban function division, and other services. However, POI names are short texts with few characters and sparse features, so it is difficult to guarantee the model's feature learning ability and classification performance when distinguishing POI function types. This paper proposes a POI classification method based on feature extension and deep learning to establish a short-text classification model. We use an Internet search engine as an external knowledge base to introduce real-time, large-scale text feature information into the original POI text, addressing the sparsity of POI name features. The input text is represented by an attention calculation matrix, used to reduce the noise of the extended text, together with the word-embedding matrix of the original text. We use a convolutional neural network, with its strong local feature extraction ability, to establish the classification model. Experimental results on a real-world dataset (obtained from Baidu) show the excellent performance of our model on POI classification tasks compared with other baseline models.
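The CNN classifier described above belongs to the TextCNN family; a minimal Keras sketch is shown below, omitting the paper's search-engine feature extension and attention weighting.

    # Minimal TextCNN for short-text classification (illustrative only).
    from tensorflow.keras import layers, models

    def build_textcnn(vocab_size, seq_len, n_classes, emb_dim=128):
        inp = layers.Input(shape=(seq_len,))
        x = layers.Embedding(vocab_size, emb_dim)(inp)
        convs = [layers.GlobalMaxPooling1D()(
                     layers.Conv1D(64, k, activation="relu")(x))
                 for k in (2, 3, 4)]   # local n-gram features of width 2-4
        x = layers.Concatenate()(convs)
        out = layers.Dense(n_classes, activation="softmax")(x)
        return models.Model(inp, out)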
50

Rahim, Md Abdur, Md Rashedul Islam and Jungpil Shin. "Non-Touch Sign Word Recognition Based on Dynamic Hand Gesture Using Hybrid Segmentation and CNN Feature Fusion". Applied Sciences 9, no. 18 (10.09.2019): 3790. http://dx.doi.org/10.3390/app9183790.

Annotation:
Hand gesture-based sign language recognition is a promising application of human–computer interaction (HCI), in which deaf and hard-of-hearing people and their family members communicate with the help of a computer device. To help the deaf community, this paper presents a non-touch sign word recognition system that translates the gesture of a sign word into text. However, uncontrolled environments, diverse lighting, and partial occlusion can greatly affect the reliability of hand gesture recognition. From this point of view, a hybrid segmentation technique combining YCbCr and SkinMask segmentation is developed to identify the hand, and features are extracted using feature fusion in a convolutional neural network (CNN). The YCbCr stage performs image conversion, binarization, erosion, and finally hole filling to obtain the segmented images; SkinMask images are obtained by matching the color of the hand. Finally, a multiclass SVM classifier is used to classify the hand gestures of a sign word. The signs of twenty common words are evaluated in real time, and the test results confirm that this system not only obtains better-segmented images but also achieves a higher recognition rate than conventional ones.
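The YCbCr stage described above (conversion, thresholding, erosion, hole filling) is commonly implemented as below; the Cb/Cr bounds are standard literature values, not necessarily those used in the paper.

    # YCbCr skin segmentation sketch with morphological cleanup.
    import cv2
    import numpy as np

    def skin_mask_ycbcr(frame_bgr):
        ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
        # Keep pixels with Cr in [133, 173] and Cb in [77, 127].
        mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
        mask = cv2.erode(mask, np.ones((3, 3), np.uint8))       # erosion
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE,
                                np.ones((5, 5), np.uint8))      # fill holes
        return cv2.bitwise_and(frame_bgr, frame_bgr, mask=mask)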