Selection of scholarly literature on the topic "Handwritten characters"

Create a reference in APA, MLA, Chicago, Harvard, and other citation styles

Select a type of source:

Browse the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Handwritten characters".

Next to every work in the list of references there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online abstract, provided the relevant parameters are available in the metadata.

Journal articles on the topic "Handwritten characters"

1

Jehangir, Sardar, Sohail Khan, Sulaiman Khan, Shah Nazir und Anwar Hussain. „Zernike Moments Based Handwritten Pashto Character Recognition Using Linear Discriminant Analysis“. Mehran University Research Journal of Engineering and Technology 40, Nr. 1 (01.01.2021): 152–59. http://dx.doi.org/10.22581/muet1982.2101.14.

Abstract:
This paper presents an efficient Optical Character Recognition (OCR) system for offline isolated Pashto character recognition. Developing an OCR system for handwritten characters is challenging because handwritten characters vary in both shape and style and, most of the time, also vary between individuals. Identifying handwritten Pashto letters is made even harder by the unavailability of a standard handwritten Pashto character database. For experimental and simulation purposes, a handwritten Pashto character database was developed by collecting samples from university students on A4-sized pages. The collected samples were then scanned, trimmed and preprocessed to form a medium-sized database of 14,784 handwritten Pashto character images (336 distinct handwritten samples for each of the 44 characters in the Pashto script). Zernike moments are used as the feature extractor of the proposed OCR system to describe each individual character, and Linear Discriminant Analysis (LDA) serves as the recognition tool operating on the resulting feature map. The applicability of the proposed system is tested with 10-fold cross-validation, and an overall accuracy of 63.71% is obtained for handwritten isolated Pashto characters.
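Below is a minimal, hedged sketch (not the authors' code) of the pipeline this abstract describes: Zernike-moment features fed to an LDA classifier and scored with 10-fold cross-validation. The mahotas and scikit-learn libraries, the moment radius and degree, and the binarized input images are assumptions made for illustration.

```python
import numpy as np
import mahotas  # assumed third-party library providing Zernike moments
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold, cross_val_score

def zernike_features(images, radius=32, degree=8):
    """One Zernike-moment feature vector per binary character image (2-D array)."""
    return np.array([mahotas.features.zernike_moments(img, radius, degree)
                     for img in images])

def evaluate(images, labels):
    """10-fold cross-validated accuracy of LDA on Zernike features."""
    X = zernike_features(images)
    clf = LinearDiscriminantAnalysis()
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    return cross_val_score(clf, X, np.array(labels), cv=cv).mean()
```
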
2

Zhu, Cheng Hui, Wen Jun Xu, Jian Ping Wang und Xiao Bing Xu. „Research on a Characteristic Extraction Algorithm Based on Analog Space-Time Process for Off-Line Handwritten Chinese Characters“. Advanced Materials Research 433-440 (Januar 2012): 3649–55. http://dx.doi.org/10.4028/www.scientific.net/amr.433-440.3649.

Abstract:
In the absence of space-time information, it is difficult to extract character stroke features from an off-line handwritten Chinese character image. A feature extraction algorithm based on an analog space-time process using a process neural network is proposed. The handwritten Chinese character image is transformed into a geometric shape characterised by the types, numbers, locations, orders and structures of the Chinese character strokes. By extracting fault-tolerant features for the five kinds of off-line handwritten Chinese characters, a data-knowledge table of features is constructed. The parameters of the process neural network are optimized by Particle Swarm Optimization (PSO). Simulation experiments are carried out on handwritten Chinese characters from SCUT-IRAC-HCCLIB. The experimental results show that the algorithm exhibits a strong ability to recognize handwritten Chinese characters.
3

Khan, Sulaiman, Habib Ullah Khan und Shah Nazir. „Offline Pashto Characters Dataset for OCR Systems“. Security and Communication Networks 2021 (27.07.2021): 1–7. http://dx.doi.org/10.1155/2021/3543816.

Abstract:
In computer vision and artificial intelligence, image-based text recognition and analysis play a key role in text retrieval. Enabling a machine learning technique to recognize the handwritten characters of a specific language requires a standard dataset. Acceptable handwritten character datasets are available for many languages, including English and Arabic. However, the lack of datasets for handwritten Pashto characters hinders the application of suitable machine learning algorithms. To address this issue, this study presents the first handwritten Pashto characters image dataset (HPCID) for scientific research. The dataset consists of fourteen thousand, seven hundred and eighty-four samples: 336 samples for each of the 44 characters in the Pashto alphabet. The handwritten samples were collected on A4-sized paper from students of the Pashto Department at the University of Peshawar, Khyber Pakhtunkhwa, Pakistan. In total, 336 students and faculty members contributed to the database accumulation phase. The dataset contains characters of multiple sizes, fonts, styles and structures.
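As a rough illustration of how a dataset with this layout (44 character classes with 336 samples each, 14,784 images in total) might be loaded, here is a hypothetical sketch; the per-class directory structure and PNG file format are assumptions, not something the paper specifies.

```python
from pathlib import Path
import cv2  # assumed: samples stored as individual grayscale PNG files

def load_character_dataset(root):
    """Expects root/<character_label>/<sample>.png; returns parallel lists of images and labels."""
    images, labels = [], []
    for class_dir in sorted(p for p in Path(root).iterdir() if p.is_dir()):
        for img_path in sorted(class_dir.glob("*.png")):
            images.append(cv2.imread(str(img_path), cv2.IMREAD_GRAYSCALE))
            labels.append(class_dir.name)
    return images, labels

# With the layout described in the abstract, len(images) would be 44 * 336 = 14784.
```
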
4

MALIK, LATESH, und P. S. DESHPANDE. „RECOGNITION OF HANDWRITTEN DEVANAGARI SCRIPT“. International Journal of Pattern Recognition and Artificial Intelligence 24, Nr. 05 (August 2010): 809–22. http://dx.doi.org/10.1142/s0218001410008123.

Abstract:
Segmentation of handwritten text into lines, words and characters is one of the important steps in the handwritten text recognition process. In this paper, we propose a float-fill algorithm for segmenting unconstrained Devanagari text into words. A text image is segmented directly into individual words: rectangular boundaries are drawn around the words, and horizontal lines are detected with template matching. A mask designed for detecting the horizontal header line is applied to each word from left to right and top to bottom of the document, and the header lines are removed to separate the characters. New segment-code features are then extracted for each character. We present the results of a multiple-classifier combination for offline handwritten Devanagari characters. The use of regular expressions on handwritten characters is a novel concept, and they are defined so as to be more robust to noise. We achieved an accuracy of 94% for word-level segmentation, 95% for coarse classification and 85% for fine classification in character recognition. In experiments on a dataset of 5000 character samples, the overall recognition rate is 95% when the top five choices are considered. The proposed combined classifier can be applied to handwritten character recognition in other languages such as English, Chinese or Arabic with the same accuracy. For printed characters, we achieved an accuracy of 100% by applying the regular-expression classifier alone.
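The paper locates the Devanagari header line with a template-matching mask; the sketch below shows a simpler, commonly used projection-profile variant of the same idea, included only as an illustration (the band thickness and the use of the densest row are assumptions, not the authors' method).

```python
import numpy as np

def remove_header_line(word_img):
    """word_img: 2-D binary array (1 = ink). Clears the header-line band so characters separate."""
    row_ink = word_img.sum(axis=1)            # horizontal projection profile
    header_row = int(np.argmax(row_ink))      # densest row approximates the header line
    band = max(1, word_img.shape[0] // 20)    # assumed band thickness: about 5% of word height
    out = word_img.copy()
    out[max(0, header_row - band):header_row + band + 1, :] = 0
    return out
```
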
5

Amulya, K., Lakshmi Reddy, M. Chandara Kumar und Rachana D. „A Survey on Digitization of Handwritten Notes in Kannada“. International Journal of Innovative Technology and Exploring Engineering 12, Nr. 1 (30.12.2022): 6–11. http://dx.doi.org/10.35940/ijitee.a9350.1212122.

Abstract:
Recognition of handwritten text is still an unresolved research problem in the field of optical character recognition. This article suggests an efficient method for creating handwritten text recognition systems, a challenging subject that has received a lot of attention recently. Optical character recognition makes it possible to convert many kinds of text or photographs into editable, searchable and analyzable data. Over the past ten years, researchers have used artificial intelligence and machine learning methods to automatically evaluate printed and handwritten documents in order to digitize them. The goals of this review paper are to present research directions and to summarize previous studies on character recognition in handwritten texts. Since different people have different handwriting styles, handwritten characters can be challenging to read. Our "Digitization of handwritten notes" effort aims to categorize and identify characters in the south Indian language Kannada. The characters are extracted from printed texts and pre-processed using NumPy and OpenCV before being fed through a CNN.
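The abstract mentions pre-processing with NumPy and OpenCV before a CNN; a minimal sketch of such a step is shown below. The Otsu binarization, bounding-box crop and 32x32 target size are assumptions, not details taken from the article.

```python
import cv2
import numpy as np

def preprocess(path, size=32):
    """Grayscale, binarize, crop to the ink bounding box, and resize to a CNN-ready array."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    ys, xs = np.nonzero(binary)
    if xs.size:                                # crop to the bounding box of the ink
        binary = binary[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    binary = cv2.resize(binary, (size, size), interpolation=cv2.INTER_AREA)
    return binary.astype(np.float32) / 255.0
```
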
6

Khan, Majid A., Nazeeruddin Mohammad, Ghassen Ben Brahim, Abul Bashar und Ghazanfar Latif. „Writer verification of partially damaged handwritten Arabic documents based on individual character shapes“. PeerJ Computer Science 8 (20.04.2022): e955. http://dx.doi.org/10.7717/peerj-cs.955.

Abstract:
Author verification of handwritten text is required in several application domains and has drawn a lot of attention within the research community due to its importance. Although several approaches have been proposed for text-independent writer verification of handwritten text, none of them addresses the case where author verification must be based on partially damaged handwritten documents (e.g., during forensic analysis). In this paper, we propose an approach for offline text-independent writer verification of handwritten Arabic text based on individual character shapes within the Arabic alphabet. The proposed approach enables writer verification for partially damaged documents in which certain handwritten characters can still be extracted. We also provide a mechanism to identify which Arabic characters are most effective during the writer verification process. We collected a new dataset, Arabic Handwritten Alphabet, Words and Paragraphs Per User (AHAWP), for this purpose in a classroom setting with 82 different users. The dataset consists of 53,199 user-written isolated Arabic characters, 8,144 Arabic words and 10,780 characters extracted from these words. Convolutional neural network (CNN) based models are developed to verify writers from individual characters, with an accuracy of 94% for isolated character shapes and 90% for extracted character shapes. The proposed approach provides up to 95% writer verification accuracy for partially damaged documents.
7

Wijaya, Aditya Surya, Nurul Chamidah und Mayanda Mega Santoni. „Pengenalan Karakter Tulisan Tangan Dengan K-Support Vector Nearest Neighbor“. IJEIS (Indonesian Journal of Electronics and Instrumentation Systems) 9, Nr. 1 (30.04.2019): 33. http://dx.doi.org/10.22146/ijeis.38729.

Abstract:
Handwritten characters are difficult for machines to recognize because people have their own writing styles. This research recognizes handwritten number and alphabet character patterns using the K-Nearest Neighbour (KNN) algorithm. The handwriting recognition process consists of preprocessing the handwritten image, segmentation to obtain separate single characters, feature extraction, and classification. Feature extraction uses the zone method, and the resulting feature data are split into training and testing data. The training data are reduced with K-Support Vector Nearest Neighbor (K-SVNN), and the handwritten patterns in the testing data are recognized with K-Nearest Neighbor (KNN). The test results show that reducing the training data using K-SVNN is able to improve the accuracy of handwritten character recognition.
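A hedged sketch of the zone-feature-plus-KNN part of this pipeline follows; the 4x4 zoning grid and k=3 are assumptions, and the K-SVNN training-set reduction step is omitted here.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def zone_features(img, grid=4):
    """img: 2-D binary character array. Returns the ink density of each zone as the feature vector."""
    h, w = img.shape
    zh, zw = h // grid, w // grid
    return np.array([img[i * zh:(i + 1) * zh, j * zw:(j + 1) * zw].mean()
                     for i in range(grid) for j in range(grid)])

def train_knn(train_images, train_labels, k=3):
    X = np.array([zone_features(im) for im in train_images])
    knn = KNeighborsClassifier(n_neighbors=k)
    return knn.fit(X, train_labels)
```
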
8

Revathi, Buddaraju, M. V. D. Prasad und Naveen Kishore Gattim. „Computationally efficient handwritten Telugu text recognition“. Indonesian Journal of Electrical Engineering and Computer Science 34, Nr. 3 (01.06.2024): 1618. http://dx.doi.org/10.11591/ijeecs.v34.i3.pp1618-1626.

Abstract:
Optical character recognition (OCR) for regional languages is difficult due to their complex orthographic structure, the lack of dataset resources, the large number of characters and the structural similarity between characters. Telugu is a popular language in the states of Andhra Pradesh and Telangana. Telugu exhibits distinct separation between the characters within a word, so a character-level dataset is sufficient; with a smaller dataset, more words can be recognized effectively. However, challenges arise when training compound characters, which are combinations of vowels and consonants and are treated as two or more characters depending on the vattus and dheerghams associated with the base character. To address this challenge, each compound character is encoded into a numerical value that is used as input during training and retrieved during recognition. Segmentation issues arise from overlapping characters caused by varying handwriting styles; to handle character-level segmentation, we propose an algorithm based on the features of the language. To enhance word-level accuracy, a dictionary-based model was devised. A neural network using the Inception module is employed for feature extraction at various scales, achieving a word-level accuracy of 78% with fewer trainable parameters.
9

Zhang, Yan, und Liumei Zhang. „SGooTY: A Scheme Combining the GoogLeNet-Tiny and YOLOv5-CBAM Models for Nüshu Recognition“. Electronics 12, Nr. 13 (26.06.2023): 2819. http://dx.doi.org/10.3390/electronics12132819.

Abstract:
With the development of society, the intangible cultural heritage of Chinese Nüshu is in danger of extinction. To promote the research and popularization of traditional Chinese culture, we use deep learning to automatically detect and recognize handwritten Nüshu characters. To address difficulties such as the creation of a Nüshu character dataset, uneven samples, and difficulties in character recognition, we first build a large-scale handwritten Nüshu character dataset, HWNS2023, by using various data augmentation methods. This dataset contains 5500 Nüshu images and 1364 labeled character samples. Second, in this paper, we propose a two-stage scheme model combining GoogLeNet-tiny and YOLOv5-CBAM (SGooTY) for Nüshu recognition. In the first stage, five basic deep learning models including AlexNet, VGGNet16, GoogLeNet, MobileNetV3, and ResNet are trained and tested on the dataset, and the model structure is improved to enhance the accuracy of recognising handwritten Nüshu characters. In the second stage, we combine an object detection model to re-recognize misidentified handwritten Nüshu characters to ensure the accuracy of the overall system. Experimental results show that in the first stage, the improved model achieves the highest accuracy of 99.3% in recognising Nüshu characters, which significantly improves the recognition rate of handwritten Nüshu characters. After integrating the object recognition model, the overall recognition accuracy of the model reached 99.9%.
10

Bhat, Mohammad Idrees, und B. Sharada. „Spectral Graph-based Features for Recognition of Handwritten Characters: A Case Study on Handwritten Devanagari Numerals“. Journal of Intelligent Systems 29, Nr. 1 (21.07.2018): 799–813. http://dx.doi.org/10.1515/jisys-2017-0448.

Abstract:
Interpretation of different writing styles, unconstrained cursiveness and the relationships between different primitive parts is an essential and challenging task in the recognition of handwritten characters. Because feature representation is often inadequate, an appropriate interpretation and description of handwritten characters remains difficult. Although existing research on handwritten characters is extensive, obtaining an effective representation of characters in feature space is still an open problem. In this paper, we attempt to circumvent these problems with an approach that exploits robust graph representation and spectral graph embedding to characterise and effectively represent handwritten characters, taking writing styles, cursiveness and relationships into account. To corroborate the efficacy of the proposed method, extensive experiments were carried out on the standard handwritten numeral dataset of the Computer Vision and Pattern Recognition Unit, Indian Statistical Institute, Kolkata. The experimental results are promising and can be used in future studies.
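To make the spectral-graph idea concrete, here is a small illustrative sketch: given a graph built from a character (nodes for junctions and endpoints, edges for strokes; the graph construction itself is outside this snippet), the eigenvectors of the normalized graph Laplacian give a low-dimensional embedding of the nodes. The use of networkx and numpy is an assumption; this is not the authors' implementation.

```python
import numpy as np
import networkx as nx

def spectral_node_embedding(graph, dims=4):
    """Embed graph nodes using eigenvectors of the normalized Laplacian."""
    L = nx.normalized_laplacian_matrix(graph).toarray()
    eigvals, eigvecs = np.linalg.eigh(L)   # Laplacian is symmetric, so eigh applies
    return eigvecs[:, 1:dims + 1]          # skip the trivial smallest eigenvector
```
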

Dissertations on the topic "Handwritten characters"

1

Al-Emami, Samir Yaseen Safa. „Machine recognition of handwritten and typewritten Arabic characters“. Thesis, University of Reading, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.359173.

2

Wang, Jianguo. „Off-line computer recognition of unconstrained handwritten characters“. Thesis, The University of Sydney, 2001. https://hdl.handle.net/2123/27805.

Abstract:
This thesis presents several techniques for improving the performance of off-line Optical Character Recognition (OCR) systems: broken character mending and recognition, feature extraction methods in OCR, and hybrid methods for handwritten numeral recognition. As an application, form document image compression and indexing is also introduced. Broken character mending techniques are investigated first. A macrostructure analysis (MSA) mending method is proposed based on skeleton and boundary information and on a macrostructure analysis that investigates the stroke tendency and other properties of handwritten characters. A new skeleton end extension algorithm is also introduced. The MSA mending method is combined with a skeleton-based recognition algorithm to verify its efficiency; experimental results indicate that a significant improvement is achieved. The feature extraction methods in OCR are analyzed by comparing their effectiveness in different situations. Several factors and their relation to the effectiveness of each feature extraction method are investigated, and a dynamic feature extraction method is developed to improve the performance of hybrid OCR systems. Hybrid methods for handwritten numeral recognition are then described, which combine two complementary recognisers by analyzing several aspects of their performance. The different behaviour of the two algorithms for broken, connected or slanted numerals, and the measurement-level decision provided by the neural network algorithm, are used to develop matching rules for each recognition method. Five combination methods are developed to meet different requirements, and experiments with a large amount of test data show satisfactory results. Finally, a generic method for compressing and indexing multi-copy form documents is developed using template extraction and matching (TEM) strategies and OCR. De-skewing, location and distortion adjustment of form images are employed to make the TEM method practical. A statistical template extraction algorithm is developed using greyscale images created by overlapping a number of binary form images. The TEM method exploits the component-level redundancy found in multi-copy form documents and achieves a high compression rate while keeping the original resolution and readability.
3

Ziogas, Georgios. „Classifying Handwritten Chinese Characters using Convolutional Neural Networks“. Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-371526.

Abstract:
Image recognition applications have been gaining popularity as computer hardware has become more powerful and cheaper. This increase in computational resources has brought researchers closer to algorithms that achieve high accuracy in image recognition tasks. Such algorithms are applied in many different fields, for example medical image analysis and object recognition in real-time applications. Previous studies have shown that, among many image recognition algorithms, artificial neural networks and especially deep neural networks perform outstandingly due to their ability to recognize patterns, shapes and specific characteristics in an image with high accuracy. In this thesis project we investigate a specialized type of deep neural network, the Convolutional Neural Network (CNN), which is designed specifically for image recognition tasks. We analyze its hyperparameters and explore different architectures in order to understand how these affect the accuracy and speed of recognition. Finally, we present the results of the different tests in terms of accuracy and validate them with specific statistical metrics. For the purposes of this research, a dataset of handwritten Chinese characters was used.
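A minimal CNN sketch of the kind of architecture the thesis investigates is given below, using Keras; the layer sizes, dropout rate, 64x64 grayscale input and 3755-class output are placeholder assumptions, not values taken from the thesis.

```python
import tensorflow as tf

def build_cnn(input_shape=(64, 64, 1), num_classes=3755):
    """Two convolution/pooling blocks followed by dropout and a softmax classifier."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

model = build_cnn()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```
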
4

Mandal, Rakesh Kumar. „Development of Neural Network techniques to recognize handwritten characters“. Thesis, University of North Bengal, 2015. http://ir.nbu.ac.in/handle/123456789/1790.

5

Hosseini, Habib Mir Mohamad. „Analysis and recognition of Persian and Arabic handwritten characters /“. Title page, contents and abstract only, 1997. http://web4.library.adelaide.edu.au/theses/09PH/09phh8288.pdf.

6

Mitchell, John. „Computer based analysis of handwritten characters for hand-eye coordination therapy“. Thesis, University of Kent, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.358603.

7

Sae-Tang, Sutat. „A systematic study of offline recognition of Thai printed and handwritten characters“. Thesis, University of Southampton, 2011. https://eprints.soton.ac.uk/206079/.

Abstract:
Thai characters pose some unique problems that differ from English and other oriental scripts. The structure of Thai characters consists of small loops combined with curves, and there are no spaces between words and sentences. Moreover, in each line Thai characters can be composed on four levels, depending on the type of character being written. This research focuses on OCR for the Thai language, covering printed and offline handwritten character recognition, with the main aim of overcoming these problems by simple but effective methods. A printed OCR system developed by the National Electronics and Computer Technology Center (NECTEC) uses Kohonen self-organising maps (SOMs) for rough classification and back-propagation neural networks for fine classification. An evaluation of the NECTEC OCR is performed on a printed dataset containing over 0.6 million tokens. Comparisons of the classifier with and without the aspect ratio, and with and without SOMs, yield small but statistically significant differences in recognition rate. A very straightforward classifier, the nearest neighbour, was examined to evaluate overall recognition performance and for comparison. It shows a significant improvement in recognition rate (about 98%) over the NECTEC classifier (about 96%) on both the original and distorted (rotated and noisy) data, but at the expense of longer recognition times. For offline handwritten character recognition, three different classifiers are evaluated on three different datasets containing, on average, approximately 10,000 tokens each. The neural network and HMMs are more effective and give higher recognition rates than the nearest neighbour classifier on all three datasets. The best result obtained from the HMMs is 91.1% on the ThaiCAM dataset. However, when evaluated on a different dataset, the recognition rates drop drastically due to differences between online and offline handwritten data. Improvements in classification rate were obtained by adjusting the stroke width of characters in the online handwritten dataset (12 percentage points) and by combining the training sets from the three datasets (7.6 percentage points). A boosting algorithm, AdaBoost, yields a slight improvement in recognition rate (1.2 percentage points) over the original classifiers.
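For the boosting step mentioned at the end, a minimal scikit-learn sketch is shown below; the thesis boosts its own character classifiers, whereas this illustration uses the library's default decision-stump base learner, so treat it as an assumption-laden stand-in rather than the author's setup.

```python
from sklearn.ensemble import AdaBoostClassifier

def train_boosted_classifier(X_train, y_train, rounds=100):
    """AdaBoost over the default weak learner; X_train holds per-character feature vectors."""
    clf = AdaBoostClassifier(n_estimators=rounds, random_state=0)
    return clf.fit(X_train, y_train)
```
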
8

陳國評 und Kwok-ping Chan. „Fuzzy set theoretic approach to handwritten Chinese character recognition“. Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1989. http://hub.hku.hk/bib/B30425876.

9

Wang, Yongqiang. „A study on structured covariance modeling approaches to designing compact recognizers of online handwritten Chinese characters“. Click to view the E-thesis via HKUTO, 2009. http://sunzi.lib.hku.hk/hkuto/record/B42664305.

10

Wang, Yongqiang, und 王永強. „A study on structured covariance modeling approaches to designing compact recognizers of online handwritten Chinese characters“. Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2009. http://hub.hku.hk/bib/B42664305.


Books on the topic "Handwritten characters"

1

Yang, Xiaoping. Qing dai shou xie wen xian zhi su zi yan jiu: A study on popular form of characters in the handwritten documents of the Qing dynasty. Beijing Shi: Beijing shi fan da xue chu ban she, 2019.

2

Li, Xiaolin. On-line handwritten Kanji character recognition. Birmingham: University of Birmingham, 1994.

3

Hastie, Trevor. Handwritten digit recognition via deformable prototypes. Toronto: University of Toronto, Dept. of Statistics, 1992.

4

Freeman, R. Austin. The D'Arblay Mystery: Handwritten Style. Independently Published, 2021.

5

Freeman, R. Austin. Stoneware Monkey: Handwritten Style. Independently Published, 2021.

6

Nowlan, Philip Francis. Armageddon 2419 A.D.: Handwritten Style. Independently Published, 2021.

7

Baum, L. Frank. Glinda of Oz: Handwritten Style. Independently Published, 2021.

8

Cooper, James Fenimore. The Last of the Mohicans: Handwritten Style. Independently Published, 2021.

9

Pirlo, Giuseppe, Donato Impedovo und Michael C. Fairhurst. Advances in Digital Handwritten Signature Processing: A Human Artefact for E-Society. World Scientific Publishing Co Pte Ltd, 2014.

10

Chang, Iris J. A handwritten numeral recognition system with multi-level decision scheme (MDS). 1986.


Book chapters on the topic "Handwritten characters"

1

Watson, Mark. „Recognition of Handwritten Characters“. In Common LISP Modules, 71–78. New York, NY: Springer New York, 1991. http://dx.doi.org/10.1007/978-1-4612-3186-8_6.

2

Suen, Ching Y. „Automatic Recognition of Handwritten Characters“. In Fundamentals in Handwriting Recognition, 70–80. Berlin, Heidelberg: Springer Berlin Heidelberg, 1994. http://dx.doi.org/10.1007/978-3-642-78646-4_4.

3

Borgohain, Olimpia, Pramod Kumar und Saurabh Sutradhar. „Recognition of Handwritten Assamese Characters“. In Algorithms for Intelligent Systems, 223–30. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-19-7041-2_17.

4

Shimomura, Haruna, und Hiroyoshi Miwa. „Automatic Generation of Handwritten Style Characters Including Untrained Characters“. In Advances in Intelligent Networking and Collaborative Systems, 14–23. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-40971-4_2.

5

Bhattacharya, U., M. Shridhar und S. K. Parui. „On Recognition of Handwritten Bangla Characters“. In Computer Vision, Graphics and Image Processing, 817–28. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11949619_73.

6

Halder, Chayan, Sk Md Obaidullah, Jaya Paul und Kaushik Roy. „Writer Verification on Bangla Handwritten Characters“. In Advances in Intelligent Systems and Computing, 53–68. New Delhi: Springer India, 2015. http://dx.doi.org/10.1007/978-81-322-2653-6_4.

7

Ojumah, Samuel, Sanjay Misra und Adewole Adewumi. „A Database for Handwritten Yoruba Characters“. In Data Science and Analytics, 107–15. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-10-8527-7_10.

8

Zargar, Hisham, Ruba Almahasneh und László T. Kóczy. „Automatic Recognition of Handwritten Urdu Characters“. In Studies in Computational Intelligence, 165–75. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-74970-5_19.

9

Naga Manisha, C., Y. K. Sundara Krishna und E. Sreenivasa Reddy. „Glyph Segmentation for Offline Handwritten Telugu Characters“. In Advances in Intelligent Systems and Computing, 227–35. Singapore: Springer Singapore, 2017. http://dx.doi.org/10.1007/978-981-10-3223-3_21.

10

Li, Wei, Xiaoxuan He, Chao Tang, Keshou Wu, Xuhui Chen, Shaoyong Yu, Yuliang Lei, Yanan Fang und Yuping Song. „Handwritten Numbers and English Characters Recognition System“. In Advances in Intelligent Systems and Computing, 145–54. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-48499-0_18.


Conference papers on the topic "Handwritten characters"

1

Wasalwar, Yash Prashant, Kishan Singh Bagga, PVRR Bhogendra Rao und Snehlata Dongre. „Handwritten Character Recognition of Telugu Characters“. In 2023 IEEE 8th International Conference for Convergence in Technology (I2CT). IEEE, 2023. http://dx.doi.org/10.1109/i2ct57861.2023.10126377.

2

Koerich, A. L., und P. R. Kalva. „Unconstrained handwritten character recognition using metaclasses of characters“. In International Conference on Image Processing. IEEE, 2005. http://dx.doi.org/10.1109/icip.2005.1530112.

3

Joe, Kevin George, Meghna Savit und K. Chandrasekaran. „Offline Character recognition on Segmented Handwritten Kannada Characters“. In 2019 Global Conference for Advancement in Technology (GCAT). IEEE, 2019. http://dx.doi.org/10.1109/gcat47503.2019.8978320.

4

Bratić, Diana, und Nikolina Stanić Loknar. „AI driven OCR: Resolving handwritten fonts recognizability problems“. In 10th International Symposium on Graphic Engineering and Design. University of Novi Sad, Faculty of technical sciences, Department of graphic engineering and design,, 2020. http://dx.doi.org/10.24867/grid-2020-p82.

Abstract:
Optical Character Recognition (OCR) is the electronic or mechanical conversion of images of typed, handwritten or printed text into machine-encoded text. Advanced systems can achieve a high degree of recognition accuracy for most technical typefaces, but for handwritten forms problems occur in recognizing certain characters, and the limitations of conventional OCR processes persist. This is most pronounced in ascenders (k, b, l, d, h, t) and descenders (g, j, p, q, y). If the characters are linked by ligatures, the ascending and descending strokes are even less recognizable to the scanners. In order to reduce the likelihood of a recognition error, it is necessary to create a large database of stored characters and their glyphs. Feature extraction decomposes glyphs into features such as lines, closed loops, line direction and line intersections. A Multilayer Perceptron (MLP) neural network trained with the Back Propagation Neural Network (BPNN) algorithm, as a method of Artificial Intelligence (AI), has been used for text identification, classification and recognition with various approaches: image-pattern based, text based, mark based, etc. AI is also applied to generate a large database of different letter cuts, modifications and variations of the same character structure. For this purpose, a recognizability test of handwritten fonts was performed. Within the main group, subgroups of independent letter characters and of letter characters linked by ligatures were created, and reading errors were observed. In each subgroup, four different font families (bold stroke, alternating stroke, monoline stroke and brush stroke) were tested. In the subgroup of independent letter characters, errors were observed for similarly rounded shapes such as the characters a and e. In the subgroup of characters linked by ligatures, errors were also observed for similarly rounded shapes such as a and e, or m and n, as well as for the ascenders b and l and the descenders g and q. Furthermore, seven letter cuts were made from each basic test letter, namely thin, ultra-light, light, regular, semi-bold, bold and ultra-bold, and stored in the existing EMNIST database. The scanning test was repeated, and the newly obtained results showed a lower deviation rate, i.e. higher accuracy. The reduced number of deviations shows that the neural network gives acceptable answers but requires the creation of a larger database with about 56,000 different characters.
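A hedged sketch of the kind of back-propagation multilayer perceptron the paper applies to character glyphs follows; the single 256-unit hidden layer and flattened 28x28 EMNIST-style input are illustrative assumptions only, not the authors' configuration.

```python
from sklearn.neural_network import MLPClassifier

def train_mlp(X_train, y_train):
    """X_train: (n_samples, 784) flattened glyph images; y_train: character labels."""
    mlp = MLPClassifier(hidden_layer_sizes=(256,), solver="adam",
                        max_iter=100, random_state=0)
    return mlp.fit(X_train, y_train)
```
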
5

Chaithra, D., und K. Indira. „Handwritten online character recognition for single stroke Kannada characters“. In 2017 2nd IEEE International Conference on Recent Trends in Electronics, Information & Communication Technology (RTEICT). IEEE, 2017. http://dx.doi.org/10.1109/rteict.2017.8256657.

6

Kato, Takahito. „Evaluation system for handwritten characters“. In SPIE/IS&T 1992 Symposium on Electronic Imaging: Science and Technology, herausgegeben von Donald P. D'Amato, Wolf-Ekkehard Blanz, Byron E. Dom und Sargur N. Srihari. SPIE, 1992. http://dx.doi.org/10.1117/12.130275.

7

Hussien, Rana S., Azza A. Elkhidir und Mohamed G. Elnourani. „Optical Character Recognition of Arabic handwritten characters using Neural Network“. In 2015 International Conference on Computing, Control, Networking, Electronics and Embedded Systems Engineering (ICCNEEE). IEEE, 2015. http://dx.doi.org/10.1109/iccneee.2015.7381412.

8

Prameela, N., P. Anjusha und R. Karthik. „Off-line Telugu handwritten characters recognition using optical character recognition“. In 2017 International Conference of Electronics, Communication and Aerospace Technology (ICECA). IEEE, 2017. http://dx.doi.org/10.1109/iceca.2017.8212801.

9

Macwan, Swital J., und Archana N. Vyas. „Classification of offline gujarati handwritten characters“. In 2015 International Conference on Advances in Computing, Communications and Informatics (ICACCI). IEEE, 2015. http://dx.doi.org/10.1109/icacci.2015.7275831.

10

C.P., Hashrin, Amal Jossy, Sudhakaran K., Thushara A. und Ansamma John. „Segmenting Characters from Malayalam Handwritten Documents“. In 2019 1st International Conference on Innovations in Information and Communication Technology (ICIICT). IEEE, 2019. http://dx.doi.org/10.1109/iciict1.2019.8741416.


Organization reports on the topic "Handwritten characters"

1

Grother, Patrick J. Karhunen Loeve feature extraction for neural handwritten character recognition. Gaithersburg, MD: National Institute of Standards and Technology, 1992. http://dx.doi.org/10.6028/nist.ir.4824.

2

Fuller, J. J., A. Farsaie und T. Dumoulin. Handwritten Character Recognition Using Feature Extraction and Neural Networks. Fort Belvoir, VA: Defense Technical Information Center, Februar 1991. http://dx.doi.org/10.21236/ada238294.

3

Griffiths, Rachael. Transkribus in Practice: Improving CER. Verlag der Österreichischen Akademie der Wissenschaften, Oktober 2022. http://dx.doi.org/10.1553/tibschol_erc_cog_101001002_griffiths_cer.

Abstract:
This paper documents ongoing efforts to enhance the accuracy of Handwritten Text Recognition (HTR) models using Transkribus, focusing on the transcription of Tibetan cursive (dbu med) manuscripts from the 11th to 13th centuries within the framework of the ERC-funded project, The Dawn of Tibetan Buddhist Scholasticism (11th-13th C.) (TibSchol). It presents the steps taken to improve the Character Error Rate (CER) of the HTR models, the results achieved so far, and considerations for those working on similar projects.
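For readers unfamiliar with the metric, the Character Error Rate that the paper tracks can be computed as the Levenshtein edit distance between the recognized text and the ground-truth transcription, divided by the length of the ground truth. The self-contained sketch below illustrates this; it is a generic implementation and does not use any Transkribus API.

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character Error Rate = edit distance / length of the reference transcription."""
    m, n = len(reference), len(hypothesis)
    dist = list(range(n + 1))                 # DP row for the empty-reference prefix
    for i in range(1, m + 1):
        prev, dist[0] = dist[0], i
        for j in range(1, n + 1):
            cur = dist[j]
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            dist[j] = min(dist[j] + 1,        # deletion
                          dist[j - 1] + 1,    # insertion
                          prev + cost)        # substitution (or match)
            prev = cur
    return dist[n] / max(m, 1)

print(cer("handwritten", "handwr1tten"))      # one substitution over 11 characters ~ 0.09
```
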