Journal articles on the topic 'Character images'

Consult the top 50 journal articles for your research on the topic 'Character images.'

1

Chang, Yasheng, and Weiku Wang. "Text recognition in radiographic weld images." Insight - Non-Destructive Testing and Condition Monitoring 61, no. 10 (October 1, 2019): 597–602. http://dx.doi.org/10.1784/insi.2019.61.10.597.

Abstract:
Automatic recognition of text characters on radiographic images based on computer vision would be a very useful step forward as it could improve and simplify the file handling of digitised radiographs. Text recognition in radiographic weld images is challenging since there is no uniform font or character size and each character may tilt in different directions and by different amounts. Deep learning approaches for text recognition have recently achieved breakthrough performance using convolutional neural networks (CNNs). CNNs can recognise normalised characters in different fonts. However, the tilt of a character still has a strong influence on the accuracy of recognition. In this paper, a new improved algorithm is proposed based on the Radon transform, which is very effective at character rectification. The improved algorithm increases the accuracy of character recognition from 86.25% to 98.48% in the current experiments. The CNN is used to recognise the rectified characters, which achieves good accuracy and improves character recognition in radiographic weld images. A CNN greatly improves the efficiency of digital scanning and filing of radiographic film. The method proposed in this paper is also compared with other methods that are commonly used in other fields and the results show that the proposed method is better than state-of-the-art methods.
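
As a rough illustration of the rectification idea above, the tilt of an isolated character can be estimated with the Radon transform and undone by rotation. This is a minimal sketch of the general technique, not the paper's improved algorithm: it assumes a grayscale crop with dark strokes on a light background, and the sign convention of the correction may need flipping for a particular setup.

    import numpy as np
    from skimage.transform import radon, rotate

    def rectify_tilt(char_img, angle_range=30):
        # Invert so the strokes carry the projection energy.
        img = 1.0 - char_img / float(char_img.max())
        angles = np.arange(-angle_range, angle_range + 1, dtype=float)
        # Column j of the sinogram is the projection at theta[j]; an upright
        # character yields the sharpest (highest-variance) profile.
        sinogram = radon(img, theta=90 + angles, circle=False)
        tilt = angles[np.argmax(sinogram.var(axis=0))]
        # Rotate by the negative estimated tilt to rectify the character.
        return rotate(char_img, -tilt, mode='edge'), tilt
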
2

Wilterdink, Nico. "Images of national character." Society 32, no. 1 (November 1994): 43–51. http://dx.doi.org/10.1007/bf02693352.

3

Wang, Zeya. "Discussion on the Significance of Algirdas Julien Greimas's Semiotic Square in Character Shaping — A Case Study of the Novel Mo Dao Zu Shi." Arts Studies and Criticism 3, no. 2 (June 27, 2022): 110. http://dx.doi.org/10.32629/asc.v3i2.832.

Abstract:
Algirdas Julien Greimas uses the semiotic square to clarify the relationships between characters in a text and studies the text's underlying value judgments through logical deduction. This complete and effective interpretation method has gradually been applied to more disciplines. Today, in works such as Mo Dao Zu Shi, characterization is increasingly complex, while flat characterization is gradually declining. The internal factors that form character images in literary works are therefore becoming more and more intricate. Using Greimas's semiotic square to analyze the relationships among the multiple factors in a character's personality not only reveals the character's multifaceted nature to readers, but also helps to build a feasible framework for the creation of character images in the future.
4

Tian, Xue Dong, Xue Sha Jia, Fang Yang, Xin Fu Li, and Xiu Fen Miao. "A Retrieval Method of Ancient Chinese Character Images." Applied Mechanics and Materials 462-463 (November 2013): 432–37. http://dx.doi.org/10.4028/www.scientific.net/amm.462-463.432.

Abstract:
Global and local image retrieval of ancient Chinese characters is a helpful means for character research work. Because ancient Chinese characters have complex structures and variant shapes, many theoretical and technical problems arise in the feature extraction, clustering and retrieval of these images. A retrieval method based on the strategy of "clustering before matching" was established for ancient Chinese character images by integrating their structural features. First, the preprocessed character image area is divided into elastic meshes and directional elements are extracted to form feature vectors. Then, the K-means algorithm is employed to cluster the character images within global and local areas. Finally, similar images within the selected areas are searched in the corresponding cluster and the retrieved images are presented to users. The experimental results show that this method helps improve the efficiency of ancient Chinese character study.
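
The "clustering before matching" strategy can be paraphrased in code. The sketch below is a simplification under stated assumptions: a fixed grid stands in for the paper's elastic meshes, gradient orientation is used as a proxy for directional line elements, and the cluster count k is arbitrary.

    import numpy as np
    from sklearn.cluster import KMeans

    def directional_mesh_features(binary_char, mesh=(8, 8)):
        # 4-bin orientation histogram per grid cell (0/45/90/135 degrees).
        gy, gx = np.gradient(binary_char.astype(float))
        angle = (np.degrees(np.arctan2(gy, gx)) + 180.0) % 180.0
        bins = np.minimum((angle // 45).astype(int), 3)
        stroke = binary_char > 0
        h, w = binary_char.shape
        feats = []
        for i in range(mesh[0]):
            for j in range(mesh[1]):
                cell = (slice(i * h // mesh[0], (i + 1) * h // mesh[0]),
                        slice(j * w // mesh[1], (j + 1) * w // mesh[1]))
                feats.extend(np.bincount(bins[cell][stroke[cell]], minlength=4))
        return np.asarray(feats, dtype=float)

    def build_index(char_images, k=20):
        # Cluster the whole database once, offline.
        X = np.stack([directional_mesh_features(c) for c in char_images])
        return KMeans(n_clusters=k, n_init=10).fit(X), X

    def retrieve(query_img, km, X, topn=5):
        # Search only inside the query's cluster instead of the full database.
        q = directional_mesh_features(query_img)
        members = np.where(km.labels_ == km.predict(q[None])[0])[0]
        return members[np.argsort(np.linalg.norm(X[members] - q, axis=1))[:topn]]
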
5

Seeri, Shivananda V., J. D. Pujari, and P. S. Hiremath. "PNN Based Character Recognition in Natural Scene Images." Bonfring International Journal of Software Engineering and Soft Computing 6, Special Issue (October 31, 2016): 109–13. http://dx.doi.org/10.9756/bijsesc.8254.

6

Yanagisawa, Hideaki, Takuro Yamashita, and Hiroshi Watanabe. "Clustering of Comic Character Images for Extraction of Major Characters." Journal of The Institute of Image Information and Television Engineers 73, no. 1 (2019): 199–204. http://dx.doi.org/10.3169/itej.73.199.

7

Li, Xue Yong, and Chang Hou Lu. "A Gabor Filter Based Image Acquisition Method for Raised Characters." Applied Mechanics and Materials 373-375 (August 2013): 459–63. http://dx.doi.org/10.4028/www.scientific.net/amm.373-375.459.

Abstract:
This paper presents a novel image acquisition technique for raised characters. First, laser stripes modulated by the height information of the raised characters are captured using a 3D vision technique; then, the laser stripes are combined into a grating image containing complete character images against a grating background. Finally, a rationally designed Gabor kernel is applied to filter the grating image. In this way, the background is removed and the grayscale images of the raised characters are preserved. Experiments show that the proposed method yields well-separated character images and is simpler and more efficient than existing methods.
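
For the filtering step, OpenCV's built-in Gabor kernel is enough to show the idea: tune the kernel's wavelength and orientation to the grating so the stripes are suppressed and the stroke regions are kept. All parameter values below (kernel size, sigma, stripe period, orientation) are hypothetical and would have to be matched to the actual stripe pattern.

    import cv2
    import numpy as np

    def remove_grating(grating_img, stripe_period=12.0):
        kernel = cv2.getGaborKernel(ksize=(31, 31), sigma=6.0,
                                    theta=np.pi / 2,      # stripes assumed horizontal
                                    lambd=stripe_period,  # match the grating frequency
                                    gamma=0.5, psi=0)
        kernel /= np.abs(kernel).sum()                    # keep the response bounded
        filtered = cv2.filter2D(grating_img.astype(np.float32), -1, kernel)
        return cv2.normalize(filtered, None, 0, 255,
                             cv2.NORM_MINMAX).astype(np.uint8)
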
8

Wu, Wei, Zheng Liu, Mo Chen, Zhiming Liu, Xi Wu, and Xiaohai He. "A New Framework for Container Code Recognition by Using Segmentation-Based and HMM-Based Approaches." International Journal of Pattern Recognition and Artificial Intelligence 29, no. 01 (January 4, 2015): 1550004. http://dx.doi.org/10.1142/s0218001415500044.

Abstract:
Traditional methods for automatic recognition of container codes in visual images are based on segmentation and recognition of isolated characters. However, when segmentation fails to separate the characters from one another, those methods do not function properly. Sometimes the container code characters are printed or arranged very closely, which makes it a challenge to isolate each character. To address this issue, a new framework for automatic container code recognition (ACCR) in visual images is proposed in this paper. In this framework, code-character regions are first located by applying a horizontal high-pass filter and scan line analysis. Then, character blocks are extracted from the code-character regions and further classified into two categories, i.e. single-character blocks and multi-character blocks. Finally, a segmentation-based approach is implemented for recognition of the characters in single-character blocks, and a hidden Markov model (HMM)-based method is proposed for the multi-character blocks. The experimental results demonstrate the effectiveness of the proposed method, which can successfully recognize container codes with closely arranged characters.
9

Rai, Laxmisha, and Hong Li. "MyOcrTool: Visualization System for Generating Associative Images of Chinese Characters in Smart Devices." Complexity 2021 (May 7, 2021): 1–14. http://dx.doi.org/10.1155/2021/5583287.

Abstract:
The majority of Chinese characters are pictographic characters with strong associative power: when a character appears, Chinese readers usually immediately associate it with related objects or actions. Against this background, we propose a system that visualizes simplified Chinese characters so that neither reading nor writing skills in Chinese are necessary. Considering the extensive use of mobile devices, automatic identification of Chinese characters and display of associative images on smart devices make a quick overview of a Chinese text possible. This work is of practical significance for research and development in real-time Chinese text recognition and the display of associative images, and for users who would like to visualize a text through images alone. The proposed Chinese character recognition and visualization tool, named MyOcrTool, is developed for the Android platform. The application recognizes Chinese characters through an OCR engine, uses the internal voice playback interface to provide audio functions, and displays the visual images of Chinese characters in real time.
10

Angadi, S. A., and M. M. Kodabagi. "A Robust Segmentation Technique for Line, Word and Character Extraction from Kannada Text in Low Resolution Display Board Images." International Journal of Image and Graphics 14, no. 01n02 (January 2014): 1450003. http://dx.doi.org/10.1142/s021946781450003x.

Abstract:
Reliable extraction/segmentation of text lines, words and characters is one of the most important steps in developing automated systems for understanding the text in low-resolution display board images. In this paper, a new approach for segmentation of text lines, words and characters from Kannada text in low-resolution display board images is presented. The proposed method uses projection profile features and on-pixel distribution statistics for segmentation of text lines. The method also detects text lines containing consonant modifiers and merges them with the corresponding text lines, and efficiently separates overlapped text lines as well. The character extraction process computes character boundaries using vertical profile features for extracting character images from every text line. Further, the word segmentation process uses k-means clustering to group inter-character gaps into character and word cluster spaces, which are used to compute thresholds for extracting words. The method also takes care of variations in character and word gaps. The proposed methodology is evaluated on a data set of 1008 low-resolution images of display boards containing Kannada text, captured with 2-megapixel cameras on mobile phones at sizes 240 × 320, 480 × 640 and 960 × 1280. The method achieves text line segmentation accuracy of 97.17%, word segmentation accuracy of 97.54% and character extraction accuracy of 99.09%. The proposed method is tolerant to font variability, spacing variations between characters and words, absence of a free segmentation path due to consonant and vowel modifiers, noise and other degradations. Experimentation with images containing overlapped text lines has given promising results.
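
The gap-clustering step lends itself to a compact sketch: compute the vertical projection profile of one text line, collect runs of empty columns, and let a two-cluster k-means separate inter-character gaps from the wider inter-word gaps. A binarized, deskewed, tightly cropped single line with foreground pixels equal to 1 is assumed; the paper's handling of modifiers and overlapped lines is not reproduced.

    import numpy as np
    from sklearn.cluster import KMeans

    def segment_words(binary_line):
        # Assumes margins are trimmed, so every gap lies between characters.
        profile = binary_line.sum(axis=0)
        gaps, start = [], None
        for x, empty in enumerate(profile == 0):
            if empty and start is None:
                start = x
            elif not empty and start is not None:
                gaps.append((start, x))
                start = None
        if len(gaps) < 2:
            return [binary_line]                     # nothing to split
        widths = np.array([[b - a] for a, b in gaps], dtype=float)
        km = KMeans(n_clusters=2, n_init=10).fit(widths)
        word_gap = km.cluster_centers_.argmax()      # wider cluster = word gaps
        cuts = [(a + b) // 2 for (a, b), lab in zip(gaps, km.labels_)
                if lab == word_gap]
        words, prev = [], 0
        for c in cuts:
            words.append(binary_line[:, prev:c])
            prev = c
        words.append(binary_line[:, prev:])
        return words
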
11

Poornima, T., and M. Amanullah. "Proficient Character Recognition from Images." International Journal of Computer Applications 143, no. 4 (June 17, 2016): 4–7. http://dx.doi.org/10.5120/ijca2016909980.

12

MacLeod, Norman, and David Steart. "Automated leaf physiognomic character identification from digital images." Paleobiology 41, no. 4 (September 2015): 528–53. http://dx.doi.org/10.1017/pab.2015.13.

Abstract:
Research into the relationship between leaf form and climate over the last century has revealed that, in many species, the sizes and shapes of leaf characters exhibit highly structured and predictable patterns of variation in response to the local climate. Several procedures have been developed that quantify covariation between the relative abundance of plant character states and the states of climate variables as a means of estimating paleoclimate parameters. One of the most widely used of these is the Climate Leaf Analysis Multivariate Program (CLAMP). The consistency, accuracy and reliability with which leaf characters can be identified and assigned to CLAMP character-state categories is critical to the accuracy of all CLAMP analyses. Here we report results of a series of performance tests for an image-based leaf-character scoring system, fully automated at the point of use, that can be used to generate CLAMP leaf character-state data for: leaf bases (acute, cordate and round), leaf apices (acute, attenuate), leaf shapes (ovate, elliptical and obovate), leaf lobing (unlobed, lobed), and leaf aspect ratios (length/width). This image-based system returned jackknifed identification accuracy ratios of between 87% and 100%. These results demonstrate that automated image-based identification systems have the potential to improve paleoenvironmental inferences via the provision of accurate, consistent and rapid CLAMP leaf-character identifications. More generally, our results provide strong support for the feasibility of using fully automated, image-based morphometric procedures to address the general problem of morphological character-state identification.
13

Almeida, Leandro L., Maria S.V. Paiva, Francisco A. Silva, and Almir O. Artero. "Super-Resolution Images Enhanced for Applications to Character Recognition." SIJ Transactions on Computer Science Engineering & its Applications (CSEA) 01, no. 03 (August 16, 2013): 09–16. http://dx.doi.org/10.9756/sijcsea/v1i3/0103520101.

14

Lin, Guo-Shiang, Jia-Cheng Tu, and Jen-Yung Lin. "Keyword Detection Based on RetinaNet and Transfer Learning for Personal Information Protection in Document Images." Applied Sciences 11, no. 20 (October 13, 2021): 9528. http://dx.doi.org/10.3390/app11209528.

Abstract:
In this paper, a keyword detection scheme is proposed based on deep convolutional neural networks for personal information protection in document images. The proposed scheme is composed of key character detection and lexicon analysis. The first part is the key character detection developed based on RetinaNet and transfer learning. To find the key characters, RetinaNet, which is composed of convolutional layers featuring a pyramid network and two subnets, is exploited to detect key characters within the region of interest in a document image. After the key character detection, the second part is a lexicon analysis, which analyzes and combines several key characters to find the keywords. To train the model of RetinaNet, synthetic image generation and data augmentation are exploited to yield a large image dataset. To evaluate the proposed scheme, many document images are selected for testing, and two performance measurements, IoU (Intersection Over Union) and mAP (Mean Average Precision), are used in this paper. Experimental results show that the mAP rates of the proposed scheme are 85.1% and 85.84% for key character detection and keyword detection, respectively. Furthermore, the proposed scheme is superior to Tesseract OCR (Optical Character Recognition) software for detecting the key characters in document images. The experimental results demonstrate that the proposed method can effectively localize and recognize these keywords within noisy document images with Mandarin Chinese words.
15

Zhang, Ce, Weilan Wang, and Guowei Zhang. "Construction of a Character Dataset for Historical Uchen Tibetan Documents under Low-Resource Conditions." Electronics 11, no. 23 (November 27, 2022): 3919. http://dx.doi.org/10.3390/electronics11233919.

Abstract:
The construction of a character dataset is an important part of the research on document analysis and recognition of historical Tibetan documents. The results of character segmentation research in the previous stage are presented by coloring the characters with different color values. On this basis, the characters are annotated, and the character images corresponding to the annotation are extracted to construct a character dataset. The construction of a character dataset is carried out as follows: (1) text annotation of segmented characters is performed; (2) the character image is extracted from the character block based on the real position information; (3) according to the class of annotated text, the extracted character images are classified to construct a preliminary character dataset; (4) data augmentation is used to solve the imbalance of classes and samples in the preliminary dataset; (5) research on character recognition based on the constructed dataset is performed. The experimental results show that under low-resource conditions, this paper solves the challenges in the construction of a historical Uchen Tibetan document character dataset and constructs a 610-class character dataset. This dataset lays the foundation for the character recognition of historical Tibetan documents and provides a reference for the construction of relevant document datasets.
16

Ohnishi, Madoka, and Koichi Oda. "Unresolvable Pixels Contribute to Character Legibility: Another Reason Why High-Resolution Images Appear Clearer." i-Perception 11, no. 6 (November 2020): 204166952098110. http://dx.doi.org/10.1177/2041669520981102.

Abstract:
This study examined the effect of character sample density on legibility. As the spatial frequency component important for character recognition is said to be 1 to 3 cycles/letter (cpl), six dots in each direction should be sufficient to represent a character; however, some studies have reported that high-density characters are more legible. Considering that these seemingly contradictory findings could be compatible, we analyzed the frequency component of the character stimulus with adjusted sample density and found that the component content of 1 to 3 cpl increased in the high-density character. In the following three psychophysical experiments, high sample density characters tended to have lower contrast thresholds, both for normal and low vision. Furthermore, the contrast threshold with characters of each sample density was predicted from the amplitude of the 1 to 3 cpl component. Thus, while increasing the sample density improves legibility, adding a high frequency is not important in itself. The findings suggest that enhancing the frequency components important for recognizing characters by adding the high-frequency component contributes to making characters more legible.
17

Liu, Fuxiang, Song He, and Zhisheng Chen. "Improved Alexnet-Based Character Detection Method." Journal of Physics: Conference Series 2278, no. 1 (May 1, 2022): 012025. http://dx.doi.org/10.1088/1742-6596/2278/1/012025.

Abstract:
Extracting text information from complex images is a hot spot in pattern recognition research with broad application prospects. Natural-scene house numbers suffer serious distortion due to blurred images, uneven illumination and low light, which makes it difficult to achieve ideal character recognition results, and recognizing characters of arbitrary length is even more of a challenge. In this paper, we adopt an improved Niblack local threshold segmentation method to segment the images and mark the connected areas of the segmented images to highlight the important features; finally, the preprocessed images are input to an improved AlexNet network for target detection on SVHN (Street View House Numbers) to realize the detection of characters in real scenes. The experimental results show that the improved AlexNet-based target detection method completes the detection of street-view house number characters well, with a 92.89% correct recognition rate.
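
Standard Niblack thresholding, which the paper improves upon, is available off the shelf: the local threshold is T = m - k*s over a sliding window (scikit-image's sign convention). The window size and k below are common defaults, not values from the paper, and the unspecified improvement is not reproduced.

    import numpy as np
    from skimage.filters import threshold_niblack

    def binarize_niblack(gray, window_size=25, k=0.2):
        thresh = threshold_niblack(gray, window_size=window_size, k=k)
        # Dark characters fall below the local threshold; invert the result
        # if the downstream detector expects white-on-black input.
        return (gray > thresh).astype(np.uint8) * 255
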
18

Gao, Feng, Jingping Zhang, Yongge Liu, and Yahong Han. "Image Translation for Oracle Bone Character Interpretation." Symmetry 14, no. 4 (April 4, 2022): 743. http://dx.doi.org/10.3390/sym14040743.

Abstract:
The Oracle Bone Characters are the earliest known ancient Chinese characters and are an important record of the civilization of ancient China. The interpretation of the Oracle Bone Characters is challenging and requires professional knowledge from ancient Chinese language experts. Although some works have utilized deep learning to perform image detection and recognition of Oracle Bone Characters, these methods have proven difficult to use for the interpretation of uninterpreted Oracle Bone Character images. Inspired by the prior knowledge that a relation exists between glyphs in Oracle Bone Character images and images of modern Chinese characters, we propose a method of image translation from Oracle Bone Characters to modern Chinese characters based on a generative adversarial network that captures the implicit relationship between the two glyph sets. The image translation process between Oracle Bone Characters and modern Chinese characters forms a symmetrical structure, comprising an encoder and a decoder. To our knowledge, our symmetrical image translation method is the first of its kind used for the task of interpreting Oracle Bone Characters. Our experiments indicated that the method can provide glyph information to aid in the interpretation of Oracle Bone Characters.
19

Shobha Rani, N., N. Chandan, A. Sajan Jain, and H. R. Kiran. "Deformed character recognition using convolutional neural networks." International Journal of Engineering & Technology 7, no. 3 (July 26, 2018): 1599. http://dx.doi.org/10.14419/ijet.v7i3.14053.

Abstract:
Achieving high accuracy in south Indian character recognition is a truly interesting research challenge. In this paper, our investigation focuses on the recognition of one of the most widely used south Indian scripts, Kannada. In particular, the proposed experiment addresses the recognition of degraded character images extracted from ancient Kannada poetry documents, and of handwritten character images collected in various unconstrained environments. The character images in the degraded documents are slightly blurry, giving the characters a broken and messy appearance; this leads to conflicting behaviors in the recognition algorithm, which in turn reduces recognition accuracy. Training on the degraded patterns of character image samples is carried out using a deep convolutional neural network, AlexNet. The performance evaluation uses handwritten datasets gathered from users in the age groups 18-21, 22-25 and 26-30, as well as printed datasets extracted from ancient document images of Kannada poetry/literature. The datasets comprise around 497 classes, of which 428 include consonants, vowels, simple compound characters and complex compound characters. Each base character combined with consonant/vowel modifiers in handwritten text with overlapping/touching diacritics is treated as a separate class in Kannada script for our experimentation, while non-overlapping/touching compound characters are still considered individual classes, for which semantic analysis is carried out during the post-processing stage of OCR. The performance of AlexNet in classifying printed character samples is reported as 91.3%, and for handwritten text an accuracy of 92% is recorded.
20

Shen, Chuanxing, Yongjian Zhu, and Chi Wang. "Research on Character Computer Intelligent Recognition of Automobile Front Bar Bracket Based on Machine Vision." Journal of Physics: Conference Series 2083, no. 4 (November 1, 2021): 042041. http://dx.doi.org/10.1088/1742-6596/2083/4/042041.

Abstract:
Aiming at the problem that the characters in the superimposed character area on the surface of an automobile front bumper bracket are difficult to recognize, a machine-vision-based character recognition method for the bracket is proposed, with the software system designed using Python, OpenCV and the Halcon computer vision library. For eight images collected under light sources from different angles, an innovative method of polynomial fitting of gray values reduces the uneven illumination of the images. A photometric stereo algorithm is used to obtain a high-contrast character image while simultaneously separating the two types of characters. Image filtering and opening and closing operations then remove background interference; a morphological improvement algorithm based on a scanning algorithm completes character positioning; an improved projection dichotomy algorithm based on connected-domain size completes character segmentation; and finally a support vector machine is used for character recognition. The experimental results show that these methods can quickly separate superimposed characters and recognize both non-color-difference convex characters and inkjet characters, with an average recognition accuracy above 96%, which meets the expected recognition requirements.
21

Lu, Liqiong, Dong Wu, Jianfang Xiong, Zhou Liang, and Faliang Huang. "Anchor-Free Braille Character Detection Based on Edge Feature in Natural Scene Images." Computational Intelligence and Neuroscience 2022 (August 8, 2022): 1–11. http://dx.doi.org/10.1155/2022/7201775.

Abstract:
Braille character detection helps communication between sighted and visually impaired people. Existing Braille detection methods are all aimed at scanned Braille document images and ignore natural scene Braille images, and CNNs, which shine in the field of pattern recognition, are rarely used for Braille detection. First, a natural scene Braille image dataset named NSBD was constructed. Then, an anchor-free Braille character detection method based on edge features was proposed, motivated by the observations that Braille characters in natural scene images are relatively small and that a Braille character is composed of Braille dots located at the edge region of the character. Finally, the performance of the proposed method was compared with other classic CNN-based methods on NSBD. The experimental results show that the proposed method performs well.
22

Phan, Thi Ha, Duc Chung Tran, and Mohd Fadzil Hassan. "Vietnamese character recognition based on CNN model with reduced character classes." Bulletin of Electrical Engineering and Informatics 10, no. 2 (April 1, 2021): 962–69. http://dx.doi.org/10.11591/eei.v10i2.2810.

Abstract:
This article details the steps to build and train a convolutional neural network (CNN) model for Vietnamese character recognition in educational books. Based on this model, a mobile application for extracting text content from images in Vietnamese textbooks was built using OpenCV and the Canny edge detection algorithm. There are 178 character classes in Vietnamese, including accented characters. However, within the scope of Vietnamese character recognition in textbooks, some character classes differ only in actual size, such as "c" and "C", "o" and "O". Therefore, the authors built the classification model for 138 Vietnamese character classes after filtering out similar classes to increase the model's effectiveness.
23

Devi, N. "Offline Handwritten Character Recognition using Convolutional Neural Network." International Journal for Research in Applied Science and Engineering Technology 9, no. 8 (August 31, 2021): 1483–89. http://dx.doi.org/10.22214/ijraset.2021.37610.

Abstract:
This paper focuses on the task of recognizing handwritten Hindi characters using a Convolutional Neural Network (CNN) based approach. The recognized characters can then be stored digitally or used for other purposes. The dataset, obtained from the UC Irvine Machine Learning Repository, contains 92,000 images divided into a training set (80%) and a test set (20%). It contains different forms of handwritten Devanagari characters written by different individuals, which can be used to train and test handwritten text recognizers. The model contains four CNN layers followed by three fully connected layers for recognition. Grayscale handwritten character images are used as input. Filters are applied to the images to extract different features at each layer via the convolution operation; the two other main operations involved are pooling and flattening. The output of the CNN layers is fed to the fully connected layers. Finally, the probability score of each character is determined, and the character with the highest probability score is given as the output. A recognition accuracy of 98.94% is obtained. Similar models exist for this purpose, but the proposed model achieved better performance and accuracy than some of the earlier models. Keywords: Devanagari characters, Convolutional Neural Networks, Image Processing
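
The architecture described above, four convolutional layers feeding three fully connected layers, maps directly onto a few lines of Keras. This is a plausible reading of the description rather than the authors' exact configuration: the filter counts, 32x32 grayscale inputs and 46-class output are assumptions.

    from tensorflow.keras import layers, models

    def build_devanagari_cnn(num_classes=46, input_shape=(32, 32, 1)):
        model = models.Sequential([
            layers.Input(shape=input_shape),
            layers.Conv2D(32, 3, padding='same', activation='relu'),
            layers.Conv2D(32, 3, padding='same', activation='relu'),
            layers.MaxPooling2D(),
            layers.Conv2D(64, 3, padding='same', activation='relu'),
            layers.Conv2D(64, 3, padding='same', activation='relu'),
            layers.MaxPooling2D(),
            layers.Flatten(),
            layers.Dense(256, activation='relu'),   # three fully connected layers
            layers.Dense(128, activation='relu'),
            layers.Dense(num_classes, activation='softmax'),
        ])
        model.compile(optimizer='adam',
                      loss='sparse_categorical_crossentropy',
                      metrics=['accuracy'])
        return model
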
24

Sadasivan, Anju K., and T. Senthilkumar. "Automatic Character Recognition in Complex Images." Procedia Engineering 30 (2012): 218–25. http://dx.doi.org/10.1016/j.proeng.2012.01.854.

25

Chen, Shy-Shyan, and F. Y. Shih. "Skeletonization for fuzzy degraded character images." IEEE Transactions on Image Processing 5, no. 10 (1996): 1481–85. http://dx.doi.org/10.1109/83.536896.

26

Shanmugavel, Subramanian, Jagadeesh Kannan, and Arjun Vaithilingam Sudhakar. "Handwritten Optical Character Extraction and Recognition from Catalogue Sheets." International Journal of Engineering & Technology 7, no. 4.5 (September 22, 2018): 36. http://dx.doi.org/10.14419/ijet.v7i4.5.20005.

Abstract:
The dataset consists of 20,000 scanned catalogues of fossils and other artifacts compiled by the Geological Sciences Department. The images look like scanned forms filled in with blue ballpoint pen. Character extraction and identification is the first phase of the research; in the second phase we plan to use an HMM model to extract the entire text from each form and store it in digitized format. We used various image processing and computer vision techniques to extract characters from the 20,000 handwritten catalogues, including erosion, dilation, morphological transformations (morphologyEx), Canny edge detection, contour finding (findContours) and contour area filtering (contourArea). We used Histograms of Oriented Gradients (HOG) to extract features from the character images and applied k-means and agglomerative clustering to perform unsupervised learning, which would allow us to prepare a labelled training dataset for the second phase. We also tried converting images from RGB to CMYK to improve k-means clustering performance, and used thresholding in HSV color space to extract blue-ink characters from the forms, but the background noise was significant and the results were not promising. We are researching a more robust extraction method that does not deform the characters and takes alignment into consideration.
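
The OpenCV calls the abstract lists chain together roughly as follows; the thresholds, patch size and cluster count are placeholders, and real catalogue scans would need per-image tuning.

    import cv2
    import numpy as np
    from skimage.feature import hog
    from sklearn.cluster import KMeans

    def extract_characters(page_bgr, min_area=40):
        gray = cv2.cvtColor(page_bgr, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150)
        edges = cv2.dilate(edges, np.ones((3, 3), np.uint8))
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        chars = []
        for c in contours:
            if cv2.contourArea(c) < min_area:   # drop specks and noise
                continue
            x, y, w, h = cv2.boundingRect(c)
            chars.append(cv2.resize(gray[y:y + h, x:x + w], (32, 32)))
        return chars

    def cluster_characters(chars, k=30):
        # HOG features + k-means, mirroring the unsupervised-labelling step.
        feats = np.stack([hog(c, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
                          for c in chars])
        return KMeans(n_clusters=k, n_init=10).fit_predict(feats)
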
27

Bag, Soumen, and Gaurav Harit. "Skeletonizing Character Images Using a Modified Medial Axis-Based Strategy." International Journal of Pattern Recognition and Artificial Intelligence 25, no. 07 (November 2011): 1035–54. http://dx.doi.org/10.1142/s0218001411009020.

Abstract:
In this paper we propose a thinning methodology applicable to character images. It is novel in terms of its ability to adapt to local character shape while constructing the thinned skeleton. Our method does not produce many of the distortions in the character shapes which normally result from the use of existing thinning algorithms. The proposed thinning methodology is based on the medial axis of the character. The skeleton has a width of one pixel. As a by-product of our thinning approach, the skeleton also gets segmented into strokes in vector form. Hence further stroke segmentation is not required. We have conducted experiments with printed and handwritten characters in several scripts such as English, Bengali, Hindi, Kannada and Tamil. We obtain less spurious branches compared to other thinning methods. Our method does not use any kind of post processing.
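
For contrast with the paper's modified strategy, the plain medial-axis baseline it builds on is a one-liner in scikit-image; the spurious-branch suppression and stroke vectorization described above are the paper's additions and are not shown here.

    from skimage.morphology import medial_axis

    def skeletonize_char(binary_char):
        # One-pixel-wide skeleton; dist holds the local stroke half-width
        # at every skeleton pixel.
        skel, dist = medial_axis(binary_char > 0, return_distance=True)
        return skel, dist
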
28

Kamel, M., and A. Zhao. "Extraction of Binary Character/Graphics Images from Grayscale Document Images." CVGIP: Graphical Models and Image Processing 55, no. 3 (May 1993): 203–17. http://dx.doi.org/10.1006/cgip.1993.1015.

29

Liu, Wen Bo, and Tao Wang. "The Character Recognition of Vehicle's License Plate Based on BP Neural Networks." Applied Mechanics and Materials 513-517 (February 2014): 3805–8. http://dx.doi.org/10.4028/www.scientific.net/amm.513-517.3805.

Abstract:
Based on license plate image preprocessing, license plate localization and character segmentation, this paper uses a BP neural network algorithm to identify license plate characters. The K-L (Karhunen-Loève) transform is applied to extract features from the characters, and the extracted license plate character features are fed into the character classifier for training. When training ends, the network weights and bias matrices are extracted and stored. The character images to be identified are then input into MATLAB and combined with the saved weight and bias matrices to obtain the final recognition results.
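
In modern terms the pipeline is a Karhunen-Loève (K-L) transform, i.e. PCA, feeding a backpropagation-trained network. An sklearn equivalent of the MATLAB workflow might look like this, with the component count and hidden-layer size chosen arbitrarily.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline

    def train_plate_classifier(char_images, labels, n_components=60):
        X = np.stack([img.ravel() for img in char_images]).astype(float)
        clf = make_pipeline(
            PCA(n_components=n_components),      # K-L feature extraction
            MLPClassifier(hidden_layer_sizes=(128,),
                          max_iter=500),         # BP-style network
        )
        clf.fit(X, labels)
        return clf   # persist the trained weights, e.g. with joblib.dump
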
30

Jbrail, Mohammed Widad, and Mehmet Emin Tenekeci. "Character Recognition of Arabic Handwritten Characters Using Deep Learning." Journal of Studies in Science and Engineering 2, no. 1 (March 19, 2022): 32–40. http://dx.doi.org/10.53898/josse2022213.

Abstract:
Optical character recognition (OCR) is used to digitize texts in printed documents and camera images. The most basic step in the OCR process is character recognition. The Arabic alphabet is more complex than others, as it is written in cursive and its characters take different forms depending on their position in a word. Our research developed a character recognition model for Arabic texts with 28 different characters. Character recognition was performed using Convolutional Neural Network models, which are accepted as effective in image processing and recognition. Three different CNN models were proposed. In the study, the models were trained and tested using the Hijja dataset. Among the proposed models, Model C, with a 99.3% accuracy rate, obtained results that can compete with studies in the literature.
31

Kharicheva, Dina. "Using Neural Natural Network for Image Recognition in Bioinformatics." International Journal of Applied Research in Bioinformatics 9, no. 2 (July 2019): 35–41. http://dx.doi.org/10.4018/ijarb.2019070103.

Abstract:
Automatic image recognition is very useful in bioinformatics. This article presents a novel technique to recognize the characters in the number plate automatically by using connected component analysis (CCA), artificial neural network (ANN) and neural natural network (Triple N). The preprocessing steps, Sobel edge detection technique and CCA are applied to the captured image of the vehicle to obtain character images. ANN technique can be used over these images to recognize the characters of the image in bioinformatics. The preprocessing steps are used to remove the noise and to enhance the image for recognizing the characters effectively. After performing the preprocessing steps, the edge detection technique and CCA are carried out to separate the character images from the whole image which can be recognized using ANN. These text characters can be compared with database to find authentication of vehicle, identifying the owner of the vehicle, penalty bill generation, etc.
32

Thippeswamy, G., and H. T. Chandrakala. "Recognition of Historical Handwritten Kannada Characters Using Local Binary Pattern Features." International Journal of Natural Computing Research 9, no. 3 (July 2020): 1–15. http://dx.doi.org/10.4018/ijncr.2020070101.

Abstract:
Archaeological departments throughout the world have undertaken massive digitization projects to digitize their historical document corpus. In order to provide worldwide visibility to these historical documents residing in the digital libraries, a character recognition system is an inevitable tool. Automatic character recognition is a challenging problem as it needs a cautious blend of enhancement, segmentation, feature extraction, and classification techniques. This work presents a novel holistic character recognition system for the digitized Estampages of Historical Handwritten Kannada Stone Inscriptions (EHHKSI) belonging to 11th century. First, the EHHKSI images are enhanced using Retinex and Morphological operations to remove the degradations. Second, the images are segmented into characters by connected component labeling. Third, LBP features are extracted from these characters. Finally, decision tree is used to learn these features and classify the characters into appropriate classes. The LBP features improved the performance of the system significantly.
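
Extracting LBP features and training a decision tree, the core of the pipeline above, is short in scikit-image/scikit-learn; the P and R values below are common defaults rather than the paper's settings, and the Retinex enhancement stage is omitted.

    import numpy as np
    from skimage.feature import local_binary_pattern
    from sklearn.tree import DecisionTreeClassifier

    def lbp_histogram(gray_char, P=8, R=1):
        # 'uniform' LBP yields P + 2 pattern codes; histogram them.
        lbp = local_binary_pattern(gray_char, P, R, method='uniform')
        hist, _ = np.histogram(lbp, bins=np.arange(P + 3), density=True)
        return hist

    def train_recognizer(char_images, labels):
        X = np.stack([lbp_histogram(c) for c in char_images])
        return DecisionTreeClassifier().fit(X, labels)
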
33

Lu, Yue, and Chew Lim Tan. "Chinese Word Searching in Imaged Documents." International Journal of Pattern Recognition and Artificial Intelligence 18, no. 02 (March 2004): 229–46. http://dx.doi.org/10.1142/s0218001404003137.

Abstract:
An approach to searching for user-specified words in imaged Chinese documents, without the requirements of layout analysis and OCR processing of the entire documents, is proposed in this paper. A small number of Chinese characters that cannot be successfully bounded using connected component analysis due to larger gaps between elements within the characters are blacklisted. A suitable character that is not included in the blacklist is chosen from the user-specified word as the initial character to search for a matching candidate in the document. Once a matched candidate is found, the adjacent characters in the horizontal and vertical directions are examined for matching with other corresponding characters in the user-specified word, subject to the constraints of alignment (either horizontal or vertical direction) and size similarity. A weighted Hausdorff distance is proposed for the character matching. Experimental results show that the present method can effectively search the user-specified Chinese words from the document images with the format of either horizontal or vertical text lines, or both appearing on the same image.
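
A plain (unweighted) Hausdorff distance between the ink-pixel sets of two character images conveys the matching idea; the paper's weighted variant is not reproduced, and both inputs are assumed to be binarized crops that actually contain ink.

    import numpy as np
    from scipy.spatial.distance import directed_hausdorff

    def char_distance(img_a, img_b):
        pts_a = np.argwhere(img_a > 0)   # (row, col) coordinates of ink
        pts_b = np.argwhere(img_b > 0)
        # Symmetric Hausdorff distance: max of the two directed distances.
        return max(directed_hausdorff(pts_a, pts_b)[0],
                   directed_hausdorff(pts_b, pts_a)[0])
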
34

Gao, Shang Bing, and Dong Jin. "A Nighttime Vehicle License Character Segmentation Algorithm." Advanced Materials Research 532-533 (June 2012): 1583–87. http://dx.doi.org/10.4028/www.scientific.net/amr.532-533.1583.

Abstract:
The Chan-Vese model often yields poor segmentation results for images with intensity inhomogeneity. Aiming at the uneven gray-level distribution in nighttime vehicle images, a new local Chan-Vese (LCV) model is proposed for image segmentation. The energy functional of the proposed model consists of three terms: a global term, a local term and a regularization term. By incorporating local image information into the model, images with intensity inhomogeneity can be segmented efficiently. Experiments on nighttime plate images demonstrate that the model can segment them efficiently, and comparisons with the recent popular local binary fitting (LBF) model show that the LCV model can segment images with fewer iterations.
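
scikit-image ships the global Chan-Vese model that the LCV model extends, which makes the baseline easy to try; the local term that handles intensity inhomogeneity is the paper's contribution and is absent here.

    from skimage import img_as_float
    from skimage.segmentation import chan_vese

    def segment_plate(gray_plate):
        # Returns a boolean mask; mu controls boundary-length regularization.
        return chan_vese(img_as_float(gray_plate), mu=0.25)
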
35

Magfiroh, Alfin. "Pengenalan Kepribadian Seseorang Melalui Bentuk Tulisan Tangan Menggunakan Metode Radial Basis Function Neural Network (RBFNN)." Zeta - Math Journal 7, no. 1 (May 31, 2022): 34–41. http://dx.doi.org/10.31102/zeta.2022.7.1.34-41.

Abstract:
Humans are created with a variety of different characters and temperaments, so the personalities that form are very diverse, according to each person's character and temperament. Given this variety of personalities, differences in quality and quantity appear between people in the view of society. There are various ways to get to know a person's personality, one of which is through the handwriting of specific letters of the alphabet. In this study, handwriting is limited to the letter 't' only; from 26 training images of 't', previously processed using the PCA method, 10 images matched the first 't' type, 16 images matched the second 't' type, and no image matched the third 't' type.
36

Kholifah, Desiana Nur, Hendri Mahmud Nawawi, and Indra Jiwana Thira. "IMAGE BACKGROUND PROCESSING FOR COMPARING ACCURACY VALUES OF OCR PERFORMANCE." Jurnal Pilar Nusa Mandiri 16, no. 1 (March 15, 2020): 33–38. http://dx.doi.org/10.33480/pilar.v16i1.1076.

Abstract:
Optical Character Recognition (OCR) is an application used to convert digital text images into text. Many documents have background images; visually, a background image increases document security by attesting authenticity, but it hampers OCR performance because it makes it difficult for OCR to recognize characters overwritten on the background image. Removing the background image can maximize OCR performance compared to document images that retain a background. The thresholding method is used to eliminate background images, and recall, precision and character recognition rate are measured to determine the performance of the OCR systems used as research objects. Eliminating the background image with thresholding improved the performance of the three types of OCR studied.
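
A minimal version of the background-removal step is a global Otsu threshold before the OCR call; pytesseract is used here purely as an example OCR engine, not one named by the paper.

    import cv2
    import pytesseract

    def ocr_without_background(gray_doc):
        # Otsu picks the threshold from the histogram, separating dark text
        # from the lighter background image.
        _, binary = cv2.threshold(gray_doc, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        return pytesseract.image_to_string(binary)
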
37

Cai, Nian, Yuchao Chen, Gen Liu, Guandong Cen, Han Wang, and Xindu Chen. "A vision-based character inspection system for tire mold." Assembly Automation 37, no. 2 (April 3, 2017): 230–37. http://dx.doi.org/10.1108/aa-07-2016-066.

Abstract:
Purpose: This paper aims to design an automatic inspection system for the characters on tire molds, built around a vision-based inspection method. Design/methodology/approach: Automatic inspection equipment is designed according to the features of the tire mold. To implement the inspection task, the corresponding image processing methods are designed, including image preprocessing, image mosaicking, image locating and character inspection. Image preprocessing mainly involves fitting the contours of the acquired tire mold images and those of the computer-aided design (CAD) as arcs of two circles, and polar transformation of the acquired images and the CAD. The authors then propose a novel framework to locate the acquired images within the corresponding mosaicked tire mold image. Finally, a character inspection scheme is proposed that combines a support-vector-machine-based character recognition method with a string matching approach. At the image locating and character inspection stages, image mosaicking, based on histogram-of-gradients features, is simultaneously used to label the defects in the mosaicked tire mold image. Findings: The experimental results indicate that the designed system can inspect the characters on the tire mold with high accuracy at a reasonable time cost. Practical implications: The system can detect carving faults in the characters on tire molds, i.e. cases where characters are wrongly added, deleted or modified. Originality/value: To the best of the authors' knowledge, this is the first automatic vision-based inspection system for the characters on tire molds. Inspection equipment is designed and several novel image processing methods are proposed to implement the inspection task. The designed system can be widely applied in industry.
38

De, Soumya, R. Joe Stanley, Beibei Cheng, Sameer Antani, Rodney Long, and George Thoma. "Automated Text Detection and Recognition in Annotated Biomedical Publication Images." International Journal of Healthcare Information Systems and Informatics 9, no. 2 (April 2014): 34–63. http://dx.doi.org/10.4018/ijhisi.2014040103.

Abstract:
Images in biomedical publications often convey important information related to an article's content. When referenced properly, these images aid in clinical decision support. Annotations such as text labels and symbols, as provided by medical experts, are used to highlight regions of interest within the images. These annotations, if extracted automatically, could be used in conjunction with either the image caption text or the image citations (mentions) in the articles to improve biomedical information retrieval. In the current study, automatic detection and recognition of text labels in biomedical publication images was investigated. This paper presents both image analysis and feature-based approaches to extract and recognize specific regions of interest (text labels) within images in biomedical publications. Experiments were performed on 6515 characters extracted from text labels present in 200 biomedical publication images. These images are part of the data set from ImageCLEF 2010. Automated character recognition experiments were conducted using geometry-, region-, exemplar-, and profile-based correlation features and Fourier descriptors extracted from the characters. Correct recognition as high as 92.67% was obtained with a support vector machine classifier, compared to a 75.90% correct recognition rate with a benchmark Optical Character Recognition technique.
39

A. Jain, Sajan, N. Shobha Rani, and N. Chandan. "Image Enhancement of Complex Document Images Using Histogram of Gradient Features." International Journal of Engineering & Technology 7, no. 4.36 (December 9, 2018): 780. http://dx.doi.org/10.14419/ijet.v7i4.36.24244.

Abstract:
Enhancement of document images is an interesting research challenge in the process of character recognition. It is quite significant to have a document with a uniform illumination gradient to achieve higher recognition accuracies through a document processing system like Optical Character Recognition (OCR). Complex document images are one of the image categories that are difficult to process compared to other types of images, and it is the quality of the document that decides the precision of a character recognition system. Hence, transforming complex document images to a uniform illumination gradient is desirable. In the proposed research, ancient document images from the UMIACS Tobacco 800 database are considered for removal of marginal noise. The proposed technique carries out block-wise interpretation of document contents to remove the marginal noise that is usually present at the borders of images. Further, Hu moment features are computed for the detection of marginal noise in every block. An empirical analysis is carried out to classify blocks as noisy or non-noisy, and the outcomes produced by the algorithm are satisfactory and feasible for subsequent analysis.
40

Miyao, Hidetoshi, Yasuaki Nakano, Atsuhiko Tani, Hirosato Tabaru, and Toshihiro Hananoi. "Printed Japanese Character Recognition Using Multiple Commercial OCRs." Journal of Advanced Computational Intelligence and Intelligent Informatics 8, no. 2 (March 20, 2004): 200–207. http://dx.doi.org/10.20965/jaciii.2004.p0200.

Abstract:
This paper proposes two algorithms for maintaining matching between lines and characters in text documents output by multiple commercial optical character readers (OCRs): (1) a line matching algorithm using dynamic programming (DP) matching, and (2) a character matching algorithm using character string division and standard character strings. The paper also introduces majority logic and reject processing in character recognition. To demonstrate the feasibility of the method, we conducted line matching recognition experiments on 127 document images using five commercial OCRs. Results demonstrated that the method extracted character areas more accurately than a single OCR, along with appropriate line matching. The proposed method enhanced recognition from 97.61% for a single OCR to 98.83% in experiments using the character matching algorithm and character recognition. This method is expected to be highly useful in correcting locations where unwanted lines or characters appear or required lines or characters disappear.
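
Once the OCR outputs are aligned (the DP step above), the majority logic with rejection reduces to a per-position vote. The alignment itself is assumed to be done already, with all strings the same length, and the reject marker is an arbitrary choice.

    from collections import Counter

    REJECT = '?'   # emitted where no strict majority exists

    def majority_vote(aligned_outputs):
        result = []
        for chars in zip(*aligned_outputs):
            ch, votes = Counter(chars).most_common(1)[0]
            result.append(ch if votes > len(chars) // 2 else REJECT)
        return ''.join(result)

    # Example: majority_vote(['OCR', 'OCR', 'OCB']) -> 'OCR'
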
41

Mukherjee, Sujata. "The female character in Indian moving images." Social Change 34, no. 1 (March 2004): 1–10. http://dx.doi.org/10.1177/004908570403400101.

42

Pareek, Jyoti, Dimple Singhania, Rashmi Rekha Kumari, and Suchit Purohit. "Gujarati Handwritten Character Recognition from Text Images." Procedia Computer Science 171 (2020): 514–23. http://dx.doi.org/10.1016/j.procs.2020.04.055.

43

Du, Songbo, Fang Yang, and Xuedong Tian. "Ancient Chinese Character Image Retrieval Based on Dual Hesitant Fuzzy Sets." Scientific Programming 2021 (February 22, 2021): 1–9. http://dx.doi.org/10.1155/2021/6621037.

Abstract:
The complex and changeable structures of ancient Chinese characters reduce the accuracy of their image retrieval. To resolve this problem, a new retrieval method based on dual hesitant fuzzy sets is proposed. Dual hesitant fuzzy sets, which can express uncertain information more comprehensively, are employed in the feature extraction process of directional line elements. The multiattribute evaluation index of adjacent grids for the current grid and its corresponding membership and nonmembership functions are established, and the weight of each attribute is calculated by dual hesitant fuzzy entropy, such that the proposed features fully reflect the topological structure of ancient Chinese characters. Using the dual hesitant fuzzy correlation coefficient to measure the similarity between the ancient Chinese character image to be retrieved and the candidate images, retrieval of ancient Chinese character images is realized. Experiments show that when the threshold value of the correlation coefficient is 0.9, the average retrieval accuracy is 90.4%.
44

Wang, Dacheng, and Sargur N. Srihari. "Analysis of Form Images." International Journal of Pattern Recognition and Artificial Intelligence 08, no. 05 (October 1994): 1031–52. http://dx.doi.org/10.1142/s0218001494000528.

Abstract:
Automatic analysis of images of forms is a problem of both practical and theoretical interest; due to its importance in office automation, and due to the conceptual challenges posed for document image analysis, respectively. We describe an approach to the extraction of text, both typed and handwritten, from scanned and digitized images of filled-out forms. In decomposing a filled-out form into three basic components of boxes, line segments and the remainder (handwritten and typed characters, words, and logos), the method does not use a priori knowledge of form structure. The input binary image is first segmented into small and large connected components. Complex boxes are decomposed into elementary regions using an approach based on key-point analysis. Handwritten and machine-printed text that touches or overlaps guide lines and boxes are separated by removing lines. Characters broken by line removal are rejoined using a character patching method. Experimental results with filled-out forms, from several different domains (insurance, banking, tax, retail and postal) are given.
45

Masuhara, Tsukasa, Hideaki Kawano, Hideaki Orii, and Hiroshi Maeda. "Decorated Character Recognition Employing Modified SOM Matching." Applied Mechanics and Materials 103 (September 2011): 649–57. http://dx.doi.org/10.4028/www.scientific.net/amm.103.649.

Abstract:
Character recognition is a classical issue to which many researchers have devoted themselves. Making character recognition systems more widely available for natural scene images might open up interesting possibilities for using them as character input interfaces and as an annotation method for images. Nevertheless, it is still difficult to recognize all sorts of fonts, including decorated characters such as those depicted on signboards. Decorated characters are constructed using special techniques for attracting viewers' attention; therefore, it is hard to obtain good recognition results with existing OCRs. In this paper, we propose a new character recognition system using a SOM (self-organizing map). The SOM is employed to extract an essential topological structure from a character; the extracted structure is then used for matching, and recognition is performed on the basis of this topological matching. Experimental results show the effectiveness of the proposed method for most forms of characters.
46

Sung, Jae-Kyung, Sang-Min Park, Sang-Yun Sin, and Yung Bok Kim. "Deep Neural Network for Product Classification System with Korean Character Image." International Journal of Engineering & Technology 7, no. 3.33 (August 29, 2018): 179. http://dx.doi.org/10.14419/ijet.v7i3.33.21008.

Abstract:
This paper proposes a product classification system based on deep learning that uses Korean character (Hangul) images to search for products in a shopping mall. Generally, an online shopping mall customer searches through a category classification or by product name to purchase a product. When the exact product name or category is unclear, the user has to search for the name. However, product image classification is degraded because the product logos and characters on the package often interfere. To solve such problems, we propose a deep-learning-based classification system using Korean character images. The training data of this system uses Korean character images from PHD08, a Hangul (Korean-language) database. The experiment is carried out using product names collected on the web. For the performance experiment, 10 categories of an online shopping mall are selected, and the classification accuracy is measured and compared with previous systems.
47

Cheriet, Mohamed, Jean-Christophe Demers, and Sylvain Deblois. "Shock Filter-Based Diffusion Fields — Application to Grayscale Character Image Processing." International Journal of Image and Graphics 05, no. 02 (April 2005): 209–45. http://dx.doi.org/10.1142/s0219467805001732.

Abstract:
In this article, the new concept of diffusion fields based on partial differential equations is applied to character image processing. Specific diffusion fields are developed according to character image structures and features, depending on the scope of application. Doing so allows the application of a straightforward one-dimensional numerical scheme to image enhancement, erosion, dilation and thinning. The strength of this approach is the flexibility brought by the diffusion field, which can be defined taking into account specific difficulties of grayscale character images with a minimum of prior information. Thus, the algorithm is shown to be robust to singularity points, the creation of spurious branches, variations in stroke thickness and intensity, multimodality, noise and image background patterns. The resulting enhanced images are noise-free, with sharp edges and the local typical intensity levels preserved. Thinned characters are connected skeletons located on the ridge of the initial character. Again, the typical intensities of the character and background are preserved.
48

Osipova, E. P. "MYTHOLOGEMES OF THE RYAZAN-MORDOVIAN BORDER." Onomastics of the Volga Region, no. 2 (2020): 263–70. http://dx.doi.org/10.34216/2020-2.onomast.263-270.

Abstract:
The article covers mythologemes recorded in the Shatsk area of the Ryazan region. The dialect material demonstrates different semantic content of mythologemes in Shatsk dialects, which shows reduction of mythological images in rural residents' consciousness: a mythological character itself - a character of disguise - a scarecrow or a bogeyman - a character for scaring children. The ritual function of the characters in modern dialects is mainly lost and preserved only in the older generation's memory.
49

Rathore, Priti Singh, and Dr Pawan Kumar. "Hand-written Character Recognition with Neural Network." International Journal for Research in Applied Science and Engineering Technology 10, no. 3 (March 31, 2022): 1709–15. http://dx.doi.org/10.22214/ijraset.2022.40951.

Abstract:
Every handwriting style is unique, which makes it challenging to identify handwritten characters. Handwritten character recognition has become a subject of exploration over the last few decades through research on neural networks. Languages such as Hindi are written and read from left to right. To recognize such writing, we present a deep-learning-based handwritten Hindi character recognition system utilizing techniques such as Convolutional Neural Networks (CNN) with the Adaptive Moment Estimation (Adam) optimizer and Deep Neural Networks (DNN). The suggested system was trained on samples from a large number of database images and then evaluated on images from a user-defined data set, yielding very high accuracy rates. Keywords: Deep learning, CNN, Adam Optimizer, Handwritten character recognition, Accuracy, Training Time.
50

Luo, Peng, Xuekai Hu, Yuhao Zhao, Yi Jiang, Fanglin Lu, Jifeng Liang, and Liang Xu. "Smear character recognition method of side-end power meter based on PCA image enhancement." Nonlinear Engineering 11, no. 1 (January 1, 2022): 232–40. http://dx.doi.org/10.1515/nleng-2022-0028.

Abstract:
Since it is difficult for manual recording to track the rapidly changing indication of a power meter, power meter images are collected by camera and automatically recognized and recorded, effectively overcoming the disadvantages of manual recording. However, the complex scene lighting environment and smeared character shadows make it difficult to pass captured images directly to convolutional neural networks for character recognition. A smear character recognition method for side-end power meters under complex lighting conditions is proposed in this article. First, an uneven-illumination image enhancement algorithm is studied: the illumination component of the image is estimated, fusion weights are calculated by principal component analysis for multiscale fusion, and up-sampling and down-sampling are adopted to reduce the computation of the algorithm and achieve rapid image enhancement. A convolutional neural network framework based on deep learning is proposed to segment the smeared characters, and the segmented individual characters are fed into a network to identify the meter readings. The experimental results show that the proposed method has a fast recognition speed, and the recognition rate on samples with smeared characters and complex illumination reaches 99.8%, which meets the requirements of power meter character recognition and is better than other algorithms.