Academic literature on the topic 'Character images'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Character images.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Character images"

1

Chang, Yasheng, and Weiku Wang. "Text recognition in radiographic weld images." Insight - Non-Destructive Testing and Condition Monitoring 61, no. 10 (October 1, 2019): 597–602. http://dx.doi.org/10.1784/insi.2019.61.10.597.

Full text
Abstract:
Automatic recognition of text characters on radiographic images based on computer vision would be a very useful step forward as it could improve and simplify the file handling of digitised radiographs. Text recognition in radiographic weld images is challenging since there is no uniform font or character size and each character may tilt in different directions and by different amounts. Deep learning approaches for text recognition have recently achieved breakthrough performance using convolutional neural networks (CNNs). CNNs can recognise normalised characters in different fonts. However, the tilt of a character still has a strong influence on the accuracy of recognition. In this paper, a new improved algorithm is proposed based on the Radon transform, which is very effective at character rectification. The improved algorithm increases the accuracy of character recognition from 86.25% to 98.48% in the current experiments. The CNN is used to recognise the rectified characters, which achieves good accuracy and improves character recognition in radiographic weld images. A CNN greatly improves the efficiency of digital scanning and filing of radiographic film. The method proposed in this paper is also compared with other methods that are commonly used in other fields and the results show that the proposed method is better than state-of-the-art methods.
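The rectification idea in this abstract (estimate a character's tilt from its projections and rotate it back before CNN recognition) can be sketched in a few lines. The code below is a simplified stand-in, not the paper's method: it scans candidate angles and scores the vertical projection, rather than running the full Radon-transform pipeline, and the synthetic bar image and angle range are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import rotate

def estimate_tilt(img, angles=np.arange(-45, 46, 1)):
    """Estimate character tilt by scanning rotation angles.

    For each candidate angle, rotate the image and take the vertical
    projection (column sums); a correctly rectified character yields the
    'peakiest' projection, measured here by its variance. This is a
    one-angle-at-a-time stand-in for the Radon transform in the paper.
    """
    best_angle, best_score = 0.0, -1.0
    for a in angles:
        r = rotate(img, a, reshape=False, order=1)
        score = np.var(r.sum(axis=0))
        if score > best_score:
            best_angle, best_score = float(a), score
    return best_angle

# Synthetic test: a vertical stroke tilted by 20 degrees.
img = np.zeros((64, 64))
img[10:54, 30:34] = 1.0                    # vertical bar
tilted = rotate(img, -20, reshape=False, order=1)
angle = estimate_tilt(tilted)              # rectifying rotation, near +20
rectified = rotate(tilted, angle, reshape=False, order=1)
```

The rectified image would then be fed to the CNN classifier, as in the paper's two-stage design.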
APA, Harvard, Vancouver, ISO, and other styles
2

Wilterdink, Nico. "Images of national character." Society 32, no. 1 (November 1994): 43–51. http://dx.doi.org/10.1007/bf02693352.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Wang, Zeya. "Discussion on the Significance of Algirdas Julien Greimas's Semiotic Square in Character Shaping — A Case Study of the Novel Mo Dao Zu Shi." Arts Studies and Criticism 3, no. 2 (June 27, 2022): 110. http://dx.doi.org/10.32629/asc.v3i2.832.

Full text
Abstract:
Algirdas Julien Greimas uses the semiotic square to clarify the relationships between characters in a text and studies the text's underlying value judgments through logical deduction. This complete and effective interpretive method has gradually been applied in more disciplines. Today, in works such as Mo Dao Zu Shi, characterization is increasingly complex, while flat characterization is gradually declining; the internal factors that form character images in literary works have therefore become more and more intricate. Using Greimas's semiotic square to analyze the relationships among the multiple factors in a character's personality not only reveals the multifaceted nature of the character to readers, but also helps to build a feasible framework for the creation of character images in the future.
APA, Harvard, Vancouver, ISO, and other styles
4

Tian, Xue Dong, Xue Sha Jia, Fang Yang, Xin Fu Li, and Xiu Fen Miao. "A Retrieval Method of Ancient Chinese Character Images." Applied Mechanics and Materials 462-463 (November 2013): 432–37. http://dx.doi.org/10.4028/www.scientific.net/amm.462-463.432.

Full text
Abstract:
Global and local image retrieval of ancient Chinese characters is a helpful tool for character research. Because ancient Chinese characters have complex structures and variant shapes, feature extraction, clustering and retrieval of these images pose many theoretical and technical problems. A retrieval method was established for ancient Chinese character images with a "clustering before matching" strategy that integrates their structural features. First, the preprocessed character image area is divided into elastic meshes and directional elements are extracted to form the feature vectors. Then, the k-means algorithm is employed to cluster the character images within global and local areas. Finally, similar images within the selected areas are searched for in the corresponding cluster and the retrieved images are presented to users. The experimental results show that this method helps improve the efficiency of ancient Chinese character study.
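The "clustering before matching" strategy can be illustrated with a minimal sketch: crude grid-density features stand in for the paper's elastic-mesh directional elements, a small k-means groups the images, and retrieval searches only the query's cluster. All feature and data choices here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def mesh_features(img, grid=4):
    """Rough stand-in for elastic-mesh directional features:
    average ink density in a grid x grid mesh."""
    h, w = img.shape
    f = [img[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid].mean()
         for i in range(grid) for j in range(grid)]
    return np.array(f)

def kmeans(X, k, iters=20, seed=0):
    """Tiny k-means: alternate nearest-centroid assignment and mean update."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = X[labels == c].mean(0)
    return centers, labels

def retrieve(query, X, centers, labels, top=3):
    """Clustering before matching: search only inside the query's cluster."""
    c = ((centers - query) ** 2).sum(1).argmin()
    idx = np.where(labels == c)[0]
    d = ((X[idx] - query) ** 2).sum(1)
    return idx[d.argsort()[:top]]

# Two synthetic 'character' groups: ink on top vs. ink on bottom.
rng = np.random.default_rng(1)
top_ink = [np.vstack([np.ones((8, 16)), np.zeros((8, 16))])
           + 0.05 * rng.standard_normal((16, 16)) for _ in range(5)]
bot_ink = [np.vstack([np.zeros((8, 16)), np.ones((8, 16))])
           + 0.05 * rng.standard_normal((16, 16)) for _ in range(5)]
X = np.array([mesh_features(im) for im in top_ink + bot_ink])
centers, labels = kmeans(X, 2)
hits = retrieve(mesh_features(top_ink[0]), X, centers, labels)
```

Restricting the match to one cluster is what makes the search cheaper than comparing the query against the whole collection.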
APA, Harvard, Vancouver, ISO, and other styles
5

Seeri, Shivananda V., J. D. Pujari, and P. S. Hiremath. "PNN Based Character Recognition in Natural Scene Images." Bonfring International Journal of Software Engineering and Soft Computing 6, Special Issue (October 31, 2016): 109–13. http://dx.doi.org/10.9756/bijsesc.8254.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Yanagisawa, Hideaki, Takuro Yamashita, and Hiroshi Watanabe. "Clustering of Comic Character Images for Extraction of Major Characters." Journal of The Institute of Image Information and Television Engineers 73, no. 1 (2019): 199–204. http://dx.doi.org/10.3169/itej.73.199.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Li, Xue Yong, and Chang Hou Lu. "A Gabor Filter Based Image Acquisition Method for Raised Characters." Applied Mechanics and Materials 373-375 (August 2013): 459–63. http://dx.doi.org/10.4028/www.scientific.net/amm.373-375.459.

Full text
Abstract:
This paper presents a novel image acquisition technique for raised characters. First, laser stripes modulated by the height of the raised characters are captured using a 3D vision technique. The laser stripes are then combined into a grating image, which contains complete character images on a grating background. Finally, a rationally designed Gabor kernel is applied to filter the grating image; in this way the background is removed and grayscale images of the raised characters are preserved. Experiments show that the proposed method obtains well-separated character images and is simpler and more efficient than existing methods.
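A minimal sketch of the key step, a Gabor kernel tuned to the grating's frequency and orientation, is shown below. The frequency, orientation and image are illustrative assumptions; the demo only shows that a matched kernel responds far more strongly to the grating than an orthogonally oriented one, which is the property the paper exploits to separate background from characters.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(freq, theta, sigma, size=21):
    """Real-valued Gabor kernel: a cosine grating of spatial frequency
    `freq` (cycles/pixel) at orientation `theta`, under a Gaussian
    envelope of width `sigma`."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

# A synthetic 'grating image': horizontal sinusoidal stripes, standing in
# for the background produced by composing the laser stripes.
yy, xx = np.mgrid[0:64, 0:64]
grating = np.cos(2 * np.pi * 0.125 * yy)            # stripes vary along y

k_match = gabor_kernel(0.125, np.pi / 2, sigma=4)   # tuned to the stripes
k_ortho = gabor_kernel(0.125, 0.0, sigma=4)         # orthogonal orientation
resp_match = fftconvolve(grating, k_match, mode='same')
resp_ortho = fftconvolve(grating, k_ortho, mode='same')
```

In the paper's setting the tuned response isolates the grating background so it can be subtracted, leaving the character regions.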
APA, Harvard, Vancouver, ISO, and other styles
8

Wu, Wei, Zheng Liu, Mo Chen, Zhiming Liu, Xi Wu, and Xiaohai He. "A New Framework for Container Code Recognition by Using Segmentation-Based and HMM-Based Approaches." International Journal of Pattern Recognition and Artificial Intelligence 29, no. 01 (January 4, 2015): 1550004. http://dx.doi.org/10.1142/s0218001415500044.

Full text
Abstract:
Traditional methods for automatic recognition of container codes in visual images are based on segmentation and recognition of isolated characters. However, when segmentation fails to separate each character from the others, those methods do not function properly. Sometimes the container code characters are printed or arranged very closely, which makes it a challenge to isolate each character. To address this issue, a new framework for automatic container code recognition (ACCR) in visual images is proposed in this paper. In this framework, code-character regions are first located by applying a horizontal high-pass filter and scan line analysis. Then, character blocks are extracted from the code-character regions and classified into two categories: single-character blocks and multi-character blocks. Finally, a segmentation-based approach is used to recognize the characters in single-character blocks, and a hidden Markov model (HMM)-based method is proposed for the multi-character blocks. The experimental results demonstrate the effectiveness of the proposed method, which can successfully recognize container codes with closely arranged characters.
APA, Harvard, Vancouver, ISO, and other styles
9

Rai, Laxmisha, and Hong Li. "MyOcrTool: Visualization System for Generating Associative Images of Chinese Characters in Smart Devices." Complexity 2021 (May 7, 2021): 1–14. http://dx.doi.org/10.1155/2021/5583287.

Full text
Abstract:
The majority of Chinese characters are pictographic characters with strong associative power: when a character appears, Chinese readers usually associate it immediately with the objects or actions related to it. Against this background, we propose a system to visualize simplified Chinese characters, so that no reading or writing skill in Chinese is necessary. Considering the extensive use of mobile devices, automatic identification of Chinese characters and display of associative images are made possible in smart devices to facilitate a quick overview of a Chinese text. This work is of practical significance for the research and development of real-time Chinese text recognition and the display of associative images, and for users who would like to visualize text with images only. The proposed Chinese character recognition system and visualization tool, named MyOcrTool, is developed for the Android platform. The application recognizes Chinese characters through an OCR engine, uses the internal voice playback interface to realize audio functions, and displays the visual images of Chinese characters in real time.
APA, Harvard, Vancouver, ISO, and other styles
10

Angadi, S. A., and M. M. Kodabagi. "A Robust Segmentation Technique for Line, Word and Character Extraction from Kannada Text in Low Resolution Display Board Images." International Journal of Image and Graphics 14, no. 01n02 (January 2014): 1450003. http://dx.doi.org/10.1142/s021946781450003x.

Full text
Abstract:
Reliable extraction/segmentation of text lines, words and characters is one of the most important steps in developing automated systems for understanding the text in low-resolution display board images. In this paper, a new approach for segmentation of text lines, words and characters from Kannada text in low-resolution display board images is presented. The proposed method uses projection profile features and on-pixel distribution statistics for segmentation of text lines. The method also detects text lines containing consonant modifiers and merges them with the corresponding text lines, and efficiently separates overlapped text lines as well. The character extraction process computes character boundaries using vertical profile features for extracting character images from every text line. Further, the word segmentation process uses k-means clustering to group inter-character gaps into character and word cluster spaces, which are used to compute thresholds for extracting words. The method also accounts for variations in character and word gaps. The proposed methodology is evaluated on a data set of 1008 low-resolution images of display boards containing Kannada text captured by 2-megapixel cameras on mobile phones at sizes 240 × 320, 480 × 640 and 960 × 1280. The method achieves text line segmentation accuracy of 97.17%, word segmentation accuracy of 97.54% and character extraction accuracy of 99.09%. The proposed method is tolerant to font variability, spacing variations between characters and words, absence of a free segmentation path due to consonant and vowel modifiers, noise and other degradations. Experimentation with images containing overlapped text lines has given promising results.
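The first step described above, text-line segmentation from a projection profile, reduces to finding blank bands in the row sums. Below is a minimal sketch of that idea only; the paper's handling of consonant modifiers and overlapped lines is beyond this demo, and the synthetic page is an illustrative assumption.

```python
import numpy as np

def segment_lines(binary):
    """Split a binary text image into horizontal bands using the row
    projection profile: rows with no 'on' pixels separate text lines.
    Returns a list of (start_row, end_row) half-open intervals."""
    profile = binary.sum(axis=1)
    lines, start = [], None
    for r, v in enumerate(profile):
        if v > 0 and start is None:
            start = r                      # a text band begins
        elif v == 0 and start is not None:
            lines.append((start, r))       # a text band ends
            start = None
    if start is not None:
        lines.append((start, len(profile)))
    return lines

# Synthetic page: two 'text lines' of ink separated by a blank band.
page = np.zeros((40, 100), dtype=int)
page[5:12, 10:90] = 1
page[22:30, 10:90] = 1
bands = segment_lines(page)   # [(5, 12), (22, 30)]
```

Word and character extraction then repeat the same idea on vertical profiles within each band, with clustering of gap widths deciding word boundaries.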
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Character images"

1

Viklund, Alexander, and Emma Nimstad. "Character Recognition in Natural Images Utilising TensorFlow." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-208385.

Full text
Abstract:
Convolutional Neural Networks (CNNs) are commonly used for character recognition and achieve the lowest error rates on popular datasets such as SVHN and MNIST. However, research on CNN-based character classification in natural images covering the whole English alphabet is lacking. This thesis conducts an experiment in which TensorFlow is used to construct a CNN that is trained and tested on the Chars74K dataset, with 15 images per class for training and 15 images per class for testing, with the aim of achieving a higher accuracy than the 55.26% reached by the non-CNN approach of de Campos et al. [1]. The thesis explores data augmentation techniques for expanding the small training set and evaluates the results of applying rotation, stretching, translation and noise-adding. All of these methods apart from noise-adding have a positive effect on the accuracy of the network. Furthermore, the experiment shows that with a three-layer convolutional neural network it is possible to create a character classifier that is as good as that of de Campos et al. Even better results could likely be achieved with more experiments on the parameters of the network and the augmentation.
(Swedish abstract.) It is common to use convolutional neural networks (CNNs) for image recognition, as they give the smallest error margins on well-known datasets such as SVHN and MNIST. However, there is a lack of research on using CNNs to classify letters in natural images covering the whole English alphabet. This work describes an experiment in which TensorFlow is used to build a CNN that is trained and tested on images from Chars74K, with 15 images per class for training and 15 per class for testing. The goal is to achieve higher accuracy than the 55.26% that de Campos et al. [1] reached with a method that did not use neural networks. The report explores different techniques for artificially expanding the small dataset and evaluates the results of applying rotation, stretching, translation and noise-adding. All of these methods except noise-adding have a positive effect on the network's accuracy. Furthermore, the experiment shows that a three-layer CNN can produce a character classifier as good as that of de Campos et al. If more experiments were conducted on the parameters of the network and the augmentation, even better results could likely be achieved.
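The augmentation step evaluated in this thesis (rotation and translation; noise-adding is omitted here, since it was the one method that hurt accuracy) can be sketched as follows. The parameter ranges and the synthetic base image are illustrative assumptions, not the thesis's actual settings.

```python
import numpy as np
from scipy.ndimage import rotate, shift

def augment(img, rng):
    """One random augmentation of a character image: a small random
    rotation followed by a small random translation."""
    out = rotate(img, rng.uniform(-15, 15), reshape=False, order=1)
    out = shift(out, rng.uniform(-2, 2, size=2), order=1)
    return np.clip(out, 0.0, 1.0)

# Expand one synthetic character into a small augmented batch.
rng = np.random.default_rng(0)
base = np.zeros((32, 32))
base[8:24, 14:18] = 1.0            # a crude vertical stroke
batch = np.stack([augment(base, rng) for _ in range(8)])
```

Each augmented copy counts as a new training example, which is how a 15-image-per-class set is stretched into something a CNN can learn from.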
APA, Harvard, Vancouver, ISO, and other styles
2

Granlund, Oskar, and Kai Böhrnsen. "Improving character recognition by thresholding natural images." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-208899.

Full text
Abstract:
Current state-of-the-art optical character recognition (OCR) algorithms are capable of extracting text from images under predefined conditions. OCR is extremely reliable for interpreting machine-written text with minimal distortions, but images taken in a natural scene are still challenging. In recent years, the topic of improving recognition rates in natural images has gained interest because more powerful handheld devices are being used. The main problems faced in recognition in natural images are distortions such as uneven illumination, font textures, and complex backgrounds. Different preprocessing approaches to separating text from its background have been researched lately. In our study, we assess the improvement achieved by two of these preprocessing methods, k-means and Otsu, by comparing their results from an OCR algorithm. The study showed that the preprocessing yielded some improvement in particular cases but overall gave worse accuracy than the unaltered images.
(Swedish abstract.) Today's optical character recognition (OCR) algorithms can extract text from images under predefined conditions. Modern methods achieve high accuracy for machine-written text with minimal distortions, but images taken in a natural scene are still hard to handle. In recent years, interest in improving character recognition algorithms has grown as more powerful handheld devices have come into use. The main problem in recognition in natural images is distortions such as incident light, text texture and complex backgrounds. Various methods for preprocessing, and thereby separating the text from its background, have recently been studied. In our study, we assess the improvement achieved by two such preprocessing methods, k-means and Otsu, by comparing the results of an OCR algorithm. The study shows that Otsu and k-means can improve accuracy under certain conditions, but in general they give worse results than the unaltered images.
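Of the two preprocessing methods compared in this thesis, Otsu's method is simple enough to sketch directly: it picks the grayscale histogram threshold that maximizes between-class variance. The bimodal test data below is an illustrative assumption.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: choose the threshold maximizing between-class
    variance of the 0..255 grayscale histogram."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                      # class-0 probability
    mu = np.cumsum(p * np.arange(256))        # class-0 mean, scaled by omega
    mu_t = mu[-1]                             # global mean
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b = np.nan_to_num(sigma_b)          # endpoints divide by zero
    return int(np.argmax(sigma_b))

# Bimodal test data: dark 'background' around 40, bright 'text' around 200.
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(40, 10, 5000), rng.normal(200, 10, 5000)])
img = np.clip(img, 0, 255).astype(np.uint8)
t = otsu_threshold(img)   # lands between the two modes
```

Binarizing at `t` separates text pixels from background, which is exactly the preprocessing whose effect on OCR the thesis measures.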
APA, Harvard, Vancouver, ISO, and other styles
3

Nahar, Vikas. "Content based image retrieval for bio-medical images." Diss., Rolla, Mo. : Missouri University of Science and Technology, 2010. http://scholarsmine.mst.edu/thesis/pdf/Nahar_09007dcc80721e0b.pdf.

Full text
Abstract:
Thesis (M.S.)--Missouri University of Science and Technology, 2010.
Vita. The entire thesis text is included in file. Title from title screen of thesis/dissertation PDF file (viewed Dec. 23, 2009). Includes bibliographical references (p. 82-83).
APA, Harvard, Vancouver, ISO, and other styles
4

Lomelin, Stoupignan Mauricio. "Character template estimation from document images and their transcriptions." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/36566.

Full text
Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1995.
Includes bibliographical references (p. 124-126).
by Mauricio Lomelin Stoupignan.
M.S.
APA, Harvard, Vancouver, ISO, and other styles
5

Sundin, Hannes, and Jakob Josefsson. "Evaluating synthetic training data for character recognition in natural images." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280292.

Full text
Abstract:
This thesis is centered on character recognition in natural images; more specifically, it evaluates the use of synthetic font images for training a Convolutional Neural Network (CNN), compared with natural training data. Training a CNN to recognize characters in natural images often demands a large amount of labeled data. One alternative is to generate synthetic data using digital fonts. A total of 41,664 font images were generated, which in combination with already existing data yielded around 99,000 images. Using this synthetic dataset, the CNN was trained with incrementally increasing amounts of synthetic training data and tested on natural images. At the same time, different preprocessing methods were applied to the synthetic data in order to observe their effect on accuracy. Results show that even when using the best-performing preprocessing method and having access to 99,000 synthetic training images, a smaller set of natural training data yielded better results. However, the results also show that synthetic data can perform better than natural data, provided that a good preprocessing method is used and the supply of natural images is limited.
(Swedish abstract.) This bachelor's thesis addresses character recognition in natural images. More specifically, synthetic font images are compared with natural images for training a Convolutional Neural Network (CNN). Training a CNN to recognize characters in natural images usually requires a large amount of labeled natural data. One alternative is to produce synthetic training data in the form of font images. In this study, 41,664 font images were created, which in combination with existing data gave around 99,000 synthetic training images. A CNN was then trained with increasing amounts of font images and tested on natural images of characters, and the result was compared with training on natural images. In addition, different preprocessing methods were tried in order to observe their effect on classification accuracy. The results showed that even with the best-performing preprocessing method and much more data, training with synthetic images was not as effective as training with natural images. However, with a good preprocessing method, synthetic images can replace natural images, given that the supply of natural images is limited.
APA, Harvard, Vancouver, ISO, and other styles
6

Hanson, Adam. "Character recognition of optically blurred textual images using moment invariants /." Online version of thesis, 1993. http://hdl.handle.net/1850/11748.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Waters, Keith. "The computer synthesis of expressive three-dimensional facial character animation." Thesis, Middlesex University, 1988. http://eprints.mdx.ac.uk/8095/.

Full text
Abstract:
This present research is concerned with the design, development and implementation of three-dimensional computer-generated facial images capable of expression gesture and speech. A review of previous work in chapter one shows that to date the model of computer-generated faces has been one in which construction and animation were not separated and which therefore possessed only a limited expressive range. It is argued in chapter two that the physical description of the face cannot be seen as originating from a single generic mould. Chapter three therefore describes data acquisition techniques employed in the computer generation of free-form surfaces which are applicable to three-dimensional faces. Expressions are the result of the distortion of the surface of the skin by the complex interactions of bone, muscle and skin. Chapter four demonstrates with static images and short animation sequences in video that a muscle model process algorithm can simulate the primary characteristics of the facial muscles. Three-dimensional speech synchronization was the most complex problem to achieve effectively. Chapter five describes two successful approaches: the direct mapping of mouth shapes in two dimensions to the model in three dimensions, and geometric distortions of the mouth created by the contraction of specified muscle combinations. Chapter six describes the implementation of software for this research and argues the case for a parametric approach. Chapter seven is concerned with the control of facial articulations and discusses a more biological approach to these. Finally chapter eight draws conclusions from the present research and suggests further extensions.
APA, Harvard, Vancouver, ISO, and other styles
8

Kraljevic, Matija. "Character recognition in natural images : Testing the accuracy of OCR and potential improvement by image segmentation." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-187991.

Full text
Abstract:
In recent years, reading text from natural images has gained renewed research attention. One of the main reasons for this is the rapid growth of camera-based applications on smartphones and other portable devices. With the increasing availability of high-performance, low-priced, image-capturing devices, the application of scene text recognition is rapidly expanding and becoming increasingly popular. Despite many efforts, character recognition in natural images is still considered a challenging and unresolved problem. The difficulties stem from the fact that natural images suffer from a wide variety of obstacles such as complex backgrounds, font variation, uneven illumination, resolution problems, occlusions and perspective effects, to mention just a few. This paper aims to test the accuracy of OCR in character recognition of natural images, as well as the possible improvement in accuracy after implementing three different segmentation methods. The results showed that the accuracy of OCR was very poor and that no improvements in accuracy were found after implementing the chosen segmentation methods.
APA, Harvard, Vancouver, ISO, and other styles
9

Peng, Qiu. "Characters Extraction for Traffic Sign Destination boards in video and still images." Thesis, Högskolan Dalarna, Datateknik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:du-5381.

Full text
Abstract:
Traffic control signs and destination boards on roadways offer significant information to drivers. Regulation signs state things such as speed limits and permitted turns; warning signs alert drivers to conditions ahead to help them avoid accidents; destination signs show distances and directions to various locations; service signs display the locations of hospitals, fuel stations, rest areas, etc. Because these signs are so important and are always read from a distance, drivers need to receive their information clearly and easily, even in bad weather and other difficult conditions. The idea is to develop software that collects frames from a camera mounted at the front of a moving car, extracts the important information and displays it to the driver. For example, when a frame contains a destination sign with text such as "Linkoping 50", the software should extract every character of "Linkoping 50" and compare each one with the known character data in a database; if an extracted character matches "k" in the database, the destination name is output and shown to the driver. In this project, C++ is used to implement the software.
APA, Harvard, Vancouver, ISO, and other styles
10

Watanabe, Toyohide, and Rui Zhang. "Recognition of character strings from color urban map images on the basis of validation mechanism." IEEE, 1997. http://hdl.handle.net/2237/6936.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Character images"

1

Reinelt, Sabine. Magic of character dolls: Images of children. Grantsville, Md: Hobby House Press, 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Graven images. New York: G.P. Putnam's Sons, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Graven images. Thorndike, Me: Thorndike Press, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Graven images. New York: Berkley Publishing Group, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Cleverly, Barbara. Strange images of death. New York: Soho Constable, 2010.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Strange images of death. New York: Soho Constable, 2010.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Feynman, Richard Phillips. The art of Richard P. Feynman: Images by a curious character. Basel: GB Science Publishers SA, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Potter, Lois, and Joshua Calhoun, eds. Images of Robin Hood: Medieval to Modern. Newark: University of Delaware, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Jesus as the Son of Man, the literary character: A progression of images. Claremont, CA: Institute for Antiquity and Christianity, 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Williams, Dave. Misreading the Chinese character: Images of the Chinese in Euroamerican drama to 1925. New York: P. Lang, 2000.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Character images"

1

Bakar, Norsharina Abu, and Siti Mariyam Shamsuddin. "United Zernike Invariants for Character Images." In Lecture Notes in Computer Science, 498–509. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-05036-7_47.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Flusser, Jan, and Tomáš Suk. "Character recognition by affine moment invariants." In Computer Analysis of Images and Patterns, 572–77. Berlin, Heidelberg: Springer Berlin Heidelberg, 1993. http://dx.doi.org/10.1007/3-540-57233-3_76.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Voráček, Jan. "Tree neural classifier for character recognition." In Computer Analysis of Images and Patterns, 631–36. Berlin, Heidelberg: Springer Berlin Heidelberg, 1995. http://dx.doi.org/10.1007/3-540-60268-2_356.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

You, Xinge, Yuan Y. Tang, Weipeng Zhang, and Lu Sun. "Skeletonization of Character Based on Wavelet Transform." In Computer Analysis of Images and Patterns, 140–48. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-45179-2_18.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Shaher, Abdullah Al, and Edwin R. Hancock. "Arabic Character Recognition Using Structural Shape Decomposition." In Computer Analysis of Images and Patterns, 478–86. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-45179-2_59.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Andrianasy, Fidimahery, and Maurice Milgram. "Dynamic character recognition using an elastic matching." In Computer Analysis of Images and Patterns, 888–93. Berlin, Heidelberg: Springer Berlin Heidelberg, 1995. http://dx.doi.org/10.1007/3-540-60268-2_398.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Park, Jong-Hyun, and Il-Seok Oh. "Wavelet-Based Feature Extraction from Character Images." In Intelligent Data Engineering and Automated Learning, 1092–96. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-45080-1_157.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Allier, Bénédicte, and Hubert Emptoz. "Character Prototyping in Document Images Using Gabor Filters." In Image Analysis, 28–35. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/3-540-45103-x_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Shchepin, E. V., and G. M. Nepomnyashchii. "On the method of critical points in character recognition." In Computer Analysis of Images and Patterns, 594–98. Berlin, Heidelberg: Springer Berlin Heidelberg, 1993. http://dx.doi.org/10.1007/3-540-57233-3_79.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Eikvil, Line, Kjersti Aas, and Marit Holden. "Tools for automatic recognition of character strings in maps." In Computer Analysis of Images and Patterns, 741–46. Berlin, Heidelberg: Springer Berlin Heidelberg, 1995. http://dx.doi.org/10.1007/3-540-60268-2_374.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Character images"

1

Fang, Shancheng, Hongtao Xie, Jianjun Chen, Jianlong Tan, and Yongdong Zhang. "Learning to Draw Text in Natural Images with Conditional Adversarial Networks." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/101.

Full text
Abstract:
In this work, we propose an entirely learning-based method to automatically synthesize text sequence in natural images leveraging conditional adversarial networks. As vanilla GANs are clumsy to capture structural text patterns, directly employing GANs for text image synthesis typically results in illegible images. Therefore, we design a two-stage architecture to generate repeated characters in images. Firstly, a character generator attempts to synthesize local character appearance independently, so that the legible characters in sequence can be obtained. To achieve style consistency of characters, we propose a novel style loss based on variance-minimization. Secondly, we design a pixel-manipulation word generator constrained by self-regularization, which learns to convert local characters to plausible word image. Experiments on SVHN dataset and ICDAR, IIIT5K datasets demonstrate our method is able to synthesize visually appealing text images. Besides, we also show the high-quality images synthesized by our method can be used to boost the performance of a scene text recognition algorithm.
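The "style loss based on variance-minimization" is described only at a high level in this abstract. One plausible reading (an assumption, not the paper's verified formulation) is to penalize the variance of per-character style features within a word, pushing all characters toward a shared style:

```python
import numpy as np

def style_variance_loss(char_feats):
    """Hypothetical variance-minimization style loss.

    char_feats: array of shape (num_characters, feat_dim) holding style
    features for the characters of one word. Penalizing their variance
    across characters encourages a single consistent style. The exact
    form is an assumed reading of the paper's high-level description.
    """
    return float(np.var(char_feats, axis=0).mean())

rng = np.random.default_rng(0)
consistent = np.ones((5, 8)) + 0.01 * rng.standard_normal((5, 8))
mixed = rng.standard_normal((5, 8))
loss_consistent = style_variance_loss(consistent)  # near zero
loss_mixed = style_variance_loss(mixed)            # much larger
```

Under this reading, the generator is rewarded when the five characters of a word share one style vector, which matches the abstract's stated goal of style consistency.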
2

"CHARACTER RECOGNITION IN NATURAL IMAGES." In International Conference on Computer Vision Theory and Applications. SciTePress - Science and Technology Publications, 2009. http://dx.doi.org/10.5220/0001770102730280.

3

Takagi, Noboru, and Jianjun Chen. "Character string extraction from scene images by eliminating non-character elements." In 2014 IEEE International Conference on Systems, Man and Cybernetics - SMC. IEEE, 2014. http://dx.doi.org/10.1109/smc.2014.6974503.

4

Narang, Vipin, Sujoy Roy, O. V. R. Murthy, and M. Hanmandlu. "Devanagari Character Recognition in Scene Images." In 2013 12th International Conference on Document Analysis and Recognition (ICDAR). IEEE, 2013. http://dx.doi.org/10.1109/icdar.2013.184.

5

Akbani, O., A. Gokrani, M. Quresh, Furqan M. Khan, Sadaf I. Behlim, and Tahir Q. Syed. "Character recognition in natural scene images." In 2015 International Conference on Information and Communication Technologies (ICICT). IEEE, 2015. http://dx.doi.org/10.1109/icict.2015.7469575.

6

Chen, Bing-Yu, Shih-Chiang Dai, Shuen-Huei Guan, and Tomoyuki Nishita. "Animating character images in 3D space." In SIGGRAPH '09: Posters. New York, New York, USA: ACM Press, 2009. http://dx.doi.org/10.1145/1599301.1599302.

7

Swindall, Matthew I., Timothy Player, Ben Keener, Alex C. Williams, James H. Brusuelas, Federica Nicolardi, Marzia D'Angelo, Claudio Vergara, Michael McOsker, and John F. Wallin. "Dataset Augmentation in Papyrology with Generative Models: A Study of Synthetic Ancient Greek Character Images." In Thirty-First International Joint Conference on Artificial Intelligence (IJCAI-22). California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/689.

Abstract:
Character recognition models rely substantially on image datasets that maintain a balance of class samples. However, achieving a balance of classes is particularly challenging in ancient manuscript contexts, as character instances may be significantly limited. In this paper, we present findings from a study that assesses the efficacy of using synthetically generated character instances to augment an existing dataset of ancient Greek character images for use in machine learning models. We complement our model exploration by engaging professional papyrologists to better understand the practical opportunities afforded by synthetic instances. Our results suggest that synthetic instances improve model performance for limited character classes, and may have unexplored effects on character classes more generally. We also find that trained papyrologists are unable to distinguish between synthetic and non-synthetic images and regard synthetic instances as valuable assets for professional and educational contexts. We conclude by discussing the practical implications of our research.
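The class-balancing strategy this abstract describes can be sketched in a few lines (a minimal illustration under our own assumptions: `generate` stands in for any trained generative model that produces one synthetic character image per call; it is not the authors' code):

```python
def balance_with_synthetic(class_counts, generate):
    """Top up under-represented classes with synthetic samples.

    class_counts: dict mapping character class -> number of real images.
    generate: callable(cls) returning one synthetic sample for `cls`
              (stands in for a trained generative model).
    Returns a dict mapping each class to the list of synthetic samples
    needed to bring it up to the size of the largest class.
    """
    target = max(class_counts.values())
    return {
        cls: [generate(cls) for _ in range(target - count)]
        for cls, count in class_counts.items()
    }

# Example: rare ancient-Greek letter classes are topped up to the
# majority class size before training a recognition model.
counts = {"alpha": 1000, "sampi": 12, "koppa": 40}
extra = balance_with_synthetic(counts, generate=lambda cls: f"synthetic-{cls}")
assert len(extra["sampi"]) == 988
```

The real study additionally evaluates how such synthetic instances affect model behavior and whether experts can tell them apart from genuine images.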
8

Kumar, Teerath, Muhammad Turab, Shahnawaz Talpur, Rob Brennan, and Malika Bendechache. "Detection Datasets: Forged Characters for Passport and Driving Licence." In 6th International Conference on Artificial Intelligence, Soft Computing and Applications (AISCA 2022). Academy and Industry Research Collaboration Center (AIRCC), 2022. http://dx.doi.org/10.5121/csit.2022.120204.

Abstract:
Detecting forged characters in personal documents, such as a passport or a driving licence, is an extremely important and challenging task in digital image forensics, as forged information on personal documents can be used for fraud, including theft, robbery, etc. For any detection task, i.e. forged character detection, deep learning models are data hungry, and obtaining a forged-character dataset for personal documents is very difficult for several reasons, including information privacy, unlabeled data, and the fact that existing work is evaluated on private datasets with limited access; getting data labelled is another big challenge. To address these issues, we propose a new algorithm that generates two new datasets, named forged characters detection on passport (FCD-P) and forged characters detection on driving licence (FCD-D). To the best of our knowledge, we are the first to release these datasets. The proposed algorithm first reads the plain image, then performs forging tasks, i.e. randomly changing the position of a random character or randomly adding a little noise. At the same time, the algorithm records the bounding boxes of the forged characters. To reflect real-world situations, we carefully apply multiple data augmentations to the cards. Overall, each dataset consists of 15,000 images, each of size 950 x 550. Our algorithm code, FCD-P and FCD-D are publicly available.
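The two forging operations this abstract names (shifting a character and adding a little noise, while recording the box as a detection label) can be sketched roughly as follows. This is an illustrative sketch, not the authors' released code; the image layout, box format, and parameter ranges are our own assumptions.

```python
import random

def forge_character(image, char_boxes, rng=None):
    """Apply one forging operation and return the forged box (x, y, w, h).

    image: grayscale pixels as a list of row lists (values 0-255).
    char_boxes: one (x, y, w, h) box per printed character.
    Mimics the two tampering operations from the abstract: shift a
    randomly chosen character a few pixels, or add a little noise to it.
    """
    rng = rng or random.Random(0)
    x, y, w, h = rng.choice(char_boxes)
    if rng.random() < 0.5:                      # op 1: shift the character
        dx = rng.randint(1, 3)
        for row in image[y:y + h]:
            row[x + dx:x + w + dx] = row[x:x + w]
        x += dx
    else:                                       # op 2: add a little noise
        for row in image[y:y + h]:
            for i in range(x, x + w):
                row[i] = max(0, min(255, row[i] + rng.randint(-20, 20)))
    return (x, y, w, h)                         # bounding-box label

# One forged character on a blank card; the returned box is the label
# a detector would be trained to predict.
card = [[255] * 30 for _ in range(20)]
box = forge_character(card, [(5, 5, 6, 8)])
```

Repeating this over many card templates, and augmenting the results, yields a labelled forged-character detection dataset of the kind described.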
9

Xu, Lianli, Hiroto Nagayoshi, and Hiroshi Sako. "Kanji Character Detection from Complex Real Scene Images based on Character Properties." In 2008 The Eighth IAPR International Workshop on Document Analysis Systems (DAS). IEEE, 2008. http://dx.doi.org/10.1109/das.2008.34.

10

Sawaki, M., H. Murase, and N. Hagita. "Character recognition in bookshelf images using context-based image templates." In Proceedings of the Fifth International Conference on Document Analysis and Recognition. ICDAR '99 (Cat. No.PR00318). IEEE, 1999. http://dx.doi.org/10.1109/icdar.1999.791729.


Reports on the topic "Character images"

1

Бережна, Маргарита Василівна. The Traitor Psycholinguistic Archetype. Premier Publishing, 2022. http://dx.doi.org/10.31812/123456789/6051.

Abstract:
Film studies have recently begun to employ Jung's concept of archetypes: prototypical characters which serve as blueprints for constructing clear-cut characters. New typologies of archetypal characters appear to reflect the changes in the constantly developing world of literature, theater, film, comics and other forms of entertainment. Among those is the classification of forty-five master characters by V. Schmidt, which is the basis for defining the character's archetype in the present article. The aim of the research is to identify the elements of the psycholinguistic image of Justin Hammer in the superhero film Iron Man 2, based on the Marvel Comics and directed by Jon Favreau (2010). The task consists of three stages, namely identification of the psychological characteristics of the character, subsequent determination of Hammer's archetype, and definition of the speech elements that reveal the character's psychological image. This paper explores 92 of Hammer's dialogue turns in the film. According to V. Schmidt's classification, Hammer belongs to the Traitor archetype, which is a villainous representation of the Businessman archetype.
2

BIZIKOEVA, L. S., and M. I. BALIKOEVA. LEXICO-STYLISTIC MEANS OF CREATING CHARACTERS (BASED ON THE STORY “THE POOL” BY W.S. MAUGHAM). Science and Innovation Center Publishing House, 2021. http://dx.doi.org/10.12731/2077-1770-2021-13-4-3-62-70.

Abstract:
Purpose. The article deals with various lexico-stylistic means of portraying a literary character. The analysis is based on an empirical study of the story "The Pool" by the famous English writer William Somerset Maugham. The main methods used in the research are the method of contextual analysis and the descriptive-analytical method. Results. The results of the research revealed that a peculiar characteristic of the story "The Pool", as of many other Maugham stories, is the author's strong presence. The portrayal of the protagonists, their manner of speech and the surrounding nature greatly contribute to creating the unforgettable characters of Lawson and his wife Ethel. Somerset Maugham employs various lexico-stylistic means to create the images of Lawson and Ethel, allowing the reader to vividly picture their personalities. Practical implications. The results obtained can be used in teaching the stylistics of the English language and stylistic analysis of the text, as well as the theory and practice of translation, and in writing course and graduation papers.
3

AKHADOVA, R. A., and M. L. SHTUKKERT. ‘THE DREAM OF A RIDICULOUS MAN’ F.M. DOSTOEVSKY AND A. PETROV: POETICS OF THE FEAR. Science and Innovation Center Publishing House, 2021. http://dx.doi.org/10.12731/978-0-615-67323-3-8-21.

Abstract:
The purpose of the study is to determine the features of the structure and functioning of the fear motif in F.M. Dostoevsky's "fantastic story" "The Dream of a Ridiculous Man" and in A. Petrov's cartoon of the same name. The report first examines the images, details, etc., with which the fear motif is created in the story, and then analyzes the ways this motif is embodied and a certain frightening atmosphere is conveyed in the cinema. The ontological significance and essential nature of fear in "The Dream of a Ridiculous Man" are revealed and determined, which, in our opinion, manifest themselves in the fundamental nature and primacy of this feeling in human nature. It can be observed in the transition from the dream world, not defiled by the fall, to the terrible reality of St. Petersburg described in the story. The scientific novelty lies in the fact that a comprehensive analysis of the fear motif in F.M. Dostoevsky's story and of the peculiarities of the interpretation of this motif in the animated film by A. Petrov has not been carried out before. The study revealed, firstly, a number of recurrent means by which the fear motif is formed in both works (and their functions were determined), and secondly, the originality of the representation of the analyzed motif in the cinema.
4

Бережна, Маргарита Василівна. The Destroyer Psycholinguistic Archetype. Baltija Publishing, 2021. http://dx.doi.org/10.31812/123456789/6036.

Abstract:
The aim of the research is to identify the elements of the psycholinguistic image of the main antagonist Hela in the superhero film Thor: Ragnarok based on the Marvel Comics and directed by Taika Waititi (2017). The task consists of two stages, at the first of which I identify the psychological characteristics of the character to determine to which of the archetypes Hela belongs. As the basis, I take the classification of film archetypes by V. Schmidt. At the second stage, I distinguish the speech peculiarities of the character that reflect her psychological image.
5

Бережна, Маргарита Василівна. Maleficent: from the Matriarch to the Scorned Woman (Psycholinguistic Image). Baltija Publishing, 2021. http://dx.doi.org/10.31812/123456789/5766.

Abstract:
The aim of the research is to identify the elements of the psycholinguistic image of the leading character in the dark fantasy adventure film Maleficent directed by Robert Stromberg (2014). The task consists of two stages, at the first of which I identify the psychological characteristics of the character to determine to which of the archetypes Maleficent belongs. As the basis, I take the classification of film archetypes by V. Schmidt. At the second stage, I distinguish the speech peculiarities of the character that reflect her psychological image. This paper explores 98 of Maleficent's dialogue turns in the film. According to V. Schmidt's classification, Maleficent belongs first to the Matriarch archetype and later in the plot to the Scorned Woman archetype. These archetypes are representations of the powerful goddess of marriage and fertility Hera, being respectively her heroic and villainous embodiments. There are several crucial characteristics revealed by speech elements.
6

Garris, Michael D., Stanley Janet, and William W. Klein. Impact of image quality on machine print optical character recognition. Gaithersburg, MD: National Institute of Standards and Technology, 1997. http://dx.doi.org/10.6028/nist.ir.6101.

7

Бережна, Маргарита Василівна. Psycholinguistic Image of Joy (in the Computer-Animated Film Inside Out). Psycholinguistics in a Modern World, 2021. http://dx.doi.org/10.31812/123456789/5827.

Abstract:
The paper is focused on the correlation between the psychological archetype of a film character and the linguistic elements composing their speech. The Nurturer archetype is represented in the film Inside Out by the personalized emotion Joy. Joy is depicted as an anthropomorphic female character, whose purpose is to keep her host, a young girl Riley, happy. As the Nurturer, Joy is completely focused on Riley's happiness, which is expressed by the lexico-semantic group 'happy', positive evaluative tokens, exclamatory sentences, promissive speech acts, and repetitions. She needs the feeling of connectedness with other members of her family, which is revealed by the lexico-semantic groups 'support' and 'help'. She is ready to sacrifice everything to save the girl in her care, which is demonstrated by modal verbs, the frequent word combination 'for Riley', and directives.
8

Methods for evaluating the performance of systems intended to recognize characters from image data scanned from forms. Gaithersburg, MD: National Institute of Standards and Technology, 1993. http://dx.doi.org/10.6028/nist.ir.5129.
