Academic literature on the topic 'BINARIZATION TECHNIQUE'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'BINARIZATION TECHNIQUE.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "BINARIZATION TECHNIQUE"

1

Thepade, Sudeep, Rik Das, and Saurav Ghosh. "A Novel Feature Extraction Technique Using Binarization of Bit Planes for Content Based Image Classification." Journal of Engineering 2014 (2014): 1–13. http://dx.doi.org/10.1155/2014/439218.

Full text
Abstract:
A number of techniques have been proposed earlier for feature extraction using image binarization. Efficiency of the techniques was dependent on proper threshold selection for the binarization method. In this paper, a new feature extraction technique using image binarization has been proposed. The technique has binarized the significant bit planes of an image by selecting local thresholds. The proposed algorithm has been tested on a public dataset and has been compared with existing widely used techniques using binarization for extraction of features. It has been inferred that the proposed method has outclassed all the existing techniques and has shown consistent classification performance.
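As a rough illustration of the general idea of bit-plane binarization (not the authors' exact algorithm), the sketch below slices a grayscale image into its most significant bit planes and derives simple per-plane features; the number of planes and the choice of foreground/background means as features are assumptions made for this example.

```python
# Hedged sketch: bit-plane slicing of a grayscale image plus simple
# per-plane features. Using the 4 most significant planes and the mean
# intensities of each plane's 1/0 partitions are illustrative assumptions,
# not the authors' exact formulation.
import numpy as np

def bit_planes(gray, n_planes=4):
    """Return the n_planes most significant bit planes as 0/1 arrays."""
    gray = np.asarray(gray, dtype=np.uint8)
    return [((gray >> bit) & 1).astype(np.uint8)
            for bit in range(7, 7 - n_planes, -1)]        # MSB first

def plane_features(gray, n_planes=4):
    """Mean intensity of the pixels mapped to 1 and to 0 in each plane."""
    g = np.asarray(gray, dtype=np.float64)
    feats = []
    for plane in bit_planes(gray, n_planes):
        fg, bg = g[plane == 1], g[plane == 0]
        feats += [fg.mean() if fg.size else 0.0,
                  bg.mean() if bg.size else 0.0]
    return np.array(feats)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    print(plane_features(img))          # 8 features for 4 planes
```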
2

Yu, Young-Jung. "Document Image Binarization Technique using MSER." Journal of the Korea Institute of Information and Communication Engineering 18, no. 8 (August 31, 2014): 1941–47. http://dx.doi.org/10.6109/jkiice.2014.18.8.1941.

Full text
3

MAKRIDIS, MICHAEL, and N. PAPAMARKOS. "AN ADAPTIVE LAYER-BASED LOCAL BINARIZATION TECHNIQUE FOR DEGRADED DOCUMENTS." International Journal of Pattern Recognition and Artificial Intelligence 24, no. 02 (March 2010): 245–79. http://dx.doi.org/10.1142/s0218001410007889.

Full text
Abstract:
This paper presents a new technique for adaptive binarization of degraded document images. The proposed technique focuses on degraded documents with various background patterns and noise. It involves a preprocessing local background estimation stage, which detects for each pixel that is considered as background one, a proper grayscale value. Then, the estimated background is used to produce a new enhanced image having uniform background layers and increased local contrast. That is, the new image is a combination of background and foreground layers. Foreground and background layers are then separated by using a new transformation which exploits efficiently, both grayscale and spatial information. The final binary document is obtained by combining all foreground layers. The proposed binarization technique has been extensively tested on numerous documents and successfully compared with other well-known binarization techniques. Experimental results, which are based on statistical, visual and OCR criteria, verify the effectiveness of the technique.
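The sketch below illustrates only the general pattern described in the abstract, local background estimation followed by contrast normalization and thresholding; the median-filter background, window size, and final Otsu step are assumptions, not the authors' layer-based construction.

```python
# Minimal sketch, assuming a median-filter background estimate and a final
# Otsu threshold; the authors' layer construction and contrast transform
# are not reproduced here.
import cv2
import numpy as np

def binarize_with_background_estimate(gray, bg_kernel=31):
    gray = np.asarray(gray, dtype=np.uint8)
    # Rough local background: a heavy median blur wipes out thin strokes.
    background = cv2.medianBlur(gray, bg_kernel)
    # Flatten the background; foreground text keeps its relative contrast.
    flattened = cv2.divide(gray, background, scale=255)
    # Global Otsu threshold on the flattened image.
    _, binary = cv2.threshold(flattened, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary

# Usage (hypothetical file name):
# binary = binarize_with_background_estimate(
#     cv2.imread("degraded_page.png", cv2.IMREAD_GRAYSCALE))
```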
4

CHI, ZHERU, and QING WANG. "DOCUMENT IMAGE BINARIZATION WITH FEEDBACK FOR IMPROVING CHARACTER SEGMENTATION." International Journal of Image and Graphics 05, no. 02 (April 2005): 281–309. http://dx.doi.org/10.1142/s0219467805001768.

Full text
Abstract:
Binarization of gray scale document images is one of the most important steps in automatic document image processing. In this paper, we present a two-stage document image binarization approach, which includes a top-down region-based binarization at the first stage and a neural network based binarization technique for the problematic blocks at the second stage after a feedback checking. Our two-stage approach is particularly effective for binarizing text images of highlighted or marked text. The region-based binarization method is fast and suitable for processing large document images. However, the block effect and regional edge noise are two unavoidable problems resulting in poor character segmentation and recognition. The neural network based classifier can achieve good performance in two-class classification problem such as the binarization of gray level document images. However, it is computationally costly. In our two-stage binarization approach, the feedback criteria are employed to keep the well binarized blocks from the first stage binarization and to re-binarize the problematic blocks at the second stage using the neural network binarizer to improve the character segmentation quality. Experimental results on a number of document images show that our two-stage binarization approach performs better than the single-stage binarization techniques tested in terms of character segmentation quality and computational cost.
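A minimal sketch of a two-stage, block-wise scheme in the same spirit is given below: Otsu per block stands in for the fast region-based first stage, and blocks whose foreground ratio looks implausible are re-binarized with an adaptive threshold standing in for the paper's neural-network binarizer. The block size and the feedback rule are assumptions made for illustration.

```python
# Minimal sketch of a two-stage, block-wise scheme; block size (64) and the
# foreground-ratio feedback rule are assumptions, and adaptive thresholding
# stands in for the paper's neural-network binarizer.
import cv2
import numpy as np

def two_stage_binarize(gray, block=64, lo=0.02, hi=0.6):
    gray = np.asarray(gray, dtype=np.uint8)
    # "Second stage": slower adaptive threshold, computed once, sampled per block.
    adaptive = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                     cv2.THRESH_BINARY_INV, 31, 10)
    out = np.zeros_like(gray)
    h, w = gray.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = gray[y:y + block, x:x + block]
            # First stage: fast per-block Otsu (dark text -> white output).
            _, b = cv2.threshold(tile, 0, 255,
                                 cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
            ratio = (b > 0).mean()
            if not (lo <= ratio <= hi):      # feedback: block looks problematic
                b = adaptive[y:y + block, x:x + block]
            out[y:y + block, x:x + block] = b
    return out
```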
5

Pagare, Mr Aniket. "Document Image Binarization using Image Segmentation Technique." International Journal for Research in Applied Science and Engineering Technology 9, no. VII (July 15, 2021): 1173–76. http://dx.doi.org/10.22214/ijraset.2021.36597.

Full text
Abstract:
Segmentation of text from badly degraded document images is an extremely difficult task because of the high inter/intra-variation between the document background and the foreground text across different document images. Image processing and pattern recognition algorithms take more time to execute on a single-core processor. The Graphics Processing Unit (GPU) is more popular nowadays because of its speed, programmability, low cost, and the large number of built-in execution cores. The primary objective of this research work is to make binarization faster for the recognition of large numbers of degraded document images on a GPU. In this framework, we propose a new image segmentation algorithm in which every pixel in the image has its own threshold. We perform the work in parallel on windows of m*n size and separate the object pixels of the text strokes in each window. The document text is further segmented by a local threshold that is estimated based on the intensities of detected text stroke edge pixels within a local window.
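For illustration, a common window-based per-pixel threshold (Sauvola's rule) is sketched below as a stand-in for the kind of per-pixel, m*n-window thresholding described above; it is not the paper's GPU algorithm, and the window size and the parameters k and R are assumptions.

```python
# Minimal sketch of per-pixel thresholding over a local m x n window, using
# Sauvola's rule T = m * (1 + k * (s / R - 1)); window size, k and R are
# illustrative assumptions.
import numpy as np
from scipy.ndimage import uniform_filter

def sauvola_binarize(gray, window=25, k=0.2, R=128.0):
    g = np.asarray(gray, dtype=np.float64)
    mean = uniform_filter(g, size=window)                  # local mean m
    mean_sq = uniform_filter(g * g, size=window)
    std = np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))  # local std s
    threshold = mean * (1.0 + k * (std / R - 1.0))
    # Dark text on light background: background -> 255, text -> 0.
    return (g > threshold).astype(np.uint8) * 255
```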
6

Abbood, Alaa Ahmed, Mohammed Sabbih Hamoud Al-Tamimi, Sabine U. Peters, and Ghazali Sulong. "New Combined Technique for Fingerprint Image Enhancement." Modern Applied Science 11, no. 1 (December 19, 2016): 222. http://dx.doi.org/10.5539/mas.v11n1p222.

Full text
Abstract:
This paper presents a combination of enhancement techniques for fingerprint images affected by different type of noise. These techniques were applied to improve image quality and come up with an acceptable image contrast. The proposed method included five different enhancement techniques: Normalization, Histogram Equalization, Binarization, Skeletonization and Fusion. The Normalization process standardized the pixel intensity which facilitated the processing of subsequent image enhancement stages. Subsequently, the Histogram Equalization technique increased the contrast of the images. Furthermore, the Binarization and Skeletonization techniques were implemented to differentiate between the ridge and valley structures and to obtain one pixel-wide lines. Finally, the Fusion technique was used to merge the results of the Histogram Equalization process with the Skeletonization process to obtain the new high contrast images. The proposed method was tested in different quality images from National Institute of Standard and Technology (NIST) special database 14. The experimental results are very encouraging and the current enhancement method appeared to be effective by improving different quality images.
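A minimal sketch of the five named stages follows, with assumed concrete operators (min-max normalization, histogram equalization, Otsu binarization, morphological skeletonization, and a bitwise overlay standing in for the fusion step); the authors' actual operators may differ.

```python
# Minimal sketch of the five named stages with assumed operators: min-max
# normalisation, histogram equalisation, Otsu binarisation, morphological
# skeletonisation, and a bitwise overlay standing in for the fusion step.
import cv2
import numpy as np
from skimage.morphology import skeletonize

def enhance_fingerprint(gray):
    gray = np.asarray(gray, dtype=np.uint8)
    norm = cv2.normalize(gray, None, 0, 255, cv2.NORM_MINMAX)   # normalisation
    eq = cv2.equalizeHist(norm)                                 # contrast stretch
    _, binary = cv2.threshold(eq, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)  # ridges -> 255
    skeleton = skeletonize(binary > 0).astype(np.uint8) * 255   # one-pixel-wide ridges
    # "Fusion": draw the skeleton (black) on top of the equalised image.
    fused = cv2.bitwise_and(eq, cv2.bitwise_not(skeleton))
    return fused
```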
7

Adhari, Firman Maulana, Taufik Fuadi Abidin, and Ridha Ferdhiana. "License Plate Character Recognition using Convolutional Neural Network." Journal of Information Systems Engineering and Business Intelligence 8, no. 1 (April 26, 2022): 51–60. http://dx.doi.org/10.20473/jisebi.8.1.51-60.

Full text
Abstract:
Background: In the last decade, the number of registered vehicles has grown exponentially. With more vehicles on the road, traffic jams, accidents, and violations also increase. A license plate plays a key role in solving such problems because it stores a vehicle’s historical information. Therefore, automated license-plate character recognition is needed. Objective: This study proposes a recognition system that uses convolutional neural network (CNN) architectures to recognize characters from a license plate’s images. We called it a modified LeNet-5 architecture. Methods: We used four different CNN architectures to recognize license plate characters: AlexNet, LeNet-5, modified LeNet-5, and ResNet-50 architectures. We evaluated the performance based on their accuracy and computation time. We compared the deep learning methods with the Freeman chain code (FCC) extraction with support vector machine (SVM). We also evaluated the Otsu and the threshold binarization performances when applied in the FCC extraction method. Results: The ResNet-50 and modified LeNet-5 produces the best accuracy during the training at 0.97. The precision and recall scores of the ResNet-50 are both 0.97, while the modified LeNet-5’s values are 0.98 and 0.96, respectively. The modified LeNet-5 shows a slightly higher precision score but a lower recall score. The modified LeNet-5 shows a slightly lower accuracy during the testing than ResNet-50. Meanwhile, the Otsu binarization’s FCC extraction is better than the threshold binarization. Overall, the FCC extraction technique performs less effectively than CNN. The modified LeNet-5 computes the fastest at 7 mins and 57 secs, while ResNet-50 needs 42 mins and 11 secs. Conclusion: We discovered that CNN is better than the FCC extraction method with SVM. Both ResNet-50 and the modified LeNet-5 perform best during the training, with F measure scoring 0.97. However, ResNet-50 outperforms the modified LeNet-5 during the testing, with F-measure at 0.97 and 1.00, respectively. In addition, the FCC extraction using the Otsu binarization is better than the threshold binarization. Otsu binarization reached 0.91, higher than the static threshold binarization at 127. In addition, Otsu binarization produces a dynamic threshold value depending on the images’ light intensity. Keywords: Convolutional Neural Network, Freeman Chain Code, License Plate Character Recognition, Support Vector Machine
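For reference, the two binarization settings compared in the abstract, a fixed threshold of 127 versus Otsu's automatically selected threshold, can be reproduced with OpenCV as in the short sketch below.

```python
# The two settings compared in the abstract: a fixed threshold at 127
# versus Otsu's automatically chosen threshold.
import cv2

def binarize_fixed(gray, t=127):
    _, binary = cv2.threshold(gray, t, 255, cv2.THRESH_BINARY)
    return binary

def binarize_otsu(gray):
    t, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return t, binary        # Otsu also reports the threshold it selected

# Usage (hypothetical file name):
# gray = cv2.imread("plate.png", cv2.IMREAD_GRAYSCALE)
# t, binary = binarize_otsu(gray)   # t adapts to the image's light intensity
```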
8

García, José, Paola Moraga, Matias Valenzuela, Broderick Crawford, Ricardo Soto, Hernan Pinto, Alvaro Peña, Francisco Altimiras, and Gino Astorga. "A Db-Scan Binarization Algorithm Applied to Matrix Covering Problems." Computational Intelligence and Neuroscience 2019 (September 16, 2019): 1–16. http://dx.doi.org/10.1155/2019/3238574.

Full text
Abstract:
The integration of machine learning techniques and metaheuristic algorithms is an area of interest due to the great potential for applications. In particular, using these hybrid techniques to solve combinatorial optimization problems (COPs) to improve the quality of the solutions and convergence times is of great interest in operations research. In this article, the db-scan unsupervised learning technique is explored with the goal of using it in the binarization process of continuous swarm intelligence metaheuristic algorithms. The contribution of the db-scan operator to the binarization process is analyzed systematically through the design of random operators. Additionally, the behavior of this algorithm is studied and compared with other binarization methods based on clusters and transfer functions (TFs). To verify the results, the well-known set covering problem is addressed, and a real-world problem is solved. The results show that the integration of the db-scan technique produces consistently better results in terms of computation time and quality of the solutions when compared with TFs and random operators. Furthermore, when it is compared with other clustering techniques, we see that it achieves significantly improved convergence times.
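The sketch below shows only the transfer-function baseline mentioned in the abstract, mapping each dimension of a continuous metaheuristic solution to {0,1} through an S-shaped function; the db-scan operator itself, which replaces this per-dimension rule with cluster-level rules, is not reproduced here.

```python
# Transfer-function baseline only: each dimension of a continuous solution
# is mapped to {0, 1} with a probability given by an S-shaped (sigmoid) TF.
import numpy as np

def s_shaped_tf(x):
    """Sigmoid transfer function mapping real values to probabilities."""
    return 1.0 / (1.0 + np.exp(-np.asarray(x, dtype=float)))

def binarize_solution(continuous_position, rng):
    probs = s_shaped_tf(continuous_position)
    return (rng.random(probs.shape) < probs).astype(int)

# Example: one candidate solution from a continuous swarm algorithm.
print(binarize_solution(np.array([-2.0, -0.1, 0.3, 4.0]),
                        np.random.default_rng(1)))
```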
9

Rozen, Tal, Moshe Kimhi, Brian Chmiel, Avi Mendelson, and Chaim Baskin. "Bimodal-Distributed Binarized Neural Networks." Mathematics 10, no. 21 (November 3, 2022): 4107. http://dx.doi.org/10.3390/math10214107.

Full text
Abstract:
Binary neural networks (BNNs) are an extremely promising method for reducing deep neural networks’ complexity and power consumption significantly. Binarization techniques, however, suffer from ineligible performance degradation compared to their full-precision counterparts. Prior work mainly focused on strategies for sign function approximation during the forward and backward phases to reduce the quantization error during the binarization process. In this work, we propose a bimodal-distributed binarization method (BD-BNN). The newly proposed technique aims to impose a bimodal distribution of the network weights by kurtosis regularization. The proposed method consists of a teacher–trainer training scheme termed weight distribution mimicking (WDM), which efficiently imitates the full-precision network weight distribution to their binary counterpart. Preserving this distribution during binarization-aware training creates robust and informative binary feature maps and thus it can significantly reduce the generalization error of the BNN. Extensive evaluations on CIFAR-10 and ImageNet demonstrate that our newly proposed BD-BNN outperforms current state-of-the-art schemes.
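As background, the basic weight-binarization step that BNN methods build on, sign() in the forward pass with a straight-through estimator in the backward pass, can be sketched as follows; the bimodal/kurtosis regularization and the WDM training scheme of BD-BNN are not reproduced here.

```python
# Basic weight binarization with a straight-through estimator (STE):
# sign() forward, clipped identity gradient backward. The bimodal/kurtosis
# regularisation and WDM scheme of BD-BNN are not reproduced here.
import torch

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)       # {-1, 0, +1}; zeros are rare and ignored here

    @staticmethod
    def backward(ctx, grad_output):
        (w,) = ctx.saved_tensors
        # STE: pass the gradient through, but only where |w| <= 1.
        return grad_output * (w.abs() <= 1).to(grad_output.dtype)

w = torch.randn(4, requires_grad=True)
w_binary = BinarizeSTE.apply(w)
w_binary.sum().backward()
print(w_binary, w.grad)
```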
10

Joseph, Manju, and Jijina K. P. "Simple and Efficient Document Image Binarization Technique For Degraded Document Images." International Journal of Scientific Research 3, no. 5 (June 1, 2012): 217–20. http://dx.doi.org/10.15373/22778179/may2014/65.

Full text

Dissertations / Theses on the topic "BINARIZATION TECHNIQUE"

1

Ringdahl, Benjamin. "Gaussian Process Multiclass Classification : Evaluation of Binarization Techniques and Likelihood Functions." Thesis, Linnéuniversitetet, Institutionen för matematik (MA), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-87952.

Full text
Abstract:
In binary Gaussian process classification the prior class membership probabilities are obtained by transforming a Gaussian process to the unit interval, typically either with the logistic likelihood function or the cumulative Gaussian likelihood function. Multiclass classification problems can be handled by any binary classifier by means of so-called binarization techniques, which reduces the multiclass problem into a number of binary problems. Other than introducing the mathematics behind the theory and methods behind Gaussian process classification, we compare the binarization techniques one-against-all and one-against-one in the context of Gaussian process classification, and we also compare the performance of the logistic likelihood and the cumulative Gaussian likelihood. This is done by means of two experiments: one general experiment where the methods are tested on several publicly available datasets, and one more specific experiment where the methods are compared with respect to class imbalance and class overlap on several artificially generated datasets. The results indicate that there is no significant difference in the choices of binarization technique and likelihood function for typical datasets, although the one-against-one technique showed slightly more consistent performance. However the second experiment revealed some differences in how the methods react to varying degrees of class imbalance and class overlap. Most notably the logistic likelihood was a dominant factor and the one-against-one technique performed better than one-against-all.
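The two binarization strategies compared in the thesis can be illustrated with scikit-learn's wrappers around a Gaussian process classifier, as in the sketch below (scikit-learn's classifier uses the logistic link; the cumulative Gaussian variant is not shown).

```python
# One-vs-rest and one-vs-one binarization around a Gaussian process
# classifier, using scikit-learn (logistic link only).
from sklearn.datasets import load_iris
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.model_selection import cross_val_score
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier

X, y = load_iris(return_X_y=True)
for name, clf in [("one-vs-rest", OneVsRestClassifier(GaussianProcessClassifier())),
                  ("one-vs-one", OneVsOneClassifier(GaussianProcessClassifier()))]:
    print(name, cross_val_score(clf, X, y, cv=3).mean())
```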
2

Lowther, Scott Andrew. "Document sorting and logo recognition using image processing techniques." Thesis, Queensland University of Technology, 2002. https://eprints.qut.edu.au/36188/1/36188_Lowther_2002.pdf.

Full text
Abstract:
In recent years, document analysis has become an ever-increasing area of image processing. This has been primarily due to the interest in the notion of a paperless office but also due to the exponential increase in personal computer power. Document analysis broadly covers such areas as document pre-processing, Optical Character Recognition (OCR), script/language recognition and document understanding. One important goal of many document analysis systems is to automatically process document images and to extract meaningful information. Systems such as document sorting applications aim to automatically sort large volumes of paper (or scanned) documents into smaller groups to allow human operators to quickly index document databases for relevant documents. The development of such a system relies on selection of numerous other document analysis tasks such as pre-processing, script I language recognition, OCR, image recognition and document layout analysis, dependent upon the information of interest. This thesis presents a study of document analysis techniques applicable to a document sorting application using logo recognition. The primary goal of the research performed was to develop algorithms that can recognise logo images in documents and sort these documents according to corporate identity. The thesis outlines many of the current techniques required for sorting documents by corporate identity and presents some original techniques that were developed to address problems with current techniques. The areas investigated within the scope of this thesis are document pre-processing, logo detection and logo recognition, all of which are used extensively in a logo recognition document sorting system. An analysis of the pre-processing tasks of noise removal, binarization and skew determination is made and a new skew determination technique presented. A rule-based logo detection technique is developed and results provided. Bispectral invariant features are analysed for suitability in logo recognition and a technique developed to both recognise and verify logo images. The integration of all these techniques is then examined and a number of compatible techniques chosen for inclusion in a complete system. Results are provided to show the validity of each of the main techniques used and also of the complete system.
3

Kesiman, Made Windu Antara. "Document image analysis of Balinese palm leaf manuscripts." Thesis, La Rochelle, 2018. http://www.theses.fr/2018LAROS013/document.

Full text
Abstract:
The collection of palm leaf manuscripts is an important part of Southeast Asian people's culture and life. Following the increase in digitization projects of heritage documents around the world, the collections of palm leaf manuscripts in Southeast Asia finally attracted the attention of researchers in document image analysis (DIA). The research work conducted for this dissertation focused on the heritage documents of the collection of palm leaf manuscripts from Indonesia, especially the palm leaf manuscripts from Bali. This dissertation contributes to the exploration of DIA research for palm leaf manuscript collections. This collection offers new challenges for DIA research because it uses palm leaf as a writing medium and a language and script that have never been analyzed before. Motivated by the contextual situation and real conditions of the palm leaf manuscript collections in Bali, this research tried to bring added value to digitized palm leaf manuscripts by developing tools to analyze, transliterate and index the content of palm leaf manuscripts. These systems aim at making palm leaf manuscripts more accessible, readable and understandable to a wider audience and to scholars and students all over the world. This research developed a DIA system for document images of palm leaf manuscripts that includes several image processing tasks, beginning with digitization of the document and ground truth construction, followed by binarization, text line and glyph segmentation, and ending with glyph and word recognition, transliteration, and document indexing and retrieval. In this research, we created the first corpus and dataset of Balinese palm leaf manuscripts for the DIA research community. We also developed the glyph recognition system and the automatic transliteration system for the Balinese palm leaf manuscripts. This dissertation proposes a complete scheme of spatially categorized glyph recognition for the transliteration of Balinese palm leaf manuscripts. The proposed scheme consists of six tasks: text line and glyph segmentation, the glyph ordering process, detection of the spatial position for glyph category, global and categorized glyph recognition, option selection for glyph recognition, and transliteration with a phonological rules-based machine. An implementation of knowledge representation and phonological rules for the automatic transliteration of Balinese script on palm leaf manuscripts is proposed. The adaptation of a segmentation-free LSTM-based transliteration system with the generated synthetic dataset and training schemes at two different levels (word level and text line level) is also proposed.
4

DOBHAL, HEMU. "BINARIZATION TECHNIQUE FOR THE DEGRADED DOCUMENT IMAGES AND INSCRIPTION IMAGES." Thesis, 2014. http://dspace.dtu.ac.in:8080/jspui/handle/repository/15429.

Full text
Abstract:
For over a decade, text extraction from documents has been a subject of interest for research, but very little work has been done on digitizing inscription images of historical monuments. For unclear and complex archaeological inscription images, there is no sharp distinction between foreground and background. The text in inscription images poses several problems, such as low contrast between text and background, which makes previously available methods unsuitable. Simple edge-based approaches are considered useful for regions having high edge density and strength. These edge-based methods give good results if the background is not complex, but the background of inscription images is complex, so they cannot be used directly. For badly degraded images with high inter/intra-variation between the background and the foreground text, segmentation of the text becomes a big challenge. This thesis proposes a novel document image binarization technique for monument inscriptions that was earlier used for the binarization of degraded document images. The proposed method is basically an adaptive image binarization technique. First, an adaptive contrast map is constructed for the input inscription image. The contrast map is then binarized and combined with Canny's edge map to identify the text stroke edge pixels. A local threshold, estimated from the intensities of detected text stroke edge pixels within a local window, is then applied for document text segmentation. The proposed method is very simple and robust, and it involves minimum parameter tuning. It has been applied to different monument inscription images and has given good results.
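A rough sketch of the stages listed above (local contrast map, its binarization, combination with a Canny edge map, and a per-pixel threshold from edge-pixel intensities in a local window) is given below; the window sizes and Canny limits are assumptions, not the thesis's tuned values.

```python
# Rough sketch: local contrast map -> Otsu -> AND with Canny edges to get
# text-stroke edge pixels -> per-pixel threshold from the mean intensity of
# those pixels in a local window. Window sizes and Canny limits are assumed.
import cv2
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter, uniform_filter

def binarize_inscription(gray, win=3, thr_win=15, eps=1e-6):
    g = np.asarray(gray, dtype=np.float64)
    # Adaptive (local) contrast map.
    gmax, gmin = maximum_filter(g, win), minimum_filter(g, win)
    contrast = (gmax - gmin) / (gmax + gmin + eps)
    contrast8 = cv2.normalize(contrast, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, high_contrast = cv2.threshold(contrast8, 0, 255,
                                     cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Text-stroke edge pixels: high local contrast AND a Canny edge.
    edges = cv2.Canny(np.asarray(gray, dtype=np.uint8), 50, 150)
    stroke = (high_contrast > 0) & (edges > 0)
    # Per-pixel threshold: mean intensity of stroke pixels in a local window.
    num = uniform_filter(g * stroke, thr_win)
    den = uniform_filter(stroke.astype(np.float64), thr_win)
    local_mean = num / np.maximum(den, eps)
    # Dark text: a pixel is foreground if it lies below the local stroke mean.
    return ((g <= local_mean) & (den > 0)).astype(np.uint8) * 255
```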
5

Gao, Jhe-Wei (高哲偉). "Improving SIFT Matching Efficiency Using Hashing and Descriptor Binarization Techniques." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/zhrdjt.

Full text
Abstract:
Master's thesis
Tatung University
Department of Computer Science and Engineering
Academic year 102 (2013)
Scale-invariant feature transform (SIFT) is an algorithm in computer vision. Although it can achieve high accuracy in image matching, the speed of image matching is slow. The thesis presents a method that uses hashing and descriptor binarization to improve SIFT matching efficiency. Our method applies SIFT descriptor binarization to reduce the cost of image matching. It decreases the computational complexity with only a little loss of matching accuracy. Also, our method utilizes hashing to decrease the quantity of the matching pairs substantially and hence reduce the matching time. The experimental result demonstrates that, with only a small decrease in accuracy, the matching speed of our method is about 2500 times faster than that of SIFT linear matching. Moreover, our hashing method can be applied to other methods that adopt SIFT descriptor binarization.
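The two ideas in the abstract can be sketched as follows: descriptors are binarized by thresholding each dimension at the descriptor's own mean, and a short prefix of the bits is used as a hash bucket so that only same-bucket pairs are compared by Hamming distance. Both choices are assumptions made for illustration, not the thesis's exact scheme.

```python
# Sketch: binarize each 128-D SIFT descriptor by thresholding at its own
# mean, hash the first n bits to bucket descriptors, then compare only
# same-bucket pairs with the Hamming distance.
from collections import defaultdict
import numpy as np

def binarize_descriptors(desc):
    """desc: (N, 128) float SIFT descriptors -> (N, 128) 0/1 array."""
    return (desc > desc.mean(axis=1, keepdims=True)).astype(np.uint8)

def hash_bucket(bits, n_hash_bits=16):
    return tuple(bits[:n_hash_bits])        # first bits act as the hash key

def match(bits_a, bits_b, max_hamming=20, n_hash_bits=16):
    buckets = defaultdict(list)
    for j, b in enumerate(bits_b):
        buckets[hash_bucket(b, n_hash_bits)].append(j)
    matches = []
    for i, a in enumerate(bits_a):
        for j in buckets.get(hash_bucket(a, n_hash_bits), []):
            if int(np.count_nonzero(a ^ bits_b[j])) <= max_hamming:
                matches.append((i, j))
    return matches

# Usage: bits_a = binarize_descriptors(sift_descriptors_image_a), etc.
```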

Books on the topic "BINARIZATION TECHNIQUE"

1

Chaki, Nabendu, Soharab Hossain Shaikh, and Khalid Saeed. Exploring Image Binarization Techniques. New Delhi: Springer India, 2014. http://dx.doi.org/10.1007/978-81-322-1907-1.

Full text
2

Saeed, Khalid, Nabendu Chaki, and Soharab Hossain Shaikh. Exploring Image Binarization Techniques. Springer London, Limited, 2014.

Find full text
3

Saeed, Khalid, Nabendu Chaki, and Soharab Hossain Shaikh. Exploring Image Binarization Techniques. Springer, 2014.

Find full text
4

Saeed, Khalid, Nabendu Chaki, and Soharab Hossain Shaikh. Exploring Image Binarization Techniques. Springer (India) Private Limited, 2016.

Find full text

Book chapters on the topic "BINARIZATION TECHNIQUE"

1

Chaki, Nabendu, Soharab Hossain Shaikh, and Khalid Saeed. "A New Image Binarization Technique Using Iterative Partitioning." In Exploring Image Binarization Techniques, 17–44. New Delhi: Springer India, 2014. http://dx.doi.org/10.1007/978-81-322-1907-1_3.

Full text
2

Sokratis, Vavilis, Ergina Kavallieratou, Roberto Paredes, and Kostas Sotiropoulos. "A Hybrid Binarization Technique for Document Images." In Learning Structure and Schemas from Documents, 165–79. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-22913-8_8.

Full text
3

Datta, Soumik, Pawan Kumar Singh, Ram Sarkar, and Mita Nasipuri. "A New Image Binarization Technique by Classifying Document Images." In Lecture Notes in Computer Science, 539–44. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-45062-4_74.

Full text
4

Gatos, Basilios, Ioannis Pratikakis, and Stavros J. Perantonis. "An Adaptive Binarization Technique for Low Quality Historical Documents." In Document Analysis Systems VI, 102–13. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-28640-0_10.

Full text
5

Antony, P. J., C. K. Savitha, and U. J. Ujwal. "Efficient Binarization Technique for Handwritten Archive of South Dravidian Tulu Script." In Emerging Research in Computing, Information, Communication and Applications, 651–66. Singapore: Springer Singapore, 2017. http://dx.doi.org/10.1007/978-981-10-4741-1_56.

Full text
6

Choudhary, Amit, Savita Ahlawat, and Rahul Rishi. "A Neural Approach to Cursive Handwritten Character Recognition Using Features Extracted from Binarization Technique." In Complex System Modelling and Control Through Intelligent Soft Computations, 745–71. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-12883-2_26.

Full text
7

Saha, Satadal, Subhadip Basu, and Mita Nasipuri. "Binarization of Document Images Using Hierarchical Histogram Equalization Technique with Linearly Merged Membership Function." In Advances in Intelligent and Soft Computing, 639–47. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-27443-5_74.

Full text
8

Chaki, Nabendu, Soharab Hossain Shaikh, and Khalid Saeed. "Introduction." In Exploring Image Binarization Techniques, 1–4. New Delhi: Springer India, 2014. http://dx.doi.org/10.1007/978-81-322-1907-1_1.

Full text
9

Chaki, Nabendu, Soharab Hossain Shaikh, and Khalid Saeed. "A Comprehensive Survey on Image Binarization Techniques." In Exploring Image Binarization Techniques, 5–15. New Delhi: Springer India, 2014. http://dx.doi.org/10.1007/978-81-322-1907-1_2.

Full text
10

Chaki, Nabendu, Soharab Hossain Shaikh, and Khalid Saeed. "A Framework for Creating Reference Image for Degraded Document Images." In Exploring Image Binarization Techniques, 45–63. New Delhi: Springer India, 2014. http://dx.doi.org/10.1007/978-81-322-1907-1_4.

Full text

Conference papers on the topic "BINARIZATION TECHNIQUE"

1

Papamarkos, Nikos. "A technique for fuzzy document binarization." In the 2001 ACM Symposium. New York, New York, USA: ACM Press, 2001. http://dx.doi.org/10.1145/502187.502210.

Full text
2

Ranganatha D and Ganga Holi. "Hybrid binarization technique for degraded document images." In 2015 IEEE International Advance Computing Conference (IACC). IEEE, 2015. http://dx.doi.org/10.1109/iadcc.2015.7154834.

Full text
3

"IMPROVED ADAPTIVE BINARIZATION TECHNIQUE FOR DOCUMENT IMAGE ANALYSIS." In International Conference on Computer Vision Theory and Applications. SciTePress - Science and and Technology Publications, 2007. http://dx.doi.org/10.5220/0002057003170321.

Full text
4

Inbar, H., E. Marom, and N. Konforti. "Error diffusion binarization methods for joint transform correlators." In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1992. http://dx.doi.org/10.1364/oam.1992.thq6.

Full text
Abstract:
Optical pattern recognition techniques based on the optical joint transform correlator (JTC) scheme are attractive because of their simplicity. Recent improvements in spatial light modulators (SLMs) have increased the popularity of the JTC, providing a means for real time operation. Using a binary SLM for the display of the Fourier spectrum first requires binarization of the joint power spectrum distribution. Although hard clipping (HC) is the simplest and most common binarization method used, we suggest applying error diffusion (ED) as an improved binarization technique. The performance of a binary JTC, whose input image is considered to contain additive white noise, is investigated. Various ways for nonlinearly modifying the joint power spectrum prior to the binarization step, based on either ED or HC techniques, are discussed. These nonlinear modifications aim at increasing the contrast of the interference fringes at the joint power spectrum plane, leading to better definition of the correlation signal. Mathematical analysis, computer simulations, and experimental results are presented.
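For comparison, the two binarization routes named in the abstract, hard clipping versus Floyd-Steinberg error diffusion, can be sketched as follows for an array of normalized spectrum values in [0, 1]; the use of the Floyd-Steinberg kernel specifically is an assumption for illustration.

```python
# Hard clipping versus Floyd-Steinberg error diffusion on values in [0, 1].
import numpy as np

def hard_clip(x, t=0.5):
    return (np.asarray(x, dtype=float) >= t).astype(np.uint8)

def error_diffusion(x):
    """Floyd-Steinberg error diffusion binarization."""
    img = np.asarray(x, dtype=np.float64).copy()
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for xx in range(w):
            old = img[y, xx]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, xx] = int(new)
            err = old - new                     # push the error to neighbours
            if xx + 1 < w:                 img[y, xx + 1]     += err * 7 / 16
            if y + 1 < h and xx - 1 >= 0:  img[y + 1, xx - 1] += err * 3 / 16
            if y + 1 < h:                  img[y + 1, xx]     += err * 5 / 16
            if y + 1 < h and xx + 1 < w:   img[y + 1, xx + 1] += err * 1 / 16
    return out
```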
5

Mousa, Usama W. A., Hossam E. Abd El Munim, and Mahmoud I. Khalil. "A Multistage Binarization Technique for the Degraded Document Images." In 2018 13th International Conference on Computer Engineering and Systems (ICCES). IEEE, 2018. http://dx.doi.org/10.1109/icces.2018.8639459.

Full text
6

Jindal, Harshit, Manoj Kumar, Akhil Tomar, and Ayush Malik. "Degraded Document Image Binarization using Novel Background Estimation Technique." In 2021 6th International Conference for Convergence in Technology (I2CT). IEEE, 2021. http://dx.doi.org/10.1109/i2ct51068.2021.9418084.

Full text
7

Munshi, Paridhi, and Suman K. Mitra. "A rough-set based binarization technique for fingerprint images." In 2012 IEEE International Conference on Signal Processing, Computing and Control (ISPCC). IEEE, 2012. http://dx.doi.org/10.1109/ispcc.2012.6224360.

Full text
8

Kamath K. M., Shreyas, Rahul Rajendran, Karen Panetta, and Sos Agaian. "A human visual based binarization technique for histological images." In SPIE Commercial + Scientific Sensing and Imaging, edited by Sos S. Agaian and Sabah A. Jassim. SPIE, 2017. http://dx.doi.org/10.1117/12.2262815.

Full text
9

Tamilselvan, S., and S. G. Sowmya. "Content retrieval from degraded document images using binarization technique." In 2014 International Conference On Computation of Power , Energy, Information and Communication (ICCPEIC). IEEE, 2014. http://dx.doi.org/10.1109/iccpeic.2014.6915401.

Full text
10

Srivastava, Saumya, and Sudip Sanyal. "Unsupervised learning technique for binarization of gray scale text images." In 2014 Annual IEEE India Conference (INDICON). IEEE, 2014. http://dx.doi.org/10.1109/indicon.2014.7030453.

Full text