
Journal articles on the topic 'BINARIZATION TECHNIQUE'



Consult the top 50 journal articles for your research on the topic 'BINARIZATION TECHNIQUE.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Thepade, Sudeep, Rik Das, and Saurav Ghosh. "A Novel Feature Extraction Technique Using Binarization of Bit Planes for Content Based Image Classification." Journal of Engineering 2014 (2014): 1–13. http://dx.doi.org/10.1155/2014/439218.

Abstract:
A number of techniques have been proposed earlier for feature extraction using image binarization. Efficiency of the techniques was dependent on proper threshold selection for the binarization method. In this paper, a new feature extraction technique using image binarization has been proposed. The technique has binarized the significant bit planes of an image by selecting local thresholds. The proposed algorithm has been tested on a public dataset and has been compared with existing widely used techniques using binarization for extraction of features. It has been inferred that the proposed method has outclassed all the existing techniques and has shown consistent classification performance.
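The general scheme is easy to illustrate. Below is a minimal NumPy sketch of bit-plane binarization with block-local thresholds; it is not the authors' exact algorithm, and the number of retained planes, the 4×4 block grid, and the use of above/below-threshold means as features are all assumptions.

```python
import numpy as np

def bitplane_binarization_features(gray, top_planes=3, grid=4):
    """Sketch: keep the most significant bit planes, binarize each local
    block with its own mean threshold, and use the means of the pixels
    above/below that threshold as the feature vector."""
    gray = gray.astype(np.uint8)
    mask = (0xFF << (8 - top_planes)) & 0xFF   # keep significant bit planes
    sig = (gray & mask).astype(float)
    h, w = sig.shape
    bh, bw = h // grid, w // grid
    feats = []
    for i in range(grid):
        for j in range(grid):
            block = sig[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            t = block.mean()                   # local threshold per block
            hi, lo = block[block >= t], block[block < t]
            feats += [hi.mean() if hi.size else 0.0,
                      lo.mean() if lo.size else 0.0]
    return np.array(feats)                     # classifier input vector
```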
2

Yu, Young-Jung. "Document Image Binarization Technique using MSER." Journal of the Korea Institute of Information and Communication Engineering 18, no. 8 (August 31, 2014): 1941–47. http://dx.doi.org/10.6109/jkiice.2014.18.8.1941.

3

Makridis, Michael, and N. Papamarkos. "AN ADAPTIVE LAYER-BASED LOCAL BINARIZATION TECHNIQUE FOR DEGRADED DOCUMENTS." International Journal of Pattern Recognition and Artificial Intelligence 24, no. 02 (March 2010): 245–79. http://dx.doi.org/10.1142/s0218001410007889.

Abstract:
This paper presents a new technique for adaptive binarization of degraded document images. The proposed technique focuses on degraded documents with various background patterns and noise. It involves a preprocessing local background estimation stage, which estimates a proper grayscale value for each pixel considered to be background. The estimated background is then used to produce a new enhanced image with uniform background layers and increased local contrast; that is, the new image is a combination of background and foreground layers. Foreground and background layers are then separated by a new transformation which efficiently exploits both grayscale and spatial information. The final binary document is obtained by combining all foreground layers. The proposed binarization technique has been extensively tested on numerous documents and successfully compared with other well-known binarization techniques. Experimental results, based on statistical, visual, and OCR criteria, verify the effectiveness of the technique.
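The background-estimation idea can be sketched compactly. The following Python fragment is a loose illustration under stated assumptions (a median filter as the local background estimator and a single global threshold on the enhanced image); the paper's layer decomposition and recombination steps are not reproduced.

```python
import numpy as np
from scipy.ndimage import median_filter

def background_compensated_binarization(gray, win=31):
    """Sketch: estimate a smooth local background, subtract the image to
    enhance dark text against uneven backgrounds, then threshold."""
    gray = gray.astype(float)
    background = median_filter(gray, size=win)      # local background estimate
    enhanced = np.clip(background - gray, 0, None)  # dark text -> bright
    t = enhanced.mean() + enhanced.std()            # simple global threshold
    return enhanced > t                             # True = foreground (text)
```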
4

Chi, Zheru, and Qing Wang. "DOCUMENT IMAGE BINARIZATION WITH FEEDBACK FOR IMPROVING CHARACTER SEGMENTATION." International Journal of Image and Graphics 05, no. 02 (April 2005): 281–309. http://dx.doi.org/10.1142/s0219467805001768.

Abstract:
Binarization of gray-scale document images is one of the most important steps in automatic document image processing. In this paper, we present a two-stage document image binarization approach, which includes a top-down region-based binarization at the first stage and a neural-network-based binarization technique for the problematic blocks at the second stage after a feedback check. Our two-stage approach is particularly effective for binarizing text images of highlighted or marked text. The region-based binarization method is fast and suitable for processing large document images. However, the block effect and regional edge noise are two unavoidable problems that result in poor character segmentation and recognition. The neural-network-based classifier can achieve good performance in two-class classification problems such as the binarization of gray-level document images, but it is computationally costly. In our two-stage binarization approach, feedback criteria are employed to keep the well-binarized blocks from the first-stage binarization and to re-binarize the problematic blocks at the second stage using the neural network binarizer, improving the character segmentation quality. Experimental results on a number of document images show that our two-stage binarization approach performs better than the single-stage binarization techniques tested, in terms of both character segmentation quality and computational cost.
5

Pagare, Aniket. "Document Image Binarization using Image Segmentation Technique." International Journal for Research in Applied Science and Engineering Technology 9, no. VII (July 15, 2021): 1173–76. http://dx.doi.org/10.22214/ijraset.2021.36597.

Abstract:
Segmentation of text from badly degraded document images is an extremely difficult task because of the high inter- and intra-image variation between the document background and the foreground text. Image processing and pattern recognition algorithms take more time to execute on a single-core processor. The Graphics Processing Unit (GPU) is popular nowadays due to its speed, programmability, low cost, and the many execution cores built into it. The primary objective of this research work is to make binarization faster for the recognition of a huge number of degraded document images on the GPU. In this system, we propose a new image segmentation algorithm in which every pixel in the image has its own threshold. We operate in parallel on windows of m*n size and separate the object pixels of the text strokes in each window. The document text is further segmented by a local threshold that is estimated based on the intensities of detected text stroke edge pixels within a local window.
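The per-pixel thresholding idea, minus the GPU parallelization, can be approximated in a few lines. This is a hedged sketch: the window size m×n and the sensitivity factor k are assumptions, and uniform_filter stands in for the per-window parallel computation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def per_pixel_threshold_binarization(gray, m=25, n=25, k=0.9):
    """Sketch: every pixel gets its own threshold derived from the mean
    of its local m x n window (darker than k * local mean => text)."""
    gray = gray.astype(float)
    local_mean = uniform_filter(gray, size=(m, n))  # windowed means
    return gray < k * local_mean                    # True = text stroke
```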
6

Abbood, Alaa Ahmed, Mohammed Sabbih Hamoud Al-Tamimi, Sabine U. Peters, and Ghazali Sulong. "New Combined Technique for Fingerprint Image Enhancement." Modern Applied Science 11, no. 1 (December 19, 2016): 222. http://dx.doi.org/10.5539/mas.v11n1p222.

Abstract:
This paper presents a combination of enhancement techniques for fingerprint images affected by different types of noise. These techniques were applied to improve image quality and achieve acceptable image contrast. The proposed method included five different enhancement techniques: Normalization, Histogram Equalization, Binarization, Skeletonization, and Fusion. The Normalization process standardized the pixel intensity, which facilitated the processing of subsequent image enhancement stages. Subsequently, the Histogram Equalization technique increased the contrast of the images. Furthermore, the Binarization and Skeletonization techniques were implemented to differentiate between the ridge and valley structures and to obtain one-pixel-wide lines. Finally, the Fusion technique was used to merge the results of the Histogram Equalization process with the Skeletonization process to obtain the new high-contrast images. The proposed method was tested on images of varying quality from the National Institute of Standards and Technology (NIST) Special Database 14. The experimental results are very encouraging, and the enhancement method proved effective in improving images of varying quality.
7

Adhari, Firman Maulana, Taufik Fuadi Abidin, and Ridha Ferdhiana. "License Plate Character Recognition using Convolutional Neural Network." Journal of Information Systems Engineering and Business Intelligence 8, no. 1 (April 26, 2022): 51–60. http://dx.doi.org/10.20473/jisebi.8.1.51-60.

Abstract:
Background: In the last decade, the number of registered vehicles has grown exponentially. With more vehicles on the road, traffic jams, accidents, and violations also increase. A license plate plays a key role in solving such problems because it stores a vehicle’s historical information. Therefore, automated license-plate character recognition is needed. Objective: This study proposes a recognition system that uses convolutional neural network (CNN) architectures to recognize characters from a license plate’s images. We call it a modified LeNet-5 architecture. Methods: We used four different CNN architectures to recognize license plate characters: AlexNet, LeNet-5, modified LeNet-5, and ResNet-50. We evaluated the performance based on accuracy and computation time. We compared the deep learning methods with Freeman chain code (FCC) extraction with a support vector machine (SVM). We also evaluated the Otsu and the static threshold binarization performances when applied in the FCC extraction method. Results: The ResNet-50 and modified LeNet-5 produce the best accuracy during training at 0.97. The precision and recall scores of the ResNet-50 are both 0.97, while the modified LeNet-5’s values are 0.98 and 0.96, respectively. The modified LeNet-5 shows a slightly higher precision score but a lower recall score. The modified LeNet-5 shows a slightly lower accuracy during testing than ResNet-50. Meanwhile, the Otsu binarization’s FCC extraction is better than the threshold binarization. Overall, the FCC extraction technique performs less effectively than CNN. The modified LeNet-5 computes the fastest at 7 mins and 57 secs, while ResNet-50 needs 42 mins and 11 secs. Conclusion: We discovered that CNN is better than the FCC extraction method with SVM. Both ResNet-50 and the modified LeNet-5 perform best during training, with F-measure scoring 0.97. However, ResNet-50 outperforms the modified LeNet-5 during testing, with F-measures of 0.97 and 1.00, respectively. In addition, FCC extraction using Otsu binarization is better than the static threshold binarization: Otsu reached 0.91, higher than the static threshold binarization at 127. Otsu binarization also produces a dynamic threshold value depending on the images’ light intensity. Keywords: Convolutional Neural Network, Freeman Chain Code, License Plate Character Recognition, Support Vector Machine
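The Otsu-versus-static-threshold comparison mentioned here is easy to reproduce. Below is a minimal self-contained Otsu implementation in NumPy, included as an illustrative sketch; production code would normally call a library routine such as skimage.filters.threshold_otsu.

```python
import numpy as np

def otsu_threshold(gray):
    """Pick the 8-bit threshold that maximizes between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 127, 0.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2       # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# dynamic (Otsu) vs. static threshold, as compared in the study:
# binary_otsu   = gray >= otsu_threshold(gray)
# binary_static = gray >= 127
```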
8

García, José, Paola Moraga, Matias Valenzuela, Broderick Crawford, Ricardo Soto, Hernan Pinto, Alvaro Peña, Francisco Altimiras, and Gino Astorga. "A Db-Scan Binarization Algorithm Applied to Matrix Covering Problems." Computational Intelligence and Neuroscience 2019 (September 16, 2019): 1–16. http://dx.doi.org/10.1155/2019/3238574.

Abstract:
The integration of machine learning techniques and metaheuristic algorithms is an area of interest due to the great potential for applications. In particular, using these hybrid techniques to solve combinatorial optimization problems (COPs) to improve the quality of the solutions and convergence times is of great interest in operations research. In this article, the db-scan unsupervised learning technique is explored with the goal of using it in the binarization process of continuous swarm intelligence metaheuristic algorithms. The contribution of the db-scan operator to the binarization process is analyzed systematically through the design of random operators. Additionally, the behavior of this algorithm is studied and compared with other binarization methods based on clusters and transfer functions (TFs). To verify the results, the well-known set covering problem is addressed, and a real-world problem is solved. The results show that the integration of the db-scan technique produces consistently better results in terms of computation time and quality of the solutions when compared with TFs and random operators. Furthermore, when it is compared with other clustering techniques, we see that it achieves significantly improved convergence times.
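To make the idea concrete, here is a heavily hedged sketch of cluster-driven binarization: DBSCAN groups the continuous velocity magnitudes of a swarm, and each cluster's rank sets a flip probability for the corresponding binary variables. The probability map, the eps value, and the treatment of noise points are illustrative assumptions, not the paper's exact operator.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def dbscan_binarize(velocities, positions, eps=0.1, seed=0):
    """Sketch: cluster |velocity| values, rank clusters by mean magnitude,
    flip bits of `positions` (a 0/1 vector) with rank-based probability."""
    rng = np.random.default_rng(seed)
    v = np.abs(np.asarray(velocities, dtype=float)).reshape(-1, 1)
    labels = DBSCAN(eps=eps, min_samples=2).fit_predict(v)
    clusters = [l for l in np.unique(labels) if l != -1]
    order = sorted(clusters, key=lambda l: v[labels == l].mean())
    p = np.full(len(v), 0.5)                      # assumed default for noise
    for rank, l in enumerate(order, start=1):
        p[labels == l] = rank / (len(order) + 1)  # faster cluster -> more flips
    flips = rng.random(len(v)) < p
    return np.where(flips, 1 - np.asarray(positions), np.asarray(positions))
```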
9

Rozen, Tal, Moshe Kimhi, Brian Chmiel, Avi Mendelson, and Chaim Baskin. "Bimodal-Distributed Binarized Neural Networks." Mathematics 10, no. 21 (November 3, 2022): 4107. http://dx.doi.org/10.3390/math10214107.

Abstract:
Binary neural networks (BNNs) are an extremely promising method for significantly reducing deep neural networks’ complexity and power consumption. Binarization techniques, however, suffer from non-negligible performance degradation compared to their full-precision counterparts. Prior work mainly focused on strategies for sign function approximation during the forward and backward phases to reduce the quantization error during the binarization process. In this work, we propose a bimodal-distributed binarization method (BD-BNN). The newly proposed technique aims to impose a bimodal distribution of the network weights by kurtosis regularization. The proposed method consists of a teacher–trainer training scheme termed weight distribution mimicking (WDM), which efficiently imitates the full-precision network weight distribution in its binary counterpart. Preserving this distribution during binarization-aware training creates robust and informative binary feature maps and thus significantly reduces the generalization error of the BNN. Extensive evaluations on CIFAR-10 and ImageNet demonstrate that our newly proposed BD-BNN outperforms current state-of-the-art schemes.
10

Joseph, Manju, and Jijina K. P. "Simple and Efficient Document Image Binarization Technique For Degraded Document Images." International Journal of Scientific Research 3, no. 5 (June 1, 2012): 217–20. http://dx.doi.org/10.15373/22778179/may2014/65.

11

Chamchong, Rapeeporn, and Chun Che Fung. "A Framework for the Selection of Binarization Techniques on Palm Leaf Manuscripts Using Support Vector Machine." Advances in Decision Sciences 2015 (January 26, 2015): 1–7. http://dx.doi.org/10.1155/2015/925935.

Abstract:
Challenges for text processing in ancient document images are mainly due to the high degree of variation in foreground and background. Image binarization is an image segmentation technique used to separate the image into text and background components. Although several techniques for binarizing text documents have been proposed, the performance of these techniques varies and depends on the image characteristics. Therefore, selecting binarization techniques can be a key idea to achieve improved results. This paper proposes a framework for selecting binarization techniques for palm leaf manuscripts using Support Vector Machines (SVMs). The overall process is divided into three steps: (i) feature extraction: feature patterns are extracted from grayscale images based on global intensity, local contrast, and intensity; (ii) treatment of imbalanced data: the imbalanced dataset is balanced by using the Synthetic Minority Oversampling Technique so as to improve the performance of prediction; and (iii) selection: SVM is applied in order to select the appropriate binarization techniques. The proposed framework has been evaluated with palm leaf manuscript images and benchmarking datasets from the DIBCO series, and the performance of prediction was compared between imbalanced and balanced datasets. Experimental results showed that the proposed framework can be used as an integral part of an automatic selection process.
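The three-step pipeline maps naturally onto standard libraries. A minimal sketch with scikit-learn and imbalanced-learn follows; the feature and label files are hypothetical placeholders, and the SVM kernel and split are assumptions rather than the paper's settings.

```python
import numpy as np
from imblearn.over_sampling import SMOTE        # step (ii): balance classes
from sklearn.svm import SVC                     # step (iii): select technique
from sklearn.model_selection import train_test_split

X = np.load("manuscript_features.npy")          # hypothetical: step (i) output
y = np.load("best_binarizer_labels.npy")        # hypothetical method labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)
clf = SVC(kernel="rbf").fit(X_bal, y_bal)       # predict the best binarizer
print("selection accuracy:", clf.score(X_te, y_te))
```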
12

Lokhande, Supriya, and N. A. Dawande. "Document Image Binarization Technique for Degraded Document Images." International Journal of Computer Applications 122, no. 22 (July 18, 2015): 22–29. http://dx.doi.org/10.5120/21858-5183.

13

Tomar, Kirti. "A Survey on Binarization Technique for Degraded Documents." International Journal for Research in Applied Science and Engineering Technology 7, no. 5 (May 31, 2019): 1115–22. http://dx.doi.org/10.22214/ijraset.2019.5185.

14

Singh, Brij Mohan, and Mridula. "Efficient binarization technique for severely degraded document images." CSI Transactions on ICT 2, no. 3 (September 10, 2014): 153–61. http://dx.doi.org/10.1007/s40012-014-0045-5.

15

Jacobs, B. A., and E. Momoniat. "A locally adaptive, diffusion based text binarization technique." Applied Mathematics and Computation 269 (October 2015): 464–72. http://dx.doi.org/10.1016/j.amc.2015.07.091.

16

García, José, Paola Moraga, Matias Valenzuela, and Hernan Pinto. "A db-Scan Hybrid Algorithm: An Application to the Multidimensional Knapsack Problem." Mathematics 8, no. 4 (April 2, 2020): 507. http://dx.doi.org/10.3390/math8040507.

Abstract:
This article proposes a hybrid algorithm that makes use of the db-scan unsupervised learning technique to obtain binary versions of continuous swarm intelligence algorithms. These binary versions are then applied to large instances of the well-known multidimensional knapsack problem. The contribution of the db-scan operator to the binarization process is systematically studied. For this, two random operators are built that serve as a baseline for comparison. Once the contribution is established, the db-scan operator is compared with two other binarization methods that have satisfactorily solved the multidimensional knapsack problem. The first method uses the unsupervised learning technique k-means as a binarization method. The second makes use of transfer functions as a mechanism to generate binary versions. The results show that the hybrid algorithm using db-scan produces more consistent results compared to transfer function (TF) and random operators.
17

Becerra-Rozas, Marcelo, José Lemus-Romani, Felipe Cisternas-Caneo, Broderick Crawford, Ricardo Soto, and José García. "Swarm-Inspired Computing to Solve Binary Optimization Problems: A Backward Q-Learning Binarization Scheme Selector." Mathematics 10, no. 24 (December 15, 2022): 4776. http://dx.doi.org/10.3390/math10244776.

Abstract:
In recent years, continuous metaheuristics have been a trend in solving binary-based combinatorial problems due to their good results. However, to use this type of metaheuristics, it is necessary to adapt them to work in binary environments, and in general, this adaptation is not trivial. The method proposed in this work evaluates the use of reinforcement learning techniques in the binarization process. Specifically, the backward Q-learning technique is explored to choose binarization schemes intelligently. This allows any continuous metaheuristic to be adapted to binary environments. The illustrated results are competitive, thus providing a novel option to address different complex problems in the industry.
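The scheme-selection idea can be sketched with a tiny tabular Q-learner. This is an illustrative sketch only: the scheme labels, the single-state value table, and the epsilon-greedy policy are assumptions, and the backward-replay refinement that distinguishes backward Q-learning is omitted.

```python
import numpy as np

# Sketch: a tabular Q-learner that picks one of several binarization
# schemes per iteration and is rewarded when the chosen scheme improves
# the incumbent solution of the metaheuristic.
SCHEMES = ["V1", "V2", "S1", "S2"]            # assumed scheme labels

def select_scheme(Q, eps, rng):
    """Epsilon-greedy choice over binarization schemes."""
    if rng.random() < eps:
        return int(rng.integers(len(SCHEMES)))  # explore
    return int(np.argmax(Q))                    # exploit

def update(Q, a, reward, alpha=0.1, gamma=0.9):
    """One-state Q-learning update."""
    Q[a] += alpha * (reward + gamma * Q.max() - Q[a])

# usage sketch:
# rng = np.random.default_rng(0)
# Q = np.zeros(len(SCHEMES))
# a = select_scheme(Q, eps=0.1, rng=rng)
# ... run one metaheuristic iteration with SCHEMES[a], measure improvement ...
# update(Q, a, reward=1.0)
```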
18

Kim, Min-Ki. "Adaptive Thresholding Technique for Binarization of License Plate Images." Journal of the Optical Society of Korea 14, no. 4 (December 25, 2010): 368–75. http://dx.doi.org/10.3807/josk.2010.14.4.368.

19

Lu, Sha. "A Review Paper on Character Recognition using Binarization technique." International Journal of Engineering Trends and Technology 22, no. 5 (April 25, 2015): 214–17. http://dx.doi.org/10.14445/22315381/ijett-v22p245.

20

Su, Bolan, Shijian Lu, and Chew Lim Tan. "Robust Document Image Binarization Technique for Degraded Document Images." IEEE Transactions on Image Processing 22, no. 4 (April 2013): 1408–17. http://dx.doi.org/10.1109/tip.2012.2231089.

21

Sari, Toufik, Abderrahmane Kefali, and Halima Bahi. "Text Extraction from Historical Document Images by the Combination of Several Thresholding Techniques." Advances in Multimedia 2014 (2014): 1–10. http://dx.doi.org/10.1155/2014/934656.

Abstract:
This paper presents a new technique for the binarization of historical document images characterized by deteriorations and damages making their automatic processing difficult at several levels. The proposed method is based on hybrid thresholding combining the advantages of global and local methods and on the mixture of several binarization techniques. Two stages have been included. In the first stage, global thresholding is applied on the entire image and two different thresholds are determined, from which most of the image pixels are classified into foreground or background. In the second stage, the remaining pixels are assigned to foreground or background classes based on local analysis. In this stage, several local thresholding methods are combined and the final binary value of each remaining pixel is chosen as the most probable one. The proposed technique has been tested on a large collection of standard and synthetic documents and compared with well-known methods using standard measures, and was shown to be more powerful.
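The two-stage logic is simple to sketch. In the fragment below, two global thresholds classify the confident pixels and a single local mean test settles the ambiguous band; the paper combines several local methods and votes on the most probable label, which this sketch does not attempt, and the threshold choices are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def hybrid_binarization(gray, win=25):
    """Sketch: global thresholds for easy pixels, local analysis for the rest."""
    gray = gray.astype(float)
    mu, sd = gray.mean(), gray.std()
    t_low, t_high = mu - sd, mu + sd           # assumed global thresholds
    out = np.zeros(gray.shape, dtype=bool)     # True = foreground (ink)
    out[gray <= t_low] = True                  # surely text
    ambiguous = (gray > t_low) & (gray < t_high)
    local_mean = uniform_filter(gray, size=win)
    out[ambiguous] = gray[ambiguous] < local_mean[ambiguous]
    return out
```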
22

Ovchinnikov, Andrey S., Vitaly V. Krasnov, Pavel A. Cheremkhin, Vladislav G. Rodin, Ekaterina A. Savchenkova, Rostislav S. Starikov, and Nikolay N. Evtikhiev. "What Binarization Method Is the Best for Amplitude Inline Fresnel Holograms Synthesized for Divergent Beams Using the Direct Search with Random Trajectory Technique?" Journal of Imaging 9, no. 2 (January 27, 2023): 28. http://dx.doi.org/10.3390/jimaging9020028.

Abstract:
Fast reconstruction of holographic and diffractive optical elements (DOE) can be implemented by binary digital micromirror devices (DMD). Since micromirrors of the DMD have two positions, the synthesized DOEs must be binary. This work studies the possibility of improving the method of synthesis of amplitude binary inline Fresnel holograms in divergent beams. The method consists of the modified Gerchberg–Saxton algorithm, Otsu binarization and direct search with random trajectory technique. To achieve a better quality of reconstruction, various binarization methods were compared. We performed numerical and optical experiments using the DMD. Holograms of halftone image with size up to 1024 × 1024 pixels were synthesized. It was determined that local and several global threshold methods provide the best quality. Compared to the Otsu binarization used in the original method of the synthesis, the reconstruction quality (MSE and SSIM values) is improved by 46% and the diffraction efficiency is increased by 27%.
23

Natarajan, Jayanthi, and Indu Sreedevi. "Enhancement of ancient manuscript images by log based binarization technique." AEU - International Journal of Electronics and Communications 75 (May 2017): 15–22. http://dx.doi.org/10.1016/j.aeue.2017.03.002.

24

Becerra-Rozas, Marcelo, José Lemus-Romani, Felipe Cisternas-Caneo, Broderick Crawford, Ricardo Soto, Gino Astorga, Carlos Castro, and José García. "Continuous Metaheuristics for Binary Optimization Problems: An Updated Systematic Literature Review." Mathematics 11, no. 1 (December 27, 2022): 129. http://dx.doi.org/10.3390/math11010129.

Abstract:
For years, extensive research has been devoted to the binarization of continuous metaheuristics for solving binary-domain combinatorial problems. This paper is a continuation of a previous review and seeks to draw a comprehensive picture of the various ways to binarize this type of metaheuristic; the study uses a standard systematic review consisting of the analysis of 512 publications from 2017 to January 2022 (5 years). The work will provide a theoretical foundation for novice researchers tackling combinatorial optimization using metaheuristic algorithms and for expert researchers analyzing the binarization mechanism’s impact on the metaheuristic algorithms’ performance. Structuring this information allows for improving the results of metaheuristics and broadening the spectrum of binary problems to be solved. We can conclude from this study that there is no single general technique capable of efficient binarization; instead, there are multiple forms with different performances.
25

Amitha, T., and B. Raghu. "Robust Phase-Based Features Extracted From Image By A Binarization Technique." IOSR Journal of Computer Engineering 18, no. 04 (April 2016): 10–14. http://dx.doi.org/10.9790/0661-1804041014.

26

Azani Mustafa, Wan, Haniza Yazid, Ahmed Alkhayyat, Mohd Aminudin Jamlos, and Hasliza A. Rahim. "Effect of Direct Statistical Contrast Enhancement Technique on Document Image Binarization." Computers, Materials & Continua 70, no. 2 (2022): 3549–64. http://dx.doi.org/10.32604/cmc.2022.019801.

27

Choudhary, Amit, Rahul Rishi, and Savita Ahlawat. "Off-line Handwritten Character Recognition Using Features Extracted from Binarization Technique." AASRI Procedia 4 (2013): 306–12. http://dx.doi.org/10.1016/j.aasri.2013.10.045.

28

Salama, Mostafa A., and Aboul Ella Hassanien. "Binarization and Validation in Formal Concept Analysis." International Journal of Systems Biology and Biomedical Technologies 1, no. 4 (October 2012): 16–27. http://dx.doi.org/10.4018/ijsbbt.2012100102.

Abstract:
Representation and visualization of continuous data using Formal Concept Analysis (FCA) has become an important requirement in real-life fields. To apply the FCA model to numerical data, a scaling or discretization/binarization procedure should be applied as a preprocessing stage. The scaling procedure increases the computational complexity of FCA, while the binarization process leads to a distortion in the internal structure of the input data set. The proposed approach uses a binarization procedure prior to applying the FCA model, and then applies a validation process to the generated lattice to measure or ensure its degree of accuracy. The introduced approach is based on the evaluation of each attribute according to the objects of its extent set. To prove the validity of the introduced approach, the technique is applied to two data sets in the medical field, the Indian Diabetes and Breast Cancer data sets. Both data sets show the generation of a valid lattice.
29

Saddami, Khairun, Khairul Munadi, Yuwaldi Away, and Fitri Arnia. "Improvement of binarization performance using local otsu thresholding." International Journal of Electrical and Computer Engineering (IJECE) 9, no. 1 (February 1, 2019): 264. http://dx.doi.org/10.11591/ijece.v9i1.pp264-272.

Abstract:
Ancient documents usually contain multiple noises such as uneven background, show-through, water spilling, spots, and blurred text. This noise affects the binarization process. Binarization is an extremely important process in image processing, especially for character recognition. This paper presents an improvement to the Nina binarization technique. Improvements were achieved by reducing the processing steps and replacing median filtering with Wiener filtering. First, the document background was approximated by using a Wiener filter, and then image subtraction was applied. Furthermore, the manuscript contrast was adjusted by mapping image intensity values using an intensity transformation method. Next, local Otsu thresholding was applied. For removing spotting noise, we applied labeled connected components. The proposed method was tested on H-DIBCO 2014 and degraded Jawi handwritten ancient documents. It performed better regarding recall and precision values, as compared to Otsu, Niblack, Sauvola, Lu, Su, and Nina, especially on documents with show-through, water-spilling, and combined noises.
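The pipeline (Wiener-filtered background, subtraction, intensity mapping, local Otsu) can be approximated as follows. This is a sketch under assumptions: parameter values are invented, block-wise Otsu stands in for the paper's local Otsu, and the connected-component despeckling step is omitted.

```python
import numpy as np
from scipy.signal import wiener
from skimage.filters import threshold_otsu

def wiener_local_otsu(gray, bg_win=15, grid=4):
    """Sketch: background approximation, subtraction, contrast mapping,
    then Otsu thresholding applied independently in each block."""
    gray = gray.astype(float)
    background = wiener(gray, (bg_win, bg_win))         # background estimate
    diff = np.clip(background - gray, 0, None)          # dark ink -> bright
    diff = (diff - diff.min()) / (np.ptp(diff) + 1e-9)  # intensity mapping
    out = np.zeros(gray.shape, dtype=bool)
    h, w = gray.shape
    bh, bw = h // grid, w // grid
    for i in range(grid):                               # local (block) Otsu
        for j in range(grid):
            sl = np.s_[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            block = diff[sl]
            if block.std() > 1e-3:                      # skip flat blocks
                out[sl] = block > threshold_otsu(block)
    return out
```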
30

Fang, Yating, and Baojiang Zhong. "Cell segmentation in fluorescence microscopy images based on multi-scale histogram thresholding." Mathematical Biosciences and Engineering 20, no. 9 (2023): 16259–78. http://dx.doi.org/10.3934/mbe.2023726.

Abstract:
Cell segmentation from fluorescent microscopy images plays an important role in various applications, such as disease mechanism assessment and drug discovery research. Existing segmentation methods often adopt image binarization as the first step, through which the foreground cell is separated from the background so that the subsequent processing steps can be greatly facilitated. To pursue this goal, a histogram thresholding can be performed on the input image, which first applies a Gaussian smoothing to suppress the jaggedness of the histogram curve and then exploits Rosin's method to determine a threshold for conducting image binarization. However, an inappropriate amount of smoothing could lead to inaccurate segmentation of cells. To address this crucial problem, a multi-scale histogram thresholding (MHT) technique is proposed in the present paper, where the scale refers to the standard deviation of the Gaussian that determines the amount of smoothing. To be specific, the image histogram is smoothed at three chosen scales first, and then the smoothed histogram curves are fused to conduct image binarization via thresholding. To further improve the segmentation accuracy and overcome the difficulty of extracting overlapping cells, our proposed MHT technique is incorporated into a multi-scale cell segmentation framework, in which a region-based ellipse fitting technique is adopted to identify overlapping cells. Extensive experimental results obtained on benchmark datasets show that the new method can deliver superior performance compared to the current state of the art.
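The MHT recipe translates directly into code. The sketch below smooths the 8-bit histogram at three scales, fuses the curves by simple averaging (the paper's fusion rule may differ), and applies Rosin's peak-to-tail line construction to the fused curve; the scale values are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def rosin_threshold(hist):
    """Rosin's unimodal method: the bin farthest from the line joining the
    histogram peak to the last non-empty bin."""
    p = int(np.argmax(hist))                 # peak bin
    q = int(np.max(np.nonzero(hist)))        # last non-empty bin
    if q <= p:
        return p
    x = np.arange(p, q + 1, dtype=float)
    y = hist[p:q + 1].astype(float)
    x0, y0, x1, y1 = x[0], y[0], x[-1], y[-1]
    d = np.abs((y1 - y0) * x - (x1 - x0) * y + x1 * y0 - y1 * x0)
    d /= np.hypot(x1 - x0, y1 - y0)          # point-to-line distances
    return p + int(np.argmax(d))

def mht_binarize(gray, scales=(1.0, 2.0, 4.0)):
    """Sketch of multi-scale histogram thresholding with assumed scales."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    fused = np.mean([gaussian_filter1d(hist, s) for s in scales], axis=0)
    return gray > rosin_threshold(fused)     # True = foreground
```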
31

García, José, Gino Astorga, and Víctor Yepes. "An Analysis of a KNN Perturbation Operator: An Application to the Binarization of Continuous Metaheuristics." Mathematics 9, no. 3 (January 24, 2021): 225. http://dx.doi.org/10.3390/math9030225.

Abstract:
The optimization methods and, in particular, metaheuristics must be constantly improved to reduce execution times, improve the results, and thus be able to address broader instances. In particular, addressing combinatorial optimization problems is critical in the areas of operational research and engineering. In this work, a perturbation operator is proposed which uses the k-nearest neighbors technique, and this is studied with the aim of improving the diversification and intensification properties of metaheuristic algorithms in their binary version. Random operators are designed to study the contribution of the perturbation operator. To verify the proposal, large instances of the well-known set covering problem are studied. Box plots, convergence charts, and the Wilcoxon statistical test are used to determine the operator contribution. Furthermore, a comparison is made using metaheuristic techniques that use general binarization mechanisms such as transfer functions or db-scan as binarization methods. The results obtained indicate that the KNN perturbation operator significantly improves the results.
32

Elvas, Luis B., Ana G. Almeida, Luís Rosario, Miguel Sales Dias, and João C. Ferreira. "Calcium Identification and Scoring Based on Echocardiography. An Exploratory Study on Aortic Valve Stenosis." Journal of Personalized Medicine 11, no. 7 (June 24, 2021): 598. http://dx.doi.org/10.3390/jpm11070598.

Abstract:
Currently, an echocardiography expert is needed to identify calcium in the aortic valve, and a cardiac CT scan image is needed for calcium quantification. When performing a CT scan, the patient is subject to radiation, and therefore the number of CT scans that can be performed should be limited, restricting the patient’s monitoring. Computer Vision (CV) has opened new opportunities for improved efficiency when extracting knowledge from an image. Applying CV techniques to echocardiography imaging may reduce the medical workload for identifying and quantifying calcium, helping doctors to maintain better tracking of their patients. In our approach, a simple technique to identify and extract the calcium pixel count from echocardiography imaging was developed using CV. Based on anonymized real patient echocardiographic images, this approach enables semi-automatic calcium identification. As the brightness of echocardiography images (with the highest intensity corresponding to calcium) varies depending on the acquisition settings, adaptive image binarization has been performed. Given that blood maintains the same intensity on echocardiographic images, being always the darker region, blood areas in the image were used to create an adaptive threshold for binarization. After binarization, the region of interest (ROI) with calcium was interactively selected by an echocardiography expert and extracted, allowing us to compute a calcium pixel count corresponding to the spatial amount of calcium. The results obtained from these experiments are encouraging. With this technique, calcium pixel counts obtained from echocardiographic images collected for the same patient with different acquisition settings and different brightness show an absolute pixel-value margin of error of 3 (on a scale from 0 to 255) and achieve a Pearson correlation of 0.92, indicating a strong correlation with the human expert assessment of the calcium area for the same images.
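The blood-anchored adaptive threshold can be sketched in a few lines. Everything here is an assumption for illustration: the multiplier k, the boolean ROI masks, and the statistic used to derive the threshold from the dark blood pool.

```python
import numpy as np

def calcium_pixel_count(gray, blood_roi, calcium_roi, k=4.0):
    """Sketch: derive an adaptive binarization threshold from the dark
    blood region, then count bright pixels inside the expert-drawn ROI."""
    blood = gray[blood_roi].astype(float)     # blood_roi: boolean mask
    t = blood.mean() + k * blood.std()        # adaptive threshold
    binary = gray > t                         # bright = candidate calcium
    return int(np.count_nonzero(binary & calcium_roi))
```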
33

Mollah, Ayatullah Faruk, Subhadip Basu, Mita Nasipuri, and Dipak Kumar Basu. "Handheld Mobile Device Based Text Region Extraction and Binarization of Image Embedded Text Documents." Journal of Intelligent Systems 22, no. 1 (March 1, 2013): 25–47. http://dx.doi.org/10.1515/jisys-2012-0019.

Abstract:
Effective text region extraction and binarization of image-embedded text documents on mobile devices having limited computational resources is an open research problem. In this paper, we present one such technique for preprocessing images captured with built-in cameras of handheld devices, with the aim of developing an efficient Business Card Reader. At first, the card image is processed to isolate foreground components. These foreground components are classified as either text or non-text using different feature descriptors of texts and images. The non-text components are removed and the textual ones are binarized with a fast adaptive algorithm. Specifically, we propose new techniques (targeted to mobile devices) for (i) foreground component isolation, (ii) text extraction, and (iii) binarization of text regions from camera-captured business card images. Experiments with business card images of various resolutions show that the present technique yields better accuracy and involves low computational overhead in comparison with the state of the art. We achieve optimum text/non-text separation performance with images of resolution 800×600 pixels, with an average recall rate of 93.90% and a precision rate of 96.84%. It involves a peak memory consumption of 0.68 MB and a processing time of 0.102 seconds on a moderately powerful notebook, and 4 seconds of processing time on a PDA.
34

Kim, Jung Hun, and Gibak Kim. "A Binarization Technique using Histogram Matching for License Plate with a Shadow." Journal of Broadcast Engineering 19, no. 1 (January 30, 2014): 56–63. http://dx.doi.org/10.5909/jbe.2014.19.1.56.

35

Chaudhary, Prashali. "AN EFFECTIVE AND ROBUST TECHNIQUE FOR THE BINARIZATION OF DEGRADED DOCUMENT IMAGES." International Journal of Research in Engineering and Technology 03, no. 06 (June 25, 2014): 140–45. http://dx.doi.org/10.15623/ijret.2014.0306025.

36

Bawa, Rajesh K., and Ganesh K. Sethi. "A Binarization Technique for Extraction of Devanagari Text from Camera Based Images." Signal & Image Processing : An International Journal 5, no. 2 (April 30, 2014): 29–37. http://dx.doi.org/10.5121/sipij.2014.5203.

37

Wang, Chang, and Jing Jing Gao. "The Detection of Surface Quality On-Line Based on Machine Vision in the Production of Bearings." Applied Mechanics and Materials 319 (May 2013): 523–27. http://dx.doi.org/10.4028/www.scientific.net/amm.319.523.

Abstract:
By combining digital image processing and pattern recognition techniques, industrial products can be widely classified and recognized. To meet the need for detecting and identifying bearing assembly defects on a production line, an automatic detection system based on machine vision is presented: an area-array camera acquires images of the different surfaces, binarization is applied as preprocessing for subsequent image pattern recognition, and feature extraction and eigenvalue comparison are performed for the detection and identification of surface defects on the product line.
38

Papamarkos, Nikos. "DOCUMENT GRAY-SCALE REDUCTION USING A NEURO-FUZZY TECHNIQUE." International Journal of Pattern Recognition and Artificial Intelligence 17, no. 04 (June 2003): 505–27. http://dx.doi.org/10.1142/s0218001403002502.

Abstract:
This paper proposes a new neuro-fuzzy technique suitable for binarization and gray-scale reduction of digital documents. The proposed approach uses both the image gray-scales and additional local spatial features. Both gray-scale and local feature values feed a Kohonen Self-Organized Feature Map (SOFM) neural network classifier. After training, the neurons of the output competition layer of the SOFM define two bilevel classes. Using the content of these classes, fuzzy membership functions are obtained that are next used by the fuzzy C-means (FCM) algorithm in order to reduce the character-blurring problem. The method is suitable for improving blurred and badly illuminated documents and can be easily modified to accommodate any type of spatial characteristics.
39

Kowalczyk, Marek, Manuel Martínez-Corral, Tomasz Cichocki, and Pedro Andrés. "One-dimensional error-diffusion technique adapted for binarization of rotationally symmetric pupil filters." Optics Communications 114, no. 3-4 (February 1995): 211–18. http://dx.doi.org/10.1016/0030-4018(94)00607-v.

40

Thepade, Sudeep, Rik Das, and Saurav Ghosh. "Feature Extraction with Ordered Mean Values for Content Based Image Classification." Advances in Computer Engineering 2014 (December 17, 2014): 1–15. http://dx.doi.org/10.1155/2014/454876.

Abstract:
Categorization of images into meaningful classes by efficient extraction of feature vectors from image datasets has been dependent on feature selection techniques. Traditionally, feature vector extraction has been carried out using different methods of image binarization done with selection of global, local, or mean thresholds. This paper proposes a novel technique for feature extraction based on ordered mean values. The proposed technique was combined with feature extraction using the discrete sine transform (DST) for better classification results using multitechnique fusion. The novel methodology was compared to the traditional techniques used for feature extraction for content-based image classification. Three benchmark datasets, namely, the Wang dataset, the Oliva and Torralba (OT-Scene) dataset, and the Caltech dataset, were used for evaluation purposes. Performance measures after evaluation clearly revealed the superiority of the proposed fusion technique with ordered mean values and the discrete sine transform over the popular approaches of single-view feature extraction methodologies for classification.
41

Vizilter, Y. V., A. Y. Rubis, S. Y. Zheltov, and O. V. Vygolov. "CHANGE DETECTION VIA MORPHOLOGICAL COMPARATIVE FILTERS." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences III-3 (June 3, 2016): 279–86. http://dx.doi.org/10.5194/isprsannals-iii-3-279-2016.

Abstract:
In this paper we propose a new change detection technique based on morphological comparative filtering. This technique generalizes the morphological image analysis scheme proposed by Pytiev. A new class of comparative filters based on guided contrasting is developed. Comparative filtering based on diffusion morphology is implemented as well. The change detection pipeline contains: comparative filtering on an image pyramid, calculation of a morphological difference map, binarization, extraction of change proposals, and testing of change proposals using a local morphological correlation coefficient. Experimental results demonstrate the applicability of the proposed approach.
42

Vizilter, Y. V., A. Y. Rubis, S. Y. Zheltov, and O. V. Vygolov. "CHANGE DETECTION VIA MORPHOLOGICAL COMPARATIVE FILTERS." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences III-3 (June 3, 2016): 279–86. http://dx.doi.org/10.5194/isprs-annals-iii-3-279-2016.

Abstract:
In this paper we propose a new change detection technique based on morphological comparative filtering. This technique generalizes the morphological image analysis scheme proposed by Pytiev. A new class of comparative filters based on guided contrasting is developed. Comparative filtering based on diffusion morphology is implemented as well. The change detection pipeline contains: comparative filtering on an image pyramid, calculation of a morphological difference map, binarization, extraction of change proposals, and testing of change proposals using a local morphological correlation coefficient. Experimental results demonstrate the applicability of the proposed approach.
43

Yanagisawa, Toshifumi, Kohki Kamiya, and Hirohisa Kurosaki. "New NEO Detection Techniques using the FPGA." Publications of the Astronomical Society of Japan 73, no. 3 (March 26, 2021): 519–29. http://dx.doi.org/10.1093/pasj/psab017.

Abstract:
We have developed a new method for detecting near-Earth objects (NEOs) based on a Field-Programmable Gate Array (FPGA). Unlike conventional methods, our technique uses 30–40 frames to detect faint NEOs that are almost invisible in a single frame. To reduce analysis time, image binarization and an FPGA-implemented algorithm were used. This method has aided in the discovery of 11 NEOs by analyzing frames captured with 20 cm class telescopes. This new method will contribute to discovering new NEOs that approach the Earth closely.
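The multi-frame idea, without the FPGA implementation, amounts to shift-and-stack along a candidate motion vector followed by binarization. A minimal sketch under assumptions (integer pixel shifts per frame and a mean-plus-5-sigma detection threshold) follows.

```python
import numpy as np

def shift_and_stack_detect(frames, vx, vy):
    """Sketch: co-add 30-40 frames along an assumed object motion of
    (vx, vy) integer pixels/frame so a faint mover accumulates signal,
    then binarize the stacked frame to flag candidate detections."""
    acc = np.zeros_like(frames[0], dtype=float)
    for k, f in enumerate(frames):
        acc += np.roll(f.astype(float), (-k * vy, -k * vx), axis=(0, 1))
    t = acc.mean() + 5.0 * acc.std()          # assumed detection threshold
    return acc > t                            # candidate NEO pixels
```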
44

Al-Sadoun, Humoud B., and Adnan Amin. "A NEW STRUCTURAL TECHNIQUE FOR RECOGNIZING PRINTED ARABIC TEXT." International Journal of Pattern Recognition and Artificial Intelligence 09, no. 01 (February 1995): 101–25. http://dx.doi.org/10.1142/s0218001495000067.

Abstract:
This paper proposes a new structural technique for Arabic text recognition. The technique can be divided into five major steps: (1) preprocessing and binarization; (2) thinning; (3) binary tree construction; (4) segmentation; and (5) recognition. The advantage of this technique is that its execution does not depend on either the font or size of character. Thus, this same technique might be utilized for the recognition of machine or hand printed text. The relevant algorithm is implemented on a microcomputer. Experiments were conducted to verify the accuracy and the speed of this algorithm using about 20,000 subwords each with an average length of 3 characters. The subwords used were written using different fonts. The recognition rate obtained in the experiments indicated an accuracy of 93.38 % with a speed of 2.7 characters per second.
45

Santosh, K. C., Naved Alam, Partha Pratim Roy, Laurent Wendling, Sameer Antani, and George R. Thoma. "A Simple and Efficient Arrowhead Detection Technique in Biomedical Images." International Journal of Pattern Recognition and Artificial Intelligence 30, no. 05 (April 21, 2016): 1657002. http://dx.doi.org/10.1142/s0218001416570020.

Abstract:
In biomedical documents/publications, medical images tend to be complex by nature and often contain several regions that are annotated using arrows. In this context, automated arrowhead detection is a critical precursor to region-of-interest (ROI) labeling and image content analysis. To detect arrowheads, in this paper, images are first binarized using a fuzzy binarization technique to segment a set of candidates based on the connected component (CC) principle. To select arrow candidates, we use convexity-defect-based filtering, which is followed by template matching via dynamic time warping (DTW). The DTW similarity score confirms the presence of arrows in the image. Our test results on biomedical images from the imageCLEF 2010 collection show the merit of the technique and can be compared with previously reported state-of-the-art results.
46

Makoveichuk, Oleksandr, Igor Ruban, Nataliia Bolohova, Andriy Kovalenko, Vitalii Martovytskyi, and Tetiana Filimonchuk. "Development of a method for improving stability method of applying digital watermarks to digital images." Eastern-European Journal of Enterprise Technologies 3, no. 2 (111) (June 30, 2021): 45–56. http://dx.doi.org/10.15587/1729-4061.2021.235802.

Abstract:
A technique for increasing the stability of methods for applying digital watermarks to digital images, based on pseudo-holographic coding and additional filtering of the digital watermark, has been developed. The technique described in this work, using pseudo-holographic coding of digital watermarks, is effective against all types of attacks that were considered, except for image rotation. The paper presents a statistical indicator for assessing the stability of methods for applying digital watermarks. The indicator makes it possible to comprehensively assess the resistance of the method to a certain number of attacks. An experimental study was carried out according to the proposed methodology. This technique is most effective when part of the image is lost. When pre-filtering a digital watermark, the most effective is the third filtering method, which is averaging over a cell with subsequent binarization. The least efficient is the first method, which is binarization and finding the statistical mode over the cell. For an affine-type attack such as image rotation, this technique is effective only when the rotation is compensated. To estimate the rotation angle, an affine transformation matrix is found, which is obtained from a consistent set of corresponding ORB descriptors. Using this method allows a digital watermark to be accurately extracted for the entire range of angles. A comprehensive assessment of the technique for increasing the stability of the watermarking method based on wavelet transforms has shown that this method is 20% better at counteracting various types of attacks.
47

Das, Rik, Sudeep Thepade, Subhajit Bhattacharya, and Saurav Ghosh. "Retrieval Architecture with Classified Query for Content Based Image Recognition." Applied Computational Intelligence and Soft Computing 2016 (2016): 1–9. http://dx.doi.org/10.1155/2016/1861247.

Abstract:
Consumer behavior has been observed to be largely influenced by image data with the increasing familiarity of smart phones and the World Wide Web. The traditional technique of browsing through product varieties on the Internet with text keywords has been gradually replaced by easily accessible image data. The importance of image data has shown steady growth in application orientation for the business domain with the advent of different image capturing devices and social media. The paper describes a methodology of feature extraction by an image binarization technique for enhancing identification and retrieval of information using content-based image recognition. The proposed algorithm was tested on two public datasets, namely, the Wang dataset and the Oliva and Torralba (OT-Scene) dataset, with 3688 images on the whole. It has outclassed the state-of-the-art techniques in performance measures and has shown statistical significance.
48

Kim, Sang-Yup, and Seong-Whan Lee. "Gray-Scale Nonlinear Shape Normalization Method for Handwritten Oriental Character Recognition." International Journal of Pattern Recognition and Artificial Intelligence 12, no. 01 (February 1998): 81–95. http://dx.doi.org/10.1142/s0218001498000075.

Abstract:
In general, nonlinear shape normalization methods for binary images have been used in order to compensate for the shape distortions of handwritten characters. However, in most document image analysis and recognition systems, a gray-scale image is first captured and digitized using a scanner or a video camera, then a binary image is extracted from the original gray-scale image using a certain extraction technique. This binarization process may remove some useful information of character images such as topological features, and introduce noises to character background. These errors are accumulated in nonlinear shape normalization step and transferred to the following feature extraction or recognition step. They may eventually cause incorrect recognition results. In this paper, we propose nonlinear shape normalization methods for gray-scale handwritten Oriental characters in order to minimize the loss of information caused by binarization and compensate for the shape distortions of characters. Two-dimensional linear interpolation technique has been extended to nonlinear space and the extended interpolation technique has been adopted in the proposed methods to enhance the quality of normalized images. In order to verify the efficiency of the proposed methods, the recognition rate, the processing time and the computational complexity of the proposed algorithms have been considered. The experimental results demonstrate that the proposed methods are efficient not only to compensate for the shape distortions of handwritten Oriental characters but also to maintain the information in gray-scale Oriental characters.
49

Lu, An Qun, Shou Zhi Zhang, and Qian Tian. "Matlab Image Processing Technique and Application in Pore Structure Characterization of Hardened Cement Pastes." Advanced Materials Research 785-786 (September 2013): 1374–79. http://dx.doi.org/10.4028/www.scientific.net/amr.785-786.1374.

Abstract:
Based on Matlab image processing techniques and the backscattered electron image analysis method, a characterization method is set up to make quantitative analyses of the pore structure of hardened cement pastes. Matlab is adopted to acquire images and carry out grayscale and binarization processing; a combination of local threshold segmentation and histogram segmentation is used to obtain pore structure characteristics. The results showed that the evolution law of the pore structure of fly ash cement pastes obtained via the Matlab image analysis method is similar to the conclusions obtained through BET and DVS. When backscattered electron images are selected at different angles in the same sample, the statistical results are more representative.
50

Lee, Seungju, Yoonjae Chung, Chunyoung Kim, and Wontae Kim. "Automatic Thinning Detection through Image Segmentation Using Equivalent Array-Type Lamp-based Lock-in Thermography." Sensors 23, no. 3 (January 22, 2023): 1281. http://dx.doi.org/10.3390/s23031281.

Abstract:
Among non-destructive testing (NDT) techniques, infrared thermography (IRT) is an attractive and highly reliable technology that can measure the thermal response of a wide area in real time. In this study, thinning defects in S275 specimens were detected using lock-in thermography (LIT). After acquiring phase and amplitude images using four-point signal processing, the optimal excitation frequency was calculated. After segmentation was performed on each defect area, binarization was performed using the Otsu algorithm. For automated detection, a boundary tracking algorithm was used. The number of pixels was calculated, and detectability was evaluated using RMSE. Clear identification of defective objects using image segmentation and a detectability evaluation technique based on RMSE were presented.
