Journal articles on the topic “BINARIZATION TECHNIQUE”

Consult the 50 best scholarly journal articles on the topic “BINARIZATION TECHNIQUE”.

You can also download the full text of each scholarly publication in .pdf format and read its abstract online, where these are available in the record's metadata.

Browse journal articles from a wide range of disciplines and compile the relevant bibliographies.

1

Thepade, Sudeep, Rik Das and Saurav Ghosh. “A Novel Feature Extraction Technique Using Binarization of Bit Planes for Content Based Image Classification”. Journal of Engineering 2014 (2014): 1–13. http://dx.doi.org/10.1155/2014/439218.

Abstract:
A number of techniques have been proposed earlier for feature extraction using image binarization. Efficiency of the techniques was dependent on proper threshold selection for the binarization method. In this paper, a new feature extraction technique using image binarization has been proposed. The technique has binarized the significant bit planes of an image by selecting local thresholds. The proposed algorithm has been tested on a public dataset and has been compared with existing widely used techniques using binarization for extraction of features. It has been inferred that the proposed method has outclassed all the existing techniques and has shown consistent classification performance.
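
The core idea here, binarizing the significant bit planes of an image with locally selected thresholds and pooling the result into a feature vector, can be sketched in a few lines. The Python/NumPy fragment below is a minimal illustration of that general scheme, not the authors' exact algorithm: the choice of planes, the block size, the mean-based local threshold and the pooling rule are all assumptions, and the file name is a placeholder.

import cv2
import numpy as np

def bitplane_features(gray, planes=(7, 6, 5), block=16):
    """Binarize significant bit planes with per-block (local) thresholds
    and pool each block into one feature value. Illustrative sketch only."""
    feats = []
    for p in planes:
        plane = ((gray >> p) & 1).astype(np.float32)   # extract bit plane p
        h, w = plane.shape
        for y in range(0, h - block + 1, block):
            for x in range(0, w - block + 1, block):
                win = plane[y:y + block, x:x + block]
                t = win.mean()                   # local threshold for this block
                feats.append((win > t).mean())   # fraction of pixels above it
    return np.asarray(feats, dtype=np.float32)

gray = cv2.imread("sample.jpg", cv2.IMREAD_GRAYSCALE)
print(bitplane_features(gray).shape)

A classifier (for example k-NN or an SVM) would then be trained on these vectors; the paper's contribution concerns which planes and thresholds make that classification consistent.
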
2

Yu, Young-Jung. “Document Image Binarization Technique using MSER”. Journal of the Korea Institute of Information and Communication Engineering 18, no. 8 (31.08.2014): 1941–47. http://dx.doi.org/10.6109/jkiice.2014.18.8.1941.

3

MAKRIDIS, MICHAEL, and N. PAPAMARKOS. “AN ADAPTIVE LAYER-BASED LOCAL BINARIZATION TECHNIQUE FOR DEGRADED DOCUMENTS”. International Journal of Pattern Recognition and Artificial Intelligence 24, no. 02 (March 2010): 245–79. http://dx.doi.org/10.1142/s0218001410007889.

Abstract:
This paper presents a new technique for adaptive binarization of degraded document images. The proposed technique focuses on degraded documents with various background patterns and noise. It involves a preprocessing local background estimation stage, which detects, for each pixel considered to be background, a proper grayscale value. The estimated background is then used to produce a new enhanced image having uniform background layers and increased local contrast; that is, the new image is a combination of background and foreground layers. Foreground and background layers are then separated by using a new transformation which efficiently exploits both grayscale and spatial information. The final binary document is obtained by combining all foreground layers. The proposed binarization technique has been extensively tested on numerous documents and successfully compared with other well-known binarization techniques. Experimental results, which are based on statistical, visual and OCR criteria, verify the effectiveness of the technique.
4

CHI, ZHERU, and QING WANG. “DOCUMENT IMAGE BINARIZATION WITH FEEDBACK FOR IMPROVING CHARACTER SEGMENTATION”. International Journal of Image and Graphics 05, no. 02 (April 2005): 281–309. http://dx.doi.org/10.1142/s0219467805001768.

Abstract:
Binarization of gray-scale document images is one of the most important steps in automatic document image processing. In this paper, we present a two-stage document image binarization approach, which includes a top-down region-based binarization at the first stage and a neural-network-based binarization technique for the problematic blocks at the second stage after a feedback check. Our two-stage approach is particularly effective for binarizing text images of highlighted or marked text. The region-based binarization method is fast and suitable for processing large document images. However, the block effect and regional edge noise are two unavoidable problems resulting in poor character segmentation and recognition. The neural-network-based classifier can achieve good performance in two-class classification problems such as the binarization of gray-level document images. However, it is computationally costly. In our two-stage binarization approach, the feedback criteria are employed to keep the well-binarized blocks from the first-stage binarization and to re-binarize the problematic blocks at the second stage using the neural network binarizer to improve the character segmentation quality. Experimental results on a number of document images show that our two-stage binarization approach performs better than the single-stage binarization techniques tested in terms of character segmentation quality and computational cost.
5

Pagare, Mr Aniket. “Document Image Binarization using Image Segmentation Technique”. International Journal for Research in Applied Science and Engineering Technology 9, no. VII (15.07.2021): 1173–76. http://dx.doi.org/10.22214/ijraset.2021.36597.

Abstract:
Segmentation of text from badly degraded document images is an extremely difficult task because of the high inter/intra variation between the document background and the foreground text of different document images. Image processing and pattern recognition algorithms take more time to execute on a single-core processor. The Graphics Processing Unit (GPU) is more popular nowadays due to its speed, programmability, low cost and the large number of execution cores built into it. The primary objective of this research work is to make binarization faster for the recognition of a huge number of degraded document images on the GPU. In this system, we propose a new image segmentation algorithm in which every pixel in the image has its own threshold. We perform parallel work on a window of m*n size and separate the object pixels of the text stroke within that window. The document text is further segmented by a local threshold that is estimated based on the intensities of detected text stroke edge pixels within a local window.
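
The per-pixel thresholding described above is, in outline, what mean-based adaptive thresholding does. A minimal CPU sketch with OpenCV follows; the paper's GPU parallelization and its threshold estimated from detected text-stroke edge pixels are not reproduced here, and the window size, offset and file name are illustrative values.

import cv2

gray = cv2.imread("degraded_page.png", cv2.IMREAD_GRAYSCALE)

# Every pixel gets its own threshold: the mean of its local m*n window
# minus a small constant.
binary = cv2.adaptiveThreshold(
    gray, 255,
    cv2.ADAPTIVE_THRESH_MEAN_C,   # local mean as the per-pixel threshold
    cv2.THRESH_BINARY,
    blockSize=25,                 # odd window size (the m*n neighbourhood)
    C=10)                         # offset subtracted from the local mean
cv2.imwrite("binary.png", binary)
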
6

Abbood, Alaa Ahmed, Mohammed Sabbih Hamoud Al-Tamimi, Sabine U. Peters and Ghazali Sulong. “New Combined Technique for Fingerprint Image Enhancement”. Modern Applied Science 11, no. 1 (19.12.2016): 222. http://dx.doi.org/10.5539/mas.v11n1p222.

Abstract:
This paper presents a combination of enhancement techniques for fingerprint images affected by different types of noise. These techniques were applied to improve image quality and achieve acceptable image contrast. The proposed method included five different enhancement techniques: Normalization, Histogram Equalization, Binarization, Skeletonization and Fusion. The Normalization process standardized the pixel intensity, which facilitated the processing of subsequent image enhancement stages. Subsequently, the Histogram Equalization technique increased the contrast of the images. Furthermore, the Binarization and Skeletonization techniques were implemented to differentiate between the ridge and valley structures and to obtain one-pixel-wide lines. Finally, the Fusion technique was used to merge the results of the Histogram Equalization process with the Skeletonization process to obtain the new high-contrast images. The proposed method was tested on images of different quality from the National Institute of Standards and Technology (NIST) Special Database 14. The experimental results are very encouraging, and the current enhancement method appears to be effective in improving images of different quality.
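
The five enhancement stages map naturally onto OpenCV and scikit-image calls. The sketch below is one plausible reading of the pipeline, not the paper's implementation; in particular, the fusion stage is shown as a simple weighted blend, and all parameters and file names are illustrative.

import cv2
import numpy as np
from skimage.morphology import skeletonize

gray = cv2.imread("fingerprint.png", cv2.IMREAD_GRAYSCALE)

# 1. Normalization: standardize intensities, then rescale to the 8-bit range.
norm = (gray - gray.mean()) / (gray.std() + 1e-8)
norm = cv2.normalize(norm, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# 2. Histogram equalization: increase global contrast.
equ = cv2.equalizeHist(norm)

# 3. Binarization: separate ridge (dark) from valley structures.
_, binary = cv2.threshold(equ, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# 4. Skeletonization: thin the ridges to one-pixel-wide lines.
skel = skeletonize(binary > 0).astype(np.uint8) * 255

# 5. Fusion: merge the equalized image with the skeleton.
fused = cv2.addWeighted(equ, 0.5, skel, 0.5, 0)
cv2.imwrite("enhanced.png", fused)
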
7

Adhari, Firman Maulana, Taufik Fuadi Abidin and Ridha Ferdhiana. “License Plate Character Recognition using Convolutional Neural Network”. Journal of Information Systems Engineering and Business Intelligence 8, no. 1 (26.04.2022): 51–60. http://dx.doi.org/10.20473/jisebi.8.1.51-60.

Abstract:
Background: In the last decade, the number of registered vehicles has grown exponentially. With more vehicles on the road, traffic jams, accidents, and violations also increase. A license plate plays a key role in solving such problems because it stores a vehicle's historical information. Therefore, automated license-plate character recognition is needed. Objective: This study proposes a recognition system that uses convolutional neural network (CNN) architectures to recognize characters from a license plate's images. We call it a modified LeNet-5 architecture. Methods: We used four different CNN architectures to recognize license plate characters: AlexNet, LeNet-5, modified LeNet-5, and ResNet-50. We evaluated the performance based on accuracy and computation time. We compared the deep learning methods with Freeman chain code (FCC) extraction with a support vector machine (SVM). We also evaluated the Otsu and threshold binarization performances when applied in the FCC extraction method. Results: The ResNet-50 and modified LeNet-5 produce the best accuracy during training at 0.97. The precision and recall scores of ResNet-50 are both 0.97, while the modified LeNet-5's values are 0.98 and 0.96, respectively. The modified LeNet-5 shows a slightly higher precision score but a lower recall score. The modified LeNet-5 shows a slightly lower accuracy during testing than ResNet-50. Meanwhile, the Otsu binarization's FCC extraction is better than the threshold binarization. Overall, the FCC extraction technique performs less effectively than CNN. The modified LeNet-5 computes the fastest at 7 mins and 57 secs, while ResNet-50 needs 42 mins and 11 secs. Conclusion: We discovered that CNN is better than the FCC extraction method with SVM. Both ResNet-50 and the modified LeNet-5 perform best during training, with an F-measure of 0.97. However, ResNet-50 outperforms the modified LeNet-5 during testing, with F-measures at 0.97 and 1.00, respectively. In addition, FCC extraction using Otsu binarization is better than threshold binarization: Otsu binarization reached 0.91, higher than the static threshold binarization at 127. Otsu binarization also produces a dynamic threshold value depending on the images' light intensity. Keywords: Convolutional Neural Network, Freeman Chain Code, License Plate Character Recognition, Support Vector Machine
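
The abstract's comparison of Otsu against the static threshold of 127 is easy to reproduce in outline. A minimal OpenCV sketch (the file name is a placeholder):

import cv2

char_img = cv2.imread("plate_char.png", cv2.IMREAD_GRAYSCALE)

# Static threshold: one fixed cut-off at 127, regardless of lighting.
_, fixed = cv2.threshold(char_img, 127, 255, cv2.THRESH_BINARY)

# Otsu: the threshold is computed from the image histogram, so it adapts
# to the light intensity of each plate image.
otsu_t, otsu = cv2.threshold(char_img, 0, 255,
                             cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print("Otsu selected threshold:", otsu_t)
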
8

García, José, Paola Moraga, Matias Valenzuela, Broderick Crawford, Ricardo Soto, Hernan Pinto, Alvaro Peña, Francisco Altimiras and Gino Astorga. “A Db-Scan Binarization Algorithm Applied to Matrix Covering Problems”. Computational Intelligence and Neuroscience 2019 (16.09.2019): 1–16. http://dx.doi.org/10.1155/2019/3238574.

Abstract:
The integration of machine learning techniques and metaheuristic algorithms is an area of interest due to the great potential for applications. In particular, using these hybrid techniques to solve combinatorial optimization problems (COPs) to improve the quality of the solutions and convergence times is of great interest in operations research. In this article, the db-scan unsupervised learning technique is explored with the goal of using it in the binarization process of continuous swarm intelligence metaheuristic algorithms. The contribution of the db-scan operator to the binarization process is analyzed systematically through the design of random operators. Additionally, the behavior of this algorithm is studied and compared with other binarization methods based on clusters and transfer functions (TFs). To verify the results, the well-known set covering problem is addressed, and a real-world problem is solved. The results show that the integration of the db-scan technique produces consistently better results in terms of computation time and quality of the solutions when compared with TFs and random operators. Furthermore, when it is compared with other clustering techniques, we see that it achieves significantly improved convergence times.
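
The flavor of cluster-based binarization can be conveyed with a toy example: cluster the continuous values produced by the swarm algorithm, then map clusters to bit-transition probabilities. The Python sketch below illustrates that scheme only; the eps value, the probability ramp and the handling of db-scan noise points are assumptions, not the authors' operator.

import numpy as np
from sklearn.cluster import DBSCAN

def dbscan_binarize(velocities, x_bin, rng):
    """Cluster |velocity| values with db-scan; clusters of larger magnitude
    get a higher probability of flipping the corresponding bit."""
    v = np.abs(velocities).reshape(-1, 1)
    labels = DBSCAN(eps=0.1, min_samples=2).fit_predict(v)
    clusters = [c for c in np.unique(labels) if c != -1]
    order = sorted(clusters, key=lambda c: v[labels == c].mean())
    prob = {c: 0.1 + 0.8 * i / max(len(order) - 1, 1)
            for i, c in enumerate(order)}
    p = np.array([prob.get(c, 0.9) for c in labels])   # noise points -> 0.9
    flip = rng.random(len(p)) < p
    return np.where(flip, 1 - x_bin, x_bin)

rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=20)        # current binary solution
vel = rng.normal(0, 1, size=20)        # continuous metaheuristic output
print(dbscan_binarize(vel, x, rng))
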
9

Rozen, Tal, Moshe Kimhi, Brian Chmiel, Avi Mendelson and Chaim Baskin. “Bimodal-Distributed Binarized Neural Networks”. Mathematics 10, no. 21 (3.11.2022): 4107. http://dx.doi.org/10.3390/math10214107.

Abstract:
Binary neural networks (BNNs) are an extremely promising method for significantly reducing deep neural networks' complexity and power consumption. Binarization techniques, however, suffer from non-negligible performance degradation compared to their full-precision counterparts. Prior work mainly focused on strategies for sign function approximation during the forward and backward phases to reduce the quantization error during the binarization process. In this work, we propose a bimodal-distributed binarization method (BD-BNN). The newly proposed technique aims to impose a bimodal distribution of the network weights by kurtosis regularization. The proposed method consists of a teacher-trainer training scheme termed weight distribution mimicking (WDM), which efficiently imitates the full-precision network weight distribution in its binary counterpart. Preserving this distribution during binarization-aware training creates robust and informative binary feature maps and thus can significantly reduce the generalization error of the BNN. Extensive evaluations on CIFAR-10 and ImageNet demonstrate that our newly proposed BD-BNN outperforms current state-of-the-art schemes.
10

Joseph, Manju, and Jijina K. P. “Simple and Efficient Document Image Binarization Technique For Degraded Document Images”. International Journal of Scientific Research 3, no. 5 (1.06.2012): 217–20. http://dx.doi.org/10.15373/22778179/may2014/65.

11

Chamchong, Rapeeporn, and Chun Che Fung. “A Framework for the Selection of Binarization Techniques on Palm Leaf Manuscripts Using Support Vector Machine”. Advances in Decision Sciences 2015 (26.01.2015): 1–7. http://dx.doi.org/10.1155/2015/925935.

Abstract:
Challenges for text processing in ancient document images are mainly due to the high degree of variation in foreground and background. Image binarization is an image segmentation technique used to separate the image into text and background components. Although several techniques for binarizing text documents have been proposed, the performance of these techniques varies and depends on the image characteristics. Therefore, selecting binarization techniques can be key to achieving improved results. This paper proposes a framework for selecting binarization techniques for palm leaf manuscripts using Support Vector Machines (SVMs). The overall process is divided into three steps: (i) feature extraction: feature patterns are extracted from grayscale images based on global intensity, local contrast, and intensity; (ii) treatment of imbalanced data: the imbalanced dataset is balanced by using the Synthetic Minority Oversampling Technique so as to improve the performance of prediction; and (iii) selection: an SVM is applied in order to select the appropriate binarization technique. The proposed framework has been evaluated with palm leaf manuscript images and the benchmarking dataset from the DIBCO series, comparing the performance of prediction between imbalanced and balanced datasets. Experimental results showed that the proposed framework can be used as an integral part of an automatic selection process.
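
The framework's three steps map directly onto scikit-learn and imbalanced-learn primitives. In the minimal sketch below, the feature matrix, the labels and the class proportions are random placeholders standing in for the extracted document features and the per-image "best binarization technique" labels.

import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((200, 6))     # (i) features: global intensity, local contrast, ...
y = rng.choice([0, 1, 2], size=200, p=[0.7, 0.2, 0.1])   # imbalanced labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# (ii) balance the training data with SMOTE before fitting the selector.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

# (iii) an SVM predicts which binarization technique suits each image.
clf = SVC(kernel="rbf").fit(X_bal, y_bal)
print("selector accuracy:", clf.score(X_te, y_te))
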
12

Lokhande, Supriya, and N. A. Dawande. “Document Image Binarization Technique for Degraded Document Images”. International Journal of Computer Applications 122, no. 22 (18.07.2015): 22–29. http://dx.doi.org/10.5120/21858-5183.

13

Tomar, Kirti. “A Survey on Binarization Technique for Degraded Documents”. International Journal for Research in Applied Science and Engineering Technology 7, no. 5 (31.05.2019): 1115–22. http://dx.doi.org/10.22214/ijraset.2019.5185.

14

Singh, Brij Mohan, and Mridula. “Efficient binarization technique for severely degraded document images”. CSI Transactions on ICT 2, no. 3 (10.09.2014): 153–61. http://dx.doi.org/10.1007/s40012-014-0045-5.

15

Jacobs, B. A., and E. Momoniat. “A locally adaptive, diffusion based text binarization technique”. Applied Mathematics and Computation 269 (October 2015): 464–72. http://dx.doi.org/10.1016/j.amc.2015.07.091.

16

García, José, Paola Moraga, Matias Valenzuela and Hernan Pinto. “A db-Scan Hybrid Algorithm: An Application to the Multidimensional Knapsack Problem”. Mathematics 8, no. 4 (2.04.2020): 507. http://dx.doi.org/10.3390/math8040507.

Abstract:
This article proposes a hybrid algorithm that makes use of the db-scan unsupervised learning technique to obtain binary versions of continuous swarm intelligence algorithms. These binary versions are then applied to large instances of the well-known multidimensional knapsack problem. The contribution of the db-scan operator to the binarization process is systematically studied. For this, two random operators are built that serve as a baseline for comparison. Once the contribution is established, the db-scan operator is compared with two other binarization methods that have satisfactorily solved the multidimensional knapsack problem. The first method uses the unsupervised learning technique k-means as a binarization method. The second makes use of transfer functions as a mechanism to generate binary versions. The results show that the hybrid algorithm using db-scan produces more consistent results compared to transfer function (TF) and random operators.
17

Becerra-Rozas, Marcelo, José Lemus-Romani, Felipe Cisternas-Caneo, Broderick Crawford, Ricardo Soto and José García. “Swarm-Inspired Computing to Solve Binary Optimization Problems: A Backward Q-Learning Binarization Scheme Selector”. Mathematics 10, no. 24 (15.12.2022): 4776. http://dx.doi.org/10.3390/math10244776.

Abstract:
In recent years, continuous metaheuristics have been a trend in solving binary-based combinatorial problems due to their good results. However, to use this type of metaheuristics, it is necessary to adapt them to work in binary environments, and in general, this adaptation is not trivial. The method proposed in this work evaluates the use of reinforcement learning techniques in the binarization process. Specifically, the backward Q-learning technique is explored to choose binarization schemes intelligently. This allows any continuous metaheuristic to be adapted to binary environments. The illustrated results are competitive, thus providing a novel option to address different complex problems in the industry.
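
The selector idea can be conveyed with a tiny stand-in: treat each binarization scheme as an action and learn a preference from observed rewards. The sketch below is a stateless, bandit-style simplification in Python, not the paper's backward Q-learning (which maintains a full state-action table and a backward update); the scheme names, the reward stub and all constants are illustrative.

import numpy as np

rng = np.random.default_rng(1)
schemes = ["S1", "S2", "V1", "V4", "elitist"]   # candidate binarization schemes
Q = np.zeros(len(schemes))                      # learned value of each scheme
eps, alpha = 0.2, 0.1                           # exploration rate, step size

def run_iteration_with(scheme):
    """Placeholder: run one metaheuristic iteration under this scheme and
    return a reward, e.g. the fitness improvement it produced."""
    return rng.normal(loc=schemes.index(scheme) * 0.1, scale=1.0)

for step in range(500):
    a = rng.integers(len(schemes)) if rng.random() < eps else int(np.argmax(Q))
    reward = run_iteration_with(schemes[a])
    Q[a] += alpha * (reward - Q[a])             # incremental value update

print("learned preference:", schemes[int(np.argmax(Q))])
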
18

Kim, Min-Ki. “Adaptive Thresholding Technique for Binarization of License Plate Images”. Journal of the Optical Society of Korea 14, no. 4 (25.12.2010): 368–75. http://dx.doi.org/10.3807/josk.2010.14.4.368.

19

Lu, Sha. “A Review Paper on Character Recognition using Binarization technique”. International Journal of Engineering Trends and Technology 22, no. 5 (25.04.2015): 214–17. http://dx.doi.org/10.14445/22315381/ijett-v22p245.

20

Bolan Su, Shijian Lu and Chew Lim Tan. “Robust Document Image Binarization Technique for Degraded Document Images”. IEEE Transactions on Image Processing 22, no. 4 (April 2013): 1408–17. http://dx.doi.org/10.1109/tip.2012.2231089.

21

Sari, Toufik, Abderrahmane Kefali and Halima Bahi. “Text Extraction from Historical Document Images by the Combination of Several Thresholding Techniques”. Advances in Multimedia 2014 (2014): 1–10. http://dx.doi.org/10.1155/2014/934656.

Abstract:
This paper presents a new technique for the binarization of historical document images characterized by deteriorations and damage that make their automatic processing difficult at several levels. The proposed method is based on hybrid thresholding, combining the advantages of global and local methods, and on the mixture of several binarization techniques. Two stages are included. In the first stage, global thresholding is applied to the entire image and two different thresholds are determined, from which most of the image pixels are classified into foreground or background. In the second stage, the remaining pixels are assigned to the foreground or background class based on local analysis. In this stage, several local thresholding methods are combined and the final binary value of each remaining pixel is chosen as the most probable one. The proposed technique has been tested on a large collection of standard and synthetic documents and compared with well-known methods using standard measures, and was shown to be more powerful.
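
The two-stage scheme can be sketched compactly: two global thresholds classify most pixels outright, and only the uncertain band in between is resolved by local analysis. In the sketch below a single local-mean test stands in for the paper's combination of several local methods, and the margins around Otsu's value are illustrative.

import cv2
import numpy as np

gray = cv2.imread("historical_doc.png", cv2.IMREAD_GRAYSCALE)
grayf = gray.astype(np.float32)

# Stage 1: two global thresholds classify most pixels.
otsu_t, _ = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
t_low, t_high = otsu_t - 30, otsu_t + 30        # illustrative margins
binary = np.full(gray.shape, 255, np.uint8)     # default: background
binary[grayf <= t_low] = 0                      # clearly foreground (dark ink)
uncertain = (grayf > t_low) & (grayf < t_high)  # left for stage 2

# Stage 2: assign the remaining pixels by local analysis (local mean here).
local_mean = cv2.blur(grayf, (25, 25))
binary[uncertain & (grayf < local_mean)] = 0

cv2.imwrite("binary.png", binary)
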
22

Ovchinnikov, Andrey S., Vitaly V. Krasnov, Pavel A. Cheremkhin, Vladislav G. Rodin, Ekaterina A. Savchenkova, Rostislav S. Starikov and Nikolay N. Evtikhiev. “What Binarization Method Is the Best for Amplitude Inline Fresnel Holograms Synthesized for Divergent Beams Using the Direct Search with Random Trajectory Technique?” Journal of Imaging 9, no. 2 (27.01.2023): 28. http://dx.doi.org/10.3390/jimaging9020028.

Abstract:
Fast reconstruction of holographic and diffractive optical elements (DOEs) can be implemented by binary digital micromirror devices (DMDs). Since the micromirrors of the DMD have two positions, the synthesized DOEs must be binary. This work studies the possibility of improving the method of synthesis of amplitude binary inline Fresnel holograms in divergent beams. The method consists of the modified Gerchberg–Saxton algorithm, Otsu binarization and the direct search with random trajectory technique. To achieve better reconstruction quality, various binarization methods were compared. We performed numerical and optical experiments using the DMD. Holograms of a halftone image with sizes up to 1024 × 1024 pixels were synthesized. It was determined that local and several global threshold methods provide the best quality. Compared to the Otsu binarization used in the original method of synthesis, the reconstruction quality (MSE and SSIM values) is improved by 46% and the diffraction efficiency is increased by 27%.
23

Natarajan, Jayanthi, and Indu Sreedevi. “Enhancement of ancient manuscript images by log based binarization technique”. AEU - International Journal of Electronics and Communications 75 (May 2017): 15–22. http://dx.doi.org/10.1016/j.aeue.2017.03.002.

24

Becerra-Rozas, Marcelo, José Lemus-Romani, Felipe Cisternas-Caneo, Broderick Crawford, Ricardo Soto, Gino Astorga, Carlos Castro and José García. “Continuous Metaheuristics for Binary Optimization Problems: An Updated Systematic Literature Review”. Mathematics 11, no. 1 (27.12.2022): 129. http://dx.doi.org/10.3390/math11010129.

Abstract:
For years, extensive research has been conducted on the binarization of continuous metaheuristics for solving binary-domain combinatorial problems. This paper is a continuation of a previous review and seeks to draw a comprehensive picture of the various ways to binarize this type of metaheuristic; the study uses a standard systematic review consisting of the analysis of 512 publications from 2017 to January 2022 (5 years). The work provides a theoretical foundation for novice researchers tackling combinatorial optimization using metaheuristic algorithms and for expert researchers analyzing the binarization mechanism's impact on the metaheuristic algorithms' performance. Structuring this information allows for improving the results of metaheuristics and broadening the spectrum of binary problems to be solved. We can conclude from this study that there is no single general technique capable of efficient binarization; instead, there are multiple forms with different performances.
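
Of the binarization mechanisms this review catalogues, two-step transfer functions are among the most common. A minimal sketch of the classic S-shaped rule follows; the sigmoid, the "standard" update rule and the constants are just one of the many variants the survey covers.

import numpy as np

def s_shape_binarize(velocity, rng):
    """Two-step binarization: (1) map each continuous value to [0, 1] with
    an S-shaped transfer function, (2) set the bit with that probability."""
    prob = 1.0 / (1.0 + np.exp(-velocity))      # S1 transfer function
    return (rng.random(velocity.shape) < prob).astype(int)

rng = np.random.default_rng(42)
v = rng.normal(0.0, 2.0, size=10)               # continuous metaheuristic output
print(s_shape_binarize(v, rng))
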
25

Amitha, Dr T., and Dr B. Raghu. “Robust Phase-Based Features Extracted From Image By A Binarization Technique”. IOSR Journal of Computer Engineering 18, no. 04 (April 2016): 10–14. http://dx.doi.org/10.9790/0661-1804041014.

26

Azani Mustafa, Wan, Haniza Yazid, Ahmed Alkhayyat, Mohd Aminudin Jamlos and Hasliza A. Rahim. “Effect of Direct Statistical Contrast Enhancement Technique on Document Image Binarization”. Computers, Materials & Continua 70, no. 2 (2022): 3549–64. http://dx.doi.org/10.32604/cmc.2022.019801.

27

Choudhary, Amit, Rahul Rishi and Savita Ahlawat. “Off-line Handwritten Character Recognition Using Features Extracted from Binarization Technique”. AASRI Procedia 4 (2013): 306–12. http://dx.doi.org/10.1016/j.aasri.2013.10.045.

28

Salama, Mostafa A., and Aboul Ella Hassanien. “Binarization and Validation in Formal Concept Analysis”. International Journal of Systems Biology and Biomedical Technologies 1, no. 4 (October 2012): 16–27. http://dx.doi.org/10.4018/ijsbbt.2012100102.

Abstract:
Representation and visualization of continuous data using Formal Concept Analysis (FCA) has become an important requirement in real-life fields. To apply the FCA model to numerical data, a scaling or discretization/binarization procedure should be applied as a preprocessing stage. The scaling procedure increases the computational complexity of FCA, while the binarization process leads to a distortion in the internal structure of the input data set. The proposed approach uses a binarization procedure prior to applying the FCA model, and then applies a validation process to the generated lattice to measure or ensure its degree of accuracy. The introduced approach is based on the evaluation of each attribute according to the objects of its extent set. To prove the validity of the introduced approach, the technique is applied on two data sets in the medical field, the Indian Diabetes and the Breast Cancer data sets. Both data sets show the generation of a valid lattice.
29

Saddami, Khairun, Khairul Munadi, Yuwaldi Away and Fitri Arnia. “Improvement of binarization performance using local otsu thresholding”. International Journal of Electrical and Computer Engineering (IJECE) 9, no. 1 (1.02.2019): 264. http://dx.doi.org/10.11591/ijece.v9i1.pp264-272.

Abstract:
Ancient documents usually contain multiple noises such as uneven background, show-through, water spilling, spots, and blurred text. The noise affects the binarization process. Binarization is an extremely important process in image processing, especially for character recognition. This paper presents an improvement to the Nina binarization technique. Improvements were achieved by reducing processing steps and replacing median filtering with Wiener filtering. First, the document background is approximated using a Wiener filter, and then image subtraction is applied. Furthermore, the manuscript contrast is adjusted by mapping image intensity values using an intensity transformation method. Next, local Otsu thresholding is applied. For removing spotting noise, we apply labeled connected components. The proposed method was tested on H-DIBCO 2014 and degraded Jawi handwritten ancient documents. It performed better regarding recall and precision values, as compared to Otsu, Niblack, Sauvola, Lu, Su, and Nina, especially on documents with show-through, water-spilling and combined noises.
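
A compact sketch of the pipeline just described, using SciPy's Wiener filter for background approximation and a block-wise (local) Otsu threshold. The window and block sizes are illustrative, and the connected-component despeckling step is omitted.

import cv2
import numpy as np
from scipy.signal import wiener

gray = cv2.imread("ancient_doc.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)

# 1. Approximate the background with a large Wiener filter, then subtract:
#    dark text becomes high positive values, uneven background is flattened.
background = wiener(gray, mysize=(31, 31))
diff = np.clip(background - gray, 0, 255)

# 2. Contrast adjustment: stretch intensities to the full 8-bit range.
adj = cv2.normalize(diff, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# 3. Local Otsu: an independent Otsu threshold per block.
binary = np.zeros_like(adj)
b = 64
for y in range(0, adj.shape[0], b):
    for x in range(0, adj.shape[1], b):
        blk = np.ascontiguousarray(adj[y:y + b, x:x + b])
        _, bw = cv2.threshold(blk, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        binary[y:y + b, x:x + b] = bw
cv2.imwrite("binary.png", binary)
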
30

Fang, Yating, and Baojiang Zhong. “Cell segmentation in fluorescence microscopy images based on multi-scale histogram thresholding”. Mathematical Biosciences and Engineering 20, no. 9 (2023): 16259–78. http://dx.doi.org/10.3934/mbe.2023726.

Abstract:
Cell segmentation from fluorescent microscopy images plays an important role in various applications, such as disease mechanism assessment and drug discovery research. Existing segmentation methods often adopt image binarization as the first step, through which the foreground cell is separated from the background so that the subsequent processing steps can be greatly facilitated. To pursue this goal, a histogram thresholding can be performed on the input image, which first applies a Gaussian smoothing to suppress the jaggedness of the histogram curve and then exploits Rosin's method to determine a threshold for conducting image binarization. However, an inappropriate amount of smoothing could lead to the inaccurate segmentation of cells. To address this crucial problem, a multi-scale histogram thresholding (MHT) technique is proposed in the present paper, where the scale refers to the standard deviation of the Gaussian that determines the amount of smoothing. To be specific, the image histogram is smoothed at three chosen scales first, and then the smoothed histogram curves are fused to conduct image binarization via thresholding. To further improve the segmentation accuracy and overcome the difficulty of extracting overlapping cells, our proposed MHT technique is incorporated into a multi-scale cell segmentation framework, in which a region-based ellipse fitting technique is adopted to identify overlapping cells. Extensive experimental results obtained on benchmark datasets show that the new method can deliver superior performance compared to the current state of the art.
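
The multi-scale histogram thresholding step can be sketched briefly: smooth the histogram at several Gaussian scales, fuse the smoothed curves, and apply Rosin's corner rule to the fused histogram. The fragment below uses a simple mean fusion and a vertical-distance form of Rosin's rule, both of which are assumptions; the region-based ellipse fitting for overlapping cells is not shown.

import numpy as np
from scipy.ndimage import gaussian_filter1d

def mht_threshold(gray, scales=(1.0, 2.0, 4.0)):
    """Smooth the histogram at several scales, average the curves, then
    pick the bin farthest below the chord from the peak to the tail end."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    fused = np.mean([gaussian_filter1d(hist, s) for s in scales], axis=0)

    p = int(np.argmax(fused))                        # histogram peak
    q = max(i for i in range(256) if fused[i] > 0)   # end of the tail
    xs = np.arange(p, q + 1)
    chord = fused[p] + (fused[q] - fused[p]) * (xs - p) / max(q - p, 1)
    return xs[np.argmax(chord - fused[p:q + 1])]     # Rosin-style threshold

gray = np.random.default_rng(0).integers(0, 256, (128, 128), dtype=np.uint8)
print("threshold:", mht_threshold(gray))
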
31

García, José, Gino Astorga and Víctor Yepes. “An Analysis of a KNN Perturbation Operator: An Application to the Binarization of Continuous Metaheuristics”. Mathematics 9, no. 3 (24.01.2021): 225. http://dx.doi.org/10.3390/math9030225.

Abstract:
Optimization methods and, in particular, metaheuristics must be constantly improved to reduce execution times, improve the results, and thus be able to address broader instances. In particular, addressing combinatorial optimization problems is critical in the areas of operational research and engineering. In this work, a perturbation operator is proposed which uses the k-nearest neighbors technique, and this is studied with the aim of improving the diversification and intensification properties of metaheuristic algorithms in their binary version. Random operators are designed to study the contribution of the perturbation operator. To verify the proposal, large instances of the well-known set covering problem are studied. Box plots, convergence charts, and the Wilcoxon statistical test are used to determine the operator's contribution. Furthermore, a comparison is made using metaheuristic techniques that use general binarization mechanisms such as transfer functions or db-scan as binarization methods. The results obtained indicate that the KNN perturbation operator significantly improves the results.
32

Elvas, Luis B., Ana G. Almeida, Luís Rosario, Miguel Sales Dias and João C. Ferreira. “Calcium Identification and Scoring Based on Echocardiography. An Exploratory Study on Aortic Valve Stenosis”. Journal of Personalized Medicine 11, no. 7 (24.06.2021): 598. http://dx.doi.org/10.3390/jpm11070598.

Abstract:
Currently, an echocardiography expert is needed to identify calcium in the aortic valve, and a cardiac CT-scan image is needed for calcium quantification. When performing a CT-scan, the patient is subject to radiation, and therefore the number of CT-scans that can be performed should be limited, restricting the patient's monitoring. Computer Vision (CV) has opened new opportunities for improved efficiency when extracting knowledge from an image. Applying CV techniques to echocardiography imaging may reduce the medical workload for identifying the calcium and quantifying it, helping doctors to maintain better tracking of their patients. In our approach, a simple technique to identify and extract the calcium pixel count from echocardiography imaging was developed using CV. Based on anonymized real patient echocardiographic images, this approach enables semi-automatic calcium identification. As the brightness of echocardiography images (with the highest intensity corresponding to calcium) varies depending on the acquisition settings, adaptive image binarization has been performed. Given that blood maintains the same intensity on echocardiographic images (it is always the darkest region), blood areas in the image were used to create an adaptive threshold for binarization. After binarization, the region of interest (ROI) with calcium was interactively selected by an echocardiography expert and extracted, allowing us to compute a calcium pixel count corresponding to the spatial amount of calcium. The results obtained from these experiments are encouraging. With this technique, a calcium pixel count was obtained from echocardiographic images collected for the same patient with different acquisition settings and different brightness; the pixel values show an absolute margin of error of 3 (on a scale from 0 to 255), and the method achieves a Pearson correlation of 0.92, indicating a strong correlation with the human expert's assessment of the calcium area for the same images.
33

Mollah, Ayatullah Faruk, Subhadip Basu, Mita Nasipuri and Dipak Kumar Basu. “Handheld Mobile Device Based Text Region Extraction and Binarization of Image Embedded Text Documents”. Journal of Intelligent Systems 22, no. 1 (1.03.2013): 25–47. http://dx.doi.org/10.1515/jisys-2012-0019.

Abstract:
Effective text region extraction and binarization of image-embedded text documents on mobile devices having limited computational resources is an open research problem. In this paper, we present one such technique for preprocessing images captured with the built-in cameras of handheld devices, with the aim of developing an efficient Business Card Reader. At first, the card image is processed to isolate foreground components. These foreground components are classified as either text or non-text using different feature descriptors of texts and images. The non-text components are removed and the textual ones are binarized with a fast adaptive algorithm. Specifically, we propose new techniques (targeted to mobile devices) for (i) foreground component isolation, (ii) text extraction and (iii) binarization of text regions from camera-captured business card images. Experiments with business card images of various resolutions show that the present technique yields better accuracy and involves low computational overhead in comparison with the state of the art. We achieve optimum text/non-text separation performance with images of resolution 800×600 pixels, with an average recall rate of 93.90% and a precision rate of 96.84%. It involves a peak memory consumption of 0.68 MB and a processing time of 0.102 seconds on a moderately powerful notebook, and 4 seconds of processing time on a PDA.
34

Kim, Jung Hun, and Gibak Kim. “A Binarization Technique using Histogram Matching for License Plate with a Shadow”. Journal of Broadcast Engineering 19, no. 1 (30.01.2014): 56–63. http://dx.doi.org/10.5909/jbe.2014.19.1.56.

35

Chaudhary, Prashali. “AN EFFECTIVE AND ROBUST TECHNIQUE FOR THE BINARIZATION OF DEGRADED DOCUMENT IMAGES”. International Journal of Research in Engineering and Technology 03, no. 06 (25.06.2014): 140–45. http://dx.doi.org/10.15623/ijret.2014.0306025.

36

K. Bawa, Rajesh, and Ganesh K. Sethi. “A Binarization Technique for Extraction of Devanagari Text from Camera Based Images”. Signal & Image Processing: An International Journal 5, no. 2 (30.04.2014): 29–37. http://dx.doi.org/10.5121/sipij.2014.5203.

37

Wang, Chang, and Jing Jing Gao. “The Detection of Surface Quality On-Line Based on Machine Vision in the Production of Bearings”. Applied Mechanics and Materials 319 (May 2013): 523–27. http://dx.doi.org/10.4028/www.scientific.net/amm.319.523.

Abstract:
Combining digital image processing and pattern recognition techniques can be widely used in industry for product classification and recognition. To meet the need for detecting and identifying bearing assembly defects on a production line, an automatic detection system based on machine vision is presented: a plane-array camera acquires images of different surfaces, the images are binarized as preprocessing for subsequent pattern recognition, features are extracted and eigenvalues are compared, and surface defects of products on the line are detected and identified.
38

PAPAMARKOS, NIKOS. “DOCUMENT GRAY-SCALE REDUCTION USING A NEURO-FUZZY TECHNIQUE”. International Journal of Pattern Recognition and Artificial Intelligence 17, no. 04 (June 2003): 505–27. http://dx.doi.org/10.1142/s0218001403002502.

Abstract:
This paper proposes a new neuro-fuzzy technique suitable for binarization and gray-scale reduction of digital documents. The proposed approach uses both the image gray-scales and additional local spatial features. Both gray-scale and local feature values feed a Kohonen Self-Organized Feature Map (SOFM) neural network classifier. After training, the neurons of the output competition layer of the SOFM define two bilevel classes. Using the content of these classes, fuzzy membership functions are obtained that are next used by the fuzzy C-means (FCM) algorithm in order to reduce the character-blurring problem. The method is suitable for improving blurred and badly illuminated documents and can be easily modified to accommodate any type of spatial characteristics.
39

Kowalczyk, Marek, Manuel Martínez-Corral, Tomasz Cichocki and Pedro Andrés. “One-dimensional error-diffusion technique adapted for binarization of rotationally symmetric pupil filters”. Optics Communications 114, no. 3-4 (February 1995): 211–18. http://dx.doi.org/10.1016/0030-4018(94)00607-v.

40

Thepade, Sudeep, Rik Das and Saurav Ghosh. “Feature Extraction with Ordered Mean Values for Content Based Image Classification”. Advances in Computer Engineering 2014 (17.12.2014): 1–15. http://dx.doi.org/10.1155/2014/454876.

Abstract:
Categorization of images into meaningful classes by efficient extraction of feature vectors from image datasets has been dependent on feature selection techniques. Traditionally, feature vector extraction has been carried out using different methods of image binarization with a global, local, or mean threshold. This paper proposes a novel technique for feature extraction based on ordered mean values. The proposed technique was combined with feature extraction using the discrete sine transform (DST) for better classification results using multi-technique fusion. The novel methodology was compared to the traditional techniques used for feature extraction for content-based image classification. Three benchmark datasets, namely, the Wang dataset, the Oliva and Torralba (OT-Scene) dataset, and the Caltech dataset, were used for evaluation purposes. The performance evaluation has evidently revealed the superiority of the proposed fusion technique with ordered mean values and the discrete sine transform over the popular approaches of single-view feature extraction methodologies for classification.
41

Vizilter, Y. V., A. Y. Rubis, S. Y. Zheltov and O. V. Vygolov. “CHANGE DETECTION VIA MORPHOLOGICAL COMPARATIVE FILTERS”. ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences III-3 (3.06.2016): 279–86. http://dx.doi.org/10.5194/isprsannals-iii-3-279-2016.

Abstract:
In this paper we propose a new change detection technique based on morphological comparative filtering. This technique generalizes the morphological image analysis scheme proposed by Pytiev. A new class of comparative filters based on guided contrasting is developed. Comparative filtering based on diffusion morphology is implemented too. The change detection pipeline contains: comparative filtering on an image pyramid, calculation of a morphological difference map, binarization, extraction of change proposals, and testing of change proposals using a local morphological correlation coefficient. Experimental results demonstrate the applicability of the proposed approach.
42

Vizilter, Y. V., A. Y. Rubis, S. Y. Zheltov and O. V. Vygolov. “CHANGE DETECTION VIA MORPHOLOGICAL COMPARATIVE FILTERS”. ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences III-3 (3.06.2016): 279–86. http://dx.doi.org/10.5194/isprs-annals-iii-3-279-2016.

Abstract:
In this paper we propose a new change detection technique based on morphological comparative filtering. This technique generalizes the morphological image analysis scheme proposed by Pytiev. A new class of comparative filters based on guided contrasting is developed. Comparative filtering based on diffusion morphology is implemented too. The change detection pipeline contains: comparative filtering on an image pyramid, calculation of a morphological difference map, binarization, extraction of change proposals, and testing of change proposals using a local morphological correlation coefficient. Experimental results demonstrate the applicability of the proposed approach.
43

Yanagisawa, Toshifumi, Kohki Kamiya and Hirohisa Kurosaki. “New NEO Detection Techniques using the FPGA”. Publications of the Astronomical Society of Japan 73, no. 3 (26.03.2021): 519–29. http://dx.doi.org/10.1093/pasj/psab017.

Abstract:
We have developed a new method for detecting near-Earth objects (NEOs) based on a Field Programmable Gate Array (FPGA). Unlike conventional methods, our technique uses 30–40 frames to detect faint NEOs that are almost invisible on a single frame. To reduce analysis time, image binarization and an FPGA-implemented algorithm were used. This method has aided in the discovery of 11 NEOs by analyzing frames captured with 20 cm class telescopes. This new method will contribute to discovering new NEOs that approach the Earth closely.
44

AL-SADOUN, HUMOUD B., and ADNAN AMIN. “A NEW STRUCTURAL TECHNIQUE FOR RECOGNIZING PRINTED ARABIC TEXT”. International Journal of Pattern Recognition and Artificial Intelligence 09, no. 01 (February 1995): 101–25. http://dx.doi.org/10.1142/s0218001495000067.

Abstract:
This paper proposes a new structural technique for Arabic text recognition. The technique can be divided into five major steps: (1) preprocessing and binarization; (2) thinning; (3) binary tree construction; (4) segmentation; and (5) recognition. The advantage of this technique is that its execution does not depend on either the font or the size of the characters. Thus, this same technique might be utilized for the recognition of machine- or hand-printed text. The relevant algorithm is implemented on a microcomputer. Experiments were conducted to verify the accuracy and the speed of this algorithm using about 20,000 subwords, each with an average length of 3 characters. The subwords used were written using different fonts. The recognition rate obtained in the experiments indicated an accuracy of 93.38% with a speed of 2.7 characters per second.
45

Santosh, K. C., Naved Alam, Partha Pratim Roy, Laurent Wendling, Sameer Antani and George R. Thoma. “A Simple and Efficient Arrowhead Detection Technique in Biomedical Images”. International Journal of Pattern Recognition and Artificial Intelligence 30, no. 05 (21.04.2016): 1657002. http://dx.doi.org/10.1142/s0218001416570020.

Abstract:
In biomedical documents/publications, medical images tend to be complex by nature and often contain several regions that are annotated using arrows. In this context, automated arrowhead detection is a critical precursor to region-of-interest (ROI) labeling and image content analysis. To detect arrowheads, in this paper, images are first binarized using a fuzzy binarization technique to segment a set of candidates based on the connected component (CC) principle. To select arrow candidates, we use convexity-defect-based filtering, which is followed by template matching via dynamic time warping (DTW). The DTW similarity score confirms the presence of arrows in the image. Our test results on biomedical images from the imageCLEF 2010 collection show the interest of the technique, and can be compared with previously reported state-of-the-art results.
46

Makoveichuk, Oleksandr, Igor Ruban, Nataliia Bolohova, Andriy Kovalenko, Vitalii Martovytskyi and Tetiana Filimonchuk. “Development of a method for improving stability method of applying digital watermarks to digital images”. Eastern-European Journal of Enterprise Technologies 3, no. 2 (111) (30.06.2021): 45–56. http://dx.doi.org/10.15587/1729-4061.2021.235802.

Abstract:
A technique for increasing the stability of methods for applying digital watermarks to digital images, based on pseudo-holographic coding and additional filtering of the digital watermark, has been developed. The technique, which uses pseudo-holographic coding of digital watermarks, is effective against all types of attacks that were considered, except for image rotation; it is most effective when part of the image is lost. The paper presents a statistical indicator for assessing the stability of methods for applying digital watermarks; the indicator makes it possible to comprehensively assess the resistance of a method to a certain number of attacks. An experimental study was carried out according to the proposed methodology. When pre-filtering the digital watermark, the most effective is the third filtering method, which is averaging over a cell with subsequent binarization; the least efficient is the first method, which is binarization followed by finding the statistical mode over the cell. For an affine-type attack such as image rotation, the technique is effective only when the rotation is compensated. To estimate the rotation angle, an affine transformation matrix is found from a consistent set of corresponding ORB descriptors; using this method allows the digital watermark to be extracted accurately over the entire range of angles. A comprehensive assessment of the technique has shown that the method of applying a digital watermark based on wavelet transforms is 20% better at counteracting various types of attacks.
47

Das, Rik, Sudeep Thepade, Subhajit Bhattacharya and Saurav Ghosh. “Retrieval Architecture with Classified Query for Content Based Image Recognition”. Applied Computational Intelligence and Soft Computing 2016 (2016): 1–9. http://dx.doi.org/10.1155/2016/1861247.

Abstract:
Consumer behavior has been observed to be largely influenced by image data with the increasing familiarity of smartphones and the World Wide Web. The traditional technique of browsing through product varieties on the Internet with text keywords has been gradually replaced by easily accessible image data. The importance of image data has shown steady growth in application orientation for the business domain with the advent of different image-capturing devices and social media. The paper describes a methodology of feature extraction by an image binarization technique for enhancing identification and retrieval of information using content-based image recognition. The proposed algorithm was tested on two public datasets, namely, the Wang dataset and the Oliva and Torralba (OT-Scene) dataset, with 3688 images in all. It has outclassed the state-of-the-art techniques in performance measures and has shown statistical significance.
48

Kim, Sang-Yup, and Seong-Whan Lee. “Gray-Scale Nonlinear Shape Normalization Method for Handwritten Oriental Character Recognition”. International Journal of Pattern Recognition and Artificial Intelligence 12, no. 01 (February 1998): 81–95. http://dx.doi.org/10.1142/s0218001498000075.

Abstract:
In general, nonlinear shape normalization methods for binary images have been used in order to compensate for the shape distortions of handwritten characters. However, in most document image analysis and recognition systems, a gray-scale image is first captured and digitized using a scanner or a video camera, then a binary image is extracted from the original gray-scale image using a certain extraction technique. This binarization process may remove some useful information of character images such as topological features, and introduce noises to character background. These errors are accumulated in nonlinear shape normalization step and transferred to the following feature extraction or recognition step. They may eventually cause incorrect recognition results. In this paper, we propose nonlinear shape normalization methods for gray-scale handwritten Oriental characters in order to minimize the loss of information caused by binarization and compensate for the shape distortions of characters. Two-dimensional linear interpolation technique has been extended to nonlinear space and the extended interpolation technique has been adopted in the proposed methods to enhance the quality of normalized images. In order to verify the efficiency of the proposed methods, the recognition rate, the processing time and the computational complexity of the proposed algorithms have been considered. The experimental results demonstrate that the proposed methods are efficient not only to compensate for the shape distortions of handwritten Oriental characters but also to maintain the information in gray-scale Oriental characters.
49

Lu, An Qun, Shou Zhi Zhang and Qian Tian. “Matlab Image Processing Technique and Application in Pore Structure Characterization of Hardened Cement Pastes”. Advanced Materials Research 785-786 (September 2013): 1374–79. http://dx.doi.org/10.4028/www.scientific.net/amr.785-786.1374.

Abstract:
Based on the Matlab image processing technique and the backscattered electron image analysis method, a characterization method is set up for quantitative analysis of the pore structure of hardened cement pastes. Matlab is adopted to acquire images and carry out gray-scale conversion and binarization processing; the combination of local threshold segmentation and histogram segmentation is used to obtain pore structure characteristics. The results showed that the evolution law of the pore structure of fly ash cement pastes obtained via the Matlab image analysis method is similar to the conclusions obtained through BET and DVS. When backscattered electron images of the same sample are selected at different angles, the statistical results are more representative.
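
The quantitative step, binarizing and counting pore pixels, is easy to sketch. The paper works in Matlab; the fragment below shows the same idea in Python with OpenCV, with a fixed illustrative threshold standing in for the combined local-threshold and histogram segmentation.

import cv2

bse = cv2.imread("bse_cement.png")                 # backscattered electron image
gray = cv2.cvtColor(bse, cv2.COLOR_BGR2GRAY)       # gradation (grayscale) step

# Pores are the darkest phase in BSE images, so an inverse threshold
# isolates them; 60 is a placeholder value, not from the paper.
_, pores = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)

porosity = (pores > 0).mean()                      # pore area fraction
print(f"porosity: {porosity:.2%}")
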
50

Lee, Seungju, Yoonjae Chung, Chunyoung Kim and Wontae Kim. “Automatic Thinning Detection through Image Segmentation Using Equivalent Array-Type Lamp-based Lock-in Thermography”. Sensors 23, no. 3 (22.01.2023): 1281. http://dx.doi.org/10.3390/s23031281.

Abstract:
Among non-destructive testing (NDT) techniques, infrared thermography (IRT) is an attractive and highly reliable technology that can measure the thermal response of a wide area in real time. In this study, thinning defects in S275 specimens were detected using lock-in thermography (LIT). After acquiring phase and amplitude images using four-point signal processing, the optimal excitation frequency was calculated. After segmentation was performed on each defect area, binarization was performed using the Otsu algorithm. For automated detection, a boundary tracking algorithm was used. The number of pixels was calculated, and detectability was evaluated using RMSE. An image segmentation detectability evaluation technique using RMSE was presented to clarify defective objects.
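
A minimal sketch of the post-processing chain, Otsu binarization, boundary tracking, pixel counting and an RMSE comparison, using OpenCV; the file names and the ground-truth mask are placeholders.

import cv2
import numpy as np

phase = cv2.imread("phase_map.png", cv2.IMREAD_GRAYSCALE)   # LIT phase image

# Otsu binarization of the segmented defect region.
_, binary = cv2.threshold(phase, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Boundary tracking: trace each defect contour and count its pixels.
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_NONE)
for i, c in enumerate(contours):
    print(f"defect {i}: boundary pixels={len(c)}, area={cv2.contourArea(c)}")

# Detectability: RMSE between the detected mask and a ground-truth mask.
truth = cv2.imread("truth_mask.png", cv2.IMREAD_GRAYSCALE)
rmse = np.sqrt(np.mean((binary.astype(float) - truth.astype(float)) ** 2))
print("RMSE:", rmse)
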