Journal articles on the topic 'Class-binarization'




Consult the top 33 journal articles for your research on the topic 'Class-binarization.'



Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Polyakova, Marina V., and Alexandr G. Nesteryuk. "IMPROVEMENT OF THE COLOR TEXT IMAGE BINARIZATION METHOD USING THE MINIMUM-DISTANCE CLASSIFIER." Applied Aspects of Information Technology 4, no. 1 (April 10, 2021): 57–70. http://dx.doi.org/10.15276/aait.01.2021.5.

Abstract:
Optical character recognition systems are used to convert books and documents into electronic form, to automate accounting systems in business, to recognize markers in augmented reality technologies, etc. Provided that binarization is applied, the quality of optical character recognition is largely determined by how well foreground pixels are separated from the background. Existing methods of text image binarization are analyzed and their insufficient quality is noted. As a research approach, a minimum-distance classifier is used to improve an existing method for binarizing color text images. To improve binarization quality, it is advisable to divide image pixels into two classes, "Foreground" and "Background", using classification methods instead of heuristic threshold selection, namely a minimum-distance classifier. To reduce the amount of information processed before applying the classifier, blocks of pixels are selected for subsequent processing by analyzing the connected components of the original image. An improved method for binarizing color text images using connected component analysis and a minimum-distance classifier is elaborated. Experiments showed that the method is better than existing binarization methods in terms of robustness but worse in terms of the error in determining object boundaries. Among the recognition errors, pixels from the class labeled "Foreground" were more often mistaken for the class labeled "Background". With unique class prototypes, the proposed binarization method is recommended for processing color images of printed text, for which the error in determining character boundaries is compensated by the thickness of the letters. With multiple class prototypes, it is recommended for processing color images of handwritten text if high performance is not required. The improved method is effective when the color and illumination of text and background change slowly; abrupt changes in color and illumination, as well as a textured background, do not allow the binarization quality required for practical problems.
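The minimum-distance rule at the heart of this method is straightforward: each pixel is assigned to whichever class prototype ("Foreground" or "Background") it is closest to in color space. A minimal NumPy sketch, with hypothetical prototype colors and a random stand-in image; the paper's connected-component block selection is omitted.

```python
import numpy as np

def minimum_distance_binarize(rgb, fg_prototypes, bg_prototypes):
    """Assign each pixel to the nearest class prototype (Euclidean
    distance in RGB space); True = 'Foreground'."""
    pixels = rgb.reshape(-1, 3).astype(np.float64)        # (N, 3)
    fg = np.asarray(fg_prototypes, dtype=np.float64)      # (Kf, 3)
    bg = np.asarray(bg_prototypes, dtype=np.float64)      # (Kb, 3)
    # Distance from every pixel to its nearest prototype of each class.
    d_fg = np.linalg.norm(pixels[:, None, :] - fg[None], axis=2).min(axis=1)
    d_bg = np.linalg.norm(pixels[:, None, :] - bg[None], axis=2).min(axis=1)
    return (d_fg < d_bg).reshape(rgb.shape[:2])

# Hypothetical prototypes: dark ink vs. light paper; random stand-in image.
image = (np.random.rand(32, 32, 3) * 255).astype(np.uint8)
mask = minimum_distance_binarize(image, fg_prototypes=[[20, 20, 30]],
                                 bg_prototypes=[[230, 225, 210]])
```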
2

CHI, ZHERU, and QING WANG. "DOCUMENT IMAGE BINARIZATION WITH FEEDBACK FOR IMPROVING CHARACTER SEGMENTATION." International Journal of Image and Graphics 05, no. 02 (April 2005): 281–309. http://dx.doi.org/10.1142/s0219467805001768.

Abstract:
Binarization of grayscale document images is one of the most important steps in automatic document image processing. In this paper, we present a two-stage document image binarization approach: a top-down region-based binarization at the first stage and a neural-network-based binarization technique, applied after feedback checking, for the problematic blocks at the second stage. Our two-stage approach is particularly effective for binarizing text images with highlighted or marked text. The region-based binarization method is fast and suitable for processing large document images; however, the block effect and regional edge noise are two unavoidable problems that result in poor character segmentation and recognition. A neural-network-based classifier can achieve good performance in two-class classification problems such as the binarization of gray-level document images, but it is computationally costly. In our two-stage approach, feedback criteria are employed to keep the well-binarized blocks from the first stage and to re-binarize the problematic blocks with the neural network binarizer at the second stage, improving character segmentation quality. Experimental results on a number of document images show that our two-stage approach outperforms the single-stage binarization techniques tested in terms of both character segmentation quality and computational cost.
3

HRŮZA, JAN, and PETER ŠTĚPÁNEK. "Speedup of logic programs by binarization and partial deduction." Theory and Practice of Logic Programming 4, no. 3 (April 16, 2004): 355–69. http://dx.doi.org/10.1017/s147106840300190x.

Abstract:
Binary logic programs can be obtained from ordinary logic programs by a binarizing transformation. In most cases, binary programs obtained this way are less efficient than the original programs. Demoen (1992) gave an interesting example of a logic program whose computational behaviour improved when it was transformed to a binary program and then specialized by partial deduction. We define the class of B-stratifiable logic programs and show that for every B-stratifiable logic program, binarization and subsequent partial deduction produce a binary program that does not contain variables for the continuations introduced by binarization. Such programs usually have better computational behaviour than the original ones. Both binarization and partial deduction can be easily automated. A comparison with other related approaches to program transformation is given.
4

Syahrial, Syahrial, and Rizal Lamusu. "Pembentukan Pola Desain Motif Karawo Gorontalo Menggunakan K-Means Color Quantization dan Structured Forest Edge Detection." Jurnal Teknologi Informasi dan Ilmu Komputer 8, no. 3 (June 15, 2021): 625. http://dx.doi.org/10.25126/jtiik.2021834491.

Abstract:
<p class="Abstrak">Sulaman Karawo merupakan kerajinan tangan berupa sulaman khas dari daerah Gorontalo. Motif sulaman diterapkan secara detail berdasarkan suatu pola desain tertentu. Pola desain digambarkan pada kertas dengan berbagai panduannya. Gambar yang diterapkan pada pola memiliki resolusi sangat rendah dan harus mempertahankan bentuknya. Penelitian ini mengembangkan metode pembentukan pola desain motif Karawo dari citra digital. Proses dilakukan dengan pengolahan awal menggunakan <em>k-means color quantization (KMCQ)</em> dan deteksi tepi <em>structured forest</em>. Proses selanjutnya melakukan pengurangan resolusi menggunakan metode <em>pixelation</em> dan <em>binarization</em>. Luaran dari algoritma menghasilkan 3 citra berbeda dengan ukuran yang sama, yaitu: citra tepi, citra biner, dan citra berwarna. Ketiga citra tersebut selanjutnya dilakukan proses pembentukan pola desain motif Karawo dengan berbagai petunjuk pola bagi pengrajin. Hasil menunjukkan bahwa pola desain motif dapat digunakan dan dimengerti oleh para pengrajin dalam menerapkannya di sulaman Karawo. Pengujian nilai-nilai parameter dilakukan pada metode <em>k-means</em>, <em>gaussian filter</em>, <em>pixelation</em>, dan <em>binarization.</em> Parameter-parameter tersebut yaitu: k pada <em>k-means</em>, <em>kernel</em> pada <em>gaussian filter</em>, lebar piksel pada <em>pixelation</em>, dan nilai <em>threshold</em> pada <em>binarization</em>. Pengujian menunjukkan nilai terendah tiap parameter adalah k=4, kernel=3x3, lebar piksel=70, dan <em>threshold</em>=20. Hasil memperlihatkan makin tinggi nilai-nilai tersebut maka semakin baik pola desain motif yang dihasilkan. Nilai-nilai tersebut merupakan nilai parameter terendah dalam pembentukan pola desain motif berkualitas baik berdasarkan indikator-indikator dari desainer.</p><p class="Abstrak"> </p><p class="Abstrak"><em><strong>Abstract</strong></em></p><p class="Abstract"><em>Karawo embroidery is a unique handicraft from Gorontalo. The embroidery motif is applied in detail based on a certain design pattern. These patterns are depicted on paper with various guides. The image applied to the pattern is very low resolution and retains its shape. This study develops a method to generate a Karawo design pattern from a digital image. The process begins by using k-means color quantization (KMCQ) to reduce the number of colors and edge detection of the structured forest. The next process is to change the resolution using pixelation and binarization methods. The output algorithm produces 3 different state images of the same size, which are: edge image, binary image, and color image. These images are used in the formation of the Karawo motif design pattern. The motif contains various pattern instructions for the craftsman. The results show that it can be used and understood by the craftsmen in its application in Karawo embroidery. Testing parameter values on the k-means method, Gaussian filter, pixelation, and binarization. These parameters are k on KMCQ, the kernel on a gaussian filter, pixel width in pixelation, and threshold value in binarization. The results show that the lowest value of each parameter is k=4, kernel=3x3, pixel width=70, and threshold=20. The results show that the higher these values, the better the results of the pattern design motif. Those values are the lower input to generate a good quality pattern design based on the designer’s indicators.</em></p><p class="Abstrak"><em><strong><br /></strong></em></p>
5

Saddami, Khairun, Fitri Arnia, Yuwaldi Away, and Khairul Munadi. "Kombinasi Metode Nilai Ambang Lokal dan Global untuk Restorasi Dokumen Jawi Kuno." Jurnal Teknologi Informasi dan Ilmu Komputer 7, no. 1 (February 4, 2020): 163. http://dx.doi.org/10.25126/jtiik.2020701741.

Abstract:
<p class="Abstrak">Dokumen Jawi kuno merupakan warisan budaya yang berisi informasi penting tentang peradaban masa lalu yang dapat dijadikan pedoman untuk masa sekarang ini. Dokumen Jawi kuno telah mengalami penurunan kualitas yang disebabkan oleh beberapa faktor seperti kualitas kertas atau karena proses penyimpanan. Penurunan kualitas ini menyebabkan informasi yang terdapat pada dokumen tersebut menghilang dan sulit untuk diakses. Artikel ini mengusulkan metode binerisasi untuk membangkitkan kembali informasi yang terdapat pada dokumen Jawi kuno. Metode usulan merupakan kombinasi antara metode binerisasi berbasis nilai ambang lokal dan global. Metode usulan diuji terhadap dokumen Jawi kuno dan dokumen uji standar yang dikenal dengan nama <em>Handwritten</em> <em>Document Image Binarization Contest</em> (HDIBCO) 2016. Citra hasil binerisasi dievaluasi menggunakan metode: <em>F-measure</em>, <em>pseudo F-measure</em>, <em>peak signal-to-noise ratio</em>, <em>distance reciprocal distortion</em>, dan <em>misclasification penalty metric</em>. Secara rata-rata, nilai evaluasi <em>F-measure</em> dari metode usulan mencapai 88,18 dan 89,04 masing-masing untuk dataset Jawi dan HDIBCO-2016. Hasil ini lebih baik dari metode pembanding yang menunjukkan bahwa metode usulan berhasil meningkatkan kinerja metode binerisasi untuk dataset Jawi dan HDIBCO-2016.</p><p class="Abstrak"> </p><p class="Abstrak"><em><strong>Abstract</strong></em></p><p class="Abstract"><em>Ancient Jawi document is a cultural heritage, which contains knowledge of past civilization for developing a better future. Ancient Jawi document suffers from severe degradation due to some factors such as paper quality or poor retention process. The degradation reduces information on the document and thus the information is difficult to access. This paper proposed a binarization method for restoring the information from degraded ancient Jawi document. The proposed method combined a local and global thresholding method for extracting the text from the background. The experiment was conducted on ancient Jawi document and Handwritten Document Image Binarization Contest (HDIBCO) 2016 datasets. The result was evaluated using F-measure, pseudo F-measure, peak signal-to-noise ratio, distance reciprocal distortion, dan misclassification penalty metric. The average result showed that the proposed method achieved 88.18 and 89.04 of F-measure, for Jawi and HDIBCO-2016, respectively. The proposed method resulted in better performance compared with several benchmarking methods. It can be concluded that the proposed method succeeded to enhance binarization performance.</em></p><p class="Abstrak"><em><strong><br /></strong></em></p>
6

Ng, Selina, Peter Tse, and Kwok Tsui. "A One-Versus-All Class Binarization Strategy for Bearing Diagnostics of Concurrent Defects." Sensors 14, no. 1 (January 13, 2014): 1295–321. http://dx.doi.org/10.3390/s140101295.

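The one-versus-all strategy named in the title trains one binary classifier per defect class (class k versus the rest). A minimal scikit-learn sketch on synthetic data; the features and classes are placeholders, not the paper's bearing vibration signals.

```python
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))        # placeholder vibration features
y = rng.integers(0, 4, size=300)     # four hypothetical defect classes

# One binary LinearSVC is fitted per class (class k vs. the rest);
# prediction picks the class whose binary classifier is most confident.
ova = OneVsRestClassifier(LinearSVC(dual=False)).fit(X, y)
print(ova.predict(X[:5]))
```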
7

Ieno, Egidio, Luis Manuel Garcés, Alejandro José Cabrera, and Tales Cleber Pimenta. "Simple generation of threshold for images binarization on FPGA." Ingeniería e Investigación 35, no. 3 (December 14, 2015): 69–75. http://dx.doi.org/10.15446/ing.investig.v35n3.51750.

Abstract:
<p class="Abstractandkeywordscontent">This paper proposes the FPGA implementation of a threshold algorithm used in the process of image binarization by simple mathematical calculations. The implementation need only one image iteration and its processing time depends on the size of the image. The threshold values of different images obtained through the FPGA implementation are compared with those obtained by Otsu’s method, showing the differences and the visual results of binarization using both methods. The hardware implementation of the algorithm is performed by model-based design supported by the MATLAB<sup>®</sup>/Simulink<sup>®</sup> and Xilinx System Generator<sup>®</sup> tools. The results of the implementation proposal are presented in terms of resource consumption and maximum operating frequency in a Spartan-6 FPGA-based development board. The experimental results are obtained in co-simulation system and show the effectiveness of the proposed method.</p>
8

Vizilter, Y. V., A. Y. Rubis, S. Y. Zheltov, and O. V. Vygolov. "CHANGE DETECTION VIA MORPHOLOGICAL COMPARATIVE FILTERS." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences III-3 (June 3, 2016): 279–86. http://dx.doi.org/10.5194/isprsannals-iii-3-279-2016.

Abstract:
In this paper we propose a new change detection technique based on morphological comparative filtering, generalizing the morphological image analysis scheme proposed by Pytiev. A new class of comparative filters based on guided contrasting is developed, and comparative filtering based on diffusion morphology is implemented as well. The change detection pipeline comprises comparative filtering on an image pyramid, calculation of a morphological difference map, binarization, extraction of change proposals, and testing of change proposals using a local morphological correlation coefficient. Experimental results demonstrate the applicability of the proposed approach.
9

Vizilter, Y. V., A. Y. Rubis, S. Y. Zheltov, and O. V. Vygolov. "CHANGE DETECTION VIA MORPHOLOGICAL COMPARATIVE FILTERS." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences III-3 (June 3, 2016): 279–86. http://dx.doi.org/10.5194/isprs-annals-iii-3-279-2016.

Abstract:
In this paper we propose a new change detection technique based on morphological comparative filtering, generalizing the morphological image analysis scheme proposed by Pytiev. A new class of comparative filters based on guided contrasting is developed, and comparative filtering based on diffusion morphology is implemented as well. The change detection pipeline comprises comparative filtering on an image pyramid, calculation of a morphological difference map, binarization, extraction of change proposals, and testing of change proposals using a local morphological correlation coefficient. Experimental results demonstrate the applicability of the proposed approach.
10

Yanagisawa, Toshifumi, Kohki Kamiya, and Hirohisa Kurosaki. "New NEO Detection Techniques using the FPGA." Publications of the Astronomical Society of Japan 73, no. 3 (March 26, 2021): 519–29. http://dx.doi.org/10.1093/pasj/psab017.

Abstract:
We have developed a new method for detecting near-Earth objects (NEOs) based on a field-programmable gate array (FPGA). Unlike conventional methods, our technique uses 30–40 frames to detect faint NEOs that are almost invisible in a single frame. To reduce analysis time, image binarization and an FPGA-implemented algorithm are used. This method has aided in the discovery of 11 NEOs by analyzing frames captured with 20 cm class telescopes, and it will contribute to discovering new NEOs that closely approach the Earth.
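The underlying idea of multi-frame detection (accumulate many binarized frames so that a faint mover adds up coherently) can be illustrated in NumPy. This is a heavily simplified sketch: it assumes a single known per-frame shift, whereas the actual pipeline searches many candidate trajectories on the FPGA.

```python
import numpy as np

def stack_along_motion(frames, shift_per_frame, k_sigma=3.0):
    """Binarize each frame at mean + k*sigma, undo an assumed object
    motion, and sum the stack; a faint mover accumulates coherently."""
    acc = np.zeros(frames[0].shape, dtype=np.int32)
    dy, dx = shift_per_frame
    for i, f in enumerate(frames):
        binary = f > f.mean() + k_sigma * f.std()
        acc += np.roll(binary.astype(np.int32), (-i * dy, -i * dx), (0, 1))
    return acc               # high counts mark consistent detections

frames = [np.random.rand(64, 64) for _ in range(30)]  # synthetic frames
score = stack_along_motion(frames, shift_per_frame=(1, 0))
```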
11

Zhang, Zhong-Liang, Xing-Gang Luo, Salvador García, and Francisco Herrera. "Cost-Sensitive back-propagation neural networks with binarization techniques in addressing multi-class problems and non-competent classifiers." Applied Soft Computing 56 (July 2017): 357–67. http://dx.doi.org/10.1016/j.asoc.2017.03.016.

12

HODOSEVICH, A., and R. BOHUSH. "CLIENT-SERVER SYSTEM FOR PARKING MANAGEMENT BASED ON VIDEO SURVEILLANCE DATA ANALYSIS." HERALD OF POLOTSK STATE UNIVERSITY. Series C FUNDAMENTAL SCIENCES 38, no. 4 (May 12, 2022): 32–37. http://dx.doi.org/10.52928/2070-1624-2022-38-4-32-37.

Abstract:
A web-based system for monitoring and managing parking lots based on the analysis of images from surveillance cameras is presented. An approach to segmenting parking spaces has been developed that automatically determines the coordinates of parking spaces in the image using the Otsu adaptive binarization algorithm and the Hough transform for line linking. Parking spaces are classified by occupancy using the ResNet50 convolutional neural network. The implementation of the server side is described, along with the developed class diagram and its main purpose. The client side of the system is a website that can be accessed from a personal computer or a mobile device. Examples of the main software functionality are shown.
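The two building blocks named in the abstract, Otsu binarization and Hough-based line linking, map directly onto OpenCV calls; the synthetic input and the Hough parameters below are assumptions.

```python
import cv2
import numpy as np

# Synthetic stand-in for a top-down parking lot view with two markings.
gray = np.zeros((200, 300), np.uint8)
cv2.line(gray, (20, 50), (280, 50), 180, 2)
cv2.line(gray, (20, 150), (280, 150), 180, 2)

# Otsu picks the global threshold automatically (the 0 is ignored).
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# The probabilistic Hough transform links pixels into line segments,
# from which parking-space boundaries can be assembled.
lines = cv2.HoughLinesP(binary, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=60, maxLineGap=10)
print(None if lines is None else lines.reshape(-1, 4))
```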
13

Wang, Zheng, Wei Yun Xie, Lu Wang, and Jian Xiong Liu. "A Fractal Study on Collective Evolution of Short Fatigue Cracks under Various Strain Amplitudes." Advanced Materials Research 476-478 (February 2012): 2444–48. http://dx.doi.org/10.4028/www.scientific.net/amr.476-478.2444.

Abstract:
The fractal dimension is a basic parameter for describing random self-similar shapes and phenomena. The damage process is the combined result of all the cracks, showing pronounced collective behavior and random statistical complexity. The collective evolution of short fatigue cracks was studied experimentally on cylindrical specimens with annular notches under varying nominal strain amplitudes. The maximum between-class variance method (Otsu's method) was adopted to binarize the denoised images, from which the fractal dimensions were obtained. The results show that the collective behavior of short fatigue cracks possesses good fractal characteristics. As the short fatigue cracks evolve, the fractal dimension passes through two stages: a primary stage of high growth speed and a relatively stable stage with almost zero growth speed. The critical cut-off point at about 30% of the fatigue life, according to the experimental results, can be used to represent the threshold between the MSC and PSC stages of short fatigue cracks.
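A compact box-counting estimate of the fractal dimension of an Otsu-binarized crack image; the trivial synthetic input stands in for the real micrographs.

```python
import numpy as np

def box_counting_dimension(binary, sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension as the slope of log N(s) versus
    log(1/s), where N(s) counts s-by-s boxes containing crack pixels."""
    h, w = binary.shape
    counts = []
    for s in sizes:
        trimmed = binary[:h - h % s, :w - w % s]
        blocks = trimmed.reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(blocks.sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

crack = np.zeros((128, 128), dtype=bool)
crack[64, :] = True                      # a trivial straight "crack"
print(box_counting_dimension(crack))     # close to 1 for a line
```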
14

Contreras, Victor, Niccolo Marini, Lora Fanda, Gaetano Manzo, Yazan Mualla, Jean-Paul Calbimonte, Michael Schumacher, and Davide Calvaresi. "A DEXiRE for Extracting Propositional Rules from Neural Networks via Binarization." Electronics 11, no. 24 (December 13, 2022): 4171. http://dx.doi.org/10.3390/electronics11244171.

Abstract:
Background: Despite advances in eXplainable Artificial Intelligence, the explanations provided by model-agnostic predictors still call for improvement (i.e., they lack accurate descriptions of predictors' behaviors). Contribution: We present a tool for Deep Explanations and Rule Extraction (DEXiRE) to approximate rules for deep learning models with any number of hidden layers. Methodology: DEXiRE proposes the binarization of neural networks to induce Boolean functions in the hidden layers, generating as many intermediate rule sets; a rule set is induced between the first hidden layer and the input layer. Finally, the complete rule set is obtained by inverse substitution on the intermediate rule sets and first-layer rules. Statistical tests and satisfiability algorithms reduce the final rule set's size and complexity (filtering redundant, inconsistent, and non-frequent rules). DEXiRE has been tested on binary and multiclass classification with six datasets having different structures and models. Results: The performance is consistent (in terms of accuracy, fidelity, and rule length) with the state-of-the-art rule extractors (i.e., ECLAIRE). Moreover, compared with ECLAIRE, DEXiRE generated shorter rules (up to 74% fewer terms) and shortened the execution time (by up to 197% in the best-case scenario). Conclusions: DEXiRE can be applied to binary and multiclass classification with deep learning predictors having any number of hidden layers. Moreover, DEXiRE can identify the activation pattern per class and use it to reduce the search space for rule extractors (pruning irrelevant/redundant neurons), yielding shorter rules and execution times with respect to ECLAIRE.
15

Xu, Zhenying, Ziqian Wu, and Wei Fan. "Improved SSD-assisted algorithm for surface defect detection of electromagnetic luminescence." Proceedings of the Institution of Mechanical Engineers, Part O: Journal of Risk and Reliability 235, no. 5 (February 18, 2021): 761–68. http://dx.doi.org/10.1177/1748006x21995388.

Abstract:
Defect detection in electromagnetic luminescence (EL) cells is the core step in the production of solar cell modules, ensuring conversion efficiency and long battery service life. However, owing to its limited ability to extract features of small defects, the traditional single shot multibox detector (SSD) algorithm does not perform well enough for high-accuracy EL defect detection. Consequently, an improved SSD algorithm with modified feature fusion in a deep learning framework is proposed to improve the recognition rate of multi-class EL defects. A dataset containing images with four different types of defects, augmented through rotation, denoising, and binarization, is established for the EL. Using the idea of feature pyramid networks, the proposed algorithm greatly improves the detection accuracy for small-scale defects. An experimental study on the detection of EL defects shows the effectiveness of the proposed algorithm, and a comparison study shows that it outperforms traditional detection methods such as SIFT, Faster R-CNN, and YOLOv3.
16

Bohush, Rykhard, Sergey Ablameyko, Tatiana Kalganova, and Pavel Yarashevich. "Extraction of image parking spaces in intelligent video surveillance systems." Machine Graphics and Vision 27, no. 1/4 (December 1, 2019): 47–62. http://dx.doi.org/10.22630/mgv.2018.27.1.3.

Abstract:
This paper discusses an algorithmic framework for parking space localization and classification in intelligent video surveillance parking systems. Perspective transformation, adaptive Otsu binarization, mathematical morphology operations, representation of horizontal lines as vectors, creation and filtering of vertical lines, and determination of parking space coordinates are used to localize parking spaces in a video frame. Classification of parking spaces is based on the Histogram of Oriented Gradients (HOG) descriptor and a Support Vector Machine (SVM) classifier. Parking space descriptors are extracted with HOG through the following steps: calculation of vertical and horizontal gradients for the parking lot image, calculation of the gradient magnitude vector and orientation, accumulation of gradients according to cell orientations, grouping of cells into blocks, second-norm calculation, and normalization of cell orientations within blocks. The parameters of the descriptor have been optimized experimentally. The results demonstrate improved classification accuracy over similar algorithms, and the proposed framework performs best among the algorithms previously proposed for the parking space recognition problem.
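The HOG-plus-SVM classification step can be sketched with scikit-image and scikit-learn; the patch size, HOG parameters, and synthetic labels are assumptions rather than the paper's tuned values.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def hog_descriptor(patch):
    """HOG of one grayscale parking-space patch (values in [0, 1])."""
    return hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")

rng = np.random.default_rng(1)
patches = rng.random((40, 64, 64))       # placeholder space patches
labels = rng.integers(0, 2, size=40)     # 0 = free, 1 = occupied

X = np.stack([hog_descriptor(p) for p in patches])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:3]))
```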
17

Yu, Yanzhen, Zhibin Qiu, Haoshuang Liao, Zixiang Wei, Xuan Zhu, and Zhibiao Zhou. "A Method Based on Multi-Network Feature Fusion and Random Forest for Foreign Objects Detection on Transmission Lines." Applied Sciences 12, no. 10 (May 14, 2022): 4982. http://dx.doi.org/10.3390/app12104982.

Abstract:
Foreign objects such as kites, nests, and balloons suspended on transmission lines may shorten the insulation distance and cause short circuits between phases. A detection method for foreign objects on transmission lines is proposed that combines multi-network feature fusion and a random forest. First, a foreign-object image dataset of balloons, kites, nests, and plastic was established. Then, Otsu binarization threshold segmentation and morphological processing were applied to extract the target region of the foreign object. Features of the target region were extracted by five types of convolutional neural networks (CNNs), namely GoogLeNet, DenseNet-201, EfficientNet-B0, ResNet-101, and AlexNet, and fused using a concatenation fusion strategy. The fused features of different schemes were used to train and test the random forest; meanwhile, gradient-weighted class activation mapping (Grad-CAM) was used to visualize the decision region of each network, verifying the effectiveness of the optimal feature fusion scheme. Simulation results indicate that the detection accuracy of the proposed method reaches 95.88%, outperforming any single-network model. This study provides a reference for the detection of foreign objects suspended on transmission lines.
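Concatenation fusion followed by a random forest reduces, at its core, to stacking the per-network feature vectors side by side before training. In the sketch below the feature matrices are random placeholders standing in for features already extracted by the CNN backbones (dimensions assumed).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n = 60                                   # number of target-region images
# Random placeholders for features already extracted by the backbones.
feats = [rng.normal(size=(n, 1024)),     # e.g., GoogLeNet
         rng.normal(size=(n, 1920)),     # e.g., DenseNet-201
         rng.normal(size=(n, 1280))]     # e.g., EfficientNet-B0
y = rng.integers(0, 4, size=n)           # balloon / kite / nest / plastic

X = np.hstack(feats)                     # concatenation fusion
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(rf.score(X, y))
```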
18

Kumalasanti, R. A. "Design of Someone's Character Identification Based on Handwriting Patterns Using Support Vector Machine." International Journal of Applied Sciences and Smart Technologies 4, no. 2 (December 21, 2022): 233–40. http://dx.doi.org/10.24071/ijasst.v4i2.5417.

Abstract:
Image processing has a broad scope and is rich in innovation; reliable image processing methods are now applied in almost all aspects of life. One application of image processing technology is biometric identification. Biometrics uses specific data, in the form of an individual's physical characteristics, to identify and validate identity. The biometric attribute developed in this study is handwriting. Each individual's handwriting pattern has a distinct character and uniqueness, so it can serve as an identity. This uniqueness is studied with the aim of recognizing a person's character or personality. If a person's personality can be inferred from handwriting patterns alone, this can assist companies in screening prospective employees. Handwriting is studied in combination with psychology so that the output is a description of a person's characteristics or personality. This research is developed using multi-class Support Vector Machine (SVM) classification. The preprocessing stages of binarization, thinning, and feature extraction also greatly affect the reliability of the system. Simulations with varied variables and parameters are expected to achieve optimal accuracy.
19

Dovganich, A. A., A. V. Nasonov, Andrey S. Krylov, and N. V. Makhneva. "Immunofluorescence diagnosis and analysis of samples of its images in autoimmune pemphigus." Russian Journal of Skin and Venereal Diseases 19, no. 1 (February 15, 2016): 31–35. http://dx.doi.org/10.18821/1560-95882016-19-1-31-35.

Abstract:
Autoimmune pemphigus is a group of autoimmune bullous dermatoses characterized by intraepithelial blister formation and the presence of specific IgG antibodies to antigens of the intercellular cementing substance (MCC) of stratified squamous epithelium. A specific immunomorphological picture (IgG fixation in the MCC of the epidermis) allows this bullous dermatosis to be diagnosed. In some cases, however, visualization of the specific features is difficult because of a mild and/or non-uniform specific immunohistochemical reaction, which prevents pemphigus from being diagnosed with absolute precision. An analysis of immunofluorescence diagnosis in autoimmune pemphigus was performed, and a skin tissue image analysis algorithm is proposed. The algorithm enhances image quality and detects the intercellular structures typical of pemphigus. It consists of illumination alignment, median filtering, Gaussian filtering, ridge detection using the Hessian, image binarization, extraction of the connected components of the ridge map, and removal of components with a small radius. In doubtful cases this allows autoimmune pemphigus to be differentiated and diagnosed. In addition, clear visualization of the character (granular or linear) of class G immunoglobulin fixation in the intercellular spaces of the epidermis improves prediction of further disease progression (favorable or torpid), enabling timely and appropriate management of the patient and pathogenetic treatment regimens. This work emphasizes the importance of introducing modern computer methods for medical images, which can significantly improve the diagnosis of human diseases, including autoimmune bullous dermatoses.
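Most stages of this pipeline map onto standard SciPy/scikit-image calls. The sketch below substitutes the Frangi filter (a Hessian-eigenvalue ridge detector) for the paper's Hessian-based ridge detection, omits illumination alignment, and uses an assumed binarization threshold.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import frangi
from skimage.morphology import remove_small_objects

def detect_intercellular_ridges(gray):
    """Denoise, smooth, enhance bright ridge-like structures, binarize,
    and drop small connected components."""
    smoothed = ndi.gaussian_filter(ndi.median_filter(gray, size=3), 1.0)
    ridges = frangi(smoothed, black_ridges=False)  # bright IgG network
    binary = ridges > ridges.mean() + 2.0 * ridges.std()  # assumed rule
    return remove_small_objects(binary, min_size=50)      # radius filter

img = np.random.rand(128, 128)           # placeholder tissue image
mask = detect_intercellular_ridges(img)
```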
20

Zakharov, A., T. Selivyorstova, V. Selivyorstov, V. Balakin, and L. Kamkina. "Features of metal structures digital images containing carbides investigation." System technologies 6, no. 137 (December 10, 2021): 189–200. http://dx.doi.org/10.34185/1562-9945-6-137-2021-17.

Abstract:
The analysis of microsections requires the involvement of highly qualified experts in materials science, which does not exclude the influence of the human factor. At the same time, increasing the objectivity of identifying the properties of metals and alloys requires modern data processing methods, for example artificial intelligence, for the classification and identification of macro- and microstructures. The paper reviews the process of studying macro- and microstructures containing carbides, determines the specific features inherent in such images, and proposes an information model for their processing. The article is devoted to the development of an information model for analyzing digital images of metal structures with carbide inclusions. An analysis of the literature establishes that the study of metal structures is an important tool for assessing quality characteristics and that the presence of carbides in the metal structure has a significant impact on quality. The methodology for studying metal structure is reviewed, the importance of the image processing stage is determined, and the main methods for obtaining digital images of alloy structure are described, with samples of metal structures with carbides presented. A procedure for the digital processing of images of metal structures with carbide inclusions is proposed, consisting of conversion to grayscale, contrast enhancement, and threshold binarization. Analysis of the processed images made it possible to identify areas with carbide inclusions; however, additional artifacts that were not carbides were found in some images. Adjusting the binarization threshold in such cases does not improve detection of the carbide inclusion network because of the lack of contrast. The histograms show informative features spread over a wide range of gray levels, so more sophisticated image processing techniques need to be developed for this class of images. In the course of the study of digital images of metal structures containing carbides: an information model for processing images with carbide inclusions was proposed and applied; it was found that some images are characterized by low contrast, which leads to the selection of background artifacts in addition to areas with carbide inclusions; and the development of more complex mathematical methods for detecting carbide inclusions in low-contrast images was proposed. Thus, the article presents the results of applying the digital image processing procedure to carbide inclusions, shows the advantages and disadvantages of the approach, and determines directions for its improvement.
21

Mikhailov, Vladimir, Vladislav Sobolevskii, Leonid Kolpaschikov, Nikolay Soloviev, and Georgiy Yakushev. "Methodological approaches and algorithms for recognizing and counting animals in aerial photographs." Information and Control Systems, no. 5 (October 26, 2021): 20–32. http://dx.doi.org/10.31799/1684-8853-2021-5-20-32.

Abstract:
Introduction: The complexity of recognizing and counting objects in a photographic image is directly related to the variability of the factors involved: physical differences between objects of the same class, the presence of images similar to the objects to be recognized, non-uniform backgrounds, and changes in shooting conditions and in the position of the objects when the photo was taken. Among the most challenging problems are identifying people in crowds, animals in their natural environment, cars in surveillance camera footage, and construction and infrastructure objects in aerial photographs. These problems have their own specific factor spaces, but the methodological approaches to solving them are similar. Purpose: The development of methodologies and software implementations that solve the problem of recognizing and counting objects with high variability, using the recognition of reindeer in their natural environment as an example. Methods: Two approaches are investigated: feature-based recognition based on binary pixel classification, and reference-based recognition using convolutional neural networks. Results: Methodologies and programs have been developed for pixel-by-pixel recognition with subsequent binarization, image clustering, and cluster counting, and for image recognition using a convolutional neural network with the Mask R-CNN architecture. The network is first trained to recognize animals as a class on images from the MS COCO dataset and is then trained on an array of aerial photographs of reindeer herds. Analysis of the results shows that feature-based methods with pixel-by-pixel recognition give good results on relatively simple images (recognition error 10–15%); the presence of artifacts in the image with characteristics close to those of the reindeer leads to a significant increase in the error. The convolutional neural network showed higher accuracy, 82% on the test sample, with no false positives. Practical relevance: A software prototype of the recognition system based on convolutional neural networks with a web interface has been created, and the program has been put into limited operation.
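The feature-based branch (pixel classification, binarization, clustering, and cluster counting) boils down to counting connected components. An OpenCV sketch in which the per-pixel classifier output is a random placeholder:

```python
import cv2
import numpy as np

# Random placeholder for the per-pixel "animal vs. background" output;
# in the paper this map comes from feature-based pixel classification.
prob = np.random.rand(256, 256).astype(np.float32)
binary = (prob > 0.995).astype(np.uint8)        # binarization step

# Morphological opening removes isolated noise pixels before counting.
kernel = np.ones((3, 3), np.uint8)
clean = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(clean)
# Label 0 is the background; filter tiny clusters by area.
count = int(np.sum(stats[1:, cv2.CC_STAT_AREA] >= 4))
print("estimated animal count:", count)
```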
22

Sledevič, Tomyslav, and Artūras Serackis. "mNet2FPGA: A Design Flow for Mapping a Fixed-Point CNN to Zynq SoC FPGA." Electronics 9, no. 11 (November 2, 2020): 1823. http://dx.doi.org/10.3390/electronics9111823.

Abstract:
Convolutional neural networks (CNNs) are a computation- and memory-demanding class of deep neural networks. Field-programmable gate arrays (FPGAs) are often used to accelerate networks deployed on embedded platforms because of the high computational complexity of CNNs. In most cases, CNNs are trained with existing deep learning frameworks and then mapped to FPGAs with specialized toolflows. In this paper, we propose a CNN core architecture called mNet2FPGA that places a trained CNN on a SoC FPGA. The processing system (PS) is responsible for configuring the convolutional and fully connected cores according to a list of prescheduled instructions, while the programmable logic holds the cores of the convolutional and fully connected layers. The hardware architecture is based on advanced extensible interface (AXI) stream processing with simultaneous bidirectional transfers between RAM and the CNN core. The core was tested on a cost-optimized Z-7020 FPGA with 16-bit fixed-point VGG networks. Kernel binarization and merging with the batch normalization layer were applied to reduce the number of DSPs in the multi-channel convolutional core. The convolutional core processes eight input feature maps at once and generates eight output channels of the same size and composition at 50 MHz. The core of the fully connected (FC) layer works at 100 MHz with up to 4096 neurons per layer. In the current version of the CNN core, the convolutional kernel size is fixed at 3×3. The estimated average performance is 8.6 GOPS for VGG13 and nearly 8.4 GOPS for VGG16/19 networks.
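Merging a batch normalization layer into the preceding convolution, which the paper uses to save DSPs, is a closed-form rescaling of the weights and bias. A NumPy sketch under assumed shapes:

```python
import numpy as np

def fold_batchnorm(w, b, gamma, beta, mean, var, eps=1e-5):
    """Fold y = gamma*(conv(x) - mean)/sqrt(var + eps) + beta into the
    convolution; w is (out_ch, in_ch, kh, kw), BN vectors are (out_ch,)."""
    scale = gamma / np.sqrt(var + eps)
    w_folded = w * scale[:, None, None, None]  # rescale each output filter
    b_folded = (b - mean) * scale + beta       # absorb the shift into bias
    return w_folded, b_folded

# Hypothetical 8-channel, 3x3 convolution, matching the described core.
rng = np.random.default_rng(3)
w, b = rng.normal(size=(8, 8, 3, 3)), np.zeros(8)
gamma, beta = np.ones(8), np.zeros(8)
mean, var = rng.normal(size=8), np.abs(rng.normal(size=8)) + 0.1
wf, bf = fold_batchnorm(w, b, gamma, beta, mean, var)
```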
23

Mukherjee, Jayati, Swapan K. Parui, and Utpal Roy. "An Unsupervised and Robust Line and Word Segmentation Method for Handwritten and Degraded Printed Document." ACM Transactions on Asian and Low-Resource Language Information Processing 21, no. 2 (March 31, 2022): 1–31. http://dx.doi.org/10.1145/3474118.

Abstract:
Segmentation of text lines and words in an unconstrained handwritten or machine-printed degraded document is a challenging document analysis problem because of the heterogeneity of document structure; often there is uneven skew between lines as well as broken words. The contribution of this article lies in the segmentation of a document page image into lines and words. We propose an unsupervised, robust, and simple statistical method to segment a document image that is either handwritten or machine-printed (degraded or otherwise). In the proposed method, segmentation is treated as a two-class classification problem, and the classification is done by considering the distribution of gap sizes (between lines and between words) in a binary page image. Our method is simple and easy to implement: other than binarization of the input image, no pre-processing is necessary, and no high computational resources are needed. The method is unsupervised in the sense that no annotated document page images are required, so the issue of a training database does not arise; given a document page image, the parameters needed for segmenting text lines and words are learned in an unsupervised manner. We have applied the proposed method to several popular, publicly available handwritten and machine-printed datasets (ISIDDI, IAM-Hist, IAM, PBOK) in different Indian and other languages and fonts. Several experimental results are presented to show the effectiveness and robustness of the method; on the ICDAR 2013 handwriting segmentation contest dataset, our method outperforms the winning method. In addition, we suggest a quantitative measure of the level of degradation of a document page image.
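The two-class treatment of gap sizes can be reproduced in miniature: extract the run lengths of the gaps from a binary projection profile and split their distribution with an automatically chosen threshold. Otsu's method on the 1-D sample below is a stand-in for the paper's statistical model.

```python
import numpy as np
from skimage.filters import threshold_otsu

def gap_lengths(profile):
    """Run lengths of the empty gaps in a 1-D binary profile (True = ink)."""
    padded = np.concatenate(([True], profile, [True])).astype(np.int8)
    d = np.diff(padded)
    return np.flatnonzero(d == 1) - np.flatnonzero(d == -1)

# Toy projection profile: short gaps within words, long gaps between them.
profile = np.array([1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1] * 4, dtype=bool)
gaps = gap_lengths(profile)
t = threshold_otsu(gaps.astype(float))   # stand-in for the paper's model
print("inter-word gaps:", gaps[gaps > t], "threshold:", t)
```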
24

Wang, Yafei, Xiaoxue Du, Guoxin Ma, Yong Liu, Bin Wang, and Hanping Mao. "Classification Methods for Airborne Disease Spores from Greenhouse Crops Based on Multifeature Fusion." Applied Sciences 10, no. 21 (November 5, 2020): 7850. http://dx.doi.org/10.3390/app10217850.

Abstract:
Airborne fungal spores have always played an important role in the spread of fungal crop diseases, causing great concern. The traditional microscopic spore classification method relies mainly on naked-eye observation and classification by professional technical personnel in a laboratory. Because of the large number of spores captured, this method is labor-intensive, time-consuming, and inefficient, and sometimes leads to large errors, so an alternative is required. In this study, a method is proposed to identify airborne disease spores from greenhouse crops using digital image processing. First, in an indoor simulation, images of airborne disease spores from three greenhouse crops were collected using portable volumetric spore traps. A series of image preprocessing methods was then used to identify the spores, including mean filtering, Gaussian filtering, Otsu (maximum between-class variance) binarization, morphological operations, and mask operations. After preprocessing, 90 spore features were extracted, covering color, shape, and texture. Based on these features, logistic regression (LR), K-nearest neighbor (KNN), random forest (RF), and support vector machine (SVM) classification models were built. The test results showed that the average accuracy rates for the 3 classes of disease spores using the SVM, LR, KNN, and RF models were 94.36%, 90.13%, 89.37%, and 89.23%, respectively. The harmonic mean of accuracy and recall (F value) was highest for the SVM model, whose overall average reached 91.68%, which was 2.03, 3.59, and 3.96 percentage points higher than the LR, KNN, and RF models, respectively. This method can therefore effectively identify the 3 classes of disease spores, and this study can serve as a reference for the identification of greenhouse disease spores.
25

Maruschak, Pavlo, Roman Vorobel, Oleksandra Student, Iryna Ivasenko, Halyna Krechkovska, Olena Berehulyak, Teodor Mandziy, Lesia Svirska, and Olegas Prentkovskis. "Estimation of Fatigue Crack Growth Rate in Heat-Resistant Steel by Processing of Digital Images of Fracture Surfaces." Metals 11, no. 11 (November 4, 2021): 1776. http://dx.doi.org/10.3390/met11111776.

Abstract:
The micro- and macroscopic fatigue crack growth (FCG) rates of a wide class of structural materials were analyzed, and it was concluded that the two rates coincide either during high-temperature tests or at high stress intensity factor (SIF) values. Their coincidence requires a high level of cyclic deformation of the metal along the entire crack front as a necessary condition for the formation of fatigue striations (FS). Based on the analysis of digital fractographic images of fatigue fracture surfaces, a method for quantitatively assessing the spacing of FS has been developed. The method includes detection of FS by binarizing the image based on the principle of local minima, rotation of the highlighted image fragments using the Hough transform, and calculation of the distances between continuous lines. The method was tested on 34KhN3M steel in the initial state and after long-term operation (~3 × 10⁵ h) in the rotor disk of a steam turbine at a thermal power plant (TPP). Good agreement was confirmed between the FCG rates (both macro- and microscopic, determined manually or using digital imaging techniques) at high SIF ranges, with a noticeable discrepancy at low SIF ranges; possible reasons for this discrepancy are analyzed. It has also been noted that FS are easier to detect on the fracture surface of degraded steel: hydrogen embrittlement of the steel during operation promotes secondary cracking along the FS, making them easier to detect and quantify. It is shown that the invariable value of the microscopic FCG rate at a low SIF range in the operated steel is lower than that observed for the steel in the initial state. Secondary cracking of the operated steel may have contributed to the formation of a typical FS pattern along the entire crack front at a lower FCG rate than in unoperated steel.
26

Mawaddah, Saniyatul, and Nanik Suciati. "Pengenalan Karakter Tulisan Tangan Menggunakan Ekstraksi Fitur Bentuk Berbasis Chain Code." Jurnal Teknologi Informasi dan Ilmu Komputer 7, no. 4 (August 7, 2020): 683. http://dx.doi.org/10.25126/jtiik.2020742022.

Abstract:
<p class="Abstrak">Pengenalan karakter tulisan tangan pada citra merupakan suatu permasalahan yang sulit untuk dipecahkan, dikarenakan terdapat perbedaan gaya penulisan pada setiap orang. Tahapan proses dalam pengenalan tulisan tangan diantaranya adalah <em>preprocessing</em>, ekstraksi fitur, dan klasifikasi. <em>Preprocessing</em> dilakukan untuk merubah citra tulisan tangan menjadi citra biner yang hanya mempunyai ketebalan 1 pixel melalui proses binerisasi dan <em>thining</em>. Kemudian pada tahap ekstraksi fitur, dipilih fitur bentuk karena fitur bentuk memiliki peran yang lebih penting dibanding 2 fitur visual lainnya (warna dan tekstur) pada pengenalan karakter tulisan tangan. Metode ekstraksi fitur bentuk yang dipilih dalam penelitian ini adalah metode berbasis <em>chain code</em> karena metode tersebut sering digunakan dalam beberapa penelitian pengenalan tulisan tangan. Pada penelitian ini, dilakukan studi kinerja dari ekstraksi fitur berbasis <em>chain code</em> pada pengenalan karakter tulisan tangan untuk mengetahui metode terbaiknya. Tiga metode ekstraksi fitur berbasis <em>chain code</em> yang digunakan dalam penelitian ini adalah <em>freeman chain code</em>, <em>differential chain code</em> dan <em>vertex chain code</em>. Setiap citra karakter diekstrak menggunakan 3 metode tersebut dengan tiga cara yaitu ekstraksi secara global, lokal 3x3, 5x5, dan 7x7. Setelah esktraksi fitur, dilakukan proses klasifikasi menggunakan support vector machine (SVM). Hasil eksperimen menunjukkan akurasi terbaik adalah pada model citra 7x7 dengan nilai akurasi <em>freeman chain code</em> sebesar 99.75%, <em>differential chain code</em> sebesar 99.75%, dan <em>vertex chain code</em> sebesar 98.6%.</p><p class="Abstrak"><em><strong>Abstract</strong></em></p><p class="Abstract"><em>The recognition of handwriting characters images is a difficult problems to be solved, because everyone has a different writing style. The step of handwriting recognition process are preprocessing, feature extraction, and classification. Preprocessing is done to convert handwritten images into binary images that only have 1 pixel thickness by using binarization and thinning. Then, in the feature extraction we select shape feature because it is more important than two other visual features (color and texture) in handwriting character recognition. Shape feature extraction method chosen in this research is chain code method because this method is often used in several studies for handwriting recognition. In this study, a performance study of feature extraction based on chain codes was carried out on handwriting character recognition to know the best chain code method. The three shape feature extraction based on chain code used in this study are freeman, differential and vertex chain codes. Each character image is extracted using these 3 methods in three ways: extraction globally, local 3x3, 5x5, and 7x7. After the extraction feature, the classification process is carried out using the support vector machine (SVM). The experimental results show that the best accuracy is in the 7x7 image model with the value of freeman chain code accuracy of 99.75%, the differential chain code of 99.75%, and the vertex chain code of 98.6%.</em></p><p class="Abstrak"><em><strong><br /></strong></em></p>
27

Schadler, Linda S., Wei Chen, L. Catherine Brinson, Ravishankar Sundararaman, Prajakta Prabhune, and Akshay Iyer. "(Invited) Combining Machine Learning, DFT, EFM, and Modeling to Design Nanodielectric Behavior." ECS Meeting Abstracts MA2022-01, no. 19 (July 7, 2022): 1068. http://dx.doi.org/10.1149/ma2022-01191068mtgabs.

Abstract:
Polymer nanodielectrics are a class of materials with intriguing combinations of properties. Predicting and designing those properties, however, is complex because of the number of parameters that control them, which makes it difficult to compare results across groups, validate models, and develop a design methodology. This presentation shares a recent approach to developing a data-driven design methodology grounded in physics-based models and experimental calibration. We combine finite element modeling of the dielectric constant and loss functions with a Monte Carlo multi-scale simulation of carrier hopping to predict breakdown strength. Filler dispersion, filler geometry, isotropy, and interface properties are explicitly taken into account to compute objective functions for ideal nanodielectric insulators. Further, we have used machine learning to develop an EFM technique for directly measuring the dielectric constant of the nanoscale interfacial region as critical input to our models. Ultimately, we calculate the Pareto frontier with respect to nanocomposite constituent properties and geometry to optimize properties. The finite element approach can be used to predict properties from the properties of the interfacial region and filler dispersion, or as an inverse tool to calculate interface properties or develop optimized filler morphologies. The breakdown strength model depends critically on the energy distribution of trap states that inhibit space charge motion. We have developed an ab initio approach to determine the trap states at amorphous interfaces and used it for a systematic analysis of trap distributions in composites with functionalized particle interfaces. The models all use 3D particle distributions based on 2D imaging (TEM) of the composites and publicly available tools for binarization and characterization of filler morphology. Finally, we use a design-of-experiments approach (Latin Hypercube Design) to sample the complete design space and, using calibrated models and morphologies, create a dataset spanning the full set of parameters. We use these data to train a Gaussian process metamodel and a genetic algorithm to determine which designs fulfill the design parameters. The talk presents several case studies as examples. The data for this work were accessed from a new data resource, MaterialsMine, using FAIR (Findable, Accessible, Interoperable, and Reusable) principles. MaterialsMine is an open-source data repository and resource that includes unique visualization tools as well as modeling and characterization resources.
28

Knyazev, S. V., D. V. Skopich, E. A. Fat’yanova, A. A. Usol’tsev, and A. I. Kutsenko. "SOFTWARE AND HARDWARE AUTOMATED SYSTEM OF CASTS DEFECTS NON-DESTRUCTIVE MONITORING." Izvestiya. Ferrous Metallurgy 62, no. 2 (March 30, 2019): 134–40. http://dx.doi.org/10.17073/0368-0797-2019-2-134-140.

Abstract:
Introduction of the Automated System for Operational Control of Casts Production (OCCP AS) forms the basis of an integrated automated production control system (APCS). It performs three main tasks: control and recording (production, products, materials, etc.), improving the quality of casts, and operational management of technological processes. These tasks were accomplished by automating real-time data collection for all production operations, recording material flows, creating operational communication channels, and centralizing the collection, processing, and presentation of data on a process information server. The next step in building an effective automated control system is to stabilize product quality under changing external conditions, for example the quality of materials, and to optimize production (changing the technology to reduce costs at constant or higher product quality). The second stage is based on mathematical processing and analysis of data coming from the OCCP AS; it allows the optimal ranges of technological process parameters to be determined by an "Automated system for optimization and analysis of production progress" (OAPP AS). The OAPP AS consists of two subsystems: quality analysis and technology management. The first solves the problem of data analysis and modeling; the second calculates optimal process parameters and performs prediction in real time. The two stages' tasks compete for access to different hardware resources: the most critical parameter for the OCCP AS is the performance of server disk arrays, while for the OAPP AS it is processor performance. In either case, system scaling is effectively solved by parallelizing operations across different servers, forming a cluster, and across different processors (cores) on the same server. To process defect images and obtain cause-and-effect characteristics, the OpenCV package, an open-source computer vision library, can be used. In the course of processing, the Sobel operator, a Gaussian filter, and binarization were used; these are based on processing pixels with matrices, and the per-pixel operations are independent and can be performed in parallel. The clustering task reduces to determining, by expert judgment or by various mathematical algorithms, the cluster (data block) to which a defect belongs, given a set of values of dependent factors. Thus, data blocks are formed by the criterion of the defect cause. Calculating the data block to which a product defect belongs can be a very resource-intensive operation. To increase the efficiency of image recognition and to parallelize search operations, it makes sense to place data clusters on different servers. As a result, a distributed database is needed: a special class of DBMS that requires appropriate software. Building the OAPP AS on a multi-node cluster with the Apache Cassandra DBMS installed and with Nvidia video cards supporting CUDA technology on each node is the cheapest and most effective solution; the video card is selected based on the required number of graphics processors on the node.
29

Lan, Gongjin, Zhenyu Gao, Lingyao Tong, and Ting Liu. "Class binarization to neuroevolution for multiclass classification." Neural Computing and Applications, July 9, 2022. http://dx.doi.org/10.1007/s00521-022-07525-6.

Abstract:
Multiclass classification is a fundamental and challenging task in machine learning. Existing techniques for multiclass classification can be categorized as (1) decomposition into binary, (2) extension from binary, and (3) hierarchical classification. Decomposing a multiclass classification into a set of binary classifications that can be efficiently solved by binary classifiers is called class binarization and is a popular technique for multiclass classification. Neuroevolution, a general and powerful technique for evolving the structure and weights of neural networks, has been successfully applied to binary classification. In this paper, we apply class binarization techniques to a neuroevolution algorithm, NeuroEvolution of Augmenting Topologies (NEAT), to generate neural networks for multiclass classification. We propose a new method that applies Error-Correcting Output Codes (ECOC) to design class binarization strategies for neuroevolution-based multiclass classification. The ECOC strategies are compared with the One-vs-One and One-vs-All class binarization strategies on three well-known datasets: Digit, Satellite, and Ecoli. We analyse their performance from four aspects: multiclass classification degradation, accuracy, evolutionary efficiency, and robustness. The results show that NEAT with ECOC achieves high accuracy with low variance. Specifically, it offers significant benefits through a flexible number of binary classifiers and strong robustness.
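Scikit-learn's output-code classifier gives the flavor of ECOC class binarization: each class receives a codeword, one binary learner is trained per codeword bit, and prediction picks the class with the nearest codeword. The base learner here is a stand-in, not a NEAT-evolved network.

```python
import numpy as np
from sklearn.multiclass import OutputCodeClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 10))
y = rng.integers(0, 6, size=300)      # e.g., six classes, as in Satellite

# code_size=1.5 gives ~9 binary problems for 6 classes; each column of
# the code matrix defines one binary relabeling of the multiclass task.
ecoc = OutputCodeClassifier(LogisticRegression(max_iter=500),
                            code_size=1.5, random_state=0).fit(X, y)
print(ecoc.predict(X[:5]))
```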
APA, Harvard, Vancouver, ISO, and other styles
30

Mehta, Nihaal, Phillip X. Braun, Isaac Gendelman, A. Yasin Alibhai, Malvika Arya, Jay S. Duker, and Nadia K. Waheed. "Repeatability of binarization thresholding methods for optical coherence tomography angiography image quantification." Scientific Reports 10, no. 1 (September 21, 2020). http://dx.doi.org/10.1038/s41598-020-72358-z.

Full text
Abstract:
Binarization is a critical step in analysis of retinal optical coherence tomography angiography (OCTA) images, but the repeatability of metrics produced from various binarization methods has not been fully assessed. This study set out to examine the repeatability of OCTA quantification metrics produced using different binarization thresholding methods, all of which have been applied in previous studies, across multiple devices and plexuses. Successive 3 × 3 mm foveal OCTA images of 13 healthy eyes were obtained on three different devices. For each image, contrast adjustments, 3 image processing techniques (linear registration, histogram normalization, and contrast-limited adaptive histogram equalization), and 11 binarization thresholding methods were independently applied. Vessel area density (VAD) and vessel length were calculated for retinal vascular images. Choriocapillaris (CC) images were quantified for VAD and flow deficit metrics. Repeatability, measured using the intra-class correlation coefficient, was inconsistent and generally not high (ICC < 0.8) across binarization thresholds, devices, and plexuses. In retinal vascular images, local thresholds tended to incorrectly binarize the foveal avascular zone as white (i.e., wrongly indicating flow). No image processing technique analyzed consistently resulted in highly repeatable metrics. Across contrast changes, retinal vascular images showed the lowest repeatability and CC images showed the highest.
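Vessel area density in a binarized OCTA image is simply the fraction of pixels classified as vessel, so its value depends directly on the thresholding method chosen. A minimal sketch in Python comparing two global thresholds from scikit-image; the synthetic image and the two particular threshold functions are illustrative assumptions (the study compared eleven methods, including local ones):

import numpy as np
from skimage.filters import threshold_otsu, threshold_mean

def vessel_area_density(image, threshold):
    """Fraction of pixels classified as vessel after binarization."""
    return (image > threshold).mean()

# Illustrative synthetic OCTA-like image; real studies use device exports.
rng = np.random.default_rng(0)
octa = rng.random((304, 304))

for name, fn in [("Otsu", threshold_otsu), ("Mean", threshold_mean)]:
    print(f"{name}: VAD = {vessel_area_density(octa, fn(octa)):.3f}")

Running both thresholds on the same image makes the study's point concrete: the metric is not a property of the image alone but of the image-threshold pair, which is why repeatability must be assessed per method.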
APA, Harvard, Vancouver, ISO, and other styles
31

"Obscure Image Classification and Restoration using Support Vector Machines." International Journal of Recent Technology and Engineering 8, no. 4 (November 30, 2019): 8231–36. http://dx.doi.org/10.35940/ijrte.d8913.118419.

Full text
Abstract:
A restoration and classification computation for blurred images, which depends on obscure identification and characterization, is proposed in this paper. Initially, a new obscure location calculation is proposed to recognize the Gaussian-, motion-, and defocus-based blurred locales in the image. The degradation-restoration model is applied with pre-processing, followed by binarization and a feature extraction/classification algorithm, on obscure images. Then, a support vector machine (SVM) classification algorithm is proposed to cluster the blurred images. Once the obscure class of the locales is affirmed, the structure of the obscure kernels of the blurred images is affirmed. The obscure kernel estimation techniques are then embraced to appraise the obscure kernels. At last, the blurred locales are restored utilizing a non-blind image deblurring calculation, and the blurred images are replaced with the restored images. The simulation results demonstrate that the proposed calculation performs well.
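The pipeline described here classifies the blur type of a region before choosing a kernel estimation and deblurring strategy. A minimal sketch of the SVM classification stage in Python, assuming pre-computed feature vectors per region and labels for three blur classes; the feature extraction itself is omitted, and the data below are synthetic placeholders:

import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic placeholder features: rows are image regions, columns are
# hand-crafted blur descriptors; labels 0/1/2 = Gaussian/motion/defocus.
rng = np.random.default_rng(42)
X = rng.random((300, 8))
y = rng.integers(0, 3, size=300)

# RBF-kernel SVM with feature scaling, a common choice for such descriptors.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X, y)

# Predict the blur class of a new region to select a deblurring strategy.
print("predicted blur class:", model.predict(rng.random((1, 8)))[0])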
APA, Harvard, Vancouver, ISO, and other styles
32

Shamsara, Elham, Sara Saffar Soflaei, Mohammad Tajfard, Ivan Yamshchikov, Habibollah Esmaili, Maryam Saberi-Karimian, Hamideh Ghazizadeh, et al. "Artificial neural network models for coronary artery disease." Current Bioinformatics 15 (February 14, 2020). http://dx.doi.org/10.2174/1574893615666200214102837.

Full text
Abstract:
Background: Coronary artery disease (CAD) is an important cause of mortality and morbidity globally. Objective: The early prediction of CAD would be valuable in identifying individuals at risk and in focusing resources on its prevention. In this paper, we aimed to establish a diagnostic model to predict CAD by using three ANN approaches (pattern recognition-ANN, LVQ-ANN, and competitive ANN). Methods: One promising method for the early prediction of disease based on risk factors is machine learning. Among different machine learning algorithms, artificial neural network (ANN) algorithms have been applied widely in medicine and in a variety of real-world classifications. An ANN is a non-linear computational model, inspired by the human brain, for analyzing and processing complex datasets. Results: The ANN methods investigated in this paper indicate that in both the pattern recognition-ANN and LVQ-ANN methods, the predictions of the Angiography+ class have high accuracy. Moreover, in the competitive ANN, the correlation between the individuals in cluster "c" and the Angiography+ class is very high. This accuracy indicates a significant difference between some of the input features in the Angiography+ class and the other two output classes. A comparison of the chosen weights in these three methods in separating the control class and Angiography+ shows that hs-CRP, FSG, and WBC are the most substantial excitatory weights in recognizing Angiography+ individuals, while HDL-C and MCH are determined to be inhibitory weights. Furthermore, the effect of decomposing the multi-class problem into a set of binary classes, together with random sampling, on the accuracy of the diagnostic model is investigated. Conclusion: This study confirms that the pattern recognition-ANN had the highest accuracy among the ANN methods, owing to its back-propagation training, in which the network classifies input variables based on labeled classes. The results of binarization show that decomposing the multi-class set into binary sets can achieve higher accuracy.
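The conclusion that decomposing the multi-class problem into binary sets can raise accuracy is easy to illustrate with a one-vs-rest decomposition. A minimal sketch in Python using scikit-learn; the small MLP stands in for the paper's pattern recognition ANN, and the dataset and parameters are illustrative assumptions:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Decompose the 3-class problem into three binary one-vs-rest problems,
# each solved by its own small feed-forward network.
binary_ensemble = OneVsRestClassifier(
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
)
binary_ensemble.fit(X_tr, y_tr)
print("one-vs-rest accuracy:", binary_ensemble.score(X_te, y_te))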
APA, Harvard, Vancouver, ISO, and other styles
33

Fang, Jiansheng, Yanwu Xu, Yitian Zhao, Yuguang Yan, Junling Liu, and Jiang Liu. "Weighing features of lung and heart regions for thoracic disease classification." BMC Medical Imaging 21, no. 1 (June 10, 2021). http://dx.doi.org/10.1186/s12880-021-00627-y.

Full text
Abstract:
Background: Chest X-rays are the most commonly available and affordable radiological examination for screening thoracic diseases. According to the domain knowledge of screening chest X-rays, the pathological information usually lies in the lung and heart regions. However, it is costly to acquire region-level annotation in practice, and model training mainly relies on image-level class labels in a weakly supervised manner, which is highly challenging for computer-aided chest X-ray screening. To address this issue, some methods have been proposed recently to identify local regions containing pathological information, which is vital for thoracic disease classification. Inspired by this, we propose a novel deep learning framework to explore discriminative information from the lung and heart regions. Result: We design a feature extractor equipped with a multi-scale attention module to learn global attention maps from global images. To exploit disease-specific cues effectively, we locate the lung and heart regions containing pathological information with a well-trained pixel-wise segmentation model that generates binarization masks. By applying an element-wise logical AND operator to the learned global attention maps and the binarization masks, we obtain local attention maps in which pixels are 1 for the lung and heart regions and 0 for other regions. By zeroing the features of regions outside the lung and heart in the attention maps, we can effectively exploit the disease-specific cues of the lung and heart regions. Compared to existing methods that fuse global and local features, we adopt feature weighting to avoid weakening visual cues unique to the lung and heart regions. Our method with pixel-wise segmentation helps overcome the deviation of locating local regions. Evaluated on the benchmark split of the publicly available ChestX-ray14 dataset, comprehensive experiments show that our method achieves superior performance compared to the state-of-the-art methods. Conclusion: We propose a novel deep framework for the multi-label classification of thoracic diseases in chest X-ray images. The proposed network aims to effectively exploit the pathological regions containing the main cues for chest X-ray screening. Our proposed network has been used in clinical screening to assist radiologists. Chest X-rays account for a significant proportion of radiological examinations, and it is valuable to explore more methods for improving performance.
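The local attention maps described here are the element-wise product (logical AND) of a global attention map and a binarization mask from the segmentation model, which zeroes features outside the lung and heart. A minimal sketch in Python with NumPy; the map size and the synthetic mask are illustrative assumptions standing in for the paper's learned attention maps and segmentation outputs:

import numpy as np

def local_attention(global_attention, seg_mask):
    """Element-wise AND of a global attention map with a binary
    lung/heart segmentation mask: pixels outside the mask become 0."""
    return global_attention * (seg_mask > 0)

# Illustrative shapes: one 7x7 attention map and a binary mask
# produced by a pixel-wise segmentation model.
rng = np.random.default_rng(1)
attn = rng.random((7, 7))
mask = (rng.random((7, 7)) > 0.5).astype(np.uint8)

local = local_attention(attn, mask)
print("non-zero attention cells:", np.count_nonzero(local))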
APA, Harvard, Vancouver, ISO, and other styles
