To see the other types of publications on this topic, follow the link: Classification/segmentation.

Journal articles on the topic 'Classification/segmentation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Classification/segmentation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Levner, Ilya, and Hong Zhang. "Classification-Driven Watershed Segmentation." IEEE Transactions on Image Processing 16, no. 5 (May 2007): 1437–45. http://dx.doi.org/10.1109/tip.2007.894239.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Pavlidis, Theo, and Jiangying Zhou. "Page segmentation and classification." CVGIP: Graphical Models and Image Processing 54, no. 6 (November 1992): 484–96. http://dx.doi.org/10.1016/1049-9652(92)90068-9.

3

Khanykov, I. G. "Classification of image segmentation algorithms." Izvestiâ vysših učebnyh zavedenij. Priborostroenie 61, no. 11 (November 30, 2018): 978–87. http://dx.doi.org/10.17586/0021-3454-2018-61-11-978-987.

4

Wang, Ying, Jie Su, Qiuyu Xu, and Yixin Zhong. "A Collaborative Learning Model for Skin Lesion Segmentation and Classification." Diagnostics 13, no. 5 (February 28, 2023): 912. http://dx.doi.org/10.3390/diagnostics13050912.

Abstract:
The automatic segmentation and classification of skin lesions are two essential tasks in computer-aided skin cancer diagnosis. Segmentation aims to detect the location and boundary of the skin lesion area, while classification is used to evaluate the type of skin lesion. The location and contour information of lesions provided by segmentation is essential for the classification of skin lesions, while skin disease classification helps generate target localization maps to assist the segmentation task. Although segmentation and classification are studied independently in most cases, we find that meaningful information can be extracted by exploiting the correlation between dermatological segmentation and classification tasks, especially when the sample data are insufficient. In this paper, we propose a collaborative learning deep convolutional neural network (CL-DCNN) model based on the teacher–student learning method for dermatological segmentation and classification. To generate high-quality pseudo-labels, we provide a self-training method: the segmentation network is selectively retrained on pseudo-labels screened by the classification network, using a reliability measure. We also employ class activation maps to improve the localization ability of the segmentation network, and we provide lesion contour information from the segmentation masks to improve the recognition ability of the classification network. Experiments are carried out on the ISIC 2017 and ISIC Archive datasets. The CL-DCNN model achieved a Jaccard index of 79.1% on the skin lesion segmentation task and an average AUC of 93.7% on the skin disease classification task, which is superior to advanced skin lesion segmentation and classification methods.
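The pseudo-label screening step described in this abstract can be sketched as follows; the threshold, arrays, and function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def screen_pseudo_labels(masks, class_probs, threshold=0.9):
    """Keep only pseudo-label masks whose classification-network
    confidence exceeds a reliability threshold (an illustrative
    stand-in for the paper's reliability measure)."""
    keep = class_probs >= threshold
    return [m for m, k in zip(masks, keep) if k]

# Hypothetical unlabeled batch: 4 candidate segmentation masks, with
# the classifier's confidence in each pseudo-label.
masks = [np.zeros((8, 8), dtype=np.uint8) for _ in range(4)]
probs = np.array([0.95, 0.42, 0.91, 0.88])

selected = screen_pseudo_labels(masks, probs)
print(len(selected))  # 2 masks pass the 0.9 threshold
```

The surviving masks would then be fed back to retrain the segmentation network, closing the self-training loop.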
5

Sekhar, Ch, A. Sharmila, Ch Narayana, A. Rutwick, B. Srinu, D. Pramod Kumar, and B. Snehith. "Osteoporosis Diagnosis through Visual Segmentation and Classification: Extensive Review." International Journal of Research Publication and Reviews 5, no. 3 (March 9, 2024): 3748–53. http://dx.doi.org/10.55248/gengpi.5.0324.0771.

6

Park, Hyun-Cheol, Raman Ghimire, Sahadev Poudel, and Sang-Woong Lee. "Deep Learning for Joint Classification and Segmentation of Histopathology Image." Journal of Internet Technology 23, no. 4 (July 2022): 903–10. http://dx.doi.org/10.53106/160792642022072304025.

Abstract:
Liver cancer is one of the most prevalent causes of cancer death worldwide; thus, early detection and diagnosis of possible liver cancer help reduce cancer deaths. Histopathological Image Analysis (HIA) is traditionally carried out manually, which is time-consuming and requires expert knowledge. We propose a patch-based deep learning method for liver cell classification and segmentation. In this work, a two-step approach for the classification and segmentation of whole-slide images (WSIs) is proposed. Since WSIs are too large to be fed into convolutional neural networks (CNNs) directly, we first extract patches from them. The patches are fed into a modified version of U-Net with their equivalent masks for precise segmentation. In the classification task, the WSIs are scaled 4 times, 16 times, and 64 times, respectively. Patches extracted from each scale are then fed into the convolutional network with their corresponding labels. During inference, we perform majority voting on the results obtained from the convolutional network. The proposed method has demonstrated better results in both classification and segmentation of liver cancer cells.
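The patch-extraction and majority-voting scheme described above can be sketched like this; the patch size and the per-patch classifier are hypothetical stand-ins for the trained CNN.

```python
import numpy as np
from collections import Counter

def extract_patches(wsi, size):
    """Tile a (H, W) image into non-overlapping size x size patches."""
    h, w = wsi.shape
    return [wsi[i:i + size, j:j + size]
            for i in range(0, h - size + 1, size)
            for j in range(0, w - size + 1, size)]

def classify_patch(patch):
    """Placeholder per-patch classifier: thresholds the mean intensity
    (a trained CNN would go here)."""
    return int(patch.mean() > 0.5)

def classify_wsi(wsi, size=32):
    """Whole-slide label by majority vote over per-patch predictions."""
    votes = [classify_patch(p) for p in extract_patches(wsi, size)]
    return Counter(votes).most_common(1)[0][0]

wsi = np.full((128, 128), 0.8)   # stand-in for a (downscaled) whole-slide image
print(classify_wsi(wsi))         # 1: every patch votes for class 1
```

In the paper's multi-scale setting, the same vote would be taken over patches drawn from each rescaled copy of the WSI.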
7

Pandeya, Yagya Raj, Bhuwan Bhattarai, and Joonwhoan Lee. "Tracking the Rhythm: Pansori Rhythm Segmentation and Classification Methods and Datasets." Applied Sciences 12, no. 19 (September 23, 2022): 9571. http://dx.doi.org/10.3390/app12199571.

Abstract:
This paper presents two methods to understand the rhythmic patterns of the voice in Korean traditional music called Pansori. We used semantic segmentation and classification-based structural analysis methods to segment the seven rhythmic categories of Pansori. We propose two datasets: one for rhythm classification and one for segmentation. Two classification and two segmentation neural networks are trained and tested in an end-to-end manner. The standard HR network and the DeepLabV3+ network are used for rhythm segmentation, while a modified HR network and a novel GlocalMuseNet are used for rhythm classification; the GlocalMuseNet outperforms the HR network for Pansori rhythm classification. A novel segmentation model (a modified HR network) is also proposed for Pansori rhythm segmentation, and the results show that the DeepLabV3+ network is superior to the HR network. The classifier networks are used for time-varying rhythm classification, which behaves as segmentation by applying overlapping window frames to a spectral representation of the audio. Semantic segmentation using the DeepLabV3+ and HR networks shows better results than the classification-based structural analysis methods used in this work; however, its annotation process is relatively time-consuming and costly.
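Turning frame-wise rhythm predictions into segments, as the classification-based structural analysis above does with overlapping windows, amounts to run-length grouping; the per-frame labels below are invented.

```python
def frames_to_segments(frame_labels):
    """Group consecutive identical frame-wise class predictions into
    (label, start, end) segments, with end exclusive."""
    segments = []
    start = 0
    for i in range(1, len(frame_labels) + 1):
        if i == len(frame_labels) or frame_labels[i] != frame_labels[start]:
            segments.append((frame_labels[start], start, i))
            start = i
    return segments

# Hypothetical per-frame rhythm classes from overlapping spectrogram windows.
preds = ["jinyangjo"] * 3 + ["jungmori"] * 2 + ["jinyangjo"] * 1
print(frames_to_segments(preds))
# [('jinyangjo', 0, 3), ('jungmori', 3, 5), ('jinyangjo', 5, 6)]
```

A semantic segmentation network predicts such boundaries directly instead of deriving them from a classifier's sliding-window output.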
8

Vohra, Sumit K., and Dimiter Prodanov. "The Active Segmentation Platform for Microscopic Image Classification and Segmentation." Brain Sciences 11, no. 12 (December 14, 2021): 1645. http://dx.doi.org/10.3390/brainsci11121645.

Abstract:
Image segmentation still represents an active area of research, since no universal solution can be identified. Traditional image segmentation algorithms are problem-specific and limited in scope. On the other hand, machine learning offers an alternative paradigm where predefined features are combined into different classifiers, providing pixel-level classification and segmentation. However, machine learning alone cannot address the question of which features are appropriate for a given classification problem. The article presents an automated image segmentation and classification platform, called Active Segmentation, which is based on ImageJ. The platform integrates expert domain knowledge, providing partial ground truth, with geometrical feature extraction based on multi-scale signal processing combined with machine learning. The approach to image segmentation is exemplified on the ISBI 2012 image segmentation challenge data set. As a second application, we demonstrate whole-image classification functionality based on the same principles, exemplified using the HeLa and HEp-2 data sets. The obtained results indicate that feature-space enrichment, properly balanced with feature selection functionality, can achieve performance comparable to deep learning approaches. In summary, differential geometry can substantially improve the outcome of machine learning, since it can enrich the underlying feature space with new geometrical invariant objects.
9

Abbas, Khamael, and Mustafa Rydh. "Satellite Image Classification and Segmentation by Using JSEG Segmentation Algorithm." International Journal of Image, Graphics and Signal Processing 4, no. 10 (September 17, 2012): 48–53. http://dx.doi.org/10.5815/ijigsp.2012.10.07.

10

Mittal, Praveen, and Charul Bhatnagar. "Detection of DME by Classification and Segmentation Using OCT Images." Webology 19, no. 1 (January 20, 2022): 601–12. http://dx.doi.org/10.14704/web/v19i1/web19043.

Abstract:
Optical Coherence Tomography (OCT) is a developing medical scanning technique offering non-protruding scanning with high resolution for biological tissues. It is extensively employed in ophthalmology to accomplish investigative scanning of the eye, especially the retinal layers. Various medical research works have evaluated the use of Optical Coherence Tomography to detect diseases such as diabetic macular edema (DME). The current study provides an innovative, completely automated algorithm for the detection of diseases such as DME through OCT scanning. We performed classification and segmentation for the detection of DME. The algorithm employed HOG descriptors as feature vectors for an SVM-based classifier. Cross-validation was performed on SD-OCT data sets comprising volumetric images obtained from 20 people, of whom 10 were normal and 10 were patients with DME. Our classifier correctly detected 100% of DME cases and about 70% of the cases of healthy individuals. The development of such a technique is extremely important for detecting retinal diseases such as DME.
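As a sketch of the kind of descriptor involved, a gradient-orientation histogram (a simplified, global stand-in for block-normalized HOG) can be computed with NumPy; the synthetic "scans" below are invented, and a real pipeline would feed such descriptors to an SVM.

```python
import numpy as np

def grad_orientation_hist(img, bins=9):
    """Simplified HOG-style descriptor: one global histogram of gradient
    orientations weighted by gradient magnitude (real HOG adds cells
    and block normalization)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)   # unsigned orientations in [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-12)        # normalize to unit mass

# Invented stand-ins for B-scans: layered stripes vs. unstructured noise.
stripes = np.tile(np.sin(np.linspace(0, 8 * np.pi, 64))[:, None], (1, 64))
noise = np.random.default_rng(0).random((64, 64))

f_stripes = grad_orientation_hist(stripes)
f_noise = grad_orientation_hist(noise)
print(f_stripes.shape, f_noise.shape)   # (9,) (9,)
```

The layered image concentrates its gradient energy in the horizontal-edge bin, which is exactly the kind of regularity an SVM can separate from pathology-disturbed scans.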
11

Huang, J., L. Xie, W. Wang, X. Li, and R. Guo. "A MULTI-SCALE POINT CLOUDS SEGMENTATION METHOD FOR URBAN SCENE CLASSIFICATION USING REGION GROWING BASED ON MULTI-RESOLUTION SUPERVOXELS WITH ROBUST NEIGHBORHOOD." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B5-2022 (June 2, 2022): 79–86. http://dx.doi.org/10.5194/isprs-archives-xliii-b5-2022-79-2022.

Abstract:
Point cloud classification is the basis for 3D spatial information extraction and applications. Point-clusters-based methods have proved to be more efficient and accurate than point-based methods; however, the precision of the classification is significantly affected by segmentation errors. Traditional single-scale point cloud segmentation methods cannot segment complex objects in urban scenes well, which results in inaccurate classification. In this paper, a new multi-scale point cloud segmentation method for urban scene point cloud classification is proposed. The proposed method consists of two stages. In the first stage, to ease the segmentation errors caused by density anisotropy and unreasonable neighborhoods, a multi-resolution supervoxel segmentation algorithm is proposed to segment the objects into small-scale clusters. First, the point cloud is segmented into initial supervoxels based on geometric and quantitative constraints. Second, robust neighboring relationships between supervoxels are obtained based on a kd-tree and an octree, and the resolution of supervoxels in planar and low-density regions is optimized. In the second stage, planar supervoxels are clustered into large-scale planar point clusters with a region-growing algorithm. Finally, a mix of small-scale and large-scale point clusters is obtained for classification. The performance of the segmentation method in classification is compared with other segmentation methods. Experimental results revealed that the proposed segmentation method can significantly improve the efficiency and accuracy of point cloud classification compared with other segmentation methods.
12

Hao, Shuang, Yuhuan Cui, and Jie Wang. "Segmentation Scale Effect Analysis in the Object-Oriented Method of High-Spatial-Resolution Image Classification." Sensors 21, no. 23 (November 28, 2021): 7935. http://dx.doi.org/10.3390/s21237935.

Abstract:
High-spatial-resolution images play an important role in land cover classification, and object-based image analysis (OBIA) presents a good method of processing high-spatial-resolution images. Segmentation, as the most important premise of OBIA, significantly affects the image classification and target recognition results. However, scale selection for image segmentation is difficult and complicated for OBIA. The main challenge in image segmentation is the selection of the optimal segmentation parameters and an algorithm that can effectively extract the image information. This paper presents an approach that can effectively select an optimal segmentation scale based on land objects' average areas. First, 20 different segmentation scales were used for image segmentation. Next, the classification and regression tree model (CART) was used for image classification based on the 20 different segmentation results, where four types of features were calculated and used: image spectral band values, texture values, vegetation indices, and spatial feature indices. WorldView-3 images were used as the experimental data to verify the validity of the proposed method for the selection of the optimal segmentation scale parameter. To determine the effect of the segmentation scale at the object-area level, the average areas of different land objects were estimated based on the classification results. Experiments based on multiple segmentation scales testify to the validity of the land objects' average-area-based method for the selection of optimal segmentation scale parameters. The study results indicated that segmentation scales are strongly correlated with an object's average area, and thus the optimal segmentation scale of every land object can be obtained. In this regard, we conclude that the area-based segmentation scale selection method is suitable for determining optimal segmentation parameters for different land objects. We hope the segmentation scale selection method used in this study can be further extended and used for different image segmentation algorithms.
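The area-based selection idea reduces to choosing, among the trial scales, the one whose mean segment area best matches a land object's average area; the scales and areas below are invented for illustration.

```python
def select_scale(scale_to_areas, target_area):
    """Pick the segmentation scale whose mean segment area is closest
    to the average area of the target land-cover object."""
    return min(
        scale_to_areas,
        key=lambda s: abs(sum(scale_to_areas[s]) / len(scale_to_areas[s]) - target_area),
    )

scale_to_areas = {        # segment areas (m^2) observed at each trial scale
    50: [120, 180, 90],   # mean 130: over-segmented for large objects
    100: [420, 380, 460], # mean 420
    150: [900, 1100, 700],# mean 900: under-segmented for small objects
}
print(select_scale(scale_to_areas, target_area=400))  # 100
```

Running this per land-cover class gives a different optimal scale for each object type, matching the paper's conclusion that scale correlates with average object area.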
13

Hu, Yuan Chun, Jian Sun, and Wei Liu. "Classification-Based Character Segmentation of Image." Applied Mechanics and Materials 519-520 (February 2014): 572–76. http://dx.doi.org/10.4028/www.scientific.net/amm.519-520.572.

Abstract:
Traditionally, the segmentation of character images is conducted with simple image processing techniques, which cannot be operated automatically. In this paper, we present a classification method that finds the boundary area in order to segment a character image. Referring to sample points and sample areas, the essential segmentation information is extracted. By merging different formats of image transformation, including rotation, erosion, and dilation, more features are used to train and test the segmentation model. Parameter tuning is also proposed to optimize the model. By means of cross-validation, the basic training model and parameter tuning are integrated iteratively. The comparison results show that the method reaches up to 97.84% precision and 94.09% recall.
14

Khan, Khalil, Muhammad Attique, Ikram Syed, and Asma Gul. "Automatic Gender Classification through Face Segmentation." Symmetry 11, no. 6 (June 6, 2019): 770. http://dx.doi.org/10.3390/sym11060770.

Abstract:
Automatic gender classification is challenging due to large variations of face images, particularly in unconstrained scenarios. In this paper, we propose a framework which first segments a face image into face parts, and then performs automatic gender classification. We trained a Conditional Random Fields (CRFs) based segmentation model on manually labeled face images. The CRFs-based model is used to segment a face image into six different classes: mouth, hair, eyes, nose, skin, and background. A probabilistic classification strategy (PCS) is used, and probability maps are created for all six classes. We use the probability maps as gender descriptors to train a Random Decision Forest (RDF) classifier, which classifies face images as either male or female. The performance of the proposed framework is assessed on four publicly available datasets, namely Adience, LFW, FERET, and FEI, with results outperforming the state of the art (SOA).
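A rough sketch of the descriptor-plus-RDF stage, assuming synthetic probability maps (the CRF segmentation itself is not reproduced); treating class index 1 as "hair" and the pooling scheme are invented conventions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def descriptor(prob_maps):
    """Pool per-class probability maps of shape (6, H, W) into one mean
    probability per class, giving a fixed-length gender descriptor."""
    return prob_maps.mean(axis=(1, 2))

rng = np.random.default_rng(0)
X, y = [], []
for label in (0, 1):                      # 0 = female, 1 = male (invented coding)
    for _ in range(10):
        maps = rng.random((6, 16, 16))    # synthetic stand-in for CRF output
        maps[1] += 2.0 * label            # exaggerate class-1 response for label 1
        X.append(descriptor(maps))
        y.append(label)

rdf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

probe = np.ones((6, 16, 16))
probe[1] += 2.0                           # strong class-1 response
print(rdf.predict([descriptor(probe)])[0])   # 1
```

In the actual framework, the full probability maps (not just their means) serve as descriptors, so the forest can exploit spatial structure as well.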
15

Correia, P. L., and F. Pereira. "Classification of Video Segmentation Application Scenarios." IEEE Transactions on Circuits and Systems for Video Technology 14, no. 5 (May 2004): 735–41. http://dx.doi.org/10.1109/tcsvt.2004.826778.

16

Cooper, M., T. Liu, and E. Rieffel. "Video Segmentation via Temporal Pattern Classification." IEEE Transactions on Multimedia 9, no. 3 (April 2007): 610–18. http://dx.doi.org/10.1109/tmm.2006.888015.

17

Joy, Neenu. "Various OCT Segmentation and Classification Techniques." International Journal of Information Systems and Computer Sciences 9, no. 3 (June 25, 2020): 31–37. http://dx.doi.org/10.30534/ijiscs/2020/05932020.

18

Agostini, Valentina, Gabriella Balestra, and Marco Knaflitz. "Segmentation and Classification of Gait Cycles." IEEE Transactions on Neural Systems and Rehabilitation Engineering 22, no. 5 (September 2014): 946–52. http://dx.doi.org/10.1109/tnsre.2013.2291907.

19

Hoffman, Richard, and Anil K. Jain. "Segmentation and Classification of Range Images." IEEE Transactions on Pattern Analysis and Machine Intelligence PAMI-9, no. 5 (September 1987): 608–20. http://dx.doi.org/10.1109/tpami.1987.4767955.

20

Saifullah, Y., and M. T. Manry. "Classification-based segmentation of ZIP codes." IEEE Transactions on Systems, Man, and Cybernetics 23, no. 5 (1993): 1437–43. http://dx.doi.org/10.1109/21.260675.

21

Borshukov, G. D., G. Bozdagi, Y. Altunbasak, and A. M. Tekalp. "Motion segmentation by multistage affine classification." IEEE Transactions on Image Processing 6, no. 11 (November 1997): 1591–94. http://dx.doi.org/10.1109/83.641420.

22

Dikshit, Onkar, and Vinay Behl. "Segmentation-assisted classification for IKONOS imagery." Journal of the Indian Society of Remote Sensing 37, no. 4 (December 2009): 551–64. http://dx.doi.org/10.1007/s12524-009-0055-1.

23

Praveena, S., and S. P. Singh. "Segmentation and Classification of Satellite images." World Academics Journal of Engineering Sciences 01, no. 01 (2014): 1006. http://dx.doi.org/10.15449/wjes.2014.1006.

24

Dam, E. B., and M. Loog. "Efficient Segmentation by Sparse Pixel Classification." IEEE Transactions on Medical Imaging 27, no. 10 (October 2008): 1525–34. http://dx.doi.org/10.1109/tmi.2008.923961.

25

Shih, F. Y., and Shy-Shyan Chen. "Adaptive document block segmentation and classification." IEEE Transactions on Systems, Man and Cybernetics, Part B (Cybernetics) 26, no. 5 (1996): 797–802. http://dx.doi.org/10.1109/3477.537322.

26

Cesario, Eugenio, Francesco Folino, Antonio Locane, Giuseppe Manco, and Riccardo Ortale. "Boosting text segmentation via progressive classification." Knowledge and Information Systems 15, no. 3 (June 22, 2007): 285–320. http://dx.doi.org/10.1007/s10115-007-0085-3.

27

Choi, Hyun-Tae, and Byung-Woo Hong. "Unsupervised Object Segmentation Based on Bi-Partitioning Image Model Integrated with Classification." Electronics 10, no. 18 (September 18, 2021): 2296. http://dx.doi.org/10.3390/electronics10182296.

Abstract:
The development of convolutional neural networks for deep learning has significantly contributed to the areas of image classification and segmentation. High performance in supervised image segmentation requires a large amount of ground-truth data; however, producing such data is costly, so unsupervised approaches are actively being studied. The Mumford–Shah and Chan–Vese models are well-known unsupervised image segmentation models, but they cannot separate the foreground and background of an image because they are based on pixel intensities. In this paper, we propose a weakly supervised model for image segmentation based on these segmentation models (the Mumford–Shah model and the Chan–Vese model) and classification. The segmentation model (i.e., the Mumford–Shah or Chan–Vese model) finds a base image mask for classification, and the classification network uses the mask from the segmentation model. With the classification network, the output mask of the segmentation model changes in the direction of increasing the performance of the classification network. In addition, the mask can naturally distinguish the foreground and background of images. Our experiments show that our segmentation model, integrated with a classifier, can segment the input image into foreground and background using only the image's class label, i.e., the image-level label.
28

Sun, Jingliang. "Application of Image Segmentation Algorithm Based on Partial Differential Equation in Legal Case Text Classification." Advances in Mathematical Physics 2021 (October 8, 2021): 1–9. http://dx.doi.org/10.1155/2021/4062200.

Abstract:
As a means of regulating people's code of conduct, law has a close relationship with text, and text data have been growing exponentially. Managing and classifying huge amounts of text data has become a great challenge. The PDE-based image segmentation algorithm is an effective natural language processing method for text classification management. Based on the study of image segmentation algorithms and legal case text classification theory, an image segmentation model based on partial differential equations is proposed, in which diffusion acts indirectly on the level set function through an auxiliary function. The software architecture of an image-segmentation-based text classification system is proposed using computer technology and a three-layer architecture model, which can improve the classification ability of the text classification algorithm. The validity of the PDE image segmentation model is verified by experiments. The experimental results show that the model completes legal case text classification, that each functional module of the legal case text classification system performs well, and that the efficiency and quality of legal case text classification are improved.
29

Jemimma, T., and Y. Jacob Vetharaj. "A Survey on Brain Tumor Segmentation and Classification." International Journal of Software Innovation 10, no. 1 (January 1, 2022): 1–21. http://dx.doi.org/10.4018/ijsi.309721.

Abstract:
Brain tumor segmentation and classification is a difficult process of identifying and detecting the tumor region. Magnetic resonance imaging (MRI) gives valuable information for finding the affected area in the brain. The MRI brain image is initially considered, which comprises four modalities: T1, T2, T1C, and FLAIR. The preprocessing methodologies and the state-of-the-art MRI-based brain tumor segmentation and classification methods are discussed. This study describes the different types of brain tumor segmentation and classification techniques along with their most important contributions. The survey covers the four main phases of a brain tumor segmentation and classification (BTSC) technique: preprocessing, feature extraction, segmentation, and classification. A review of recent articles on classifiers shows the distinctive features of classifiers for research.
30

Zhang, Xiaohua, Hui Wang, Wenxiang Xue, Chaoyun Qin, Yuping Wu, Shuyuan Wang, and Peng Qiu. "Research on classification method based on multi-scale segmentation and hierarchical classification." Journal of Physics: Conference Series 2189, no. 1 (February 1, 2022): 012029. http://dx.doi.org/10.1088/1742-6596/2189/1/012029.

Abstract:
This paper takes the transmission line corridors covering northern Hebei as the research object, explores multi-scale segmentation thresholds suitable for images of northern Hebei, and identifies segmentation threshold rules applicable to the region. The main methods are object-oriented multi-scale segmentation and hierarchical classification, which use the image segmentation principle and make full use of the rich features of high-resolution images, such as shape, texture, and object relationships. The results are used for the follow-up investigation of hidden dangers of external damage to power transmission channels in northern Hebei. The main conclusions of the experiment are as follows: 1. By comparing and analyzing the results of five groups of different thresholds (40, 50, 60, 70, and 80), it is concluded that the single threshold of the multi-scale segmentation method suitable for most mountainous images in the study area is 60, and this method can achieve high-precision ground object classification for images of northern Hebei. 2. The ground feature cover classification method more suitable for the parallel processing of a large number of study areas is the object-oriented hierarchical classification method, and preliminary exploration results for detailed parameters have been achieved.
31

Hasanpour Zaryabi, E., M. Saadatseresht, and E. Ghanbari Parmehr. "AN OBJECT-BASED CLASSIFICATION FRAMEWORK FOR ALS POINT CLOUD IN URBAN AREAS." ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences X-4/W1-2022 (January 13, 2023): 279–86. http://dx.doi.org/10.5194/isprs-annals-x-4-w1-2022-279-2023.

Abstract:
This article presents an automated and effective framework for segmentation and classification of airborne laser scanning (ALS) point clouds obtained from LiDAR-UAV sensors in urban areas. Segmentation and classification are among the main point cloud processes; they are used to transform 3D point coordinates into a semantic representation. The proposed framework has three main parts: the development of a supervoxel data structure, point cloud segmentation based on local graphs, and three methods for object-based classification. The results of the point cloud segmentation, with an average segmentation error of 0.15, show that the supervoxel structure with an optimal parameter for the number of neighbors can reduce the computational cost and the segmentation error. Moreover, weighted local graphs that connect neighboring supervoxels and examine their similarities play a significant role in improving and optimizing the segmentation process. Finally, three classification methods, Random Forest, Gradient Boosted Trees, and Bagging Decision Trees, were evaluated. As a result, the extracted segments were classified with an average precision higher than 83%.
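The supervoxel-neighborhood step described above can be sketched with a k-d tree over supervoxel centroids; the centroids and the choice of k are invented for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

# Invented 3D centroids of four supervoxels; the last one is far away.
centroids = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [5.0, 5.0, 5.0],
])

tree = cKDTree(centroids)
# k=3 returns each point itself plus its 2 nearest neighbors.
_, idx = tree.query(centroids, k=3)
neighbors = idx[:, 1:]                 # drop the self-match in column 0
print(sorted(neighbors[0].tolist()))   # [1, 2]: supervoxel 0 borders 1 and 2
```

The resulting neighbor pairs are the edges of the weighted local graph; each edge would then carry a similarity weight (e.g., normal or intensity difference) before segmentation.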
32

Salman Al-Shaikhli, Saif Dawood, Michael Ying Yang, and Bodo Rosenhahn. "Brain tumor classification and segmentation using sparse coding and dictionary learning." Biomedical Engineering / Biomedizinische Technik 61, no. 4 (August 1, 2016): 413–29. http://dx.doi.org/10.1515/bmt-2015-0071.

Abstract:
This paper presents a novel fully automatic framework for multi-class brain tumor classification and segmentation using a sparse coding and dictionary learning method. The proposed framework consists of two steps: classification and segmentation. The classification of the brain tumors is based on brain topology and texture. The segmentation is based on voxel values of the image data. Using K-SVD, two types of dictionaries are learned from the training data and their associated ground truth segmentation: feature dictionary and voxel-wise coupled dictionaries. The feature dictionary consists of global image features (topological and texture features). The coupled dictionaries consist of coupled information: gray scale voxel values of the training image data and their associated label voxel values of the ground truth segmentation of the training data. For quantitative evaluation, the proposed framework is evaluated using different metrics. The segmentation results of the brain tumor segmentation (MICCAI-BraTS-2013) database are evaluated using five different metric scores, which are computed using the online evaluation tool provided by the BraTS-2013 challenge organizers. Experimental results demonstrate that the proposed approach achieves an accurate brain tumor classification and segmentation and outperforms the state-of-the-art methods.
33

Haq, Ejaz Ul, Huang Jianjun, Xu Huarong, Kang Li, and Lifen Weng. "A Hybrid Approach Based on Deep CNN and Machine Learning Classifiers for the Tumor Segmentation and Classification in Brain MRI." Computational and Mathematical Methods in Medicine 2022 (August 8, 2022): 1–18. http://dx.doi.org/10.1155/2022/6446680.

Abstract:
Conventional medical imaging and machine learning techniques are not perfect enough to correctly segment the brain tumor in MRI as the proper identification and segmentation of tumor borders are one of the most important criteria of tumor extraction. The existing approaches are time-consuming, incursive, and susceptible to human mistake. These drawbacks highlight the importance of developing a completely automated deep learning-based approach for segmentation and classification of brain tumors. The expedient and prompt segmentation and classification of a brain tumor are critical for accurate clinical diagnosis and adequately treatment. As a result, deep learning-based brain tumor segmentation and classification algorithms are extensively employed. In the deep learning-based brain tumor segmentation and classification technique, the CNN model has an excellent brain segmentation and classification effect. In this work, an integrated and hybrid approach based on deep convolutional neural network and machine learning classifiers is proposed for the accurate segmentation and classification of brain MRI tumor. A CNN is proposed in the first stage to learn the feature map from image space of brain MRI into the tumor marker region. In the second step, a faster region-based CNN is developed for the localization of tumor region followed by region proposal network (RPN). In the last step, a deep convolutional neural network and machine learning classifiers are incorporated in series in order to further refine the segmentation and classification process to obtain more accurate results and findings. The proposed model’s performance is assessed based on evaluation metrics extensively used in medical image processing. 
The experimental results validate that the proposed deep CNN and SVM-RBF classifier achieved an accuracy of 98.3% and a dice similarity coefficient (DSC) of 97.8% on the task of classifying brain tumors as gliomas, meningioma, or pituitary using brain dataset-1, while on Figshare dataset, it achieved an accuracy of 98.0% and a DSC of 97.1% on classifying brain tumors as gliomas, meningioma, or pituitary. The segmentation and classification results demonstrate that the proposed model outperforms state-of-the-art techniques by a significant margin.
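The abstract above names an SVM with a radial basis function (RBF) kernel as the final classifier but gives no implementation detail. As a minimal, illustrative sketch (not the authors' code), the RBF kernel that such a classifier applies to a pair of deep-feature vectors can be written as:

```python
import math

def rbf_kernel(x, y, gamma=0.5):
    """Gaussian RBF kernel: K(x, y) = exp(-gamma * ||x - y||^2)."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

# K(x, x) = 1 for identical feature vectors; the value decays toward 0
# as the vectors move apart, at a rate controlled by gamma.
```

Here `gamma` is an assumed hyperparameter; the paper does not report the value it used.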
APA, Harvard, Vancouver, ISO, and other styles
34

Sun, Xiaodan, and Xiaofang Sun. "A Pixel Texture Index Algorithm and Its Application." Photogrammetric Engineering & Remote Sensing 90, no. 5 (May 1, 2024): 277–92. http://dx.doi.org/10.14358/pers.23-00051r2.

Full text
Abstract:
Image segmentation is essential for object-oriented analysis, and classification is a critical parameter influencing analysis accuracy. However, image classification and segmentation based on spectral features are easily perturbed by the high-frequency information of a high spatial resolution remotely sensed (HSRRS) image, degrading its classification and segmentation quality. This article first presents a pixel texture index (PTI) by describing the texture and edge in a local area surrounding a pixel. The experimental results highlight that HSRRS image classification and segmentation quality can be effectively improved by combining the spectral features with the PTI image. Indeed, the overall accuracy improved by 7% to 14%, and the kappa increased by 11% to 24%, respectively.
APA, Harvard, Vancouver, ISO, and other styles
35

Pintelas, Emmanuel, and Ioannis E. Livieris. "XSC—An eXplainable Image Segmentation and Classification Framework: A Case Study on Skin Cancer." Electronics 12, no. 17 (August 22, 2023): 3551. http://dx.doi.org/10.3390/electronics12173551.

Full text
Abstract:
Within the field of computer vision, image segmentation and classification serve as crucial tasks, involving the automatic categorization of images into predefined groups or classes, respectively. In this work, we propose a framework designed for simultaneously addressing segmentation and classification tasks in image-processing contexts. The proposed framework is composed of three main modules and focuses on providing transparency, interpretability, and explainability in its operations. The first two modules are used to partition the input image into regions of interest, allowing the automatic and interpretable identification of segmentation regions using clustering techniques. These segmentation regions are then analyzed to select those considered valuable by the user for addressing the classification task. The third module focuses on classification, using an explainable classifier, which relies on hand-crafted transparent features extracted from the selected segmentation regions. By leveraging only the selected informative regions, the classification model is made more reliable and less susceptible to misleading information. The proposed framework’s effectiveness was evaluated in a case study on skin-cancer-segmentation and -classification benchmarks. The experimental analysis highlighted that the proposed framework exhibited comparable performance with the state-of-the-art deep-learning approaches, which implies its efficiency, considering the fact that the proposed approach is also interpretable and explainable.
APA, Harvard, Vancouver, ISO, and other styles
36

Qiao, Y., T. Chen, J. He, Q. Wen, F. Liu, and Z. Wang. "METHOD OF GRASSLAND INFORMATION EXTRACTION BASED ON MULTI-LEVEL SEGMENTATION AND CART MODEL." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-3 (April 30, 2018): 1415–20. http://dx.doi.org/10.5194/isprs-archives-xlii-3-1415-2018.

Full text
Abstract:
It is difficult to extract grassland accurately by traditional classification methods, such as supervised methods based on pixels or objects. This paper proposes a new method combining multi-level segmentation with a CART (classification and regression tree) model. The multi-level segmentation, which combines multi-resolution segmentation and spectral difference segmentation, avoids the over-segmentation and under-segmentation seen in a single segmentation mode. The CART model was established based on the spectral characteristics and texture features extracted from training sample data. Xilinhaote City in the Inner Mongolia Autonomous Region was chosen as the typical study area, and the proposed method was verified using visual interpretation results as the approximate truth value. Meanwhile, a comparison with the nearest neighbor supervised classification method was made. The experimental results showed that the total classification precision and the Kappa coefficient of the proposed method were 95 % and 0.9, respectively, whereas those of the nearest neighbor supervised classification method were 80 % and 0.56. These results suggest that the classification accuracy of the method proposed in this paper is higher than that of the nearest neighbor supervised classification method. The experiment confirmed that the proposed method is an effective extraction method for grassland information, which can enhance the boundary of grassland classification and avoid the restriction of grassland distribution scale. This method is also applicable to the extraction of grassland information in other regions with complicated spatial features, as it can effectively avoid the interference of woodland, arable land and water bodies.
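The CART model in the abstract splits on spectral and texture features by minimising impurity. A toy sketch of the Gini splitting criterion at the heart of CART (illustrative only; the study used a full CART implementation, and the feature values below are hypothetical):

```python
def gini(labels):
    """Gini impurity of a set of class labels, as used by CART splits."""
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for c in labels:
        counts[c] = counts.get(c, 0) + 1
    return 1.0 - sum((k / n) ** 2 for k in counts.values())

def best_threshold(values, labels):
    """Exhaustively pick the split value minimising weighted Gini impurity."""
    best = (None, float("inf"))
    for t in sorted(set(values)):
        left = [l for v, l in zip(values, labels) if v <= t]
        right = [l for v, l in zip(values, labels) if v > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
        if score < best[1]:
            best = (t, score)
    return best
```

A split score of 0 means the threshold separates the classes (e.g. grassland vs. non-grassland samples) perfectly on that one feature.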
APA, Harvard, Vancouver, ISO, and other styles
37

Kp, Mamatha, and H. N. Suma. "DIAGNOSIS AND CLASSIFICATION OF DEMENTIA USING MRI IMAGES." International Journal of Research -GRANTHAALAYAH 5, no. 4RACSIT (April 30, 2017): 30–37. http://dx.doi.org/10.29121/granthaalayah.v5.i4racsit.2017.3345.

Full text
Abstract:
The proposed work presents an effective approach to diagnose dementia using MRI images and classify it into different stages. There are many manual segmentation algorithms for detection and classification, or very simple and specific segmentation algorithms that segment each region of interest exclusively. Thus, the proposed system uses one of the most effective automatic segmentation techniques on MRI images at once. The regions of interest to segment are the CSF (cerebrospinal fluid), gray matter, white matter and ventricles, using the effective segmentation method called level set segmentation. Features are extracted from these four regions of interest, and classification of the dementia is performed using K-nearest neighbor.
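The classification step named in the abstract is K-nearest neighbor. A minimal pure-Python sketch of the majority-vote rule, with hypothetical feature vectors and stage labels (the study's actual features come from the four segmented brain regions):

```python
def knn_classify(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.

    `train` is a list of (feature_vector, label) pairs; distance is
    squared Euclidean, which preserves the nearest-neighbour ordering.
    """
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(vec, query)), label)
        for vec, label in train
    )
    votes = {}
    for _, label in dists[:k]:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)
```

The stage labels and two-dimensional features used below are purely illustrative.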
APA, Harvard, Vancouver, ISO, and other styles
38

Alberto, R. T., S. C. Serrano, G. B. Damian, E. E. Camaso, A. B. Celestino, P. J. C. Hernando, M. F. Isip, K. M. Orge, M. J. C. Quinto, and R. C. Tagaca. "OBJECT BASED AGRICULTURAL LAND COVER CLASSIFICATION MAP OF SHADOWED AREAS FROM AERIAL IMAGE AND LIDAR DATA USING SUPPORT VECTOR MACHINE." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences III-7 (June 7, 2016): 45–50. http://dx.doi.org/10.5194/isprsannals-iii-7-45-2016.

Full text
Abstract:
Aerial image and LiDAR data offer a great possibility for agricultural land cover mapping. Unfortunately, these images lead to shadowed pixels. Management of shadowed areas for classification without image enhancement was investigated. An image segmentation approach using three different segmentation scales was used and tested to segment the image for ground features, since only the ground features are affected by shadow cast by tall features. The RGB bands and intensity were the layers used for the segmentation, with equal weights. A segmentation scale of 25 was found to be the optimal scale that best fits the shadowed and non-shadowed area classification. An SVM using a Radial Basis Function kernel was then applied to extract classes based on properties extracted from the LiDAR data and orthophoto. Training points for different classes, including shadowed areas, were selected homogeneously from the orthophoto. Separate training points for shadowed areas were made to create additional classes to reduce misclassification. Texture classification and object-oriented classifiers were examined to reduce the heterogeneity problem. The accuracy of the land cover classification using 25-scale segmentation, after accounting for shadow detection and classification, was significantly higher compared to higher scales of segmentation.
APA, Harvard, Vancouver, ISO, and other styles
40

Rahman, Fathur, Nuzul Hikmah, and Misdiyanto Misdiyanto. "Analysis Influence Segmentation Image on Classification Image X-raylungs with Method Convolutional Neural." Journal of Informatics Development 2, no. 1 (October 30, 2023): 23–29. http://dx.doi.org/10.30741/jid.v2i1.1159.

Full text
Abstract:
The impact of image segmentation on the classification of lung X-ray images using Convolutional Neural Networks (CNNs) has been scrutinized in this study. The dataset used in this research comprises 150 lung X-ray images, distributed as 78 for training, 30 for validation, and 42 for testing. Initially, image data undergoes preprocessing to enhance image quality, employing adaptive histogram equalization to augment contrast and enhance image details. The evaluation of segmentation's influence is based on a comparison between image classification with and without the segmentation process. Segmentation involves the delineation of lung regions through techniques like thresholding, accompanied by various morphological operations such as hole filling, area opening, and labeling. The image classification process employs a CNN featuring 5 convolution layers, the Adam optimizer, and a training period of 30 epochs. The results of this study indicate that the X-ray image dataset achieved a classification accuracy of 59.52% in network testing without segmentation. In contrast, when segmentation was applied to the X-ray image dataset, the accuracy significantly improved to 73.81%. This underscores the segmentation process's ability to enhance network performance, as it simplifies the classification of segmented image patterns.
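The segmentation step in the abstract relies on thresholding followed by morphological clean-up. The abstract does not state which thresholding rule was used; Otsu's method, shown below as one common choice (an illustrative sketch, not the authors' code), picks the cut that maximises between-class variance of the gray-level histogram:

```python
def otsu_threshold(pixels, levels=256):
    """Return the gray level that maximises between-class variance (Otsu)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    w_bg = sum_bg = 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_bg += hist[t]            # pixels at or below candidate threshold
        if w_bg == 0:
            continue
        w_fg = total - w_bg        # pixels above candidate threshold
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (total_sum - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

Pixels above the returned threshold form the binary lung mask that the morphological operations (hole filling, area opening, labeling) would then clean up.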
APA, Harvard, Vancouver, ISO, and other styles
41

A., Afreen Habiba. "Diagnosis of Brain Tumor using Semantic Segmentation and Advance-CNN Classification." International Journal of Psychosocial Rehabilitation 24, no. 5 (March 31, 2020): 1204–24. http://dx.doi.org/10.37200/ijpr/v24i5/pr201795.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Dutta, Saibal, Sujoy Bhattacharya, and Kalyan Kumar Guin. "Segmentation and Classification of Indian Domestic Tourists : A Tourism Stakeholder Perspective." Journal of Management and Training for Industries 4, no. 1 (April 1, 2017): 1–24. http://dx.doi.org/10.12792/jmti.4.1.1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Lacerda, M. G., E. H. Shiguemori, A. J. Damião, C. S. Anjos, and M. Habermann. "IMPACT OF SEGMENTATION PARAMETERS ON THE CLASSIFICATION OF VHR IMAGES ACQUIRED BY RPAS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-3/W12-2020 (November 4, 2020): 43–48. http://dx.doi.org/10.5194/isprs-archives-xlii-3-w12-2020-43-2020.

Full text
Abstract:
RPAs (Remotely Piloted Aircrafts) have been used in many Remote Sensing applications, featuring high-quality imaging sensors. In some situations, the images are interpreted in an automated fashion using object-oriented classification. In this case, the first step is segmentation. However, the setting of segmentation parameters such as scale, shape, and compactness may yield too many different segmentations, thus it is necessary to understand the influence of those parameters on the final output. This paper compares 24 segmentation parameter sets by taking into account classification scores. The results indicate that the segmentation parameters exert influence on both classification accuracy and processing time.
APA, Harvard, Vancouver, ISO, and other styles
44

Lee, Jiann-Shu, and Wen-Kai Wu. "Breast Tumor Tissue Image Classification Using DIU-Net." Sensors 22, no. 24 (December 14, 2022): 9838. http://dx.doi.org/10.3390/s22249838.

Full text
Abstract:
Inspired by the observation that pathologists pay more attention to the nuclei regions when analyzing pathological images, this study utilized soft segmentation to imitate the visual focus mechanism and proposed a new segmentation–classification joint model to achieve superior classification performance for breast cancer pathology images. Aiming at the characteristic of different sizes of nuclei in pathological images, this study developed a new segmentation network with excellent cross-scale description ability called DIU-Net. To enhance the generalization ability of the segmentation network, that is, to avoid the segmentation network learning low-level features, we proposed a Complementary Color Conversion Scheme in the training phase. In addition, due to the disparity between the area of the nucleus and the background in the pathology image, there is an inherent data imbalance phenomenon; dice loss and focal loss were used to overcome this problem. To further strengthen the classification performance of the model, this study adopted a joint training scheme, so that the output of the classification network can not only be used to optimize the classification network itself, but also to optimize the segmentation network. In addition, this model can provide the pathologist with the model's attention area, increasing the model's interpretability. The classification performance of the proposed method was verified on the BreaKHis dataset. Our method obtains binary/multi-class classification accuracies of 97.24%/93.75% and 98.19%/94.43% for 200× and 400× images, respectively, outperforming existing methods.
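The abstract names dice loss and focal loss as remedies for the nucleus/background imbalance. A minimal sketch of both losses on flattened probability maps (illustrative only; the study's networks compute these over image tensors, and the smoothing constants here are assumed):

```python
import math

def soft_dice_loss(pred, target, eps=1e-7):
    """Soft Dice loss: 1 - 2|P∩T| / (|P| + |T|) for a probability map vs. mask."""
    inter = sum(p * t for p, t in zip(pred, target))
    denom = sum(pred) + sum(target)
    return 1.0 - (2.0 * inter + eps) / (denom + eps)

def focal_loss(pred, target, gamma=2.0, eps=1e-7):
    """Binary focal loss: cross-entropy with easy examples down-weighted."""
    total = 0.0
    for p, t in zip(pred, target):
        p = min(max(p, eps), 1 - eps)
        pt = p if t == 1 else 1 - p          # probability of the true class
        total += -((1 - pt) ** gamma) * math.log(pt)
    return total / len(pred)
```

Dice loss handles class imbalance by scoring only the overlap, while focal loss shrinks the contribution of confidently classified (mostly background) pixels.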
APA, Harvard, Vancouver, ISO, and other styles
45

Khoiro, M., R. A. Firdaus, E. Suaebah, M. Yantidewi, and Dzulkiflih. "Segmentation Effect on Lungs X-Ray Image Classification Using Convolution Neural Network." Journal of Physics: Conference Series 2392, no. 1 (December 1, 2022): 012024. http://dx.doi.org/10.1088/1742-6596/2392/1/012024.

Full text
Abstract:
The effect of segmentation on lung X-ray image classification has been analyzed in this study. The 150 lung X-ray images in this study were separated into 78 as training data, 30 as validation data, and 42 as testing data in three categories: normal lungs, effusion lungs, and cancer lungs. In pre-processing, the images were modified by adaptive histogram equalization to improve image quality and increase image contrast. The segmentation aims to mark the image by contouring the lung area obtained from the thresholding and some morphological manipulation processes such as filling holes, area openings, and labelling. Image classification uses a Convolutional Neural Network (CNN) with five convolution layers, an Adam optimizer, and 30 epochs. The segmentation effect is analyzed by comparing the classification performance of the segmented and unsegmented images. In the study, the unsegmented X-ray image dataset classification reached an overall accuracy of 59.52% in the network testing process. The segmented X-ray image dataset obtained greater accuracy, 73.81%. This indicated that the segmentation process could improve network performance because the input pattern of the segmented image is easier to classify. Furthermore, the segmentation technique in the study can be one of the alternatives for developing image classification technologies, especially for medical image diagnosis.
APA, Harvard, Vancouver, ISO, and other styles
46

Alam, Minhaj, Emma J. Zhao, Carson K. Lam, and Daniel L. Rubin. "Segmentation-Assisted Fully Convolutional Neural Network Enhances Deep Learning Performance to Identify Proliferative Diabetic Retinopathy." Journal of Clinical Medicine 12, no. 1 (January 3, 2023): 385. http://dx.doi.org/10.3390/jcm12010385.

Full text
Abstract:
With the progression of diabetic retinopathy (DR) from the non-proliferative (NPDR) to the proliferative (PDR) stage, the possibility of vision impairment increases significantly. Therefore, it is clinically important to detect the progression to the PDR stage for proper intervention. We propose a segmentation-assisted DR classification methodology that builds on (and improves) current methods by using a fully convolutional network (FCN) to segment retinal neovascularizations (NV) in retinal images prior to image classification. This study utilizes the Kaggle EyePacs dataset, containing retinal photographs from patients with varying degrees of DR (mild, moderate, and severe NPDR, and PDR). Two graders (a board-certified ophthalmologist and a trained medical student) annotated the NV. Segmentation was performed by training an FCN to locate neovascularization on 669 retinal fundus photographs labeled with PDR status according to NV presence. The trained segmentation model was used to locate probable NV in images from the classification dataset. Finally, a CNN was trained to classify the combined images and probability maps into categories of PDR. The mean accuracy of segmentation-assisted classification was 87.71% on the test set (SD = 7.71%). Segmentation-assisted classification of PDR achieved accuracy that was 7.74% better than classification alone. Our study shows that segmentation assistance improves identification of the most severe stage of diabetic retinopathy and has the potential to improve deep learning performance in other imaging problems with limited data availability.
APA, Harvard, Vancouver, ISO, and other styles
47

Rathna Priya, T. S., and Annamalai Manickavasagan. "Evaluation of segmentation methods for RGB colour image-based detection of Fusarium infection in corn grains using support vector machine (SVM) and pre-trained convolution neural network (CNN)." Canadian Biosystems Engineering 64, no. 1 (December 31, 2022): 7.09–7.20. http://dx.doi.org/10.7451/cbe.2022.64.7.9.

Full text
Abstract:
This study evaluated six segmentation methods (clustering, flood-fill, graph-cut, colour-thresholding, watershed, and Otsu's-thresholding) for segmentation accuracy and classification accuracy in discriminating Fusarium-infected corn grains using RGB colour images. The segmentation accuracy was calculated using the Jaccard similarity index and Dice coefficient in comparison with the gold standard (manual segmentation). Flood-fill and graph-cut methods showed the highest segmentation accuracy of 77% and 87% for the Jaccard and Dice evaluation metrics, respectively. A pre-trained convolution neural network (CNN) and a support vector machine (SVM) were used to evaluate the effect of segmentation methods on classification accuracy, using segmented images and features extracted from the segmented images, respectively. The SVM-based two-class model to discriminate healthy and Fusarium-infected corn grains yielded classification accuracies of 84%, 79%, 78%, 74%, 69% and 65% for graph-cut, watershed, clustering, flood-fill, colour-thresholding, and Otsu's-thresholding, respectively. In the pre-trained CNN model, the classification accuracies were 93%, 88%, 87%, 84%, 61% and 59% for flood-fill, graph-cut, colour-thresholding, clustering, watershed, and Otsu's-thresholding, respectively. The Jaccard and Dice evaluation metrics showed the highest correlation with the pre-trained CNN classification accuracies, with R2 values of 0.9693 and 0.9727, respectively. The correlations with the SVM classification accuracies were R2 = 0.505 for Jaccard and R2 = 0.5151 for Dice.
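The two segmentation-accuracy metrics used in the study, the Jaccard similarity index and the Dice coefficient, can be computed from a pair of binary masks as follows (a generic sketch, not the authors' code):

```python
def jaccard_index(a, b):
    """Jaccard similarity |A∩B| / |A∪B| between two flattened binary masks."""
    inter = sum(1 for x, y in zip(a, b) if x and y)
    union = sum(1 for x, y in zip(a, b) if x or y)
    return inter / union if union else 1.0

def dice_coefficient(a, b):
    """Dice coefficient 2|A∩B| / (|A| + |B|); equals 2J / (1 + J)."""
    inter = sum(1 for x, y in zip(a, b) if x and y)
    size = sum(a) + sum(b)
    return 2 * inter / size if size else 1.0
```

Dice weights the intersection more heavily than Jaccard, which is why the two metrics give different numbers (77% vs. 87% above) for the same segmentations.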
APA, Harvard, Vancouver, ISO, and other styles
48

Shala, Vegim, and Eliot Bytyçi. "Neural Network Image Segmentation for Sign Language Interpretation." International Journal of Emerging Technology and Advanced Engineering 13, no. 3 (March 1, 2023): 111–16. http://dx.doi.org/10.46338/ijetae0323_11.

Full text
Abstract:
The use of neural networks to recognize and classify objects in images is a popular field in computer science. It is highly likely that an object in an image chosen for classification will have a representation matrix with significantly fewer pixels than the background or other elements of the image. As a result, the initial plan would be to divide or segment that object from the other portions of the image that are not essential for categorization. This also serves as the study's objective, for which we employ segmentation to separate the components essential to the classification procedure and assess any room for improvement in the final classification outcome. Mask Region Convolutional Neural Network was the model used for segmentation, and Convolutional Neural Network was the model used for classification. The study's findings demonstrate a notable improvement in the classification in the case of sign language. Further advancement of image segmentation models implies better, more accurate results for classification models once they are combined. Keywords: Neural network, Image segmentation, Sign language, Classification, Mask Regional Convolutional Neural Network.
APA, Harvard, Vancouver, ISO, and other styles
49

Latif, Ghazanfar. "DeepTumor: Framework for Brain MR Image Classification, Segmentation and Tumor Detection." Diagnostics 12, no. 11 (November 21, 2022): 2888. http://dx.doi.org/10.3390/diagnostics12112888.

Full text
Abstract:
The proper segmentation of the brain tumor from the image is important for both patients and medical personnel due to the sensitivity of the human brain. Operative intervention would require doctors to be extremely cautious and precise to target the brain's required portion. Furthermore, the segmentation process is also important for multi-class tumor classification. This work primarily concentrated on making a contribution in three main areas of brain MR image processing for classification and segmentation: brain MR image classification, tumor region segmentation, and tumor classification. A framework named DeepTumor is presented for the multistage, multiclass Glioma tumor classification into four classes: Edema, Necrosis, Enhancing and Non-enhancing. For the binary brain MR image classification (Tumorous and Non-tumorous), two deep Convolutional Neural Network (CNN) models were proposed: a 9-layer model with a total of 217,954 trainable parameters and an improved 10-layer model with a total of 80,243 trainable parameters. In the second stage, an enhanced Fuzzy C-means (FCM) based technique is proposed for tumor segmentation in brain MR images. In the final stage, a third enhanced CNN model with 11 hidden layers and a total of 241,624 trainable parameters was proposed for the classification of the segmented tumor region into four Glioma tumor classes. The experiments are performed using the BraTS MRI dataset. The experimental results of the proposed CNN models for binary classification and multiclass tumor classification are compared with existing CNN models such as LeNet, AlexNet and GoogleNet, as well as with the latest literature.
APA, Harvard, Vancouver, ISO, and other styles
50

Ginley, Brandon, Brendon Lutnick, Kuang-Yu Jen, Agnes B. Fogo, Sanjay Jain, Avi Rosenberg, Vighnesh Walavalkar, et al. "Computational Segmentation and Classification of Diabetic Glomerulosclerosis." Journal of the American Society of Nephrology 30, no. 10 (September 5, 2019): 1953–67. http://dx.doi.org/10.1681/asn.2018121259.

Full text
Abstract:
Background: Pathologists use visual classification of glomerular lesions to assess samples from patients with diabetic nephropathy (DN). The results may vary among pathologists. Digital algorithms may reduce this variability and provide more consistent image structure interpretation.
Methods: We developed a digital pipeline to classify renal biopsies from patients with DN. We combined traditional image analysis with modern machine learning to efficiently capture important structures, minimize manual effort and supervision, and enforce biologic prior information onto our model. To computationally quantify glomerular structure despite its complexity, we simplified it to three components: nuclei; capillary lumina and Bowman spaces; and Periodic Acid-Schiff positive structures. We detected glomerular boundaries and nuclei from whole slide images using convolutional neural networks, and the remaining glomerular structures using an unsupervised technique developed expressly for this purpose. We defined a set of digital features which quantify the structural progression of DN, and a recurrent network architecture which processes these features into a classification.
Results: Our digital classification agreed with a senior pathologist, whose classifications were used as ground truth, with a moderate Cohen's kappa κ = 0.55 and 95% confidence interval [0.50, 0.60]. Two other renal pathologists agreed with the digital classification with κ1 = 0.68, 95% interval [0.50, 0.86] and κ2 = 0.48, 95% interval [0.32, 0.64]. We detected glomerular boundaries from whole slide images with 0.93 ± 0.04 balanced accuracy, glomerular nuclei with 0.94 sensitivity and 0.93 specificity, and glomerular structural components with 0.95 sensitivity and 0.99 specificity. Our results suggest computational approaches are comparable to human visual classification methods, and can offer improved precision in clinical decision workflows.
Conclusions: Computationally derived histologic image features hold significant diagnostic information that may augment clinical diagnostics.
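Agreement in the abstract above is reported as Cohen's kappa, which corrects raw agreement between two raters for the agreement expected by chance. A small illustrative sketch of the statistic for two raters' label lists:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: (observed - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    p_obs = sum(1 for x, y in zip(rater_a, rater_b) if x == y) / n
    # Chance agreement: product of each rater's marginal label frequencies.
    p_exp = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in labels
    )
    if p_exp == 1.0:
        return 1.0
    return (p_obs - p_exp) / (1 - p_exp)
```

A kappa of 0.55, as reported for the digital classification vs. the senior pathologist, falls in the conventionally "moderate" agreement band.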
APA, Harvard, Vancouver, ISO, and other styles