Journal articles on the topic 'Whole slide image classification'


The top journal articles on this topic are listed below; abstracts are included where available in the metadata.

1

Feng, Ming, Kele Xu, Nanhui Wu, Weiquan Huang, Yan Bai, Yin Wang, Changjian Wang, and Huaimin Wang. "Trusted multi-scale classification framework for whole slide image." Biomedical Signal Processing and Control 89 (March 2024): 105790. http://dx.doi.org/10.1016/j.bspc.2023.105790.

2

Fridman, M. V., A. A. Kosareva, E. V. Snezhko, P. V. Kamlach, and V. A. Kovalev. "Papillary thyroid carcinoma whole-slide images as a basis for deep learning." Informatics 20, no. 2 (June 29, 2023): 28–38. http://dx.doi.org/10.37661/1816-0301-2023-20-2-28-38.

Abstract:
Objectives. Morphological analysis of papillary thyroid cancer is a cornerstone for further treatment planning. Traditional and neural-network methods of extracting parts of images are used to automate the analysis. To develop a system that finds similar anatomical regions in histopathological images, a dataset for training neural networks must be prepared. The authors discuss the selection of features for annotating histological images, methodological approaches to dissecting whole-slide images, and how to prepare raw data for future analysis. The influence of the representative fragment size of a papillary thyroid cancer whole-slide image on the classification accuracy of a trained EfficientNetB0 network is examined. The results are analyzed, and the weaknesses of using image fragments of different representative sizes, together with the causes of unsatisfactory classification accuracy at high magnification, are evaluated. Materials and methods. Histopathological whole-slide images of 129 patients were used. Histological micropreparations containing elements of a tumor and surrounding tissue were scanned on an Aperio AT2 scanner (Leica Biosystems, Germany) at maximum resolution. Annotation was carried out in the ASAP software package. To choose the optimal representative fragment size, a classification problem was solved using the pretrained neural network EfficientNetB0. Results. A methodology for preparing a database of histopathological images of papillary thyroid cancer was proposed. Experiments were conducted to determine the optimal representative size of the image fragment. The best accuracy in determining the class of the test sample was achieved with a representative fragment size of 394.32×394.32 microns. Conclusion. The analysis of the influence of representative fragment sizes showed that the classification task is complicated by the specifics of cutting and staining and by complex morphological and textural differences between images of the same class. It was also determined that preparing a dataset for training a neural network to find invasion of vessels in a histopathological image is not trivial and requires additional stages of data preparation.
3

Zarella, Mark D., Matthew R. Quaschnick, David E. Breen, and Fernando U. Garcia. "Estimation of Fine-Scale Histologic Features at Low Magnification." Archives of Pathology & Laboratory Medicine 142, no. 11 (June 18, 2018): 1394–402. http://dx.doi.org/10.5858/arpa.2017-0380-oa.

Abstract:
Context.— Whole-slide imaging has ushered in a new era of technology that has fostered the use of computational image analysis for diagnostic support and has begun to transfer the act of analyzing a slide to computer monitors. Due to the overwhelming amount of detail available in whole-slide images, analytic procedures—whether computational or visual—often operate at magnifications lower than the magnification at which the image was acquired. As a result, a corresponding reduction in image resolution occurs. It is unclear how much information is lost when magnification is reduced, and whether the rich color attributes of histologic slides can aid in reconstructing some of that information. Objective.— To examine the correspondence between the color and spatial properties of whole-slide images to elucidate the impact of resolution reduction on the histologic attributes of the slide. Design.— We simulated image resolution reduction and modeled its effect on classification of the underlying histologic structure. By harnessing measured histologic features and the intrinsic spatial relationships between histologic structures, we developed a predictive model to estimate the histologic composition of tissue in a manner that exceeds the resolution of the image. Results.— Reduction in resolution resulted in a significant loss of the ability to accurately characterize histologic components at magnifications less than ×10. By utilizing pixel color, this ability was improved at all magnifications. Conclusions.— Multiscale analysis of histologic images requires an adequate understanding of the limitations imposed by image resolution. Our findings suggest that some of these limitations may be overcome with computational modeling.
4

Chen, Kaitao, Shiliang Sun, and Jing Zhao. "CaMIL: Causal Multiple Instance Learning for Whole Slide Image Classification." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 2 (March 24, 2024): 1120–28. http://dx.doi.org/10.1609/aaai.v38i2.27873.

Abstract:
Whole slide image (WSI) classification is a crucial component in automated pathology analysis. Due to the inherent challenges of high-resolution WSIs and the absence of patch-level labels, most of the proposed methods follow the multiple instance learning (MIL) formulation. While MIL has been equipped with excellent instance feature extractors and aggregators, it is prone to learn spurious associations that undermine the performance of the model. For example, relying solely on color features may lead to erroneous diagnoses due to spurious associations between the disease and the color of patches. To address this issue, we develop a causal MIL framework for WSI classification, effectively distinguishing between causal and spurious associations. Specifically, we use the expectation of the intervention P(Y | do(X)) for bag prediction rather than the traditional likelihood P(Y | X). By applying the front-door adjustment, the spurious association is effectively blocked, where the intervened mediator is aggregated from patch-level features. We evaluate our proposed method on two publicly available WSI datasets, Camelyon16 and TCGA-NSCLC. Our causal MIL framework shows outstanding performance and is plug-and-play, seamlessly integrating with various feature extractors and aggregators.
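
For readers unfamiliar with the front-door adjustment invoked above, its standard closed form is sketched below; treating the aggregated patch-level features as the mediator M is our reading of the abstract, not necessarily the authors' exact notation.

```latex
% Front-door adjustment: the effect of X on Y is routed through a
% mediator M (here, aggregated patch-level features), blocking the
% spurious association X <- U -> Y.
P(Y \mid do(X)) \;=\; \sum_{m} P(m \mid X) \sum_{x'} P(x')\, P(Y \mid x', m)
```
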
5

Lewis, Joshua, Conrad Shebelut, Bradley Drumheller, Xuebao Zhang, Nithya Shanmugam, Michel Attieh, Michael Horwath, Anurag Khanna, Geoffrey Smith, and David Gutman. "An Automated Pipeline for Cell Differentials on Whole-Slide Bone Marrow Aspirate Smears." American Journal of Clinical Pathology 158, Supplement_1 (November 1, 2022): S12. http://dx.doi.org/10.1093/ajcp/aqac126.020.

Abstract:
Current pathologic diagnosis of benign and neoplastic bone marrow disorders relies in part on the microscopic analysis of bone marrow aspirate (BMA) smears and manual counting of nucleated cell populations to obtain a cell differential. This manual process has significant limitations, including the limited sample of cells analyzed by a conventional 500-cell differential compared to the thousands of nucleated cells present, as well as the inter-observer variability seen between differentials on single samples due to differences in cell selection and classification. To address these shortcomings, we developed an automated computational platform for obtaining cell differentials from scanned whole-slide BMAs at 40x magnification. This pipeline utilizes a sequential process of identifying BMA regions with high proportions of marrow nucleated cells that are ideal for cell counting, detecting individual cells within these optimal regions, and classifying cells into one of 11 types within the differential. Training of convolutional neural network models for region and cell classification, as well as a region-based convolutional neural network for cell detection, involved the generation of an annotated training data set containing 10,948 BMA regions, 28,914 cell boundaries, and 23,609 cell classifications from 73 BMA slides. Among 44 testing BMA slides, an average of 19,209 viable cells per slide were identified and used in automated cell differentials, with a range of 237 to 126,483 cells. In comparing these automated cell differential percentages with corresponding manual differentials, cell type-specific correlation coefficients ranged from 0.913 for blast cells to 0.365 for myelocytes, with an average coefficient of 0.654 among all 11 cell types. A statistically significant concordance was observed among slides with blast percentages less or greater than 20% (p = 1.0×10⁻⁵) and with plasma cell percentages less or greater than 10% (p = 5.9×10⁻⁶) between automated and manual differentials, suggesting potential diagnostic utility of this automated pipeline for malignancies such as acute myeloid leukemia and multiple myeloma. Additionally, by simulating the manual counting of 500 cells within localized areas of a BMA slide and iterating over all optimal slide locations, we quantified the inter-observer variability associated with limited sample size in traditional BMA cell counting. Localized differentials exhibit an average variance ranging from 24.1% for erythroid precursors to 1.8% for basophils. Variance in localized differentials of up to 44.8% for blast cells and 36.9% for plasma cells was observed, demonstrating that sample classification based on diagnostic thresholds of cell populations is variable even between different areas within a single slide. Finally, pipeline outputs of region classification, cell detection, cell classification, and localized cell differentials can be visualized using whole-slide image analysis software. By improving cell sampling and reducing inter-observer variability, this automated pipeline has potential to improve the current standard of practice for utilizing BMA smears in the diagnosis of hematologic disorders.
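
A minimal sketch of the counting step described above: per-cell class predictions are converted into differential percentages and correlated with a manual differential. The cell-type list is truncated and all values are toy numbers, not the authors' code or data.

```python
import numpy as np
from scipy.stats import pearsonr

CELL_TYPES = ["blast", "myelocyte", "erythroid precursor", "plasma cell", "basophil"]  # subset, for illustration

def differential(predicted_labels):
    """Convert per-cell predicted labels into differential percentages."""
    labels = np.asarray(predicted_labels)
    return {t: 100.0 * np.mean(labels == t) for t in CELL_TYPES}

# Correlate automated vs. manual percentages for one cell type across slides.
auto = np.array([22.0, 3.5, 18.0])    # automated blast % per slide (toy values)
manual = np.array([20.0, 4.0, 17.5])  # manual 500-cell blast % per slide (toy values)
r, _ = pearsonr(auto, manual)
print(f"blast correlation r = {r:.3f}")
```
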
6

Ahmed, Shakil, Asadullah Shaikh, Hani Alshahrani, Abdullah Alghamdi, Mesfer Alrizq, Junaid Baber, and Maheen Bakhtyar. "Transfer Learning Approach for Classification of Histopathology Whole Slide Images." Sensors 21, no. 16 (August 9, 2021): 5361. http://dx.doi.org/10.3390/s21165361.

Abstract:
The classification of whole slide images (WSIs) provides physicians with an accurate analysis of diseases and also helps them to treat patients effectively. The classification can be linked to further detailed analysis and diagnosis. Deep learning (DL) has made significant advances in the medical industry, including the use of magnetic resonance imaging (MRI) scans, computerized tomography (CT) scans, and electrocardiograms (ECGs) to detect life-threatening diseases, including heart disease, cancer, and brain tumors. However, more advancement in the field of pathology is needed, but the main hurdle causing the slow progress is the shortage of large-labeled datasets of histopathology images to train the models. The Kimia Path24 dataset was particularly created for the classification and retrieval of histopathology images. It contains 23,916 histopathology patches with 24 tissue texture classes. A transfer learning-based framework is proposed and evaluated on two famous DL models, Inception-V3 and VGG-16. To improve the productivity of Inception-V3 and VGG-16, we used their pre-trained weights and concatenated these with an image vector, which is used as input for the training of the same architecture. Experiments show that the proposed innovation improves the accuracy of both famous models. The patch-to-scan accuracy of VGG-16 is improved from 0.65 to 0.77, and for the Inception-V3, it is improved from 0.74 to 0.79.
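
The abstract's description of concatenating pre-trained features with an image vector leaves room for interpretation; below is a hedged Keras-style sketch of one such fusion. The input size, pooling factor, and classifier head are illustrative assumptions rather than the paper's configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Frozen ImageNet-pretrained backbone as a feature extractor.
base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   pooling="avg", input_shape=(224, 224, 3))
base.trainable = False

inp = layers.Input(shape=(224, 224, 3))
feats = base(inp)                                        # 512-d pooled CNN features
raw = layers.Flatten()(layers.AveragePooling2D(8)(inp))  # coarse raw-image vector
x = layers.Concatenate()([feats, raw])                   # fuse features with the image vector
out = layers.Dense(24, activation="softmax")(x)          # 24 Kimia Path24 classes
model = Model(inp, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```
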
7

Franklin, Daniel L., Tara Pattilachan, and Anthony Magliocco. "Abstract 5048: Imaging based EGFR mutation subtype classification using EfficientNet." Cancer Research 82, no. 12_Supplement (June 15, 2022): 5048. http://dx.doi.org/10.1158/1538-7445.am2022-5048.

Abstract:
This study aimed to determine whether EfficientNet-B0 was able to classify EGFR mutation subtypes from H&E-stained whole slide images of lung and lymph node tissue. Background: Non-small cell lung cancer (NSCLC) accounts for the majority of all lung adenocarcinomas, with estimates that up to a third of such cases have a mutation in their epidermal growth factor receptor (EGFR). EGFR mutations can occur in various subtypes, such as Exon19 deletion and L858R substitution, which are important for early therapy decisions. Here, we propose a deep learning approach for detecting and classifying EGFR mutation subtypes, which will greatly reduce the cost of determining mutation status, allowing for testing in a low-resource setting. Methods: An EfficientNet-B0 model was trained with whole slide images of lung tissue or metastatic lymph nodes with known EGFR mutation subtype (wild type, Exon19 deletion, or L858R substitution). Regions of interest were tiled into 512x512-pixel images. The RGB .jpeg tiles were augmented by rotating 90°, 180°, and 270°, and by mirroring. The model was initialized with random parameters and trained with a batch size of 32 and a learning rate of 0.0001 for 1 epoch, before the validation loss increased for the next 5 epochs. Results: The model achieved a slide AUC of 0.8333 and a tile AUC of 0.8010. Slide AUC is the result of averaging all tiles within a slide and measuring performance based on correctly predicted slides (n=18). Tile AUC is the result of measuring performance based on correctly predicted tiles (n=102,000). Conclusion: Using the EfficientNet-B0 architecture as the basis for our EGFR mutation classification system, we were able to create a top-performing model and achieve a slide AUC of 0.833 and a tile AUC of 0.801. Healthcare providers and researchers may utilize this AI model in clinical settings to allow for detection of EGFR mutation from routinely captured images and bypass expensive and time-consuming sequencing methods.

Table 1. Number of image tiles used and the number of slides they were extracted from.
                    Train     Validation   Test
Exon19 tiles        187,384   47,904       33,096
L858R tiles         166,288   19,512       26,136
Wild type tiles     225,944   27,696       42,768
Exon19 slides       47        6            6
L858R slides        46        6            6
Wild type slides    43        6            6

Citation Format: Daniel L. Franklin, Tara Pattilachan, Anthony Magliocco. Imaging based EGFR mutation subtype classification using EfficientNet [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2022; 2022 Apr 8-13. Philadelphia (PA): AACR; Cancer Res 2022;82(12_Suppl):Abstract nr 5048.
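
The slide-level AUC quoted above comes from averaging tile probabilities within each slide; a sketch of that aggregation follows, written for a binary label for simplicity (the study distinguishes three subtype classes).

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def slide_auc(tile_probs, tile_slide_ids, slide_labels):
    """Average tile probabilities per slide, then score slide-level AUC."""
    per_slide = {}
    for p, sid in zip(tile_probs, tile_slide_ids):
        per_slide.setdefault(sid, []).append(p)
    ids = sorted(per_slide)
    y_score = [np.mean(per_slide[s]) for s in ids]
    y_true = [slide_labels[s] for s in ids]
    return roc_auc_score(y_true, y_score)
```
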
8

Jansen, Philipp, Adelaida Creosteanu, Viktor Matyas, Amrei Dilling, Ana Pina, Andrea Saggini, Tobias Schimming, et al. "Deep Learning Assisted Diagnosis of Onychomycosis on Whole-Slide Images." Journal of Fungi 8, no. 9 (August 28, 2022): 912. http://dx.doi.org/10.3390/jof8090912.

Abstract:
Background: Onychomycosis numbers among the most common fungal infections in humans affecting finger- or toenails. Histology remains a frequently applied screening technique to diagnose onychomycosis. Screening slides for fungal elements can be time-consuming for pathologists, and sensitivity in cases with low amounts of fungi remains a concern. Convolutional neural networks (CNNs) have revolutionized image classification in recent years. The goal of our project was to evaluate if a U-NET-based segmentation approach as a subcategory of CNNs can be applied to detect fungal elements on digitized histologic sections of human nail specimens and to compare it with the performance of 11 board-certified dermatopathologists. Methods: In total, 664 corresponding H&E- and PAS-stained histologic whole-slide images (WSIs) of human nail plates from four different laboratories were digitized. Histologic structures were manually annotated. A U-NET image segmentation model was trained for binary segmentation on the dataset generated by annotated slides. Results: The U-NET algorithm detected 90.5% of WSIs with fungi, demonstrating a comparable sensitivity with that of the 11 board-certified dermatopathologists (sensitivity of 89.2%). Conclusions: Our results demonstrate that machine-learning-based algorithms applied to real-world clinical cases can produce comparable sensitivities to human pathologists. Our established U-NET may be used as a supportive diagnostic tool to preselect possible slides with fungal elements. Slides where fungal elements are indicated by our U-NET should be reevaluated by the pathologist to confirm or refute the diagnosis of onychomycosis.
9

Amgad, Mohamed, Habiba Elfandy, Hagar Hussein, Lamees A. Atteya, Mai A. T. Elsebaie, Lamia S. Abo Elnasr, Rokia A. Sakr, et al. "Structured crowdsourcing enables convolutional segmentation of histology images." Bioinformatics 35, no. 18 (February 6, 2019): 3461–67. http://dx.doi.org/10.1093/bioinformatics/btz083.

Abstract:
Motivation: While deep-learning algorithms have demonstrated outstanding performance in semantic image segmentation tasks, large annotation datasets are needed to create accurate models. Annotation of histology images is challenging due to the effort and experience required to carefully delineate tissue structures, and difficulties related to sharing and markup of whole-slide images. Results: We recruited 25 participants, ranging in experience from senior pathologists to medical students, to delineate tissue regions in 151 breast cancer slides using the Digital Slide Archive. Inter-participant discordance was systematically evaluated, revealing low discordance for tumor and stroma, and higher discordance for more subjectively defined or rare tissue classes. Feedback provided by senior participants enabled the generation and curation of 20 000+ annotated tissue regions. Fully convolutional networks trained using these annotations were highly accurate (mean AUC=0.945), and the scale of annotation data provided notable improvements in image classification accuracy. Availability and implementation: Dataset is freely available at: https://goo.gl/cNM4EL. Supplementary information: Supplementary data are available at Bioinformatics online.
10

Wang, Qian, Ying Zou, Jianxin Zhang, and Bin Liu. "Second-order multi-instance learning model for whole slide image classification." Physics in Medicine & Biology 66, no. 14 (July 12, 2021): 145006. http://dx.doi.org/10.1088/1361-6560/ac0f30.

11

Molla, Md Rony, and Ma Jian Fen. "Convolutional Sparse Coding Multiple Instance Learning for Whole Slide Image Classification." International Journal of Advanced Engineering Research and Science 10, no. 12 (2023): 096–104. http://dx.doi.org/10.22161/ijaers.1012.10.

Abstract:
Multiple Instance Learning (MIL) is commonly utilized in weakly supervised whole slide image (WSI) classification. MIL techniques typically involve a feature embedding step using a pretrained feature extractor, followed by an aggregator that aggregates the embedded instances into predictions. Current efforts aim to enhance these components by refining feature embeddings through self-supervised pretraining and by modeling correlations between instances. In this paper, we propose a convolutional sparsely coded MIL (CSCMIL) that utilizes convolutional sparse dictionary learning to simultaneously address these two aspects. Convolutional sparse dictionary learning consists of filters or kernels that are applied with convolutional operations and uses an overcomplete dictionary to represent instances as sparse linear combinations of atoms, thereby capturing their similarities. Straightforwardly built into existing MIL frameworks, the suggested CSC module has an affordable computation cost. Experiments on various datasets showed that the suggested CSC module improved the performance of current MIL approaches by 3.85% in AUC and 4.50% in accuracy, comparable to the gains from SimCLR pretraining (4.21% and 4.98%).
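
For context, the standard convolutional sparse coding objective that such a module builds on is shown below; the paper's exact formulation and regularisation may differ.

```latex
% Represent an instance feature map x as a sparse sum of dictionary
% filters d_k convolved (*) with sparse code maps z_k.
\min_{\{z_k\}} \; \frac{1}{2} \Big\| x - \sum_{k=1}^{K} d_k * z_k \Big\|_2^2
\; + \; \lambda \sum_{k=1}^{K} \| z_k \|_1
```
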
12

Aftab, Rukhma, Yan Qiang, and Zhao Juanjuan. "Contrastive Learning for Whole Slide Image Representation: A Self-Supervised Approach in Digital Pathology." European Journal of Applied Science, Engineering and Technology 2, no. 2 (March 1, 2024): 175–85. http://dx.doi.org/10.59324/ejaset.2024.2(2).12.

Abstract:
Image analysis in digital pathology is identified as a challenging field, particularly for AI-driven classification and search tasks. The high-resolution and large-scale nature of whole slide images (WSIs) present significant computational challenges in representing and analyzing these images effectively. The research endeavors to tackle these hurdles by presenting an innovative methodology grounded in self-supervised learning (SSL). Unlike prior SSL approaches that depend on augmenting at the patch level, the novel framework capitalizes on existing primary site information to directly glean effective representations from Whole Slide Images (WSIs). Moreover, the investigation integrates fully supervised contrastive learning to bolster the resilience of these representations for both classification and search endeavors. For experimentation, the study drew upon a dataset encompassing over 6,000 WSIs sourced from The Cancer Genome Atlas (TCGA) repository facilitated by the National Cancer Institute. The proposed architecture underwent training and assessment using this dataset. Evaluation primarily focused on scrutinizing performance across diverse primary sites and cancer subtypes, with particular attention dedicated to lung cancer classification. Impressively, the proposed architecture yielded outstanding outcomes, showcasing robust performance across the majority of primary sites and cancer subtypes. Furthermore, the study garnered the top position in validation for a lung cancer classification task.
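
The fully supervised contrastive learning mentioned above is typically the SupCon objective of Khosla et al.; assuming that is the loss in use, a standard statement is given below, with z the normalised embeddings, P(i) the in-batch positives sharing anchor i's label, A(i) all other in-batch samples, and τ a temperature.

```latex
\mathcal{L}_{\mathrm{sup}} \;=\; \sum_{i} \frac{-1}{|P(i)|} \sum_{p \in P(i)}
\log \frac{\exp(z_i \cdot z_p / \tau)}{\sum_{a \in A(i)} \exp(z_i \cdot z_a / \tau)}
```
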
13

Walkowski, Slawomir, and Janusz Szymas. "Histopathologic Patterns of Nervous System Tumors Based on Computer Vision Methods and Whole Slide Imaging (WSI)." Analytical Cellular Pathology 35, no. 2 (2012): 117–22. http://dx.doi.org/10.1155/2012/483525.

Abstract:
Background: Making an automatic diagnosis based on virtual slides and whole slide imaging, or even determining whether a case belongs to a single class representing a specific disease, is a big challenge. In this work we focus on the WHO Classification of Tumours of the Central Nervous System. We design a method to automatically distinguish virtual slides that contain histopathologic patterns characteristic of glioblastoma (pseudopalisading necrosis) and discriminate cases of neurinoma (schwannoma), which contain similar structures: palisades (Verocay bodies). Methods: Our method is based on computer vision approaches such as structural analysis and shape descriptors. We start with image segmentation in a virtual slide, find specific patterns, and use a set of features that describe pseudopalisading necrosis and distinguish it from palisades. The type of structures found in a slide determines its classification. Results: The described method was tested on a set of 49 virtual slides captured using a robotic microscope. Results show that 82% of glioblastoma cases and 90% of neurinoma cases were correctly identified by the proposed algorithm. Conclusion: Our method is a promising approach to automatic detection of nervous system tumors using virtual slides.
14

Fell, Christina, Mahnaz Mohammadi, David Morrison, Ognjen Arandjelović, Sheeba Syed, Prakash Konanahalli, Sarah Bell, Gareth Bryson, David J. Harrison, and David Harris-Birtill. "Detection of malignancy in whole slide images of endometrial cancer biopsies using artificial intelligence." PLOS ONE 18, no. 3 (March 8, 2023): e0282577. http://dx.doi.org/10.1371/journal.pone.0282577.

Abstract:
In this study we use artificial intelligence (AI) to categorise endometrial biopsy whole slide images (WSI) from digital pathology as either “malignant”, “other or benign” or “insufficient”. An endometrial biopsy is a key step in the diagnosis of endometrial cancer; biopsies are viewed and diagnosed by pathologists. Pathology is increasingly digitised, with slides viewed as images on screens rather than through the lens of a microscope. The availability of these images is driving automation via the application of AI. A model that classifies slides in the manner proposed would allow prioritisation of these slides for pathologist review and hence reduce time to diagnosis for patients with cancer. Previous studies using AI on endometrial biopsies have examined slightly different tasks, for example using images alongside genomic data to differentiate between cancer subtypes. We took 2909 slides with “malignant” and “other or benign” areas annotated by pathologists. A fully supervised convolutional neural network (CNN) model was trained to calculate the probability of a patch from the slide being “malignant” or “other or benign”. Heatmaps of all the patches on each slide were then produced to show malignant areas. These heatmaps were used to train a slide classification model to give the final slide categorisation as either “malignant”, “other or benign” or “insufficient”. The final model correctly classified 90% of all slides and 97% of slides in the malignant class; this accuracy is good enough to allow prioritisation of pathologists’ workload.
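
Schematically, the two-stage design above reduces to building a per-patch probability heatmap and feeding it to a second-stage classifier; in the sketch below, patch_model, the coordinate grid, and the fixed class list are placeholders, not the study's implementation.

```python
import numpy as np

SLIDE_CLASSES = ["malignant", "other or benign", "insufficient"]

def slide_heatmap(patches, coords, patch_model, grid_shape):
    """Build a malignancy heatmap from per-patch probabilities."""
    heat = np.zeros(grid_shape, dtype=np.float32)
    for patch, (r, c) in zip(patches, coords):
        heat[r, c] = patch_model(patch)  # P(malignant) for this patch, in [0, 1]
    return heat  # the heatmap is then the input to the slide-level classifier
```
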
15

Ginley, Brandon, Brendon Lutnick, Kuang-Yu Jen, Agnes B. Fogo, Sanjay Jain, Avi Rosenberg, Vighnesh Walavalkar, et al. "Computational Segmentation and Classification of Diabetic Glomerulosclerosis." Journal of the American Society of Nephrology 30, no. 10 (September 5, 2019): 1953–67. http://dx.doi.org/10.1681/asn.2018121259.

Abstract:
Background: Pathologists use visual classification of glomerular lesions to assess samples from patients with diabetic nephropathy (DN). The results may vary among pathologists. Digital algorithms may reduce this variability and provide more consistent image structure interpretation. Methods: We developed a digital pipeline to classify renal biopsies from patients with DN. We combined traditional image analysis with modern machine learning to efficiently capture important structures, minimize manual effort and supervision, and enforce biologic prior information onto our model. To computationally quantify glomerular structure despite its complexity, we simplified it to three components: nuclei; capillary lumina and Bowman spaces; and Periodic Acid-Schiff-positive structures. We detected glomerular boundaries and nuclei from whole slide images using convolutional neural networks, and the remaining glomerular structures using an unsupervised technique developed expressly for this purpose. We defined a set of digital features which quantify the structural progression of DN, and a recurrent network architecture which processes these features into a classification. Results: Our digital classification agreed with a senior pathologist whose classifications were used as ground truth with moderate Cohen’s kappa κ = 0.55 and 95% confidence interval [0.50, 0.60]. Two other renal pathologists agreed with the digital classification with κ1 = 0.68, 95% interval [0.50, 0.86] and κ2 = 0.48, 95% interval [0.32, 0.64]. Our results suggest computational approaches are comparable to human visual classification methods, and can offer improved precision in clinical decision workflows. We detected glomerular boundaries from whole slide images with 0.93±0.04 balanced accuracy, glomerular nuclei with 0.94 sensitivity and 0.93 specificity, and glomerular structural components with 0.95 sensitivity and 0.99 specificity. Conclusions: Computationally derived, histologic image features hold significant diagnostic information that may augment clinical diagnostics.
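
The agreement figures quoted above are ordinary Cohen's kappa statistics; for reference they can be reproduced from paired class assignments as below (toy values, not study data).

```python
from sklearn.metrics import cohen_kappa_score

digital =     [2, 1, 3, 2, 0, 1]  # pipeline's glomerular lesion classes (toy)
pathologist = [2, 1, 2, 2, 0, 1]  # reference pathologist's classes (toy)
print(f"Cohen's kappa = {cohen_kappa_score(digital, pathologist):.2f}")
```
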
16

Lewis, Joshua, Xuebao Zhang, Nithya Shanmugam, Bradley Drumheller, Conrad Shebelut, Geoffrey Smith, Lee Cooper, and David Jaye. "Machine Learning-Based Automated Selection of Regions for Analysis on Bone Marrow Aspirate Smears." American Journal of Clinical Pathology 156, Supplement_1 (October 1, 2021): S1—S2. http://dx.doi.org/10.1093/ajcp/aqab189.001.

Abstract:
Manual microscopic examination of bone marrow aspirate (BMA) smears and counting of cell populations remains the standard of practice for accurate assessment of benign and neoplastic bone marrow disorders. While automated cell classification software using machine learning models has been developed and applied to BMAs, current systems nonetheless require manual identification of optimal regions within the slide that are rich in marrow hematopoietic cells. To address this issue, we have developed a machine learning-based platform for automated identification of optimal regions in whole-slide images of BMA smears. A training dataset was developed by manual annotation of 53 BMA slides across biopsy diagnoses including unremarkable trilineal hematopoiesis, acute leukemia, and plasma cell neoplasms, as well as across differences in total cellularity represented by a spectrum of marrow nucleated cell content and white blood cell counts. 10,537 regions among these 53 slides were manually annotated as either “optimal” (regions near aspirate particles with high proportions of marrow nucleated cells), “particle” (aspirate particles), or “hemodilute” (blood-rich regions with high proportions of red blood cells). Training of a neural network-based classifier on 10x magnification slides with region cropping and image augmentation resulted in a classifier with substantial accuracy on new testing-set BMA slides (one-vs-rest AUROC > 0.999 across 10 training/testing splits for all 3 region classes), with very few particle and hemodilute regions being classified as optimal (particle: 0.83%, hemodilute: 0.39%). Additionally, this classifier accurately classifies BMA regions on slides from hematological disorders not represented in the training data, including Burkitt lymphoma (AUROC > 0.999 across region classes), chronic myeloid leukemia (AUROC > 0.999 across region classes), and diffuse large B-cell lymphoma (AUROC = 1 across region classes), demonstrating the broad applicability of our approach. To assess the performance of our classifier on whole-slide images, tiles from 10x magnification slides were manually annotated by three participants with notable concordance (Krippendorff’s alpha = 0.424); substantial agreement was found between manual annotations and model predictions within whole-slide images (optimal AUROC = 0.958, particle AUROC = 1.0, hemodilute AUROC = 0.947). Based on these promising results, this machine learning-based region classification model is being connected to a previously-developed bone marrow cell classifier to fully automate differential cell counting in whole-slide images. The development of this novel automated pipeline has potential to streamline the diagnostic process for hematological disorders while enhancing accuracy and replicability, as well as decreasing diagnostic turnaround time for improving patient care.
17

Rao, Roopa S., Divya B. Shivanna, Kirti S. Mahadevpur, Sinchana G. Shivaramegowda, Spoorthi Prakash, Surendra Lakshminarayana, and Shankargouda Patil. "Deep Learning-Based Microscopic Diagnosis of Odontogenic Keratocysts and Non-Keratocysts in Haematoxylin and Eosin-Stained Incisional Biopsies." Diagnostics 11, no. 12 (November 24, 2021): 2184. http://dx.doi.org/10.3390/diagnostics11122184.

Abstract:
Background: The goal of the study was to create a histopathology image classification automation system that could identify odontogenic keratocysts in hematoxylin and eosin-stained jaw cyst sections. Methods: From 54 odontogenic keratocysts, 23 dentigerous cysts, and 20 radicular cysts, about 2657 microscopic pictures at 400× magnification were obtained. The images were annotated by a pathologist and categorized into epithelium, cystic lumen, and stroma of keratocysts and non-keratocysts. Preprocessing was performed in two steps: the first was data augmentation, as deep learning techniques (DLT) improve their performance with increased data size; the second was selecting the epithelial region as the region of interest. Results: Four experiments were conducted using DLT. In the first, a pre-trained VGG16 was employed for classification after image augmentation. In the second, DenseNet-169 was implemented for image classification on the augmented images. In the third, DenseNet-169 was trained on the two-step preprocessed images. In the last experiment, the results of experiments two and three were averaged to obtain an accuracy of 93% on OKC and non-OKC images. Conclusions: The proposed algorithm may fit into an automation system for OKC and non-OKC diagnosis. Utmost care was taken in the manual process of image acquisition (minimum 28–30 images/slide at 40× magnification covering the entire stretch of epithelium and stromal component). Further, there is scope to improve the accuracy rate and make it free of human bias by using a whole slide imaging scanner for image acquisition from slides.
18

Dimitriou, Neofytos, Ognjen Arandjelović, and David J. Harrison. "Magnifying Networks for Histopathological Images with Billions of Pixels." Diagnostics 14, no. 5 (March 1, 2024): 524. http://dx.doi.org/10.3390/diagnostics14050524.

Abstract:
Amongst the other benefits conferred by the shift from traditional to digital pathology is the potential to use machine learning for diagnosis, prognosis, and personalization. A major challenge in the realization of this potential emerges from the extremely large size of digitized images, which are often in excess of 100,000 × 100,000 pixels. In this paper, we tackle this challenge head-on by diverging from the existing approaches in the literature—which rely on the splitting of the original images into small patches—and introducing magnifying networks (MagNets). By using an attention mechanism, MagNets identify the regions of the gigapixel image that benefit from an analysis on a finer scale. This process is repeated, resulting in an attention-driven coarse-to-fine analysis of only a small portion of the information contained in the original whole-slide images. Importantly, this is achieved using minimal ground truth annotation, namely, using only global, slide-level labels. The results from our tests on the publicly available Camelyon16 and Camelyon17 datasets demonstrate the effectiveness of MagNets—as well as the proposed optimization framework—in the task of whole-slide image classification. Importantly, MagNets process at least five times fewer patches from each whole-slide image than any of the existing end-to-end approaches.
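
A hedged sketch of the attention-driven coarse-to-fine idea follows; attention_fn and the quad-tree subdivide helper are hypothetical placeholders (a learned attention score and a region-splitting routine), not the MagNets API.

```python
import numpy as np

def subdivide(region):
    """Hypothetical helper: split a square region (x, y, size) into four children."""
    x, y, s = region
    h = s / 2
    return [(x, y, h), (x + h, y, h), (x, y + h, h), (x + h, y + h, h)]

def magnify(regions, attention_fn, top_k=4, max_depth=3, depth=0):
    """Score regions with an attention function and recurse only into the
    top-k, so only a small fraction of the gigapixel image is ever read."""
    if depth == max_depth or not regions:
        return regions
    scores = np.array([attention_fn(r) for r in regions])
    keep = [regions[i] for i in np.argsort(scores)[-top_k:]]
    finer = [child for r in keep for child in subdivide(r)]
    return magnify(finer, attention_fn, top_k, max_depth, depth + 1)

# e.g. magnify([(0, 0, 100_000)], attention_fn=lambda r: r[0] + r[1])
```
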
19

Wang, Shujun, Yaxi Zhu, Lequan Yu, Hao Chen, Huangjing Lin, Xiangbo Wan, Xinjuan Fan, and Pheng-Ann Heng. "RMDL: Recalibrated multi-instance deep learning for whole slide gastric image classification." Medical Image Analysis 58 (December 2019): 101549. http://dx.doi.org/10.1016/j.media.2019.101549.

20

Yu, Jiahui, Tianyu Ma, Yu Fu, Hang Chen, Maode Lai, Cheng Zhuo, and Yingke Xu. "Local-to-global spatial learning for whole-slide image representation and classification." Computerized Medical Imaging and Graphics 107 (July 2023): 102230. http://dx.doi.org/10.1016/j.compmedimag.2023.102230.

21

Ma, Yingfan, Xiaoyuan Luo, Kexue Fu, and Manning Wang. "Transformer-Based Video-Structure Multi-Instance Learning for Whole Slide Image Classification." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 13 (March 24, 2024): 14263–71. http://dx.doi.org/10.1609/aaai.v38i13.29338.

Abstract:
Pathological images play a vital role in clinical cancer diagnosis. Computer-aided diagnosis on digital Whole Slide Images (WSIs) has been widely studied. The major challenge of using deep learning models for WSI analysis is the huge size of WSIs, and existing methods struggle between end-to-end learning and proper modeling of contextual information. Most state-of-the-art methods utilize a two-stage strategy, in which they use a pre-trained model to extract features of small patches cut from a WSI and then input these features into a classification model. These methods cannot perform end-to-end learning and consider contextual information at the same time. To solve this problem, we propose a framework that models a WSI as a pathologist's observing video and utilizes a Transformer to process video clips with a divide-and-conquer strategy, which helps achieve both context-awareness and end-to-end learning. Extensive experiments on three public WSI datasets show that our proposed method outperforms existing SOTA methods in both WSI classification and positive region detection.
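
The clip construction implied by the video analogy might look like the sketch below; the patch ordering (e.g., a pathologist-style scanning path over the WSI) is the paper's contribution and is assumed to be given.

```python
import numpy as np

def to_clips(patch_features, clip_len=16):
    """Arrange an ordered (n, dim) array of patch features into fixed-length clips."""
    n, dim = patch_features.shape
    pad = (-n) % clip_len  # zero-pad so the sequence divides evenly into clips
    feats = np.concatenate([patch_features, np.zeros((pad, dim))])
    return feats.reshape(-1, clip_len, dim)  # (num_clips, clip_len, dim)
```
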
22

Yang, Rui, Pei Liu, and Luping Ji. "ProDiv: Prototype-driven consistent pseudo-bag division for whole-slide image classification." Computer Methods and Programs in Biomedicine 249 (June 2024): 108161. http://dx.doi.org/10.1016/j.cmpb.2024.108161.

23

Song, JaeYen, Soyoung Im, Sung Hak Lee, and Hyun-Jong Jang. "Deep Learning-Based Classification of Uterine Cervical and Endometrial Cancer Subtypes from Whole-Slide Histopathology Images." Diagnostics 12, no. 11 (October 28, 2022): 2623. http://dx.doi.org/10.3390/diagnostics12112623.

Abstract:
Uterine cervical and endometrial cancers have different subtypes with different clinical outcomes. Therefore, cancer subtyping is essential for proper treatment decisions. Furthermore, an endometrial and endocervical origin for an adenocarcinoma should also be distinguished. Although the discrimination can be helped with various immunohistochemical markers, there is no definitive marker. Therefore, we tested the feasibility of deep learning (DL)-based classification for the subtypes of cervical and endometrial cancers and the site of origin of adenocarcinomas from whole slide images (WSIs) of tissue slides. WSIs were split into 360 × 360-pixel image patches at 20× magnification for classification. Then, the average of patch classification results was used for the final classification. The area under the receiver operating characteristic curves (AUROCs) for the cervical and endometrial cancer classifiers were 0.977 and 0.944, respectively. The classifier for the origin of an adenocarcinoma yielded an AUROC of 0.939. These results clearly demonstrated the feasibility of DL-based classifiers for the discrimination of cancers from the cervix and uterus. We expect that the performance of the classifiers will be much enhanced with an accumulation of WSI data. Then, the information from the classifiers can be integrated with other data for more precise discrimination of cervical and endometrial cancers.
24

Neto, Pedro C., Sara P. Oliveira, Diana Montezuma, João Fraga, Ana Monteiro, Liliana Ribeiro, Sofia Gonçalves, Isabel M. Pinto, and Jaime S. Cardoso. "iMIL4PATH: A Semi-Supervised Interpretable Approach for Colorectal Whole-Slide Images." Cancers 14, no. 10 (May 18, 2022): 2489. http://dx.doi.org/10.3390/cancers14102489.

Abstract:
Colorectal cancer (CRC) diagnosis is based on samples obtained from biopsies, assessed in pathology laboratories. Due to population growth and ageing, as well as better screening programs, the CRC incidence rate has been increasing, leading to a higher workload for pathologists. In this sense, the application of AI for automatic CRC diagnosis, particularly on whole-slide images (WSI), is of utmost relevance, in order to assist professionals in case triage and case review. In this work, we propose an interpretable semi-supervised approach to detect lesions in colorectal biopsies with high sensitivity, based on multiple-instance learning and feature aggregation methods. The model was developed on an extended version of the recent, publicly available CRC dataset (the CRC+ dataset with 4433 WSI), using 3424 slides for training and 1009 slides for evaluation. The proposed method attained 90.19% classification ACC, 98.8% sensitivity, 85.7% specificity, and a quadratic weighted kappa of 0.888 at slide-based evaluation. Its generalisation capabilities are also studied on two publicly available external datasets.
25

Govind, Darshana, Brendon Lutnick, John E. Tomaszewski, and Pinaki Sarder. "Automated erythrocyte detection and classification from whole slide images." Journal of Medical Imaging 5, no. 02 (April 10, 2018): 1. http://dx.doi.org/10.1117/1.jmi.5.2.027501.

26

Tourniaire, Paul, Marius Ilie, Paul Hofman, Nicholas Ayache, and Hervé Delingette. "Abstract 461: Mixed supervision to improve the classification and localization: Coherence of tumors in histological slides." Cancer Research 82, no. 12_Supplement (June 15, 2022): 461. http://dx.doi.org/10.1158/1538-7445.am2022-461.

Abstract:
With the growing standardization of Whole Slide Images (WSIs), deep learning algorithms have shown promising results for the automated classification and localization of tumors. Yet, it is often difficult to train such algorithms, as they usually require careful detailed annotations from expert pathologists, which are tedious to produce. This is why in general only slide-level labels are accessible while annotations of small regions (or tiles) are limited. With only slide-level information, it is difficult to obtain accurate predictions of the localization of pathological tissues inside a slide, despite reaching good slide-level classification. Besides, existing algorithms show limited consistency between slide- and tile-level predictions, leading to difficult interpretation in case of healthy tissue. Using the attention-based multiple instance learning framework, we propose to combine slide-level labels on all slides with tile-level labels on a small fraction (e.g. 20%) of slides within a histology dataset to improve both classification and localization performances. With this mixed supervision of slides, we aim to enforce a better consistency between slide- and tile-level labels. To this end, we introduce an attention-based loss function to further guide the model’s attention on discriminative regions inside tumorous slides, and to enforce equal attention among all tiles of normal slides. On the Camelyon16 dataset, we reached precision and recall scores as high as 0.99 and 0.85 respectively with an AUC of 0.93 on the competition test set, using only 50% of the slides with tile-level annotations in the training set. Experiments using various proportions of fully annotated slides in the training set show promising results for an improved localization of tumors and classification of slides. In this work, we showed that using a limited amount of fully annotated slides we can improve both the classification and localization performances of an attention-based deep learning model. This increased consistency and performance should help pathologists to better interpret the algorithm output and to focus on suspicious regions in probable tumorous slides. Citation Format: Paul Tourniaire, Marius Ilie, Paul Hofman, Nicholas Ayache, Hervé Delingette. Mixed supervision to improve the classification and localization: Coherence of tumors in histological slides [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2022; 2022 Apr 8-13. Philadelphia (PA): AACR; Cancer Res 2022;82(12_Suppl):Abstract nr 461.
27

Jayaratne, N., A. Sasikumar, S. Subasinghe, A. Borkowski, S. Mastorides, L. Thomas, E. Mastorides, and L. DeLand. "Using Deep Learning for Whole Slide Image Prostate Cancer Diagnosis and Grading in South Florida Veteran Population." American Journal of Clinical Pathology 156, Supplement_1 (October 1, 2021): S141. http://dx.doi.org/10.1093/ajcp/aqab191.301.

Abstract:
Introduction/Objective: Prostate cancer is the most common non-cutaneous malignancy in veterans, with approximately 11,000 new prostate cancer cases diagnosed in the Veterans Affairs system each year. Prostate cancer diagnosis and grading can be challenging even for experienced pathologists. Although large VA medical centers have pathologists that specialize in urologic pathology, the vast majority do not. We hypothesized that AI-augmented diagnosis and grading may provide the solution for such situations. Methods/Case Report: The dataset consisted of 10,000 prostate biopsy whole slide images (WSI) from the Kaggle PANDA challenge and 6,000 WSI from the James A. Haley Veterans’ Hospital. Two classification models were trained on the combined Kaggle and VA datasets using whole-slide labels rather than annotated slides, resembling semi-supervised training: a two-class classification to predict Benign (ISUP 0) versus Cancerous (ISUP 1-5), and a three-class classification to predict Benign (ISUP 0), Low-grade (ISUP 1-2), or High-grade (ISUP 3-5). WSI were split into “tiles” for training the models to reduce whitespace around samples, manage large images, and normalize dimensions/orientations. Results: Models trained purely as binary and 3-class classifiers performed very well. Two-class model: accuracy = 0.937, precision = 0.965, F1 = 0.94, AUC = 0.979. Three-class model: accuracy = 0.89; Benign: precision = 0.897, F1 = 0.928; Low-grade: precision = 0.866, F1 = 0.841; High-grade: precision = 0.91, F1 = 0.878. We plan to develop multi-stage prediction models using these 2-class and 3-class classifiers as the first stage and a cancer grade predictor in the second stage. Conclusion: We successfully showed that AI can augment the pathologist’s diagnosis and grading of prostate cancer.
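
The two label groupings described above reduce to a simple mapping from ISUP grade:

```python
def two_class(isup):
    """Benign (ISUP 0) vs. cancerous (ISUP 1-5)."""
    return "benign" if isup == 0 else "cancerous"

def three_class(isup):
    """Benign (ISUP 0), low-grade (ISUP 1-2), high-grade (ISUP 3-5)."""
    if isup == 0:
        return "benign"
    return "low-grade" if isup <= 2 else "high-grade"
```
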
28

Soldatov, Sergey A., Danil M. Pashkov, Sergey A. Guda, Nikolay S. Karnaukhov, Alexander A. Guda, and Alexander V. Soldatov. "Deep Learning Classification of Colorectal Lesions Based on Whole Slide Images." Algorithms 15, no. 11 (October 27, 2022): 398. http://dx.doi.org/10.3390/a15110398.

Abstract:
Microscopic tissue analysis is the key diagnostic method needed for disease identification and choosing the best treatment regimen. According to the Global Cancer Observatory, approximately two million people are diagnosed with colorectal cancer each year, and an accurate diagnosis requires a significant amount of time and a highly qualified pathologist to decrease the high mortality rate. Recent development of artificial intelligence technologies and scanning microscopy introduced digital pathology into the field of cancer diagnosis by means of the whole-slide image (WSI). In this work, we applied deep learning methods to diagnose six types of colon mucosal lesions using convolutional neural networks (CNNs). As a result, an algorithm for the automatic segmentation of WSIs of colon biopsies was developed, implementing pre-trained, deep convolutional neural networks of the ResNet and EfficientNet architectures. We compared the classical method and one-cycle policy for CNN training and applied both multi-class and multi-label approaches to solve the classification problem. The multi-label approach was superior because some WSI patches may belong to several classes at once or to none of them. Using the standard one-vs-rest approach, we trained multiple binary classifiers. They achieved the receiver operator curve AUC in the range of 0.80–0.96. Other metrics were also calculated, such as accuracy, precision, sensitivity, specificity, negative predictive value, and F1-score. Obtained CNNs can support human pathologists in the diagnostic process and can be extended to other cancers after adding a sufficient amount of labeled data.
29

Prasad Battula, Krishna, and B. Sai Chandana. "Multi-class Cervical Cancer Classification using Transfer Learning-based Optimized SE-ResNet152 model in Pap Smear Whole Slide Images." International Journal of Electrical and Computer Engineering Systems 14, no. 6 (July 12, 2023): 623. http://dx.doi.org/10.32985/ijeces.14.6.1.

Abstract:
Cervical cancer is among the main factors contributing to death globally, even though it can be avoided and treated if the afflicted tissues are detected and removed early. Cervical screening programs must be made accessible to everyone and run effectively, which is a difficult task that necessitates, among other things, identifying the population's most vulnerable members. Therefore, we present an effective deep-learning method for classifying multi-class cervical cancer disease using Pap smear images in this research. The transfer learning-based optimized SE-ResNet152 model is used for effective multi-class Pap smear image classification. Reliable and significant image features are accurately extracted by the proposed network model. The network's hyper-parameters are optimized using the Deer Hunting Optimization (DHO) algorithm. Five SIPaKMeD dataset categories and six CRIC dataset categories constitute the 11 classes for cervical cancer diseases. A Pap smear image dataset with 8838 images and various class distributions is used to evaluate the proposed method. The introduction of a cost-sensitive loss function throughout the classifier's learning process rectifies the dataset's imbalance. When compared to prior existing approaches to multi-class Pap smear image classification, 99.68% accuracy, 98.82% precision, 97.86% recall, and 98.64% F1-score are achieved by the proposed method on the test set. Given these identification results, the proposed method is suitable for automated preliminary diagnosis of cervical cancer diseases in hospitals and cervical cancer clinics.
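
One common realisation of a cost-sensitive loss for an imbalanced multi-class problem is inverse-frequency class weighting, sketched below in PyTorch; the paper's exact weighting scheme is not given in the abstract, and the per-class counts here are toy values.

```python
import numpy as np
import torch

counts = np.array([900, 850, 820, 800, 795, 810, 790, 805, 770, 760, 738])  # toy counts for 11 classes
weights = counts.sum() / (len(counts) * counts)  # rarer classes get larger weights
criterion = torch.nn.CrossEntropyLoss(weight=torch.tensor(weights, dtype=torch.float32))
```
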
30

Cui, Lei, Jun Feng, and Lin Yang. "Towards Fine Whole-Slide Skeletal Muscle Image Segmentation through Deep Hierarchically Connected Networks." Journal of Healthcare Engineering 2019 (June 27, 2019): 1–10. http://dx.doi.org/10.1155/2019/5191630.

Abstract:
Automatic skeletal muscle image segmentation (MIS) is crucial in the diagnosis of muscle-related diseases. However, accurate methods often suffer from expensive computations, which are not scalable to large-scale, whole-slide muscle images. In this paper, we present a fast and accurate method to enable the more clinically meaningful whole-slide MIS. Leveraging the recently popular convolutional neural network (CNN), we train our network in an end-to-end manner so as to directly perform pixelwise classification. Our deep network is comprised of encoder and decoder modules. The encoder module captures rich and hierarchical representations through a series of convolutional and max-pooling layers. Then, the multiple decoders utilize multilevel representations to perform multiscale predictions. The multiscale predictions are then combined to generate a more robust dense segmentation as the network output. The decoder modules have independent loss functions, which are jointly trained with a weighted loss function to address fine-grained pixelwise prediction. We also propose a two-stage transfer learning strategy to effectively train such a deep network. Extensive experiments on a challenging muscle image dataset demonstrate the significantly improved efficiency and accuracy of our method compared with recent state-of-the-art methods.
31

Schmitt, Max, Roman Christoph Maron, Achim Hekler, Albrecht Stenzinger, Axel Hauschild, Michael Weichenthal, Markus Tiemann, et al. "Hidden Variables in Deep Learning Digital Pathology and Their Potential to Cause Batch Effects: Prediction Model Study." Journal of Medical Internet Research 23, no. 2 (February 2, 2021): e23436. http://dx.doi.org/10.2196/23436.

Abstract:
Background: An increasing number of studies within digital pathology show the potential of artificial intelligence (AI) to diagnose cancer using histological whole slide images, which requires large and diverse data sets. While diversification may result in more generalizable AI-based systems, it can also introduce hidden variables. If neural networks are able to distinguish/learn hidden variables, these variables can introduce batch effects that compromise the accuracy of classification systems. Objective: The objective of the study was to analyze the learnability of an exemplary selection of hidden variables (patient age, slide preparation date, slide origin, and scanner type) that are commonly found in whole slide image data sets in digital pathology and could create batch effects. Methods: We trained four separate convolutional neural networks (CNNs) to learn four variables using a data set of digitized whole slide melanoma images from five different institutes. For robustness, each CNN training and evaluation run was repeated multiple times, and a variable was only considered learnable if the lower bound of the 95% confidence interval of its mean balanced accuracy was above 50.0%. Results: A mean balanced accuracy above 50.0% was achieved for all four tasks, even when considering the lower bound of the 95% confidence interval. Performance between tasks showed wide variation, ranging from 56.1% (slide preparation date) to 100% (slide origin). Conclusions: Because all of the analyzed hidden variables are learnable, they have the potential to create batch effects in dermatopathology data sets, which negatively affect AI-based classification systems. Practitioners should be aware of these and similar pitfalls when developing and evaluating such systems and address these and potentially other batch effect variables in their data sets through sufficient data set stratification.
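
The learnability criterion above (lower bound of the 95% CI of mean balanced accuracy above 50%) can be checked as in this sketch; a t-interval over repeated runs is assumed here, which may differ from the authors' exact confidence interval construction.

```python
import numpy as np
from scipy import stats

def learnable(balanced_accuracies, alpha=0.05):
    """True if the lower 95% CI bound of mean balanced accuracy exceeds 0.50."""
    accs = np.asarray(balanced_accuracies)
    lower, _ = stats.t.interval(1 - alpha, len(accs) - 1,
                                loc=accs.mean(), scale=stats.sem(accs))
    return lower > 0.50

print(learnable([0.56, 0.58, 0.55, 0.57, 0.54]))  # e.g. repeated runs for one hidden variable
```
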
32

Srivastava, Arunima, Chaitanya Kulkarni, Kun Huang, Anil Parwani, Parag Mallick, and Raghu Machiraju. "Imitating Pathologist Based Assessment With Interpretable and Context Based Neural Network Modeling of Histology Images." Biomedical Informatics Insights 10 (January 2018): 117822261880748. http://dx.doi.org/10.1177/1178222618807481.

Abstract:
Convolutional neural networks (CNNs) have gained steady popularity as a tool to perform automatic classification of whole slide histology images. While CNNs have proven to be powerful classifiers in this context, they fail to explain this classification, as the network engineered features used for modeling and classification are ONLY interpretable by the CNNs themselves. This work aims at enhancing a traditional neural network model to perform histology image modeling, patient classification, and interpretation of the distinctive features identified by the network within the histology whole slide images (WSIs). We synthesize a workflow which (a) intelligently samples the training data by automatically selecting only image areas that display visible disease-relevant tissue state and (b) isolates regions most pertinent to the trained CNN prediction and translates them to observable and qualitative features such as color, intensity, cell and tissue morphology and texture. We use the Cancer Genome Atlas’s Breast Invasive Carcinoma (TCGA-BRCA) histology dataset to build a model predicting patient attributes (disease stage and node status) and the tumor proliferation challenge (TUPAC 2016) breast cancer histology image repository to help identify disease-relevant tissue state (mitotic activity). We find that our enhanced CNN based workflow both increased patient attribute predictive accuracy (~2% increase for disease stage and ~10% increase for node status) and experimentally proved that a data-driven CNN histology model predicting breast invasive carcinoma stages is highly sensitive to features such as color, cell size, and shape, granularity, and uniformity. This work summarizes the need for understanding the widely trusted models built using deep learning and adds a layer of biological context to a technique that functioned as a classification only approach till now.
APA, Harvard, Vancouver, ISO, and other styles
33

Rodner, Erik, Marcel Simon, and Joachim Denzler. "Deep bilinear features for Her2 scoring in digital pathology." Current Directions in Biomedical Engineering 3, no. 2 (September 7, 2017): 811–14. http://dx.doi.org/10.1515/cdbme-2017-0171.

Full text
Abstract:
We present an automated approach for rating HER2 over-expressions in given whole-slide images of breast cancer histology slides. The slides have a very high resolution, and only a small part is relevant for the rating. Our approach is based on Convolutional Neural Networks (CNN), which directly model the whole computer vision pipeline, from feature extraction to classification, with a single parameterized model. CNN models have led to significant breakthroughs in many vision applications and have shown promising results for medical tasks. However, the required size of training data is still an issue. Our CNN models are pre-trained on a large set of datasets of non-medical images, which prevents over-fitting to the small annotated dataset available in our case. We assume the selection of the probe in the data with just a single mouse click defining a point of interest. This is reasonable especially for slices acquired together with another sample. We sample image patches around the point of interest and obtain bilinear features by passing them through a CNN and encoding the output of the last convolutional layer with its second-order statistics. Our approach ranked second in the HER2 contest held by the University of Warwick, achieving 345 points compared to 348 points of the winning team. In addition to pure classification, our approach would also allow for localization of parts of the slice relevant for visual detection of HER2 over-expression.
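The bilinear encoding mentioned here (second-order statistics of the last convolutional layer) can be sketched in a few lines. This is a generic bilinear-pooling sketch with illustrative shapes, not the authors' exact implementation:

```python
# Bilinear pooling sketch: summarize conv activations (C channels over an
# H x W grid) by their second-order statistics, then apply the customary
# signed square root and L2 normalization.
import numpy as np

def bilinear_pool(feat):                      # feat: (C, H, W)
    c, h, w = feat.shape
    x = feat.reshape(c, h * w)
    gram = x @ x.T / (h * w)                  # (C, C) second-order statistics
    v = gram.reshape(-1)
    v = np.sign(v) * np.sqrt(np.abs(v))       # signed square root
    return v / (np.linalg.norm(v) + 1e-12)    # L2 normalization

feat = np.random.rand(512, 7, 7).astype(np.float32)
print(bilinear_pool(feat).shape)              # (262144,)
```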
APA, Harvard, Vancouver, ISO, and other styles
34

Kanavati, Fahdi, Naoki Hirose, Takahiro Ishii, Ayaka Fukuda, Shin Ichihara, and Masayuki Tsuneki. "A Deep Learning Model for Cervical Cancer Screening on Liquid-Based Cytology Specimens in Whole Slide Images." Cancers 14, no. 5 (February 24, 2022): 1159. http://dx.doi.org/10.3390/cancers14051159.

Full text
Abstract:
Liquid-based cytology (LBC) for cervical cancer screening is now more common than conventional smears, and digitising the glass slides into whole-slide images (WSIs) opens up the possibility of artificial intelligence (AI)-based automated image analysis. Since conventional screening by cytoscreeners and cytopathologists using microscopes is limited in terms of human resources, it is important to develop new computational techniques that can automatically and rapidly diagnose a large number of specimens without delay, which would be of great benefit for clinical laboratories and hospitals. The goal of this study was to investigate the use of a deep learning model for the classification of WSIs of LBC specimens into neoplastic and non-neoplastic. To do so, we used a dataset of 1605 cervical WSIs. We evaluated the model on three test sets with a combined total of 1468 WSIs, achieving ROC AUCs for WSI diagnosis in the range of 0.89–0.96, demonstrating the promising potential of such models for aiding screening processes.
APA, Harvard, Vancouver, ISO, and other styles
35

Fu, Zhibing, Qingkui Chen, Mingming Wang, and Chen Huang. "Whole slide images classification model based on self-learning sampling." Biomedical Signal Processing and Control 90 (April 2024): 105826. http://dx.doi.org/10.1016/j.bspc.2023.105826.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Dimauro, Giovanni, Vitoantonio Bevilacqua, Pio Fina, Domenico Buongiorno, Antonio Brunetti, Sergio Latrofa, Michele Cassano, and Matteo Gelardi. "Comparative Analysis of Rhino-Cytological Specimens with Image Analysis and Deep Learning Techniques." Electronics 9, no. 6 (June 8, 2020): 952. http://dx.doi.org/10.3390/electronics9060952.

Full text
Abstract:
Cytological study of the nasal mucosa (also known as rhino-cytology) represents an important diagnostic aid that allows highlighting of the presence of some types of rhinitis through the analysis of cellular features visible under a microscope. Nowadays, the automated detection and classification of cells benefit from the capacity of deep learning techniques in processing digital images of the cytological preparation. Even though the results of such automatic systems need to be validated by a specialized rhino-cytologist, this technology represents a valid support that aims at increasing the accuracy of the analysis while reducing the required time and effort. The quality of the rhino-cytological preparation, which is clearly important for the microscope observation phase, is also fundamental for the automatic classification process. In fact, the slide-preparing technique turns out to be a crucial factor among the multiple ones that may modify the morphological and chromatic characteristics of the cells. This paper aims to investigate the possible differences between direct smear (SM) and cytological centrifugation (CYT) slide-preparation techniques, in order to preserve image quality during the observation and cell classification phases in rhino-cytology. Firstly, a comparative study based on image analysis techniques was carried out. The extraction of densitometric and morphometric features has made it possible to quantify and describe the spatial distribution of the cells in the field images observed under the microscope. Statistical analysis of the distribution of these features has been used to evaluate the degree of similarity between images acquired from SM and CYT slides. The results reveal an important difference in the observation process of the cells prepared with the above-mentioned techniques, with reference to cell density and spatial distribution: the analysis of CYT slides has been more difficult than that of the SM ones due to the spatial distribution of the cells, which results in a lower cell density than in the SM slides. As a marginal part of this study, a performance assessment of the computer-aided diagnosis (CAD) system called Rhino-cyt has also been carried out on both groups of image slide types.
APA, Harvard, Vancouver, ISO, and other styles
37

Gupta, Pushpanjali, Yenlin Huang, Prasan Kumar Sahoo, Jeng-Fu You, Sum-Fu Chiang, Djeane Debora Onthoni, Yih-Jong Chern, et al. "Colon Tissues Classification and Localization in Whole Slide Images Using Deep Learning." Diagnostics 11, no. 8 (August 2, 2021): 1398. http://dx.doi.org/10.3390/diagnostics11081398.

Full text
Abstract:
Colorectal cancer is one of the leading causes of cancer-related death worldwide. The early diagnosis of colon cancer not only reduces mortality but also reduces the burden related to treatment strategies such as chemotherapy and/or radiotherapy. However, when the microscopic examination of the suspected colon tissue sample is carried out, finding the abnormality in the tissue becomes a tedious and time-consuming job for the pathologists. In addition, there may be interobserver variability that might lead to conflict in the final diagnosis. As a result, there is a crucial need for developing an intelligent automated method that can learn from the patterns themselves and assist the pathologist in making a faster, more accurate, and consistent decision for determining the normal and abnormal regions in the colorectal tissues. Moreover, the intelligent method should be able to localize the abnormal region in the whole slide image (WSI), which will make it easier for the pathologists to focus on only the region of interest, making the task of tissue examination faster and less time-consuming. Accordingly, artificial intelligence (AI)-based classification and localization models are proposed for determining and localizing the abnormal regions in WSI. The proposed models achieved an F-score of 0.97 and area under the curve (AUC) of 0.97 with a pretrained Inception-v3 model, and an F-score of 0.99 and AUC of 0.99 with a customized Inception-ResNet-v2 Type 5 (IR-v2 Type 5) model.
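The localization idea, scoring tiles across the slide so pathologists can focus on flagged regions, can be sketched as follows; `model.predict` is an assumed classifier API returning P(abnormal) per patch, not the paper's code:

```python
# Hedged sketch of WSI localization: slide a tile window over the image,
# score each tile with a trained classifier, and collect the scores into a
# coarse abnormality heatmap for overlay on the slide.
import numpy as np

def localize(wsi, model, tile=299, stride=299):
    h, w, _ = wsi.shape
    rows = (h - tile) // stride + 1
    cols = (w - tile) // stride + 1
    heat = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            patch = wsi[i*stride:i*stride+tile, j*stride:j*stride+tile]
            heat[i, j] = model.predict(patch[None])[0]  # assumed API
    return heat
```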
APA, Harvard, Vancouver, ISO, and other styles
38

Bhatt, Anant R., Amit Ganatra, and Ketan Kotecha. "Cervical cancer detection in pap smear whole slide images using convNet with transfer learning and progressive resizing." PeerJ Computer Science 7 (February 18, 2021): e348. http://dx.doi.org/10.7717/peerj-cs.348.

Full text
Abstract:
Cervical intraepithelial neoplasia (CIN) and cervical cancer are major health problems faced by women worldwide. The conventional Papanicolaou (Pap) smear analysis is an effective method to diagnose cervical pre-malignant and malignant conditions by analyzing swab images. Various computer vision techniques can be explored to identify potential precancerous and cancerous lesions by analyzing the Pap smear image. The majority of existing work covers binary classification approaches using various classifiers and convolutional neural networks. However, these suffer from inherent challenges in minute feature extraction and precise classification. We propose a novel methodology to carry out the multiclass classification of cervical cells from Whole Slide Images (WSI) with optimal feature extraction. Applying a ConvNet with the transfer learning technique enables meaningful diagnosis of neoplastic and pre-neoplastic lesions. As the progressive resizing technique (an advanced method for training ConvNets) incorporates prior knowledge of the feature hierarchy and can reuse old computations while learning new ones, the model can carry forward the extracted morphological cell features to subsequent neural network layers iteratively for elusive learning. Combining progressive resizing with transfer learning while training the ConvNet models has shown a substantial performance increase. The proposed binary and multiclass classification methodology helped achieve benchmark scores on the Herlev dataset. We achieved singular multiclass classification scores for WSI images of the SIPaKMeD dataset, that is, accuracy (99.70%), precision (99.70%), recall (99.72%), F-Beta (99.63%), and Kappa scores (99.31%), which surpass the scores obtained through principal methodologies. Grad-CAM-based feature interpretation extends enhanced assimilation of the generated results, highlighting the pre-malignant and malignant lesions by visual localization in the images.
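Progressive resizing, as described above, trains one network in rounds at growing input resolutions so earlier computations are reused. A minimal PyTorch sketch, under the assumption that the model tolerates variable input sizes (e.g., via adaptive pooling):

```python
# Progressive-resizing sketch: the same model is trained at successively
# larger input sizes, carrying learned features forward between rounds.
import torch
import torch.nn.functional as F

def progressive_fit(model, loader, sizes=(128, 224, 299), epochs=3, lr=1e-4):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for size in sizes:                              # grow the resolution
        for _ in range(epochs):
            for x, y in loader:                     # x: (N, 3, H, W)
                x = F.interpolate(x, size=(size, size), mode="bilinear")
                opt.zero_grad()
                loss = loss_fn(model(x), y)
                loss.backward()
                opt.step()
    return model
```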
APA, Harvard, Vancouver, ISO, and other styles
39

Wetteland, Rune, Kjersti Engan, Trygve Eftestøl, Vebjørn Kvikstad, and Emiel A. M. Janssen. "A Multiscale Approach for Whole-Slide Image Segmentation of five Tissue Classes in Urothelial Carcinoma Slides." Technology in Cancer Research & Treatment 19 (January 1, 2020): 153303382094678. http://dx.doi.org/10.1177/1533033820946787.

Full text
Abstract:
In pathology labs worldwide, we see an increasing number of tissue samples that need to be assessed without the same increase in the number of pathologists. Computational pathology, where digital scans of histological samples called whole-slide images (WSI) are processed by computational tools, can be of help for pathologists and is gaining research interest. Most research effort has been devoted to classifying slides as cancerous or not, to localizing cancerous regions, and to the “big four” in cancer: breast, lung, prostate, and bowel. Urothelial carcinoma, the most common form of bladder cancer, is expensive to follow up due to a high risk of recurrence, and grading systems have a high degree of inter- and intra-observer variability. Tissue samples of urothelial carcinoma contain a mixture of damaged tissue, blood, stroma, muscle, and urothelium, where it is mainly muscle and urothelium that are diagnostically relevant. A coarse segmentation of these tissue types would be useful to (i) guide pathologists to the diagnostically relevant areas of the WSI, and (ii) serve as input to a computer-aided diagnostic (CAD) system. However, little work has been done on segmenting tissue types in WSIs, and on computational pathology for urothelial carcinoma in particular. In this work, we use convolutional neural networks (CNNs) for multiscale tile-wise classification and coarse segmentation, including both context and detail, by using three magnification levels: 25x, 100x, and 400x. Twenty-eight models were trained on weakly labeled data from 32 WSIs, where the best model achieved an F1-score of 96.5% across six classes. The multiscale models were consistently better than the single-scale models, demonstrating the benefit of combining multiple scales. No tissue-class ground truth for complete WSIs exists, but the best models were used to segment seven unseen WSIs; the results were manually inspected by a pathologist and are considered very promising.
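The multiscale tiling described here, pairing context at low magnification with detail at high magnification, can be sketched as below. The `read_region` call mimics an OpenSlide-style API and is an assumption, as are the downsample factors:

```python
# Sketch of multiscale tile extraction: co-centered tiles at three effective
# magnifications are resized to a common pixel size, so each carries the same
# resolution but a different field of view, then fed to per-scale CNN branches.
import numpy as np
from skimage.transform import resize

def multiscale_tiles(slide, cx, cy, tile=128, downsamples=(16, 4, 1)):
    tiles = []
    for d in downsamples:                  # wide context -> fine detail
        half = tile * d // 2
        region = slide.read_region(cx - half, cy - half, tile * d)  # assumed API
        tiles.append(resize(region, (tile, tile, 3)))
    return np.stack(tiles)                 # (3, tile, tile, 3)
```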
APA, Harvard, Vancouver, ISO, and other styles
40

Wang, Haiyue, Yuming Jiang, Bailiang Li, Yi Cui, Dengwang Li, and Ruijiang Li. "Single-Cell Spatial Analysis of Tumor and Immune Microenvironment on Whole-Slide Image Reveals Hepatocellular Carcinoma Subtypes." Cancers 12, no. 12 (November 28, 2020): 3562. http://dx.doi.org/10.3390/cancers12123562.

Full text
Abstract:
Hepatocellular carcinoma (HCC) is a heterogeneous disease with diverse characteristics and outcomes. Here, we aim to develop a histological classification for HCC by integrating computational imaging features of the tumor and its microenvironment. We first trained a multitask deep-learning neural network for automated single-cell segmentation and classification on hematoxylin- and eosin-stained tissue sections. After confirming the accuracy in a testing set, we applied the model to whole-slide images of 304 tumors in the Cancer Genome Atlas. Given the single-cell map, we calculated 246 quantitative image features to characterize individual nuclei as well as spatial relations between tumor cells and infiltrating lymphocytes. Unsupervised consensus clustering revealed three reproducible histological subtypes, which exhibit distinct nuclear features as well as spatial distribution and relation between tumor cells and lymphocytes. These histological subtypes were associated with somatic genomic alterations (i.e., aneuploidy) and specific molecular pathways, including cell cycle progression and oxidative phosphorylation. Importantly, these histological subtypes complement established molecular classification and demonstrate independent prognostic value beyond conventional clinicopathologic factors. Our study represents a step forward in quantifying the spatial distribution and complex interaction between tumor and immune microenvironment. The clinical relevance of the imaging subtypes for predicting prognosis and therapy response warrants further validation.
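One family of the spatial-relation features described above, distances between tumor cells and infiltrating lymphocytes, can be computed directly from single-cell centroids. A sketch with hypothetical coordinates (the distance cutoff and micron units are assumptions):

```python
# Sketch of tumor-immune spatial features: for every tumor cell, find the
# nearest lymphocyte and summarize the resulting distance distribution.
import numpy as np
from scipy.spatial import cKDTree

def tumor_lymph_features(tumor_xy, lymph_xy, cutoff=50.0):
    d, _ = cKDTree(lymph_xy).query(tumor_xy, k=1)   # nearest lymphocyte
    return {"mean_dist": float(d.mean()),
            "median_dist": float(np.median(d)),
            "frac_within_cutoff": float((d < cutoff).mean())}

tumor = np.random.rand(200, 2) * 1000    # hypothetical centroids (microns)
lymph = np.random.rand(80, 2) * 1000
print(tumor_lymph_features(tumor, lymph))
```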
APA, Harvard, Vancouver, ISO, and other styles
41

Acevedo Zamora, Marco Andres, and Balz S. Kamber. "Petrographic Microscopy with Ray Tracing and Segmentation from Multi-Angle Polarisation Whole-Slide Images." Minerals 13, no. 2 (January 20, 2023): 156. http://dx.doi.org/10.3390/min13020156.

Full text
Abstract:
‘Slide scanners’ are rapid optical microscopes equipped with automated and accurate x-y travel stages with virtual z-motion that cannot be rotated. In biomedical microscopic imaging, they are widely deployed to generate whole-slide images (WSI) of tissue samples in various modes of illumination. The availability of WSI has motivated the development of instrument-agnostic advanced image analysis software, helping drug development, pathology, and many other areas of research. Slide scanners are now being modified to enable polarised petrographic microscopy by simulating stage rotation with the acquisition of multiple rotation angles of the polariser–analyser pair for observing randomly oriented anisotropic materials. Here we report on the calibration strategy of one repurposed slide scanner and describe a pilot image analysis pipeline designed to introduce the wider audience to the complexity of performing computer-assisted feature recognition on mineral groups. The repurposed biological scanner produces transmitted light plane- and cross-polarised (TL-PPL and XPL) and unpolarised reflected light (RL) WSI from polished thin sections or slim epoxy mounts at various magnifications, yielding pixel dimensions from ca. 2.7 × 2.7 to 0.14 × 0.14 µm. A data tree of 14 WSI is regularly obtained, containing two RL and six of each PPL and XPL WSI (at 18° rotation increments). This pyramidal image stack is stitched and built into a local server database simultaneously with acquisition. The pyramids (multi-resolution ‘cubes’) can be viewed with freeware locally deployed for teaching petrography and collaborative research. The main progress reported here concerns image analysis with a pilot open-source software pipeline enabling semantic segmentation on petrographic imagery. For this purpose, all WSI are post-processed and aligned to a ‘fixed’ reflective surface (RL), and the PPL and XPL stacks are then summarised in one image, each with ray tracing that describes visible light reflection, absorption, and O- and E-wave interference phenomena. The maximum red-green-blue values were found to best overcome the limitation of refractive index anisotropy for segmentation based on pixel-neighbouring feature maps. This strongly reduces the variation in dichroism in PPL and interference colour in XPL. The synthetic ray trace WSI is then combined with one RL to estimate modal mineralogy with multi-scale algorithms originally designed for object-based cell segmentation in pathological tissues. This requires generating a small number of polygonal expert annotations that inform a training dataset, enabling on-the-fly machine learning classification into mineral classes. The accuracy of the approach was tested by comparison with modal mineralogy obtained by energy-dispersive spectroscopy scanning electron microscopy (SEM-EDX) for a suite of rocks of simple mineralogy (granulites and peridotite). The strengths and limitations of the pixel-based classification approach are described, and phenomena from sample preparation imperfections to semantic segmentation artefacts around fine-grained minerals and/or of indiscriminate optical properties are discussed. Finally, we provide an outlook on image analysis strategies that will improve the status quo by using the first-pass mineralogy identification from optical WSI to generate a location grid to obtain targeted chemical data (e.g., by SEM-EDX) and by considering the rock texture.
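The maximum-RGB summary step described above, which suppresses angle-dependent dichroism and interference colour before segmentation, reduces to a per-pixel maximum over the rotation stack. A minimal numpy sketch, assuming the stack is already co-registered:

```python
# Sketch of the max-RGB summary: collapse a stack of XPL (or PPL) images
# taken at different polariser-analyser angles into one image by taking the
# per-pixel, per-channel maximum across angles.
import numpy as np

def max_rgb_summary(stack):
    """stack: (n_angles, H, W, 3) aligned images at 18-degree increments."""
    return stack.max(axis=0)              # (H, W, 3) synthetic summary

xpl = np.random.rand(6, 64, 64, 3)        # six angles, toy dimensions
print(max_rgb_summary(xpl).shape)
```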
APA, Harvard, Vancouver, ISO, and other styles
42

Penkin, M. A., A. V. Khvostikov, and A. S. Krylov. "Automated Method for Optimum Scale Search When Using Trained Models for Histological Image Analysis." Программирование, no. 3 (May 1, 2023): 49–55. http://dx.doi.org/10.31857/s0132347423030032.

Full text
Abstract:
Preparation of input data for an artificial neural network is a key step to achieve a high accuracy of its predictions. It is well known that convolutional neural models have low invariance to changes in the scale of input data. For instance, processing multiscale whole-slide histological images by convolutional neural networks naturally poses a problem of choosing an optimal processing scale. In this paper, this problem is solved by iterative analysis of distances to a separating hyperplane that are generated by a convolutional classifier at different input scales. The proposed method is tested on the DenseNet121 deep architecture pretrained on PATH-DT-MSU data, which implements patch classification of whole-slide histological images.
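The scale-selection idea, comparing classifier confidence across input scales, can be sketched as follows; `model` and `rescale` are assumed callables, and the top-two logit gap stands in for the distance to the separating hyperplane:

```python
# Hedged sketch of optimal-scale search: classify the same patch at several
# scales and keep the scale whose prediction lies farthest from the decision
# boundary (largest margin), in the spirit of the method described above.
import numpy as np

def best_scale(patch, model, rescale, scales=(0.5, 1.0, 2.0)):
    margins = []
    for s in scales:
        logits = np.asarray(model(rescale(patch, s)))  # assumed callables
        top2 = np.sort(logits)[-2:]
        margins.append(top2[1] - top2[0])              # margin proxy
    return scales[int(np.argmax(margins))]
```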
APA, Harvard, Vancouver, ISO, and other styles
43

El-Hossiny, Ahmed S., Walid Al-Atabany, Osama Hassan, Ahmed M. Soliman, and Sherif A. Sami. "Classification of Thyroid Carcinoma in Whole Slide Images Using Cascaded CNN." IEEE Access 9 (2021): 88429–38. http://dx.doi.org/10.1109/access.2021.3076158.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Yoshida, Hiroshi, Yoshiko Yamashita, Taichi Shimazu, Eric Cosatto, Tomoharu Kiyuna, Hirokazu Taniguchi, Shigeki Sekine, and Atsushi Ochiai. "Automated histological classification of whole slide images of colorectal biopsy specimens." Oncotarget 8, no. 53 (October 12, 2017): 90719–29. http://dx.doi.org/10.18632/oncotarget.21819.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Xu, Hongming, Sunho Park, and Tae Hyun Hwang. "Computerized Classification of Prostate Cancer Gleason Scores from Whole Slide Images." IEEE/ACM Transactions on Computational Biology and Bioinformatics 17, no. 6 (November 1, 2020): 1871–82. http://dx.doi.org/10.1109/tcbb.2019.2941195.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Hassanpour, Saeed, Bruno Korbar, AndreaM Olofson, AllenP Miraflor, CatherineM Nicka, MatthewA Suriawinata, Lorenzo Torresani, and AriefA Suriawinata. "Deep learning for classification of colorectal polyps on whole-slide images." Journal of Pathology Informatics 8, no. 1 (2017): 30. http://dx.doi.org/10.4103/jpi.jpi_34_17.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Yoshida, Hiroshi, Taichi Shimazu, Tomoharu Kiyuna, Atsushi Marugame, Yoshiko Yamashita, Eric Cosatto, Hirokazu Taniguchi, Shigeki Sekine, and Atsushi Ochiai. "Automated histological classification of whole-slide images of gastric biopsy specimens." Gastric Cancer 21, no. 2 (June 2, 2017): 249–57. http://dx.doi.org/10.1007/s10120-017-0731-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Park, Hyun-Cheol, Raman Ghimire, Sahadev Poudel, and Sang-Woong Lee. "Deep Learning for Joint Classification and Segmentation of Histopathology Image." 網際網路技術學刊 (Journal of Internet Technology) 23, no. 4 (July 2022): 903–10. http://dx.doi.org/10.53106/160792642022072304025.

Full text
Abstract:
Liver cancer is one of the leading causes of cancer death worldwide. Thus, early detection and diagnosis of possible liver cancer help in reducing cancer deaths. Histopathological image analysis (HIA) has traditionally been carried out manually, which is time-consuming and requires expert knowledge. We propose a patch-based deep learning method for liver cell classification and segmentation. In this work, a two-step approach for the classification and segmentation of whole-slide images (WSIs) is proposed. Since WSIs are too large to be fed into convolutional neural networks (CNNs) directly, we first extract patches from them. The patches are fed into a modified version of U-Net with their equivalent masks for precise segmentation. In the classification task, the WSIs are scaled 4 times, 16 times, and 64 times, respectively. Patches extracted from each scale are then fed into the convolutional network with their corresponding labels. During inference, we perform majority voting on the results obtained from the convolutional network. The proposed method has demonstrated better results in both classification and segmentation of liver cancer cells.
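The inference step described above, majority voting over patch predictions, is straightforward; a minimal sketch with made-up labels:

```python
# Majority-vote sketch: combine per-patch predicted classes from one WSI
# into a single slide-level label.
from collections import Counter

def majority_vote(patch_labels):
    return Counter(patch_labels).most_common(1)[0][0]

print(majority_vote(["tumor", "normal", "tumor", "tumor"]))  # -> "tumor"
```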
APA, Harvard, Vancouver, ISO, and other styles
49

Xie, Peizhen, Ke Zuo, Jie Liu, Mingliang Chen, Shuang Zhao, Wenjie Kang, and Fangfang Li. "Interpretable Diagnosis for Whole-Slide Melanoma Histology Images Using Convolutional Neural Network." Journal of Healthcare Engineering 2021 (November 1, 2021): 1–7. http://dx.doi.org/10.1155/2021/8396438.

Full text
Abstract:
At present, deep learning-based medical image diagnosis has achieved high performance in several diseases. However, the black-box nature of the convolutional neural network (CNN) limits its role in diagnosis. In this study, a novel interpretable diagnosis pipeline using a CNN model was proposed. Furthermore, a sizeable melanoma database containing 841 digital whole-slide images (WSIs) was built to train and evaluate the model. The model achieved strong melanoma classification ability (0.962 area under the receiver operating characteristic curve, 0.887 sensitivity, and 0.925 specificity). Moreover, the proposed model outperformed 20 pathologists in terms of accuracy (0.933 vs 0.732). Finally, the gradient-weighted class activation mapping (Grad-CAM) method was used to show the inner logic of the proposed model and its feasibility to improve the diagnosis process in healthcare. Feature heat maps visualized through saliency mapping demonstrated that the features learned or extracted by the proposed model are compatible with accepted pathological features. Conclusively, the proposed model provides a rapid and accurate diagnosis by locating the distinctive features of melanoma, building doctors’ trust in the CNN’s diagnosis results.
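Grad-CAM, as used above, weights each feature map of a chosen convolutional layer by the average gradient of the class score and sums the result into a heatmap. A minimal PyTorch sketch; the model and target layer are assumptions, not the paper's code:

```python
# Minimal Grad-CAM sketch: hooks capture the target layer's activations and
# gradients, the gradients are globally averaged into channel weights, and
# the weighted activations are combined into a normalized saliency map.
import torch
import torch.nn.functional as F

def grad_cam(model, x, target_layer, class_idx):
    store = {}
    h1 = target_layer.register_forward_hook(
        lambda m, i, o: store.update(act=o))
    h2 = target_layer.register_full_backward_hook(
        lambda m, gi, go: store.update(grad=go[0]))
    score = model(x)[0, class_idx]
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()
    w = store["grad"].mean(dim=(2, 3), keepdim=True)   # channel weights
    cam = F.relu((w * store["act"]).sum(dim=1))        # (N, H, W)
    return cam / (cam.max() + 1e-8)                    # normalized heatmap
```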
APA, Harvard, Vancouver, ISO, and other styles
50

Hao, Fang, Xueyu Liu, Ming Li, and Weixia Han. "Accurate Kidney Pathological Image Classification Method Based on Deep Learning and Multi-Modal Fusion Method with Application to Membranous Nephropathy." Life 13, no. 2 (January 31, 2023): 399. http://dx.doi.org/10.3390/life13020399.

Full text
Abstract:
Membranous nephropathy is one of the most prevalent conditions responsible for nephrotic syndrome in adults. It is clinically nonspecific and mainly diagnosed by kidney biopsy pathology, with three prevalent techniques: light microscopy, electron microscopy, and immunofluorescence microscopy. Manual observation of glomeruli one by one under the microscope is very time-consuming, and there are certain differences in the observation results between physicians. This study makes use of whole-slide images scanned by a light microscope as well as immunofluorescence images to classify patients with membranous nephropathy. The framework mainly includes a glomerular segmentation module, a confidence coefficient extraction module, and a multi-modal fusion module. This framework first identifies and segments the glomeruli from whole-slide images and immunofluorescence images, and then a glomerular classifier is trained to extract the features of each glomerulus. The results are then combined to produce the final diagnosis. The experiments show that the F1-score of image classification obtained by combining the two kinds of features reaches 97.32%, higher than the scores obtained using only light-microscopy images (92.76%) or only immunofluorescence images (93.20%). The experiments demonstrate that considering both WSIs and immunofluorescence images is effective in improving the diagnosis of membranous nephropathy.
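The fusion step described above can be illustrated with a small sketch; the equal-weight average is an illustrative choice, not the paper's exact fusion rule, and the confidence arrays are hypothetical:

```python
# Hedged sketch of multi-modal fusion: per-glomerulus confidences from the
# light-microscopy (WSI) classifier and the immunofluorescence classifier
# are averaged into one patient-level score and thresholded for diagnosis.
import numpy as np

def fuse_patient(wsi_confs, if_confs, threshold=0.5):
    score = 0.5 * float(np.mean(wsi_confs)) + 0.5 * float(np.mean(if_confs))
    return score > threshold, score

print(fuse_patient([0.9, 0.8, 0.85], [0.7, 0.95]))
```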
APA, Harvard, Vancouver, ISO, and other styles
