Journal articles on the topic 'Whole slide images classification'

Consult the top 50 journal articles for your research on the topic 'Whole slide images classification.'

1

Fell, Christina, Mahnaz Mohammadi, David Morrison, Ognjen Arandjelović, Sheeba Syed, Prakash Konanahalli, Sarah Bell, Gareth Bryson, David J. Harrison, and David Harris-Birtill. "Detection of malignancy in whole slide images of endometrial cancer biopsies using artificial intelligence." PLOS ONE 18, no. 3 (March 8, 2023): e0282577. http://dx.doi.org/10.1371/journal.pone.0282577.

Abstract:
In this study we use artificial intelligence (AI) to categorise endometrial biopsy whole slide images (WSI) from digital pathology as either “malignant”, “other or benign”, or “insufficient”. An endometrial biopsy is a key step in the diagnosis of endometrial cancer; biopsies are viewed and diagnosed by pathologists. Pathology is increasingly digitised, with slides viewed as images on screens rather than through the lens of a microscope. The availability of these images is driving automation via the application of AI. A model that classifies slides in the manner proposed would allow prioritisation of these slides for pathologist review and hence reduce time to diagnosis for patients with cancer. Previous studies using AI on endometrial biopsies have examined slightly different tasks, for example using images alongside genomic data to differentiate between cancer subtypes. We took 2909 slides with “malignant” and “other or benign” areas annotated by pathologists. A fully supervised convolutional neural network (CNN) model was trained to calculate the probability of a patch from the slide being “malignant” or “other or benign”. Heatmaps of all the patches on each slide were then produced to show malignant areas. These heatmaps were used to train a slide classification model to give the final slide categorisation as either “malignant”, “other or benign”, or “insufficient”. The final model correctly classified 90% of all slides and 97% of slides in the malignant class; this accuracy is good enough to allow prioritisation of pathologists’ workload.
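For readers who want to connect the two-stage pipeline described above to code, the sketch below shows the patch-to-heatmap step in miniature, assuming per-patch probabilities and grid coordinates are already available; all names and toy values are illustrative, not the authors' implementation.

```python
# Minimal sketch: reassemble patch-level CNN probabilities into a slide heatmap,
# which would then feed a slide-level classifier ("malignant" / "other or
# benign" / "insufficient"). Function names and toy data are assumptions.
import numpy as np

def build_heatmap(patch_probs, coords, grid_shape):
    """Place per-patch malignancy probabilities onto the slide's patch grid.

    patch_probs: (N,) probabilities from the patch CNN
    coords:      (N, 2) integer (row, col) grid positions of each patch
    grid_shape:  (rows, cols) of the patch grid covering the slide
    """
    heatmap = np.zeros(grid_shape, dtype=np.float32)
    for p, (r, c) in zip(patch_probs, coords):
        heatmap[r, c] = p
    return heatmap

# Toy usage: 4 patches on a 2x2 grid.
hm = build_heatmap(np.array([0.9, 0.1, 0.2, 0.8]),
                   [(0, 0), (0, 1), (1, 0), (1, 1)], (2, 2))
print(hm)
```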
2

Govind, Darshana, Brendon Lutnick, John E. Tomaszewski, and Pinaki Sarder. "Automated erythrocyte detection and classification from whole slide images." Journal of Medical Imaging 5, no. 02 (April 10, 2018): 1. http://dx.doi.org/10.1117/1.jmi.5.2.027501.

3

Neto, Pedro C., Sara P. Oliveira, Diana Montezuma, João Fraga, Ana Monteiro, Liliana Ribeiro, Sofia Gonçalves, Isabel M. Pinto, and Jaime S. Cardoso. "iMIL4PATH: A Semi-Supervised Interpretable Approach for Colorectal Whole-Slide Images." Cancers 14, no. 10 (May 18, 2022): 2489. http://dx.doi.org/10.3390/cancers14102489.

Abstract:
Colorectal cancer (CRC) diagnosis is based on samples obtained from biopsies, assessed in pathology laboratories. Due to population growth and ageing, as well as better screening programs, the CRC incidence rate has been increasing, leading to a higher workload for pathologists. In this sense, the application of AI for automatic CRC diagnosis, particularly on whole-slide images (WSI), is of utmost relevance, in order to assist professionals in case triage and case review. In this work, we propose an interpretable semi-supervised approach to detect lesions in colorectal biopsies with high sensitivity, based on multiple-instance learning and feature aggregation methods. The model was developed on an extended version of the recent, publicly available CRC dataset (the CRC+ dataset with 4433 WSI), using 3424 slides for training and 1009 slides for evaluation. The proposed method attained 90.19% classification ACC, 98.8% sensitivity, 85.7% specificity, and a quadratic weighted kappa of 0.888 at slide-based evaluation. Its generalisation capabilities are also studied on two publicly available external datasets.
4

Franklin, Daniel L., Tara Pattilachan, and Anthony Magliocco. "Abstract 5048: Imaging based EGFR mutation subtype classification using EfficientNet." Cancer Research 82, no. 12_Supplement (June 15, 2022): 5048. http://dx.doi.org/10.1158/1538-7445.am2022-5048.

Abstract:
This study aimed to determine whether EfficientNet-B0 was able to classify EGFR mutation subtypes from H&E-stained whole slide images of lung and lymph node tissue.
Background: Non-small cell lung cancer (NSCLC) accounts for the majority of all lung adenocarcinomas, with estimates that up to a third of such cases have a mutation in their epidermal growth factor receptor (EGFR). EGFR mutations can occur in various subtypes, such as Exon19 deletion and L858R substitution, which are important for early therapy decisions. Here, we propose a deep learning approach for detecting and classifying EGFR mutation subtypes, which will greatly reduce the cost of determining mutation status, allowing for testing in low-resource settings.
Methods: An EfficientNet-B0 model was trained with whole slide images of lung tissue or metastatic lymph nodes with known EGFR mutation subtype (wild type, Exon19 deletion, or L858R substitution). Regions of interest were tiled into 512x512-pixel images. The RGB .jpeg tiles were augmented by rotating 90°, 180°, and 270°, and by mirroring. The model was initialized with random parameters and trained with a batch size of 32 and a learning rate of 0.0001; training was stopped after 1 epoch, as the validation loss increased over the following 5 epochs.
Results: The model achieved a slide AUC of 0.8333 and a tile AUC of 0.8010. Slide AUC is the result of averaging all tiles within a slide and measuring performance based on correctly predicted slides (n=18). Tile AUC is the result of measuring performance based on correctly predicted tiles (n=102,000).
Conclusion: Using the EfficientNet-B0 architecture as the basis for our EGFR mutation classification system, we were able to create a top-performing model and achieve a slide AUC of 0.833 and a tile AUC of 0.801. Healthcare providers and researchers may utilize this AI model in clinical settings to allow for detection of EGFR mutations from routinely captured images and bypass expensive and time-consuming sequencing methods.

Table 1. Number of image tiles used and the number of slides they were extracted from.
                      Train     Validation   Test
  Exon19 tiles        187,384   47,904       33,096
  L858R tiles         166,288   19,512       26,136
  Wild type tiles     225,944   27,696       42,768
  Exon19 slides       47        6            6
  L858R slides        46        6            6
  Wild type slides    43        6            6

Citation Format: Daniel L. Franklin, Tara Pattilachan, Anthony Magliocco. Imaging based EGFR mutation subtype classification using EfficientNet [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2022; 2022 Apr 8-13. Philadelphia (PA): AACR; Cancer Res 2022;82(12_Suppl):Abstract nr 5048.
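The slide-versus-tile AUC distinction in this abstract is simple to reproduce: tile probabilities are averaged per slide before scoring. The sketch below uses made-up toy values; only the aggregation scheme follows the abstract.

```python
# Hedged sketch of tile-level vs. slide-level AUC, where the slide score is the
# mean of its tile probabilities (per the abstract). Toy data is illustrative.
import numpy as np
from sklearn.metrics import roc_auc_score

def slide_level_scores(tile_probs, slide_ids):
    """Average tile probabilities within each slide."""
    slides = sorted(set(slide_ids))
    return slides, [np.mean([p for p, s in zip(tile_probs, slide_ids) if s == sl])
                    for sl in slides]

tile_probs  = [0.9, 0.7, 0.2, 0.3]    # per-tile model outputs
slide_ids   = ["A", "A", "B", "B"]    # slide each tile came from
tile_labels = [1, 1, 0, 0]
slide_labels = {"A": 1, "B": 0}

slides, probs = slide_level_scores(tile_probs, slide_ids)
print("tile AUC:", roc_auc_score(tile_labels, tile_probs))
print("slide AUC:", roc_auc_score([slide_labels[s] for s in slides], probs))
```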
5

Ahmed, Shakil, Asadullah Shaikh, Hani Alshahrani, Abdullah Alghamdi, Mesfer Alrizq, Junaid Baber, and Maheen Bakhtyar. "Transfer Learning Approach for Classification of Histopathology Whole Slide Images." Sensors 21, no. 16 (August 9, 2021): 5361. http://dx.doi.org/10.3390/s21165361.

Abstract:
The classification of whole slide images (WSIs) provides physicians with an accurate analysis of diseases and also helps them to treat patients effectively. The classification can be linked to further detailed analysis and diagnosis. Deep learning (DL) has made significant advances in the medical industry, including the use of magnetic resonance imaging (MRI) scans, computerized tomography (CT) scans, and electrocardiograms (ECGs) to detect life-threatening diseases, including heart disease, cancer, and brain tumors. However, more advancement in the field of pathology is needed, but the main hurdle causing the slow progress is the shortage of large-labeled datasets of histopathology images to train the models. The Kimia Path24 dataset was particularly created for the classification and retrieval of histopathology images. It contains 23,916 histopathology patches with 24 tissue texture classes. A transfer learning-based framework is proposed and evaluated on two famous DL models, Inception-V3 and VGG-16. To improve the productivity of Inception-V3 and VGG-16, we used their pre-trained weights and concatenated these with an image vector, which is used as input for the training of the same architecture. Experiments show that the proposed innovation improves the accuracy of both famous models. The patch-to-scan accuracy of VGG-16 is improved from 0.65 to 0.77, and for the Inception-V3, it is improved from 0.74 to 0.79.
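As a point of reference for the transfer-learning setup discussed above, here is a minimal PyTorch/torchvision baseline (frozen ImageNet VGG-16 features plus a new 24-class head for Kimia Path24); the paper's specific weight-concatenation scheme is not reproduced, and this is a sketch of the general approach only.

```python
# Sketch of a standard transfer-learning baseline, not the paper's exact method:
# keep pre-trained convolutional weights frozen and replace the final layer.
import torch.nn as nn
from torchvision import models

def make_classifier(num_classes=24):      # Kimia Path24 has 24 texture classes
    backbone = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
    for p in backbone.features.parameters():
        p.requires_grad = False           # freeze the pre-trained feature extractor
    backbone.classifier[6] = nn.Linear(4096, num_classes)  # new classification head
    return backbone

model = make_classifier()
```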
6

Fu, Zhibing, Qingkui Chen, Mingming Wang, and Chen Huang. "Whole slide images classification model based on self-learning sampling." Biomedical Signal Processing and Control 90 (April 2024): 105826. http://dx.doi.org/10.1016/j.bspc.2023.105826.

7

Fridman, M. V., A. A. Kosareva, E. V. Snezhko, P. V. Kamlach, and V. A. Kovalev. "Papillary thyroid carcinoma whole-slide images as a basis for deep learning." Informatics 20, no. 2 (June 29, 2023): 28–38. http://dx.doi.org/10.37661/1816-0301-2023-20-2-28-38.

Abstract:
Objectives. Morphological analysis of papillary thyroid carcinoma is a cornerstone for further treatment planning. Traditional and neural-network methods of extracting image regions are used to automate this analysis, and a dataset must be prepared to train a neural network to localise the relevant anatomical regions in histopathological images. The authors discuss the selection of features for annotating histological images, methodological approaches to dissecting whole-slide images, and how to prepare raw data for further analysis. The influence of the representative fragment size of a papillary thyroid carcinoma whole-slide image on the classification accuracy of a trained EfficientNetB0 network is studied. The results are analysed, and the weaknesses of using image fragments of different representative sizes, as well as the causes of unsatisfactory classification accuracy at high magnification, are evaluated.
Materials and methods. Histopathological whole-slide images of 129 patients were used. Histological micropreparations containing tumour elements and surrounding tissue were scanned on an Aperio AT2 scanner (Leica Biosystems, Germany) at maximum resolution. Annotation was carried out in the ASAP software package. To choose the optimal representative fragment size, a classification problem was solved using the pre-trained EfficientNetB0 network.
Results. A methodology for preparing a database of histopathological images of papillary thyroid carcinoma was proposed. Experiments were conducted to determine the optimal representative size of the image fragment. The best class-prediction accuracy on the test sample was obtained with a representative fragment size of 394.32×394.32 microns.
Conclusion. The analysis of the influence of representative fragment sizes showed that classification is hampered by the specifics of sectioning and staining, and by complex morphological and textural differences among images of the same class. It was also determined that preparing a training dataset for detecting vascular invasion in histopathological images is not trivial and requires additional data-preparation stages.
8

Jansen, Philipp, Adelaida Creosteanu, Viktor Matyas, Amrei Dilling, Ana Pina, Andrea Saggini, Tobias Schimming, et al. "Deep Learning Assisted Diagnosis of Onychomycosis on Whole-Slide Images." Journal of Fungi 8, no. 9 (August 28, 2022): 912. http://dx.doi.org/10.3390/jof8090912.

Abstract:
Background: Onychomycosis numbers among the most common fungal infections in humans affecting finger- or toenails. Histology remains a frequently applied screening technique to diagnose onychomycosis. Screening slides for fungal elements can be time-consuming for pathologists, and sensitivity in cases with low amounts of fungi remains a concern. Convolutional neural networks (CNNs) have revolutionized image classification in recent years. The goal of our project was to evaluate if a U-NET-based segmentation approach as a subcategory of CNNs can be applied to detect fungal elements on digitized histologic sections of human nail specimens and to compare it with the performance of 11 board-certified dermatopathologists. Methods: In total, 664 corresponding H&E- and PAS-stained histologic whole-slide images (WSIs) of human nail plates from four different laboratories were digitized. Histologic structures were manually annotated. A U-NET image segmentation model was trained for binary segmentation on the dataset generated by annotated slides. Results: The U-NET algorithm detected 90.5% of WSIs with fungi, demonstrating a comparable sensitivity with that of the 11 board-certified dermatopathologists (sensitivity of 89.2%). Conclusions: Our results demonstrate that machine-learning-based algorithms applied to real-world clinical cases can produce comparable sensitivities to human pathologists. Our established U-NET may be used as a supportive diagnostic tool to preselect possible slides with fungal elements. Slides where fungal elements are indicated by our U-NET should be reevaluated by the pathologist to confirm or refute the diagnosis of onychomycosis.
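A toy sketch of the preselection step the authors propose: turning a U-NET probability mask into a per-slide "fungal elements present" flag. The thresholds here are assumptions for illustration, not values from the paper.

```python
# Hedged sketch: flag a WSI for pathologist review if enough pixels are
# predicted as fungal elements. Both thresholds are illustrative assumptions.
import numpy as np

def slide_has_fungi(prob_mask, pixel_thresh=0.5, min_fraction=1e-4):
    """prob_mask: per-pixel fungal probabilities from a segmentation model."""
    fungal = prob_mask > pixel_thresh       # binarize the probability map
    return fungal.mean() > min_fraction     # enough fungal area -> flag slide

mask = np.random.rand(512, 512)             # stand-in for a U-NET output
print(slide_has_fungi(mask))
```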
9

Lewis, Joshua, Xuebao Zhang, Nithya Shanmugam, Bradley Drumheller, Conrad Shebelut, Geoffrey Smith, Lee Cooper, and David Jaye. "Machine Learning-Based Automated Selection of Regions for Analysis on Bone Marrow Aspirate Smears." American Journal of Clinical Pathology 156, Supplement_1 (October 1, 2021): S1—S2. http://dx.doi.org/10.1093/ajcp/aqab189.001.

Abstract:
Manual microscopic examination of bone marrow aspirate (BMA) smears and counting of cell populations remains the standard of practice for accurate assessment of benign and neoplastic bone marrow disorders. While automated cell classification software using machine learning models has been developed and applied to BMAs, current systems nonetheless require manual identification of optimal regions within the slide that are rich in marrow hematopoietic cells. To address this issue, we have developed a machine learning-based platform for automated identification of optimal regions in whole-slide images of BMA smears. A training dataset was developed by manual annotation of 53 BMA slides across biopsy diagnoses including unremarkable trilineal hematopoiesis, acute leukemia, and plasma cell neoplasms, as well as across differences in total cellularity represented by a spectrum of marrow nucleated cell content and white blood cell counts. 10,537 regions among these 53 slides were manually annotated as either “optimal” (regions near aspirate particles with high proportions of marrow nucleated cells), “particle” (aspirate particles), or “hemodilute” (blood-rich regions with high proportions of red blood cells). Training of a neural network-based classifier on 10x magnification slides with region cropping and image augmentation resulted in a classifier with substantial accuracy on new testing-set BMA slides (one-vs-rest AUROC > 0.999 across 10 training/testing splits for all 3 region classes), with very few particle and hemodilute regions being classified as optimal (particle: 0.83%, hemodilute: 0.39%). Additionally, this classifier accurately classifies BMA regions on slides from hematological disorders not represented in the training data, including Burkitt lymphoma (AUROC > 0.999 across region classes), chronic myeloid leukemia (AUROC > 0.999 across region classes), and diffuse large B-cell lymphoma (AUROC = 1 across region classes), demonstrating the broad applicability of our approach. To assess the performance of our classifier on whole-slide images, tiles from 10x magnification slides were manually annotated by three participants with notable concordance (Krippendorff’s alpha = 0.424); substantial agreement was found between manual annotations and model predictions within whole-slide images (optimal AUROC = 0.958, particle AUROC = 1.0, hemodilute AUROC = 0.947). Based on these promising results, this machine learning-based region classification model is being connected to a previously-developed bone marrow cell classifier to fully automate differential cell counting in whole-slide images. The development of this novel automated pipeline has potential to streamline the diagnostic process for hematological disorders while enhancing accuracy and replicability, as well as decreasing diagnostic turnaround time for improving patient care.
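The one-vs-rest AUROC evaluation mentioned above can be reproduced with scikit-learn as sketched below; the class names follow the abstract, while the toy labels and probabilities are invented.

```python
# Hedged sketch of one-vs-rest AUROC over the three region classes.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize

classes = ["optimal", "particle", "hemodilute"]
y_true = np.array([0, 1, 2, 0, 1, 2])               # toy region labels
y_prob = np.array([[0.8, 0.1, 0.1], [0.1, 0.8, 0.1], [0.2, 0.1, 0.7],
                   [0.6, 0.3, 0.1], [0.2, 0.7, 0.1], [0.1, 0.2, 0.7]])

y_bin = label_binarize(y_true, classes=[0, 1, 2])   # one column per class
for i, name in enumerate(classes):
    print(name, roc_auc_score(y_bin[:, i], y_prob[:, i]))  # one-vs-rest AUROC
```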
10

El-Hossiny, Ahmed S., Walid Al-Atabany, Osama Hassan, Ahmed M. Soliman, and Sherif A. Sami. "Classification of Thyroid Carcinoma in Whole Slide Images Using Cascaded CNN." IEEE Access 9 (2021): 88429–38. http://dx.doi.org/10.1109/access.2021.3076158.

11

Yoshida, Hiroshi, Yoshiko Yamashita, Taichi Shimazu, Eric Cosatto, Tomoharu Kiyuna, Hirokazu Taniguchi, Shigeki Sekine, and Atsushi Ochiai. "Automated histological classification of whole slide images of colorectal biopsy specimens." Oncotarget 8, no. 53 (October 12, 2017): 90719–29. http://dx.doi.org/10.18632/oncotarget.21819.

12

Xu, Hongming, Sunho Park, and Tae Hyun Hwang. "Computerized Classification of Prostate Cancer Gleason Scores from Whole Slide Images." IEEE/ACM Transactions on Computational Biology and Bioinformatics 17, no. 6 (November 1, 2020): 1871–82. http://dx.doi.org/10.1109/tcbb.2019.2941195.

13

Hassanpour, Saeed, Bruno Korbar, Andrea M. Olofson, Allen P. Miraflor, Catherine M. Nicka, Matthew A. Suriawinata, Lorenzo Torresani, and Arief A. Suriawinata. "Deep learning for classification of colorectal polyps on whole-slide images." Journal of Pathology Informatics 8, no. 1 (2017): 30. http://dx.doi.org/10.4103/jpi.jpi_34_17.

14

Soldatov, Sergey A., Danil M. Pashkov, Sergey A. Guda, Nikolay S. Karnaukhov, Alexander A. Guda, and Alexander V. Soldatov. "Deep Learning Classification of Colorectal Lesions Based on Whole Slide Images." Algorithms 15, no. 11 (October 27, 2022): 398. http://dx.doi.org/10.3390/a15110398.

Abstract:
Microscopic tissue analysis is the key diagnostic method needed for disease identification and choosing the best treatment regimen. According to the Global Cancer Observatory, approximately two million people are diagnosed with colorectal cancer each year, and an accurate diagnosis requires a significant amount of time and a highly qualified pathologist to decrease the high mortality rate. Recent development of artificial intelligence technologies and scanning microscopy introduced digital pathology into the field of cancer diagnosis by means of the whole-slide image (WSI). In this work, we applied deep learning methods to diagnose six types of colon mucosal lesions using convolutional neural networks (CNNs). As a result, an algorithm for the automatic segmentation of WSIs of colon biopsies was developed, implementing pre-trained, deep convolutional neural networks of the ResNet and EfficientNet architectures. We compared the classical method and one-cycle policy for CNN training and applied both multi-class and multi-label approaches to solve the classification problem. The multi-label approach was superior because some WSI patches may belong to several classes at once or to none of them. Using the standard one-vs-rest approach, we trained multiple binary classifiers. They achieved receiver operating characteristic AUCs in the range of 0.80–0.96. Other metrics were also calculated, such as accuracy, precision, sensitivity, specificity, negative predictive value, and F1-score. The obtained CNNs can support human pathologists in the diagnostic process and can be extended to other cancers after adding a sufficient amount of labeled data.
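The multi-label, one-vs-rest formulation that the authors found superior corresponds to independent sigmoid outputs per lesion class, so a patch can belong to several classes or to none. A minimal PyTorch sketch follows; shapes and the random data are illustrative assumptions.

```python
# Hedged sketch of multi-label (one-vs-rest) patch classification: one
# independent binary decision per lesion class via BCE-with-logits.
import torch
import torch.nn as nn

num_classes = 6                                          # six lesion types
logits = torch.randn(8, num_classes)                     # backbone outputs, 8 patches
targets = torch.randint(0, 2, (8, num_classes)).float()  # multi-hot labels

loss = nn.BCEWithLogitsLoss()(logits, targets)           # independent binary losses
preds = torch.sigmoid(logits) > 0.5                      # per-class decisions
print(loss.item(), preds.shape)
```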
15

Yoshida, Hiroshi, Taichi Shimazu, Tomoharu Kiyuna, Atsushi Marugame, Yoshiko Yamashita, Eric Cosatto, Hirokazu Taniguchi, Shigeki Sekine, and Atsushi Ochiai. "Automated histological classification of whole-slide images of gastric biopsy specimens." Gastric Cancer 21, no. 2 (June 2, 2017): 249–57. http://dx.doi.org/10.1007/s10120-017-0731-8.

16

Tourniaire, Paul, Marius Ilie, Paul Hofman, Nicholas Ayache, and Hervé Delingette. "Abstract 461: Mixed supervision to improve the classification and localization: Coherence of tumors in histological slides." Cancer Research 82, no. 12_Supplement (June 15, 2022): 461. http://dx.doi.org/10.1158/1538-7445.am2022-461.

Abstract:
With the growing standardization of Whole Slide Images (WSIs), deep learning algorithms have shown promising results for the automated classification and localization of tumors. Yet, it is often difficult to train such algorithms, as they usually require careful, detailed annotations from expert pathologists, which are tedious to produce. This is why, in general, only slide-level labels are accessible, while annotations of small regions (or tiles) are limited. With only slide-level information, it is difficult to obtain accurate predictions of the localization of pathological tissues inside a slide, despite reaching good slide-level classification. Besides, existing algorithms show limited consistency between slide- and tile-level predictions, leading to difficult interpretation in the case of healthy tissue. Using the attention-based multiple instance learning framework, we propose to combine slide-level labels on all slides with tile-level labels on a small fraction (e.g., 20%) of slides within a histology dataset to improve both classification and localization performance. With this mixed supervision of slides, we aim to enforce better consistency between slide- and tile-level labels. To this end, we introduce an attention-based loss function to further guide the model’s attention toward discriminative regions inside tumorous slides, and to spread attention equally among all tiles of normal slides. On the Camelyon16 dataset, we reached precision and recall scores as high as 0.99 and 0.85 respectively, with an AUC of 0.93 on the competition test set, using only 50% of the slides with tile-level annotations in the training set. Experiments using various proportions of fully annotated slides in the training set show promising results for an improved localization of tumors and classification of slides. In this work, we showed that using a limited amount of fully annotated slides, we can improve both the classification and localization performance of an attention-based deep learning model. This increased consistency and performance should help pathologists to better interpret the algorithm output and to focus on suspicious regions in probable tumorous slides. Citation Format: Paul Tourniaire, Marius Ilie, Paul Hofman, Nicholas Ayache, Hervé Delingette. Mixed supervision to improve the classification and localization: Coherence of tumors in histological slides [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2022; 2022 Apr 8-13. Philadelphia (PA): AACR; Cancer Res 2022;82(12_Suppl):Abstract nr 461.
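For orientation, the sketch below implements plain attention-based MIL pooling of the kind this work builds on (in the style of Ilse et al.'s ABMIL); the paper's mixed-supervision attention loss is not reproduced, and dimensions are illustrative assumptions.

```python
# Hedged sketch of attention-based MIL pooling: per-tile attention weights
# produce a slide-level feature, which is classified; the attention vector
# doubles as a tile-level localization signal.
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, feat_dim=512, hidden=128, n_classes=2):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(feat_dim, hidden), nn.Tanh(),
                                  nn.Linear(hidden, 1))
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, tiles):                         # tiles: (num_tiles, feat_dim)
        a = torch.softmax(self.attn(tiles), dim=0)    # per-tile attention weights
        slide_feat = (a * tiles).sum(dim=0)           # attention-weighted pooling
        return self.head(slide_feat), a.squeeze(-1)   # slide logits + attention

logits, attention = AttentionMIL()(torch.randn(100, 512))
```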
17

Ma, Yingfan, Xiaoyuan Luo, Kexue Fu, and Manning Wang. "Transformer-Based Video-Structure Multi-Instance Learning for Whole Slide Image Classification." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 13 (March 24, 2024): 14263–71. http://dx.doi.org/10.1609/aaai.v38i13.29338.

Abstract:
Pathological images play a vital role in clinical cancer diagnosis, and computer-aided diagnosis on digital Whole Slide Images (WSIs) has been widely studied. The major challenge in using deep learning models for WSI analysis is the huge size of WSIs; existing methods struggle to reconcile end-to-end learning with proper modeling of contextual information. Most state-of-the-art methods use a two-stage strategy, in which a pre-trained model extracts features of small patches cut from a WSI and these features are then input into a classification model. Such methods cannot perform end-to-end learning and consider contextual information at the same time. To solve this problem, we propose a framework that models a WSI as a pathologist's observing video and utilizes a Transformer to process video clips with a divide-and-conquer strategy, which helps achieve both context-awareness and end-to-end learning. Extensive experiments on three public WSI datasets show that our proposed method outperforms existing SOTA methods in both WSI classification and positive region detection.
18

Amgad, Mohamed, Habiba Elfandy, Hagar Hussein, Lamees A. Atteya, Mai A. T. Elsebaie, Lamia S. Abo Elnasr, Rokia A. Sakr, et al. "Structured crowdsourcing enables convolutional segmentation of histology images." Bioinformatics 35, no. 18 (February 6, 2019): 3461–67. http://dx.doi.org/10.1093/bioinformatics/btz083.

Abstract:
Motivation: While deep-learning algorithms have demonstrated outstanding performance in semantic image segmentation tasks, large annotation datasets are needed to create accurate models. Annotation of histology images is challenging due to the effort and experience required to carefully delineate tissue structures, and difficulties related to sharing and markup of whole-slide images.
Results: We recruited 25 participants, ranging in experience from senior pathologists to medical students, to delineate tissue regions in 151 breast cancer slides using the Digital Slide Archive. Inter-participant discordance was systematically evaluated, revealing low discordance for tumor and stroma, and higher discordance for more subjectively defined or rare tissue classes. Feedback provided by senior participants enabled the generation and curation of 20,000+ annotated tissue regions. Fully convolutional networks trained using these annotations were highly accurate (mean AUC = 0.945), and the scale of annotation data provided notable improvements in image classification accuracy.
Availability and Implementation: The dataset is freely available at https://goo.gl/cNM4EL.
Supplementary information: Supplementary data are available at Bioinformatics online.
19

Kallipolitis, Athanasios, Kyriakos Revelos, and Ilias Maglogiannis. "Ensembling EfficientNets for the Classification and Interpretation of Histopathology Images." Algorithms 14, no. 10 (September 26, 2021): 278. http://dx.doi.org/10.3390/a14100278.

Abstract:
The extended utilization of digitized Whole Slide Images is transforming the workflow of traditional clinical histopathology to the digital era. The ongoing transformation has demonstrated major potential for the exploitation of Machine Learning and Deep Learning techniques as assistive tools for specialized medical personnel. While the performance of the implemented algorithms is continually boosted by the mass production of generated Whole Slide Images and the development of state-of-the-art deep convolutional architectures, ensemble models provide an additional methodology for improving prediction accuracy. Despite the earlier perception of deep convolutional networks as black boxes, important steps towards the interpretation of such predictive models have been proposed recently. However, this trend has not been fully explored for ensemble models. This paper investigates the application of an explanation scheme for ensemble classifiers, while providing satisfactory classification results on histopathology breast and colon cancer images in terms of accuracy. The predictions can be interpreted through the hidden-layer activations of the included subnetworks and are more accurate than single-network implementations.
20

Mahmood, F., C. J. Robbins, S. Perincheri, and R. Torres. "Applying Deep Learning Cancer Subtyping Algorithms Trained on Physical Slides to Multiphoton Imaging of Unembedded Samples." American Journal of Clinical Pathology 158, Supplement_1 (November 1, 2022): S117. http://dx.doi.org/10.1093/ajcp/aqac126.248.

Abstract:
Introduction/Objective: Deep learning algorithms on digital images of physical tissue slides have shown potential improvements in the accuracy and precision of diagnostic interpretation of neoplastic histology. Clustering-constrained-attention multiple-instance learning (CLAM) is one such method that identifies diagnostic sub-regions to accurately classify whole slides. Often, algorithm performance degrades when deployed on datasets that differ from the original set, and it is subject to physical slide preparation variability. New multiphoton imaging modalities have potential workflow and quality advantages over physical slides, producing images analogous to whole slide imaging (WSI) without cutting artifacts, but the performance of existing algorithms trained on digitized physical slides and applied to multiphoton images remains completely unknown. Given this, we aimed to test the performance of CLAM algorithms for subtyping renal cell carcinoma (RCC) and lung cancer (LC) applied to pseudo-colored multiphoton WSI.
Methods: Clinical RCC and LC surgical resection samples were processed and imaged by Clearing Histology with MultiPhoton microscopy (CHiMP, Applikate Technologies, Fairfield, CT), producing digital images of uncut, unembedded tissue to generate H&E-like optical slices. Multiphoton images were downscaled to 0.5 um/px to match the algorithms' target resolution. CLAM models for subtyping RCC (chromophobe, clear cell, papillary) and LC (squamous & adenocarcinoma) previously trained using TCGA and CPTAC whole slide images of physical slides were applied directly to CHiMP multiphoton images without adjustment. Reference cancer subtype classifications were provided from physical and digital slides.
Results: For the subtypes included during training, multiphoton WSIs of RCC and LC were accurately subtyped by the CLAM models, without stain normalization or network fine-tuning, producing high prediction levels. Subtypes not included during training (namely oncocytoma for RCC) resulted in low-scoring model predictions (below 0.85), indicating specificity of identification. Multiple slide levels improved interpretation of several difficult cases for CLAM predictions.
Conclusion: This preliminary data suggests that CLAM models trained on standard H&E WSIs for RCC and LC subtyping are applicable to pseudo-H&E multiphoton WSIs without domain adaptations. This implies that diagnostic histologic features have been learned by these CLAM models and are efficiently recognized in digital histology images produced via CHiMP.
21

Gupta, Pushpanjali, Yenlin Huang, Prasan Kumar Sahoo, Jeng-Fu You, Sum-Fu Chiang, Djeane Debora Onthoni, Yih-Jong Chern, et al. "Colon Tissues Classification and Localization in Whole Slide Images Using Deep Learning." Diagnostics 11, no. 8 (August 2, 2021): 1398. http://dx.doi.org/10.3390/diagnostics11081398.

Abstract:
Colorectal cancer is one of the leading causes of cancer-related death worldwide. The early diagnosis of colon cancer not only reduces mortality but also reduces the burden related to treatment strategies such as chemotherapy and/or radiotherapy. However, when the microscopic examination of a suspected colon tissue sample is carried out, it becomes a tedious and time-consuming job for the pathologist to find the abnormality in the tissue. In addition, there may be interobserver variability that might lead to conflict in the final diagnosis. As a result, there is a crucial need to develop an intelligent automated method that can learn from the patterns themselves and assist the pathologist in making a faster, accurate, and consistent decision when determining the normal and abnormal regions in colorectal tissues. Moreover, the intelligent method should be able to localize the abnormal region in the whole slide image (WSI), which will make it easier for the pathologist to focus on only the region of interest, making the task of tissue examination faster and less time-consuming. Accordingly, artificial intelligence (AI)-based classification and localization models are proposed for determining and localizing the abnormal regions in WSI. The proposed models achieved an F-score of 0.97 and an area under the curve (AUC) of 0.97 with a pretrained Inception-v3 model, and an F-score of 0.99 and AUC of 0.99 with a customized Inception-ResNet-v2 Type 5 (IR-v2 Type 5) model.
22

Xu, Hongming, Cheng Lu, Richard Berendt, Naresh Jha, and Mrinal Mandal. "Automated analysis and classification of melanocytic tumor on skin whole slide images." Computerized Medical Imaging and Graphics 66 (June 2018): 124–34. http://dx.doi.org/10.1016/j.compmedimag.2018.01.008.

23

Tsuneki, Masayuki, and Fahdi Kanavati. "Weakly supervised learning for multi-organ adenocarcinoma classification in whole slide images." PLOS ONE 17, no. 11 (November 23, 2022): e0275378. http://dx.doi.org/10.1371/journal.pone.0275378.

Abstract:
Primary screening by automated computational pathology algorithms for the presence or absence of adenocarcinoma in biopsy specimens (e.g., endoscopic biopsy, transbronchial lung biopsy, and needle biopsy) of possible primary organs (e.g., stomach, colon, lung, and breast) and in radical lymph node dissection specimens would be very useful and should be a powerful tool to assist surgical pathologists in their routine histopathological diagnostic workflow. In this paper, we trained multi-organ deep learning models to classify adenocarcinoma in whole slide images (WSIs) of biopsy and radical lymph node dissection specimens. We evaluated the models on five independent test sets (stomach, colon, lung, breast, lymph nodes) to demonstrate the feasibility in multi-organ and lymph node specimens from different medical institutions, achieving receiver operating characteristic areas under the curves (ROC-AUCs) in the range of 0.91–0.98.
24

Zhao, Boxuan, Jun Zhang, Deheng Ye, Jian Cao, Xiao Han, Qiang Fu, and Wei Yang. "RLogist: Fast Observation Strategy on Whole-Slide Images with Deep Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 3 (June 26, 2023): 3570–78. http://dx.doi.org/10.1609/aaai.v37i3.25467.

Abstract:
Whole-slide images (WSI) in computational pathology have high resolution with gigapixel size, but generally contain sparse regions of interest, which leads to weak diagnostic relevance and data inefficiency for each area in the slide. Most of the existing methods rely on a multiple instance learning framework that requires densely sampling local patches at high magnification. The limitation is evident in the application stage, as the heavy computation for extracting patch-level features is inevitable. In this paper, we develop RLogist, a deep reinforcement learning (DRL) method for a fast observation strategy on WSIs. Imitating the diagnostic logic of human pathologists, our RL agent learns how to find regions of observation value and obtain representative features across multiple resolution levels, without having to analyze each part of the WSI at high magnification. We benchmark our method on two whole-slide-level classification tasks, including detection of metastases in WSIs of lymph node sections and subtyping of lung cancer. Experimental results demonstrate that RLogist achieves competitive classification performance compared to typical multiple instance learning algorithms, while having a significantly shorter observation path. In addition, the observation path given by RLogist provides good decision-making interpretability, and its reading-path navigation can potentially be used by pathologists for educational/assistive purposes. Our code is available at: https://github.com/tencent-ailab/RLogist.
25

Aftab, Rukhma, Yan Qiang, and Zhao Juanjuan. "Contrastive Learning for Whole Slide Image Representation: A Self-Supervised Approach in Digital Pathology." European Journal of Applied Science, Engineering and Technology 2, no. 2 (March 1, 2024): 175–85. http://dx.doi.org/10.59324/ejaset.2024.2(2).12.

Abstract:
Image analysis in digital pathology is identified as a challenging field, particularly for AI-driven classification and search tasks. The high-resolution and large-scale nature of whole slide images (WSIs) present significant computational challenges in representing and analyzing these images effectively. The research endeavors to tackle these hurdles by presenting an innovative methodology grounded in self-supervised learning (SSL). Unlike prior SSL approaches that depend on augmenting at the patch level, the novel framework capitalizes on existing primary site information to directly glean effective representations from Whole Slide Images (WSIs). Moreover, the investigation integrates fully supervised contrastive learning to bolster the resilience of these representations for both classification and search endeavors. For experimentation, the study drew upon a dataset encompassing over 6,000 WSIs sourced from The Cancer Genome Atlas (TCGA) repository facilitated by the National Cancer Institute. The proposed architecture underwent training and assessment using this dataset. Evaluation primarily focused on scrutinizing performance across diverse primary sites and cancer subtypes, with particular attention dedicated to lung cancer classification. Impressively, the proposed architecture yielded outstanding outcomes, showcasing robust performance across the majority of primary sites and cancer subtypes. Furthermore, the study garnered the top position in validation for a lung cancer classification task.
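A self-contained sketch of the NT-Xent loss that underlies SimCLR-style contrastive pretraining referenced above; the paper's supervised variant, which contrasts slides by primary-site label, is a modification not shown here, and all shapes are illustrative.

```python
# Hedged sketch of the NT-Xent (normalized temperature-scaled cross entropy)
# contrastive loss over two embedded views of the same batch of slides.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.5):
    """z1, z2: (B, D) embeddings of two views of the same B slides."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)     # (2B, D) unit vectors
    sim = z @ z.t() / tau                           # cosine similarity logits
    sim.fill_diagonal_(float("-inf"))               # exclude self-similarity
    B = z1.size(0)
    pos = torch.cat([torch.arange(B, 2 * B),        # positive index for each row
                     torch.arange(0, B)])
    return F.cross_entropy(sim, pos)

loss = nt_xent(torch.randn(16, 128), torch.randn(16, 128))
```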
26

Song, JaeYen, Soyoung Im, Sung Hak Lee, and Hyun-Jong Jang. "Deep Learning-Based Classification of Uterine Cervical and Endometrial Cancer Subtypes from Whole-Slide Histopathology Images." Diagnostics 12, no. 11 (October 28, 2022): 2623. http://dx.doi.org/10.3390/diagnostics12112623.

Abstract:
Uterine cervical and endometrial cancers have different subtypes with different clinical outcomes. Therefore, cancer subtyping is essential for proper treatment decisions. Furthermore, an endometrial and endocervical origin for an adenocarcinoma should also be distinguished. Although the discrimination can be helped with various immunohistochemical markers, there is no definitive marker. Therefore, we tested the feasibility of deep learning (DL)-based classification for the subtypes of cervical and endometrial cancers and the site of origin of adenocarcinomas from whole slide images (WSIs) of tissue slides. WSIs were split into 360 × 360-pixel image patches at 20× magnification for classification. Then, the average of patch classification results was used for the final classification. The area under the receiver operating characteristic curves (AUROCs) for the cervical and endometrial cancer classifiers were 0.977 and 0.944, respectively. The classifier for the origin of an adenocarcinoma yielded an AUROC of 0.939. These results clearly demonstrated the feasibility of DL-based classifiers for the discrimination of cancers from the cervix and uterus. We expect that the performance of the classifiers will be much enhanced with an accumulation of WSI data. Then, the information from the classifiers can be integrated with other data for more precise discrimination of cervical and endometrial cancers.
27

Zarella, Mark D., Matthew R. Quaschnick, David E. Breen, and Fernando U. Garcia. "Estimation of Fine-Scale Histologic Features at Low Magnification." Archives of Pathology & Laboratory Medicine 142, no. 11 (June 18, 2018): 1394–402. http://dx.doi.org/10.5858/arpa.2017-0380-oa.

Abstract:
Context.— Whole-slide imaging has ushered in a new era of technology that has fostered the use of computational image analysis for diagnostic support and has begun to transfer the act of analyzing a slide to computer monitors. Due to the overwhelming amount of detail available in whole-slide images, analytic procedures—whether computational or visual—often operate at magnifications lower than the magnification at which the image was acquired. As a result, a corresponding reduction in image resolution occurs. It is unclear how much information is lost when magnification is reduced, and whether the rich color attributes of histologic slides can aid in reconstructing some of that information. Objective.— To examine the correspondence between the color and spatial properties of whole-slide images to elucidate the impact of resolution reduction on the histologic attributes of the slide. Design.— We simulated image resolution reduction and modeled its effect on classification of the underlying histologic structure. By harnessing measured histologic features and the intrinsic spatial relationships between histologic structures, we developed a predictive model to estimate the histologic composition of tissue in a manner that exceeds the resolution of the image. Results.— Reduction in resolution resulted in a significant loss of the ability to accurately characterize histologic components at magnifications less than ×10. By utilizing pixel color, this ability was improved at all magnifications. Conclusions.— Multiscale analysis of histologic images requires an adequate understanding of the limitations imposed by image resolution. Our findings suggest that some of these limitations may be overcome with computational modeling.
28

Tavolara, Thomas E., Metin N. Gurcan, and M. Khalid Khan Niazi. "Contrastive Multiple Instance Learning: An Unsupervised Framework for Learning Slide-Level Representations of Whole Slide Histopathology Images without Labels." Cancers 14, no. 23 (November 24, 2022): 5778. http://dx.doi.org/10.3390/cancers14235778.

Abstract:
Recent methods in computational pathology have trended towards semi- and weakly-supervised methods requiring only slide-level labels. Yet, even slide-level labels may be absent or irrelevant to the application of interest, such as in clinical trials. Hence, we present a fully unsupervised method to learn meaningful, compact representations of WSIs. Our method initially trains a tile-wise encoder using SimCLR, from which subsets of tile-wise embeddings are extracted and fused via an attention-based multiple-instance learning framework to yield slide-level representations. Intra-slide embeddings are then attracted and inter-slide embeddings repelled via a contrastive loss, yielding slide-level representations with self-supervision. We applied our method to two tasks: (1) non-small cell lung cancer (NSCLC) subtyping as a classification prototype and (2) breast cancer proliferation scoring (TUPAC16) as a regression prototype, achieving an AUC of 0.8641 ± 0.0115 and a correlation (R2) of 0.5740 ± 0.0970, respectively. Ablation experiments demonstrate that the resulting unsupervised slide-level feature space can be fine-tuned with small datasets for both tasks. Overall, our method approaches computational pathology in a novel manner, where meaningful features can be learned from whole-slide images without the need for slide-level labels. The proposed method stands to benefit computational pathology, as it theoretically enables researchers to benefit from completely unlabeled whole-slide images.
29

Feng, Ming, Kele Xu, Nanhui Wu, Weiquan Huang, Yan Bai, Yin Wang, Changjian Wang, and Huaimin Wang. "Trusted multi-scale classification framework for whole slide image." Biomedical Signal Processing and Control 89 (March 2024): 105790. http://dx.doi.org/10.1016/j.bspc.2023.105790.

30

Pirovano, Antoine, Hippolyte Heuberger, Sylvain Berlemont, Saïd Ladjal, and Isabelle Bloch. "Automatic Feature Selection for Improved Interpretability on Whole Slide Imaging." Machine Learning and Knowledge Extraction 3, no. 1 (February 22, 2021): 243–62. http://dx.doi.org/10.3390/make3010012.

Abstract:
Deep learning methods are widely used in medical applications to assist medical doctors in their daily routine. While performance reaches expert level, interpretability (highlighting how and what a trained model learned and why it makes a specific decision) is the next important challenge that deep learning methods need to address to be fully integrated in the medical field. In this paper, we address the question of interpretability in the context of whole slide image (WSI) classification with a formalization of the design of WSI classification architectures, and propose a piece-wise interpretability approach relying on gradient-based methods, feature visualization, and the multiple instance learning context. After training two WSI classification architectures on the Camelyon-16 WSI dataset, highlighting the discriminative features learned, and validating our approach with pathologists, we propose a novel manner of computing interpretability slide-level heat-maps, based on the extracted features, that improves tile-level classification performance. We measure the improvement using the tile-level AUC, which we call Localization AUC, and show an improvement of more than 0.2. We also validate our results with a RemOve And Retrain (ROAR) measure. Then, after studying the impact of the number of features used for heat-map computation, we propose a corrective approach, relying on activation colocalization of selected features, that improves the performance and stability of our proposed method.
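The gradient-based ingredient of the interpretability pipeline above can be illustrated with a basic input-gradient saliency map (the paper's feature selection and colocalization steps go well beyond this); the snippet is a hedged sketch with a stand-in model, not the authors' code.

```python
# Hedged sketch of input-gradient saliency for one tile: gradient of a class
# logit with respect to the input highlights influential pixels.
import torch
import torch.nn as nn

def saliency(model, x, target_class):
    """x: (1, C, H, W) tile; returns per-pixel |d logit / d input|."""
    x = x.clone().requires_grad_(True)
    model(x)[0, target_class].backward()     # gradient of the class logit
    return x.grad.abs().max(dim=1).values    # max over channels -> (1, H, W)

# Toy usage with a stand-in tile classifier.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.AdaptiveAvgPool2d(1),
                      nn.Flatten(), nn.Linear(8, 2))
heat = saliency(model, torch.randn(1, 3, 64, 64), target_class=1)
```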
31

Wang, Ching-Wei, Sheng-Chuan Huang, Yu-Ching Lee, Yu-Jie Shen, Shwu-Ing Meng, and Jeff L. Gaol. "Deep learning for bone marrow cell detection and classification on whole-slide images." Medical Image Analysis 75 (January 2022): 102270. http://dx.doi.org/10.1016/j.media.2021.102270.

32

Morkūnas, Mindaugas, Povilas Treigys, Jolita Bernatavičienė, Arvydas Laurinavičius, and Gražina Korvel. "Machine Learning Based Classification of Colorectal Cancer Tumour Tissue in Whole-Slide Images." Informatica 29, no. 1 (January 1, 2018): 75–90. http://dx.doi.org/10.15388/informatica.2018.158.

33

Cho, Kyung-Ok, Sung Hak Lee, and Hyun-Jong Jang. "Feasibility of fully automated classification of whole slide images based on deep learning." Korean Journal of Physiology & Pharmacology 24, no. 1 (2020): 89. http://dx.doi.org/10.4196/kjpp.2020.24.1.89.

34

Sertel, O., J. Kong, H. Shimada, U. V. Catalyurek, J. H. Saltz, and M. N. Gurcan. "Computer-aided prognosis of neuroblastoma on whole-slide images: Classification of stromal development." Pattern Recognition 42, no. 6 (June 2009): 1093–103. http://dx.doi.org/10.1016/j.patcog.2008.08.027.

35

Zhao, Yingli, Weilong Ding, Qinghua You, Fenglong Zhu, Xiaojie Zhu, Kui Zheng, and Dandan Liu. "Classification of whole slide images of breast histopathology based on spatial correlation characteristics." Journal of Image and Graphics 28, no. 4 (2023): 1134–45. http://dx.doi.org/10.11834/jig.211133.

36

Shakarami, Ashkan, Lorenzo Nicolè, Matteo Terreran, Angelo Paolo Dei Tos, and Stefano Ghidoni. "TCNN: A Transformer Convolutional Neural Network for artifact classification in whole slide images." Biomedical Signal Processing and Control 84 (July 2023): 104812. http://dx.doi.org/10.1016/j.bspc.2023.104812.

37

Fu, Yan, Fanlin Zhou, Xu Shi, Long Wang, Yu Li, Jian Wu, and Hong Huang. "Classification of adenoid cystic carcinoma in whole slide images by using deep learning." Biomedical Signal Processing and Control 84 (July 2023): 104789. http://dx.doi.org/10.1016/j.bspc.2023.104789.

38

Sun, Shenghuan, Jacob Cleave, Linlin Wang, Fabienne Lucas, Laura Brown, Jacob Spector, Leonardo Boiocchi, et al. "Deep Learning for Morphology-Based, Bone Marrow Cell Classification." Blood 142, Supplement 1 (November 28, 2023): 2841. http://dx.doi.org/10.1182/blood-2023-172654.

Abstract:
The morphological classification of cells in bone marrow aspirate (BMA) is central to the diagnosis of hematologic diseases, including leukemias. Despite being a critical task, its monotonous, time-consuming nature and dependency on highly skilled clinical experts makes it prone to human error. Such errors can lead to delays and misdiagnoses that negatively impact patient care. To counter these challenges, we curated an expansive dataset of more than 40,000 hematopathologist consensus-annotated single-cell images, extracted from BMA whole slide images (WSIs), each annotated into one of 23 distinct morphologic classes. We then utilized this data to develop DeepHeme, a convolutional neural network classifier designed for bone marrow cell typing tasks. DeepHeme achieves state-of-the-art performance in both the breadth of differentiable classes and accuracy across these classes. By comparing its performance to that of individual hematopathologists from three premier academic medical centers, using our gold standard consensus-labelled images, we found our AI algorithm either matched or surpassed the average performance across all classes. In addition, we integrated DeepHeme with internally developed region classifier and cell detection algorithms, culminating in a comprehensive diagnostic pipeline for whole slide cell differential. We next tested DeepHeme on slides from an external hospital system at a major cancer center to evaluate the generalizability of our model, a necessary precondition to widespread application. DeepHeme demonstrated a high level of generalizability, evidenced by a decrease of only 4% in the mean F-1 score, from 0.89 to 0.85, across all 23 cell classes. Lastly, to improve access to the DeepHeme algorithm results and encourage further real-world generalizability testing, we developed a web application that allows scientists and clinicians to test the DeepHeme algorithm on either test images from our study or their own user-uploaded aspirates.
39

Jayaratne, N., A. Sasikumar, S. Subasinghe, A. Borkowski, S. Mastorides, L. Thomas, E. Mastorides, and L. DeLand. "Using Deep Learning for Whole Slide Image Prostate Cancer Diagnosis and Grading in South Florida Veteran Population." American Journal of Clinical Pathology 156, Supplement_1 (October 1, 2021): S141. http://dx.doi.org/10.1093/ajcp/aqab191.301.

Abstract:
Introduction/Objective: Prostate cancer is the most common non-cutaneous malignancy in veterans, with approximately 11,000 new prostate cancer cases diagnosed in the Veterans Affairs system each year. Prostate cancer diagnosis and grading can be challenging even for experienced pathologists. Although large VA medical centers have pathologists who specialize in urologic pathology, the vast majority do not. We hypothesized that AI-augmented diagnosis and grading may provide a solution for such situations.
Methods: The dataset consisted of 10,000 prostate biopsy whole slide images (WSI) from the Kaggle PANDA challenge and 6,000 WSI from the James A. Haley Veterans’ Hospital. Two classification models were trained on the combined Kaggle and VA datasets using whole-slide labels rather than slide annotations, resembling semi-supervised training: a two-class classification to predict Benign (ISUP 0) vs. Cancerous (ISUP 1-5), and a three-class classification to predict Benign (ISUP 0), Low-grade (ISUP 1-2), or High-grade (ISUP 3-5). WSIs were split into “tiles” for training the models to reduce whitespace around samples, manage large images, and normalize dimensions/orientations.
Results: Models trained purely as binary and 3-class classifiers performed very well. Two-class model: accuracy = 0.937, precision = 0.965, F1 = 0.94, AUC = 0.979. Three-class model: accuracy = 0.89 (Benign: precision = 0.897, F1 = 0.928; Low-grade: precision = 0.866, F1 = 0.841; High-grade: precision = 0.91, F1 = 0.878). We plan to develop multi-stage prediction models using these two-class and three-class classifiers as the first stage and a cancer grade predictor as the second stage.
Conclusion: We successfully showed that AI can augment pathologists’ diagnosis and grading of prostate cancer.
40

Huang, Jin, Liye Mei, Mengping Long, Yiqiang Liu, Wei Sun, Xiaoxiao Li, Hui Shen, et al. "BM-Net: CNN-Based MobileNet-V3 and Bilinear Structure for Breast Cancer Detection in Whole Slide Images." Bioengineering 9, no. 6 (June 20, 2022): 261. http://dx.doi.org/10.3390/bioengineering9060261.

Abstract:
Breast cancer is one of the most common types of cancer and is the leading cause of cancer-related death. Diagnosis of breast cancer is based on the evaluation of pathology slides. In the era of digital pathology, these slides can be converted into digital whole slide images (WSIs) for further analysis. However, due to their sheer size, digital WSIs diagnoses are time consuming and challenging. In this study, we present a lightweight architecture that consists of a bilinear structure and MobileNet-V3 network, bilinear MobileNet-V3 (BM-Net), to analyze breast cancer WSIs. We utilized the WSI dataset from the ICIAR2018 Grand Challenge on Breast Cancer Histology Images (BACH) competition, which contains four classes: normal, benign, in situ carcinoma, and invasive carcinoma. We adopted data augmentation techniques to increase diversity and utilized focal loss to remove class imbalance. We achieved high performance, with 0.88 accuracy in patch classification and an average 0.71 score, which surpassed state-of-the-art models. Our BM-Net shows great potential in detecting cancer in WSIs and is a promising clinical tool.
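The focal loss the authors use against class imbalance is compact enough to sketch directly; the binary form is shown below, and the gamma/alpha values are common defaults, not necessarily those used in the paper.

```python
# Hedged sketch of binary focal loss: down-weights well-classified examples so
# training focuses on hard, minority-class cases.
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    """logits, targets: (N,) tensors; targets in {0., 1.}."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-bce)                            # probability of the true class
    a_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (a_t * (1 - p_t) ** gamma * bce).mean()   # modulated, averaged loss

loss = focal_loss(torch.randn(8), torch.randint(0, 2, (8,)).float())
```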
41

Schmitt, Max, Roman Christoph Maron, Achim Hekler, Albrecht Stenzinger, Axel Hauschild, Michael Weichenthal, Markus Tiemann, et al. "Hidden Variables in Deep Learning Digital Pathology and Their Potential to Cause Batch Effects: Prediction Model Study." Journal of Medical Internet Research 23, no. 2 (February 2, 2021): e23436. http://dx.doi.org/10.2196/23436.

Abstract:
Background: An increasing number of studies within digital pathology show the potential of artificial intelligence (AI) to diagnose cancer using histological whole slide images, which requires large and diverse data sets. While diversification may result in more generalizable AI-based systems, it can also introduce hidden variables. If neural networks are able to distinguish/learn hidden variables, these variables can introduce batch effects that compromise the accuracy of classification systems.
Objective: The objective of the study was to analyze the learnability of an exemplary selection of hidden variables (patient age, slide preparation date, slide origin, and scanner type) that are commonly found in whole slide image data sets in digital pathology and could create batch effects.
Methods: We trained four separate convolutional neural networks (CNNs) to learn four variables using a data set of digitized whole slide melanoma images from five different institutes. For robustness, each CNN training and evaluation run was repeated multiple times, and a variable was only considered learnable if the lower bound of the 95% confidence interval of its mean balanced accuracy was above 50.0%.
Results: A mean balanced accuracy above 50.0% was achieved for all four tasks, even when considering the lower bound of the 95% confidence interval. Performance between tasks showed wide variation, ranging from 56.1% (slide preparation date) to 100% (slide origin).
Conclusions: Because all of the analyzed hidden variables are learnable, they have the potential to create batch effects in dermatopathology data sets, which negatively affect AI-based classification systems. Practitioners should be aware of these and similar pitfalls when developing and evaluating such systems and address these and potentially other batch effect variables in their data sets through sufficient data set stratification.
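The learnability criterion described above (the lower bound of the 95% confidence interval of mean balanced accuracy must exceed 50%) can be checked as follows; the per-run values are made up for illustration.

```python
# Hedged sketch of the learnability check: 95% t-interval for mean balanced
# accuracy over repeated training runs. Run values are invented.
import numpy as np
from scipy import stats

runs = np.array([0.57, 0.55, 0.60, 0.54, 0.58])   # balanced accuracy per repeat
mean = runs.mean()
low, high = stats.t.interval(0.95, len(runs) - 1,
                             loc=mean, scale=stats.sem(runs))
print(f"mean={mean:.3f}, 95% CI=({low:.3f}, {high:.3f}), learnable={low > 0.5}")
```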
42

Dimitriou, Neofytos, Ognjen Arandjelović, and David J. Harrison. "Magnifying Networks for Histopathological Images with Billions of Pixels." Diagnostics 14, no. 5 (March 1, 2024): 524. http://dx.doi.org/10.3390/diagnostics14050524.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Amongst the other benefits conferred by the shift from traditional to digital pathology is the potential to use machine learning for diagnosis, prognosis, and personalization. A major challenge in the realization of this potential emerges from the extremely large size of digitized images, which are often in excess of 100,000 × 100,000 pixels. In this paper, we tackle this challenge head-on by diverging from the existing approaches in the literature—which rely on the splitting of the original images into small patches—and introducing magnifying networks (MagNets). By using an attention mechanism, MagNets identify the regions of the gigapixel image that benefit from an analysis on a finer scale. This process is repeated, resulting in an attention-driven coarse-to-fine analysis of only a small portion of the information contained in the original whole-slide images. Importantly, this is achieved using minimal ground truth annotation, namely, using only global, slide-level labels. The results from our tests on the publicly available Camelyon16 and Camelyon17 datasets demonstrate the effectiveness of MagNets—as well as the proposed optimization framework—in the task of whole-slide image classification. Importantly, MagNets process at least five times fewer patches from each whole-slide image than any of the existing end-to-end approaches.
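
The core loop, scoring coarse regions and magnifying only the most attended ones, can be caricatured in a few lines. Everything here (embedding size, scorer network, top-k selection) is an illustrative assumption, not the published MagNet architecture:

```python
# Toy attention-driven coarse-to-fine region selection (illustrative only).
import torch
import torch.nn as nn

class RegionScorer(nn.Module):
    # Scores each coarse-region embedding for "worth magnifying".
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, 1))

    def forward(self, regions):                 # (N, dim)
        return self.net(regions).squeeze(-1)    # (N,)

def select_for_magnification(regions, scorer, top_k=4):
    # Only the top-k attended regions are re-examined at finer scale.
    scores = scorer(regions)
    return torch.topk(scores, k=min(top_k, scores.numel())).indices

scorer = RegionScorer()
coarse = torch.randn(64, 128)    # embeddings of a downsampled slide's regions
print(select_for_magnification(coarse, scorer).tolist())
```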
43

Ahmad Fauzi, Mohammad Faizal, Wan Siti Halimatul Munirah Wan Ahmad, Mohammad Fareed Jamaluddin, Jenny Tung Hiong Lee, See Yee Khor, Lai Meng Looi, Fazly Salleh Abas, and Nouar Aldahoul. "Allred Scoring of ER-IHC Stained Whole-Slide Images for Hormone Receptor Status in Breast Carcinoma." Diagnostics 12, no. 12 (December 8, 2022): 3093. http://dx.doi.org/10.3390/diagnostics12123093.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Hormone receptor status is determined primarily to identify breast cancer patients who may benefit from hormonal therapy. Current clinical practice for this testing, using either the Allred score or the H-score, is still based on laborious manual counting and estimation of the amount and intensity of positively stained cancer cells in immunohistochemistry (IHC)-stained slides. This work integrates a cell detection and classification workflow for breast carcinoma estrogen receptor (ER)-IHC-stained images and presents an automated evaluation system. The system first detects all cells within the specific regions and classifies them as negatively, weakly, moderately, or strongly stained, followed by Allred scoring for ER status evaluation. The generated Allred score relies heavily on accurate cell detection and classification and is compared against pathologists’ manual estimation. Experiments on 40 whole-slide images show 82.5% agreement on hormonal treatment recommendation, which we believe could be further improved with an advanced learning model and enhancements to address cases with 0% ER status. This promising system can automate an exhaustive exercise to provide fast and reliable assistance to pathologists and medical personnel. The system has the potential to improve the overall standard of prognostic reporting for cancer patients, benefiting pathologists, patients, and the public at large.
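
For concreteness, the Allred score referenced above combines a proportion score (0 to 5, from the fraction of positive cells) and an intensity score (0 to 3, from the average staining intensity) into a 0 to 8 total. Below is a sketch from hypothetical per-class cell counts, assuming the detection/classification step has already run; using the rounded average intensity is one common convention, not necessarily the paper's exact rule.

```python
# Allred score from per-class cell counts (counts are hypothetical).

def proportion_score(fraction_positive):
    # Standard Allred proportion bins.
    if fraction_positive == 0:     return 0
    if fraction_positive <= 0.01:  return 1
    if fraction_positive <= 0.10:  return 2
    if fraction_positive <= 1 / 3: return 3
    if fraction_positive <= 2 / 3: return 4
    return 5

def intensity_score(weak, moderate, strong):
    # Average intensity of the positive cells, rounded to 0-3.
    positive = weak + moderate + strong
    if positive == 0:
        return 0
    return round((1 * weak + 2 * moderate + 3 * strong) / positive)

def allred_score(negative, weak, moderate, strong):
    total = negative + weak + moderate + strong
    positive = weak + moderate + strong
    ps = proportion_score(positive / total if total else 0.0)
    return ps + intensity_score(weak, moderate, strong)  # 0-8

# 20% positive cells (PS=3), average intensity 1.7 -> IS=2, total 5.
print(allred_score(negative=400, weak=50, moderate=30, strong=20))
```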
44

Che, Yuxuan, Fei Ren, Xueyuan Zhang, Li Cui, Huanwen Wu, and Ze Zhao. "Immunohistochemical HER2 Recognition and Analysis of Breast Cancer Based on Deep Learning." Diagnostics 13, no. 2 (January 10, 2023): 263. http://dx.doi.org/10.3390/diagnostics13020263.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Breast cancer is one of the common malignant tumors in women and seriously endangers women’s lives and health. The human epidermal growth factor receptor 2 (HER2) protein is responsible for the division and growth of healthy breast cells. Overexpression of the HER2 protein is generally evaluated by immunohistochemistry (IHC). The IHC evaluation criteria mainly include three indices: staining intensity, circumferential membrane staining pattern, and proportion of positive cells. Manually scoring HER2 IHC images is an error-prone, variable, and time-consuming task. To solve these problems, this study proposes an automated predictive method for scoring whole-slide images (WSI) of HER2 slides based on a deep learning network. A total of 95 HER2 pathological slides from September 2021 to December 2021 were included. The average patch-level precision and F1 score were 95.77% and 83.09%, respectively. The overall accuracy of automated scoring for slide-level classification was 97.9%. The proposed method showed excellent specificity for all IHC 0 and 3+ slides and most 1+ and 2+ slides. The integrated method performs better than using the staining result alone.
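
The three indices above map to an IHC score of 0, 1+, 2+, or 3+. Below is a heavily simplified rule sketch in the spirit of the ASCO/CAP criteria the abstract alludes to; the thresholds are the general guideline values, and this is not the paper's decision logic.

```python
# Simplified HER2 IHC scoring rules (illustrative, not the paper's model).

def her2_ihc_score(pct_complete_intense, pct_complete_weak_moderate,
                   pct_incomplete_faint):
    """Percentages are of tumor cells with each membrane-staining pattern."""
    if pct_complete_intense > 10:
        return "3+"   # complete, intense circumferential membrane staining
    if pct_complete_weak_moderate > 10 or 0 < pct_complete_intense <= 10:
        return "2+"   # weak/moderate complete staining (equivocal)
    if pct_incomplete_faint > 10:
        return "1+"   # faint, incomplete membrane staining
    return "0"

print(her2_ihc_score(15, 0, 0))   # -> 3+
print(her2_ihc_score(0, 40, 0))   # -> 2+
```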
45

Bhatt, Anant R., Amit Ganatra, and Ketan Kotecha. "Cervical cancer detection in pap smear whole slide images using convNet with transfer learning and progressive resizing." PeerJ Computer Science 7 (February 18, 2021): e348. http://dx.doi.org/10.7717/peerj-cs.348.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Cervical intraepithelial neoplasia (CIN) and cervical cancer are major health problems faced by women worldwide. Conventional Papanicolaou (Pap) smear analysis is an effective method to diagnose cervical pre-malignant and malignant conditions by analyzing swab images. Various computer vision techniques can be explored to identify potential precancerous and cancerous lesions by analyzing the Pap smear image. The majority of existing work covers binary classification approaches using various classifiers and convolutional neural networks. However, these suffer from inherent challenges in minute feature extraction and precise classification. We propose a novel methodology to carry out multiclass classification of cervical cells from Whole Slide Images (WSI) with optimal feature extraction. Implementing a ConvNet with transfer learning enables meaningful diagnosis of neoplastic and pre-neoplastic lesions. Because progressive resizing (an advanced method for training ConvNets) incorporates prior knowledge of the feature hierarchy and can reuse old computations while learning new ones, the model can carry forward the extracted morphological cell features to subsequent layers iteratively. Combining progressive resizing with transfer learning while training the ConvNet models yielded a substantial performance increase. The proposed binary and multiclass classification methodology achieved benchmark scores on the Herlev dataset. We achieved singular multiclass classification scores for WSI images of the SIPaKMeD dataset, namely accuracy (99.70%), precision (99.70%), recall (99.72%), F-beta (99.63%), and Kappa (99.31%), which surpass the scores obtained by the principal existing methodologies. Grad-CAM-based feature interpretation aids understanding of the generated results by visually localizing the pre-malignant and malignant lesions in the images.
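
Progressive resizing, as used above, fine-tunes the same network in stages on increasingly large inputs so that each stage inherits the previous stage's weights. A toy sketch with a ResNet-18 stand-in and random data; the backbone, stage sizes, and loader are assumptions, not the paper's pipeline:

```python
# Toy progressive-resizing loop (backbone and data are placeholders).
import torch
import torch.nn as nn
from torchvision import models

def toy_loader(image_size, n_batches=4, batch_size=4, n_classes=5):
    # Stand-in for a real DataLoader that resizes inputs per stage.
    for _ in range(n_batches):
        yield (torch.randn(batch_size, 3, image_size, image_size),
               torch.randint(0, n_classes, (batch_size,)))

model = models.resnet18(weights="DEFAULT")        # transfer-learning start
model.fc = nn.Linear(model.fc.in_features, 5)     # e.g. 5 cell classes
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for image_size in (128, 224, 320):                # stages reuse the weights
    for images, labels in toy_loader(image_size):
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"finished stage at {image_size}px, last loss {loss.item():.3f}")
```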
46

Cho, Joonyoung, Tae-Yeong Kwak, Sun Woo Kim, and Hyeyoon Chang. "Abstract 5056: Automated Gleason grading of digitized frozen section prostate tissue slide images." Cancer Research 82, no. 12_Supplement (June 15, 2022): 5056. http://dx.doi.org/10.1158/1538-7445.am2022-5056.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Frozen sections are used in intraoperative consultations that need rapid microscopic analysis and in certain pathological procedures that require fresh tissue. Due to the various artifacts present on frozen section tissue slides, it is difficult to analyze them automatically. We attempted automated analysis of frozen sections. Two types of deep learning segmentation models for detecting Gleason patterns were used. Both models were trained on hematoxylin and eosin (H&E) stained, formalin-fixed, paraffin-embedded prostate tissue slide images. Each slide image was reviewed by an experienced pathologist, and all prostate cancer lesions were annotated with corresponding Gleason grades. One model was trained with H&E stained images (RS model), and the other was trained with hematoxylin-only (H-only) stained images (H-only model). A color deconvolution method was used to generate H-only stained images from the original H&E stained images. For evaluation, frozen section prostate cancer slide images from The Cancer Genome Atlas (TCGA) were used. To diagnose a slide, we split whole slide images (WSI) into patches and analyzed those patches to segment the Gleason pattern area. We then reconstructed the patch results as a full-size heatmap and counted the Gleason patterns to assign the slide a prostate grade group. In detecting malignant cases, the RS model achieved a sensitivity of 98% and the H-only model 97%. In detecting clinically significant risk cases (grade group 2 or over), sensitivities of 99% and 96% were achieved, respectively. Most of the errors were false positives. In particular, the RS model produced false positives mainly because of ice crystal artifacts, while the H-only model produced many false positives on folded tissue and areas suspected of being prostatic intraepithelial neoplasia (PIN). False negatives were common around compression artifacts for both models. We confirmed that both models showed high performance in cancer detection. In addition, they showed high accuracy in classifying high-risk and low-risk cancer groups. Further performance improvement can be expected with additional training data related to artifacts or PIN. Citation Format: Joonyoung Cho, Tae-Yeong Kwak, Sun Woo Kim, Hyeyoon Chang. Automated Gleason grading of digitized frozen section prostate tissue slide images [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2022; 2022 Apr 8-13. Philadelphia (PA): AACR; Cancer Res 2022;82(12_Suppl):Abstract nr 5056.
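
The color deconvolution step above, deriving a hematoxylin-only image from an H&E patch, can be done with scikit-image's HED stain separation. A minimal sketch; the file name is a placeholder:

```python
# Hematoxylin-only image via HED color deconvolution (file is a placeholder).
import numpy as np
from skimage import io
from skimage.color import rgb2hed, hed2rgb

rgb = io.imread("he_patch.png")[..., :3] / 255.0   # H&E-stained patch
hed = rgb2hed(rgb)                                 # split H, E, DAB channels

h_only = np.zeros_like(hed)
h_only[..., 0] = hed[..., 0]                       # keep only hematoxylin
h_only_rgb = hed2rgb(h_only)                       # back to RGB for training

io.imsave("h_only_patch.png",
          (np.clip(h_only_rgb, 0, 1) * 255).astype(np.uint8))
```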
47

Wang, Pin, Pufei Li, Yongming Li, Jin Xu, and Mingfeng Jiang. "Classification of histopathological whole slide images based on multiple weighted semi-supervised domain adaptation." Biomedical Signal Processing and Control 73 (March 2022): 103400. http://dx.doi.org/10.1016/j.bspc.2021.103400.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Kanavati, Fahdi, Shin Ichihara, Michael Rambeau, Osamu Iizuka, Koji Arihiro, and Masayuki Tsuneki. "Deep Learning Models for Gastric Signet Ring Cell Carcinoma Classification in Whole Slide Images." Technology in Cancer Research & Treatment 20 (January 1, 2021): 153303382110279. http://dx.doi.org/10.1177/15330338211027901.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Signet ring cell carcinoma (SRCC) of the stomach is a rare type of cancer with a slowly rising incidence. It tends to be more difficult for pathologists to detect, mainly due to its cellular morphology and diffuse manner of invasion, and it has a poor prognosis when detected at an advanced stage. Computational pathology tools that can assist pathologists in detecting SRCC would be of massive benefit. In this paper, we trained deep learning models using transfer learning, fully-supervised learning, and weakly-supervised learning to predict SRCC in Whole Slide Images (WSIs) using a training set of 1,765 WSIs. We evaluated the models on two different test sets (n = 999, n = 455). The best model achieved a ROC-AUC of at least 0.99 on both test sets, setting a top baseline performance for SRCC WSI classification.
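
Under weak supervision, only slide-level labels exist, so patch predictions must be aggregated before computing slide-level ROC-AUC. One common aggregation is max-pooling over patch probabilities; this sketch uses placeholder arrays and is not necessarily the aggregation the authors used.

```python
# Slide-level ROC-AUC from patch probabilities via max-pooling aggregation.
import numpy as np
from sklearn.metrics import roc_auc_score

def slide_score(patch_probs):
    # One strongly positive patch is enough to flag the whole slide.
    return float(np.max(patch_probs))

patch_probs_per_slide = [
    np.array([0.10, 0.05, 0.92]),   # slide with one suspicious patch
    np.array([0.20, 0.15, 0.10]),
    np.array([0.70, 0.88, 0.40]),
    np.array([0.05, 0.02, 0.10]),
]
slide_labels = [1, 0, 1, 0]         # slide-level SRCC ground truth
scores = [slide_score(p) for p in patch_probs_per_slide]
print("slide-level ROC-AUC:", roc_auc_score(slide_labels, scores))
```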
49

Hart, Steven N., William Flotte, Andrew P. Norgan, Kabeer K. Shah, Zachary R. Buchan, Taofic Mounajjed, and Thomas J. Flotte. "Classification of melanocytic lesions in selected and whole-slide images via convolutional neural networks." Journal of Pathology Informatics 10, no. 1 (2019): 5. http://dx.doi.org/10.4103/jpi.jpi_32_18.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Mukashyaka, Patience, Todd B. Sheridan, Ali Foroughi pour, and Jeffrey H. Chuang. "Abstract B039: SAMPLER: Unsupervised representations of whole slide images for tumor phenotype prediction." Cancer Research 84, no. 3_Supplement_2 (February 1, 2024): B039. http://dx.doi.org/10.1158/1538-7445.canevol23-b039.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Deep learning has revolutionized digital pathology, allowing automatic analysis of hematoxylin and eosin (H&E) stained whole slide images (WSIs) for various tasks, such as the classification of cancers into clinical subtypes. In such analyses, WSIs are broken down into smaller images, hereafter called tiles, for efficient processing, with each tile encoded by a deep learning backbone. To reconstruct a slide-level representation, a widely applied approach is to combine tile-level features using attention-based deep learning models for each downstream prediction task. These training strategies are (a) computationally intensive, (b) challenging to optimize, (c) sensitive to stain variations across datasets, and (d) dependent on sufficiently large and labeled datasets for supervised training, which are not always available. We propose SAMpling of multiscale empirical distributions for LEarning Representations (SAMPLER), a fully statistical approach to generate WSI representations by encoding the empirical cumulative distribution function (CDF) of multiscale tile features. We evaluated this approach by training logistic regression classifiers on SAMPLER representations. SAMPLER-based classifiers were able to accurately separate subtypes of breast carcinoma (BRCA: AUC = 0.911 ± 0.029), non-small cell lung carcinoma (NSCLC: AUC = 0.940 ± 0.018), and renal cell carcinoma (RCC: AUC = 0.987 ± 0.006) on diagnostic slides from The Cancer Genome Atlas (TCGA). Performance was similar to fully deep learning attention models but more than 100 times faster. We further validated our models on external test sets. Histopathological review confirms that SAMPLER-identified high-attention tiles contain tumor morphological features specific to the tumor type, while low-attention tiles contain fibrous stroma, blood, or tissue-folding artifacts. SAMPLER is a fast and accurate approach for analyzing WSIs, with greatly improved scalability over attention methods, to the benefit of digital pathology analysis. Citation Format: Patience Mukashyaka, Todd B. Sheridan, Ali Foroughi pour, Jeffrey H. Chuang. SAMPLER: Unsupervised representations of whole slide images for tumor phenotype prediction [abstract]. In: Proceedings of the AACR Special Conference in Cancer Research: Translating Cancer Evolution and Data Science: The Next Frontier; 2023 Dec 3-6; Boston, Massachusetts. Philadelphia (PA): AACR; Cancer Res 2024;84(3 Suppl_2):Abstract nr B039.
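
The CDF-encoding idea can be sketched by summarizing each slide's tile-feature distribution with per-dimension empirical quantiles (samples of the inverse CDF) and fitting a logistic regression on the resulting fixed-length vectors. The quantile grid, feature dimensionality, and toy data below are assumptions, not SAMPLER's exact configuration.

```python
# CDF-style slide encoding + logistic regression (toy data, illustrative).
import numpy as np
from sklearn.linear_model import LogisticRegression

def encode_slide(tile_features, quantiles=np.linspace(0.05, 0.95, 10)):
    # tile_features: (n_tiles, d) backbone embeddings for one slide.
    # Per-dimension empirical quantiles summarize the feature CDF.
    return np.quantile(tile_features, quantiles, axis=0).ravel()  # (10*d,)

rng = np.random.default_rng(0)
labels = np.array([0, 1] * 20)
slides = [rng.normal(loc=y, size=(rng.integers(50, 200), 32)) for y in labels]

X = np.stack([encode_slide(s) for s in slides])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```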
