Academic literature on the topic 'Whole slide image classification'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Whole slide image classification.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Whole slide image classification"

1

Feng, Ming, Kele Xu, Nanhui Wu, Weiquan Huang, Yan Bai, Yin Wang, Changjian Wang, and Huaimin Wang. "Trusted multi-scale classification framework for whole slide image." Biomedical Signal Processing and Control 89 (March 2024): 105790. http://dx.doi.org/10.1016/j.bspc.2023.105790.

2

Fridman, M. V., A. A. Kosareva, E. V. Snezhko, P. V. Kamlach, and V. A. Kovalev. "Papillary thyroid carcinoma whole-slide images as a basis for deep learning." Informatics 20, no. 2 (June 29, 2023): 28–38. http://dx.doi.org/10.37661/1816-0301-2023-20-2-28-38.

Abstract:
Objectives. Morphological analysis of papillary thyroid cancer is a cornerstone for further treatment planning. Traditional and neural-network methods of extracting parts of images are used to automate the analysis. To develop a system for locating similar anatomical regions in histopathological images, a dataset for training neural networks must be prepared. The authors discuss the selection of features for annotating histological images, methodological approaches to dissecting whole-slide images, and how to prepare raw data for further analysis. The influence of the representative fragment size of a papillary thyroid cancer whole-slide image on the classification accuracy of the trained neural network EfficientNetB0 is investigated. The results are analyzed, and the weaknesses of using image fragments of different representative sizes, as well as the causes of unsatisfactory classification accuracy at high magnification, are evaluated. Materials and methods. Histopathological whole-slide images of 129 patients were used. Histological micropreparations containing tumor elements and surrounding tissue were scanned on an Aperio AT2 scanner (Leica Biosystems, Germany) at maximum resolution. Annotation was carried out in the ASAP software package. To choose the optimal representative fragment size, a classification problem was solved using the pretrained neural network EfficientNetB0. Results. A methodology for preparing a database of histopathological images of papillary thyroid cancer was proposed. Experiments were conducted to determine the optimal representative size of the image fragment. The best classification accuracy on the test sample was achieved with a representative fragment size of 394.32×394.32 microns. Conclusion. The analysis of the influence of representative fragment sizes of histopathological images revealed problems in solving classification tasks caused by the specifics of cutting and staining, and by the morphological complexity and textural differences between images of the same class. It was also determined that preparing a dataset for training a neural network to find vessel invasion in a histopathological image is not trivial and requires additional stages of data preparation.
3

Zarella, Mark D., Matthew R. Quaschnick, David E. Breen, and Fernando U. Garcia. "Estimation of Fine-Scale Histologic Features at Low Magnification." Archives of Pathology & Laboratory Medicine 142, no. 11 (June 18, 2018): 1394–402. http://dx.doi.org/10.5858/arpa.2017-0380-oa.

Abstract:
Context.— Whole-slide imaging has ushered in a new era of technology that has fostered the use of computational image analysis for diagnostic support and has begun to transfer the act of analyzing a slide to computer monitors. Due to the overwhelming amount of detail available in whole-slide images, analytic procedures—whether computational or visual—often operate at magnifications lower than the magnification at which the image was acquired. As a result, a corresponding reduction in image resolution occurs. It is unclear how much information is lost when magnification is reduced, and whether the rich color attributes of histologic slides can aid in reconstructing some of that information. Objective.— To examine the correspondence between the color and spatial properties of whole-slide images to elucidate the impact of resolution reduction on the histologic attributes of the slide. Design.— We simulated image resolution reduction and modeled its effect on classification of the underlying histologic structure. By harnessing measured histologic features and the intrinsic spatial relationships between histologic structures, we developed a predictive model to estimate the histologic composition of tissue in a manner that exceeds the resolution of the image. Results.— Reduction in resolution resulted in a significant loss of the ability to accurately characterize histologic components at magnifications less than ×10. By utilizing pixel color, this ability was improved at all magnifications. Conclusions.— Multiscale analysis of histologic images requires an adequate understanding of the limitations imposed by image resolution. Our findings suggest that some of these limitations may be overcome with computational modeling.
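The resolution reduction simulated in this study amounts to averaging blocks of pixels, as if the slide had been acquired at a lower magnification. A minimal NumPy sketch of that idea (the factor-of-4 downsampling and the toy image are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def downsample(image: np.ndarray, factor: int) -> np.ndarray:
    """Simulate acquisition at lower magnification by averaging
    non-overlapping factor x factor pixel blocks."""
    h, w = image.shape[:2]
    h, w = h - h % factor, w - w % factor  # crop to a multiple of factor
    blocks = image[:h, :w].reshape(h // factor, factor, w // factor, factor, -1)
    return blocks.mean(axis=(1, 3))

# A hypothetical 8x8 single-channel tile downsampled 4x -> 2x2
tile = np.arange(64, dtype=float).reshape(8, 8, 1)
small = downsample(tile, 4)
print(small.shape)  # (2, 2, 1)
```

Each output pixel is the mean of a 4×4 block, discarding exactly the fine spatial detail the study quantifies, while color (channel) information is preserved.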
4

Chen, Kaitao, Shiliang Sun, and Jing Zhao. "CaMIL: Causal Multiple Instance Learning for Whole Slide Image Classification." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 2 (March 24, 2024): 1120–28. http://dx.doi.org/10.1609/aaai.v38i2.27873.

Abstract:
Whole slide image (WSI) classification is a crucial component in automated pathology analysis. Due to the inherent challenges of high-resolution WSIs and the absence of patch-level labels, most of the proposed methods follow the multiple instance learning (MIL) formulation. While MIL has been equipped with excellent instance feature extractors and aggregators, it is prone to learn spurious associations that undermine the performance of the model. For example, relying solely on color features may lead to erroneous diagnoses due to spurious associations between the disease and the color of patches. To address this issue, we develop a causal MIL framework for WSI classification, effectively distinguishing between causal and spurious associations. Specifically, we use the expectation of the intervention P(Y | do(X)) for bag prediction rather than the traditional likelihood P(Y | X). By applying the front-door adjustment, the spurious association is effectively blocked, where the intervened mediator is aggregated from patch-level features. We evaluate our proposed method on two publicly available WSI datasets, Camelyon16 and TCGA-NSCLC. Our causal MIL framework shows outstanding performance and is plug-and-play, seamlessly integrating with various feature extractors and aggregators.
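The MIL formulation this abstract builds on can be sketched as attention-weighted aggregation of patch features into a single bag prediction. The following is a minimal NumPy illustration only; random weights stand in for the trained feature extractor and aggregator, and the paper's causal front-door adjustment is not reproduced here:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def mil_bag_score(patch_feats: np.ndarray, w_attn: np.ndarray, w_cls: np.ndarray) -> float:
    """Attention-based MIL: weight each patch, pool into a bag
    embedding, then classify the bag with a sigmoid."""
    attn = softmax(patch_feats @ w_attn)          # one attention weight per patch
    bag = attn @ patch_feats                      # weighted average = bag embedding
    return float(1 / (1 + np.exp(-bag @ w_cls)))  # bag-level probability

rng = np.random.default_rng(0)
feats = rng.normal(size=(5, 16))  # 5 patches, 16-d features (hypothetical)
p = mil_bag_score(feats, rng.normal(size=16), rng.normal(size=16))
print(0.0 < p < 1.0)  # True
```

Frameworks like the one described are "plug-and-play" precisely because the attention module and classifier above can be swapped for arbitrary extractors and aggregators.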
5

Lewis, Joshua, Conrad Shebelut, Bradley Drumheller, Xuebao Zhang, Nithya Shanmugam, Michel Attieh, Michael Horwath, Anurag Khanna, Geoffrey Smith, and David Gutman. "An Automated Pipeline for Cell Differentials on Whole-Slide Bone Marrow Aspirate Smears." American Journal of Clinical Pathology 158, Supplement_1 (November 1, 2022): S12. http://dx.doi.org/10.1093/ajcp/aqac126.020.

Abstract:
Abstract Current pathologic diagnosis of benign and neoplastic bone marrow disorders relies in part on the microscopic analysis of bone marrow aspirate (BMA) smears and manual counting of nucleated cell populations to obtain a cell differential. This manual process has significant limitations, including the limited sample of cells analyzed by a conventional 500-cell differential compared to the thousands of nucleated cells present, as well as the inter-observer variability seen between differentials on single samples due to differences in cell selection and classification. To address these shortcomings, we developed an automated computational platform for obtaining cell differentials from scanned whole-slide BMAs at 40x magnification. This pipeline utilizes a sequential process of identifying BMA regions with high proportions of marrow nucleated cells that are ideal for cell counting, detecting individual cells within these optimal regions, and classifying cells into one of 11 types within the differential. Training of convolutional neural network models for region and cell classification, as well as a region-based convolutional neural network for cell detection, involved the generation of an annotated training data set containing 10,948 BMA regions, 28,914 cell boundaries, and 23,609 cell classifications from 73 BMA slides. Among 44 testing BMA slides, an average of 19,209 viable cells per slide were identified and used in automated cell differentials, with a range of 237 to 126,483 cells. In comparing these automated cell differential percentages with corresponding manual differentials, cell type-specific correlation coefficients ranged from 0.913 for blast cells to 0.365 for myelocytes, with an average coefficient of 0.654 among all 11 cell types. 
A statistically significant concordance was observed among slides with blast percentages less or greater than 20% (p=1.0x10-5) and with plasma cell percentages less or greater than 10% (p=5.9x10-6) between automated and manual differentials, suggesting potential diagnostic utility of this automated pipeline for malignancies such as acute myeloid leukemia and multiple myeloma. Additionally, by simulating the manual counting of 500 cells within localized areas of a BMA slide and iterating over all optimal slide locations, we quantified the inter-observer variability associated with limited sample size in traditional BMA cell counting. Localized differentials exemplify an average variance ranging from 24.1% for erythroid precursors to 1.8% for basophils. Variance in localized differentials of up to 44.8% for blast cells and 36.9% for plasma cells was observed, demonstrating that sample classification based on diagnostic thresholds of cell populations is variable even between different areas within a single slide. Finally, pipeline outputs of region classification, cell detection, cell classification, and localized cell differentials can be visualized using whole-slide image analysis software. By improving cell sampling and reducing inter-observer variability, this automated pipeline has potential to improve the current standard of practice for utilizing BMA smears in the diagnosis of hematologic disorders.
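The final differential step described above reduces to normalizing per-class cell counts over all detected cells. A trivial sketch with hypothetical counts (not the paper's data):

```python
def differential(counts: dict) -> dict:
    """Convert per-class nucleated cell counts into differential percentages."""
    total = sum(counts.values())
    return {cell: 100 * n / total for cell, n in counts.items()}

# Hypothetical counts for three of the 11 cell types
d = differential({"blast": 50, "myelocyte": 150, "erythroid": 300})
print(d["blast"])  # 10.0
```

The pipeline's advantage is that `counts` comes from ~19,000 automatically detected cells per slide rather than a manual 500-cell sample, shrinking the sampling variance the abstract quantifies.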
6

Ahmed, Shakil, Asadullah Shaikh, Hani Alshahrani, Abdullah Alghamdi, Mesfer Alrizq, Junaid Baber, and Maheen Bakhtyar. "Transfer Learning Approach for Classification of Histopathology Whole Slide Images." Sensors 21, no. 16 (August 9, 2021): 5361. http://dx.doi.org/10.3390/s21165361.

Abstract:
The classification of whole slide images (WSIs) provides physicians with an accurate analysis of diseases and also helps them to treat patients effectively. The classification can be linked to further detailed analysis and diagnosis. Deep learning (DL) has made significant advances in the medical industry, including the use of magnetic resonance imaging (MRI) scans, computerized tomography (CT) scans, and electrocardiograms (ECGs) to detect life-threatening diseases, including heart disease, cancer, and brain tumors. However, more advancement in the field of pathology is needed, but the main hurdle causing the slow progress is the shortage of large-labeled datasets of histopathology images to train the models. The Kimia Path24 dataset was particularly created for the classification and retrieval of histopathology images. It contains 23,916 histopathology patches with 24 tissue texture classes. A transfer learning-based framework is proposed and evaluated on two famous DL models, Inception-V3 and VGG-16. To improve the productivity of Inception-V3 and VGG-16, we used their pre-trained weights and concatenated these with an image vector, which is used as input for the training of the same architecture. Experiments show that the proposed innovation improves the accuracy of both famous models. The patch-to-scan accuracy of VGG-16 is improved from 0.65 to 0.77, and for the Inception-V3, it is improved from 0.74 to 0.79.
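The concatenation step described here (pretrained-backbone features joined with a raw image vector to form the classifier input) can be sketched as follows; the dimensions and stand-in feature extractor are hypothetical, not the paper's actual Inception-V3/VGG-16 configuration:

```python
import numpy as np

def build_input(image: np.ndarray, backbone_features: np.ndarray) -> np.ndarray:
    """Concatenate features from a pretrained backbone
    (e.g. Inception-V3 or VGG-16) with a flattened image vector."""
    return np.concatenate([backbone_features.ravel(), image.ravel()])

patch = np.zeros((4, 4, 3))  # tiny stand-in for a histopathology patch
feats = np.ones(8)           # stand-in for pretrained-backbone features
x = build_input(patch, feats)
print(x.shape)  # (56,)
```

The intuition is that the pretrained weights contribute generic texture features while the raw image vector lets the network retrained on Kimia Path24 recover dataset-specific detail.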
7

Franklin, Daniel L., Tara Pattilachan, and Anthony Magliocco. "Abstract 5048: Imaging based EGFR mutation subtype classification using EfficientNet." Cancer Research 82, no. 12_Supplement (June 15, 2022): 5048. http://dx.doi.org/10.1158/1538-7445.am2022-5048.

Abstract:
Abstract This study aimed to determine whether EfficientNet-B0 was able to classify EGFR mutation subtypes with H&E stained whole slide images of lung and lymph node tissue. Background: Non-small cell lung cancer (NSCLC) accounts for the majority of all lung adenocarcinomas, with estimates that up to a third of such cases have a mutation in their epidermal growth factor receptor (EGFR). EGFR mutations can occur in various subtypes, such as Exon19 deletion, and L858R substitution, which are important for early therapy decisions. Here, we propose a deep learning approach for detecting and classifying EGFR mutation subtypes, which will greatly reduce the cost of determining mutation status, allowing for testing in a low resource setting. Methods: An EfficientNet-B0 model was trained with whole slide images of lung tissue or metastatic lymph nodes with known EGFR mutation subtype (wild type, exon19 deletion or L858R substitution). Regions of interest were tiled into 512x512 pixel images. The RGB .jpeg tiles are augmented by rotating 90°, 180°, 270°, and mirroring. The model was initialized with random parameters and trained with a batch size of 32, a learning rate of 0.0001 for 1 epoch before the validation loss increased for the next 5 epochs. Results: The model achieved a slide AUC of 0.8333, and a tile AUC of 0.8010. Slide AUC is the result of averaging all tiles within a slide and measuring performance based on correctly predicted slides (n=18). Tile AUC is the result of measuring performance based on correctly predicted tiles (n=102,000). Conclusion: Using EfficientNet-B0 architecture as the basis for our EGFR mutation classification system, we were able to create a top performing model and achieve a slide AUC of 0.833 and tile AUC of 0.801. Healthcare providers and researchers may utilize this AI model in clinical settings to allow for detection of EGFR mutation from routinely captured images and bypass expensive and time consuming sequencing methods. Table 1. 
Number of image tiles used and the number of slides they were extracted from.

                    Train     Validation  Test
  Exon19 tiles      187,384   47,904      33,096
  L858R tiles       166,288   19,512      26,136
  Wild type tiles   225,944   27,696      42,768
  Exon19 slides     47        6           6
  L858R slides      46        6           6
  Wild type slides  43        6           6

Citation Format: Daniel L. Franklin, Tara Pattilachan, Anthony Magliocco. Imaging based EGFR mutation subtype classification using EfficientNet [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2022; 2022 Apr 8-13. Philadelphia (PA): AACR; Cancer Res 2022;82(12_Suppl):Abstract nr 5048.
8

Jansen, Philipp, Adelaida Creosteanu, Viktor Matyas, Amrei Dilling, Ana Pina, Andrea Saggini, Tobias Schimming, et al. "Deep Learning Assisted Diagnosis of Onychomycosis on Whole-Slide Images." Journal of Fungi 8, no. 9 (August 28, 2022): 912. http://dx.doi.org/10.3390/jof8090912.

Abstract:
Background: Onychomycosis numbers among the most common fungal infections in humans affecting finger- or toenails. Histology remains a frequently applied screening technique to diagnose onychomycosis. Screening slides for fungal elements can be time-consuming for pathologists, and sensitivity in cases with low amounts of fungi remains a concern. Convolutional neural networks (CNNs) have revolutionized image classification in recent years. The goal of our project was to evaluate if a U-NET-based segmentation approach as a subcategory of CNNs can be applied to detect fungal elements on digitized histologic sections of human nail specimens and to compare it with the performance of 11 board-certified dermatopathologists. Methods: In total, 664 corresponding H&E- and PAS-stained histologic whole-slide images (WSIs) of human nail plates from four different laboratories were digitized. Histologic structures were manually annotated. A U-NET image segmentation model was trained for binary segmentation on the dataset generated by annotated slides. Results: The U-NET algorithm detected 90.5% of WSIs with fungi, demonstrating a comparable sensitivity with that of the 11 board-certified dermatopathologists (sensitivity of 89.2%). Conclusions: Our results demonstrate that machine-learning-based algorithms applied to real-world clinical cases can produce comparable sensitivities to human pathologists. Our established U-NET may be used as a supportive diagnostic tool to preselect possible slides with fungal elements. Slides where fungal elements are indicated by our U-NET should be reevaluated by the pathologist to confirm or refute the diagnosis of onychomycosis.
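The sensitivity figures compared above (90.5% for the U-NET vs. 89.2% for the dermatopathologists) follow the standard definition: the fraction of fungus-containing slides that are flagged. A one-line sketch, using hypothetical counts chosen only to reproduce the reported rate:

```python
def sensitivity(true_positive: int, false_negative: int) -> float:
    """Fraction of positive (fungus-containing) slides that the screen flags."""
    return true_positive / (true_positive + false_negative)

# Hypothetical: 181 of 200 fungus-positive WSIs detected
print(sensitivity(181, 19))  # 0.905
```

For a preselection tool of this kind, sensitivity is the operative metric, since missed slides (false negatives) are never re-reviewed by the pathologist.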
9

Amgad, Mohamed, Habiba Elfandy, Hagar Hussein, Lamees A. Atteya, Mai A. T. Elsebaie, Lamia S. Abo Elnasr, Rokia A. Sakr, et al. "Structured crowdsourcing enables convolutional segmentation of histology images." Bioinformatics 35, no. 18 (February 6, 2019): 3461–67. http://dx.doi.org/10.1093/bioinformatics/btz083.

Abstract:
Abstract Motivation While deep-learning algorithms have demonstrated outstanding performance in semantic image segmentation tasks, large annotation datasets are needed to create accurate models. Annotation of histology images is challenging due to the effort and experience required to carefully delineate tissue structures, and difficulties related to sharing and markup of whole-slide images. Results We recruited 25 participants, ranging in experience from senior pathologists to medical students, to delineate tissue regions in 151 breast cancer slides using the Digital Slide Archive. Inter-participant discordance was systematically evaluated, revealing low discordance for tumor and stroma, and higher discordance for more subjectively defined or rare tissue classes. Feedback provided by senior participants enabled the generation and curation of 20 000+ annotated tissue regions. Fully convolutional networks trained using these annotations were highly accurate (mean AUC=0.945), and the scale of annotation data provided notable improvements in image classification accuracy. Availability and Implementation Dataset is freely available at: https://goo.gl/cNM4EL. Supplementary information Supplementary data are available at Bioinformatics online.
10

Wang, Qian, Ying Zou, Jianxin Zhang, and Bin Liu. "Second-order multi-instance learning model for whole slide image classification." Physics in Medicine & Biology 66, no. 14 (July 12, 2021): 145006. http://dx.doi.org/10.1088/1361-6560/ac0f30.


Dissertations / Theses on the topic "Whole slide image classification"

1

Lerousseau, Marvin. "Weakly Supervised Segmentation and Context-Aware Classification in Computational Pathology." Electronic Thesis or Diss., université Paris-Saclay, 2022. http://www.theses.fr/2022UPASG015.

Abstract:
Anatomic pathology is the medical discipline responsible for the diagnosis and characterization of diseases through the macroscopic, microscopic, molecular and immunologic inspection of tissues. Modern technologies have made it possible to digitize tissue glass slides into whole slide images, which can themselves be processed by artificial intelligence to enhance the capabilities of pathologists. This thesis presents several novel and powerful approaches that tackle pan-cancer segmentation and classification of whole slide images. Learning segmentation models for whole slide images is challenged by an annotation bottleneck which arises from (i) a shortage of pathologists, (ii) a tedious and time-consuming annotation process, and (iii) major inter-annotator discrepancies. My first line of work tackled pan-cancer tumor segmentation by designing two novel state-of-the-art weakly supervised approaches that exploit slide-level annotations, which are fast and easy to obtain. In particular, my second segmentation contribution was a generic and highly powerful algorithm that leverages per-slide tumor-percentage annotations, without needing any pixel-based annotation. Extensive large-scale experiments showed the superiority of my approaches over weakly supervised and supervised methods for pan-cancer tumor segmentation on a dataset of more than 15,000 unfiltered and extremely challenging whole slide images from snap-frozen tissues. My results also indicated the robustness of my approaches to noise and systemic biases in annotations. Whole slide images are difficult to classify due to their colossal sizes, which range from millions of pixels to billions of pixels, often weighing more than 500 megabytes. The straightforward use of traditional computer vision is therefore not possible, prompting the use of multiple instance learning, a machine learning paradigm that treats a whole slide image as a set of patches uniformly sampled from it.
Prior to my work, the great majority of multiple instance learning approaches treated patches as independently and identically sampled, i.e. discarded the spatial relationships between patches extracted from a whole slide image. Some approaches exploited this spatial interconnection by leveraging graph-based models, although the true domain of whole slide images is specifically the image domain, which is better suited to convolutional neural networks. I designed a highly powerful and modular multiple instance learning framework that leverages the spatial relationships between patches extracted from a whole slide image by building a sparse map from the patch embeddings, which is then processed into a whole-slide-image embedding by a sparse-input convolutional neural network, before being classified by a generic classifier model. My framework essentially bridges the gap between multiple instance learning and fully convolutional classification. I performed extensive experiments on three whole slide image classification tasks, including the pathologist's quintessential task of tumor subtyping, on a dataset of more than 20,000 whole slide images from public data. Results highlighted the superiority of my approach over all other widespread multiple instance learning methods. Furthermore, while my experiments only investigated sparse-input convolutional neural networks with two convolutional layers, the results showed that my framework works better as the number of parameters increases, suggesting that more sophisticated convolutional neural networks can easily obtain superior results.
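The sparse-map construction described in this abstract (placing each patch embedding at its grid coordinate so a sparse-input CNN can exploit spatial layout) can be sketched as follows; the grid size, coordinates, and embeddings are hypothetical:

```python
import numpy as np

def build_sparse_map(coords, embeddings, grid_h, grid_w):
    """Place each patch embedding at its (row, col) grid position;
    cells with no sampled patch stay zero, yielding the sparse WSI map."""
    dim = embeddings.shape[1]
    grid = np.zeros((grid_h, grid_w, dim))
    for (r, c), e in zip(coords, embeddings):
        grid[r, c] = e
    return grid

emb = np.ones((3, 4))  # 3 patches, 4-d embeddings (stand-ins)
grid = build_sparse_map([(0, 0), (1, 2), (2, 1)], emb, 3, 3)
print(grid.sum())  # 12.0 -- only the three occupied cells are non-zero
```

A dense CNN applied to `grid` would waste computation on empty cells; a sparse-input convolutional network, as described in the thesis, operates only on the occupied sites.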
2

Zaidi, Fatima. "Deep learning-based scale-invariant cancer detection from whole slide image." Masters by Research thesis, Murdoch University, 2021. https://researchrepository.murdoch.edu.au/id/eprint/63326/.

Abstract:
Conventional cancer diagnosis methods for whole slide images (WSI) train a deep Convolutional Neural Network (CNN) to make patch-level predictions, and then aggregate these into an image-level prediction to classify a tumour as either benign or malignant. To classify a patch, the CNN extracts features through convolutional layers and then processes the feature maps using fully connected layers. The size of the filters used in the convolutional layers defines the receptive field of the network. Small filters are computationally efficient but do not capture a large context. On the other hand, large filters allow learning features that capture a larger context but are very expensive both in terms of computational time and memory requirements. This thesis focuses on two main challenges. The first is how to incorporate a large context while minimizing the computational overhead. The second is that cancerous cells can be of arbitrary size, and thus any detection and recognition approach should be scale-invariant. We introduce the Dilated SPP VGG-16 network, with different dilation rates applied to every block of the VGG-16 network. The proposed dilated SPP VGG-16 architecture increases the receptive field of the network without increasing the filter size or the depth of the network, and is thus very efficient to train. It also enables multiscale analysis without changing the architecture of the network and retraining. We tested the proposed approach on the publicly available Camelyon17 dataset. Our experiments show that the proposed CNN achieves comparable or better accuracy than a conventional deep learning method, but with significantly less computational time and memory requirements.
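The receptive-field growth from dilation that motivates this architecture can be illustrated with the standard recurrence r += (k − 1)·d·j, j *= s per layer. A small sketch; the kernel sizes and dilation rates below are illustrative, not the thesis's exact configuration:

```python
def receptive_field(layers):
    """layers: list of (kernel_size, stride, dilation) tuples.
    Returns the receptive field after the final layer, using
    r += (k - 1) * d * j; j *= s at each layer."""
    r, j = 1, 1
    for k, s, d in layers:
        r += (k - 1) * d * j
        j *= s
    return r

plain   = receptive_field([(3, 1, 1)] * 3)                 # three ordinary 3x3 convs
dilated = receptive_field([(3, 1, 1), (3, 1, 2), (3, 1, 4)])  # dilation rates 1, 2, 4
print(plain, dilated)  # 7 15
```

Dilation more than doubles the context seen by the same three 3×3 filters, at no extra cost in parameters or depth, which is exactly the trade-off the abstract argues for.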
3

Дяченко, Є. В. "Інформаційна технологія розпізнавання онкопатологій на повнослайдових гістологічних зображеннях" [Information technology for recognizing oncopathologies in whole-slide histological images]. Master's thesis, Сумський державний університет, 2020. https://essuir.sumdu.edu.ua/handle/123456789/78594.

Abstract:
An analysis of the metadata of whole-slide histological images was performed, and their influence on the speed and accuracy of the classification algorithm was measured. A software module for oncological diagnosis was developed using the support vector machine (SVM) method and optimized so that the algorithm establishes the correct diagnosis with 95% accuracy. The module was implemented in the Python programming language and imported into the QuPath WSI system.
4

Rydell, Christopher. "Deep Learning for Whole Slide Image Cytology : A Human-in-the-Loop Approach." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-450356.

Abstract:
With cancer being one of the leading causes of death globally, and with oral cancers being among the most common types of cancer, it is of interest to conduct large-scale oral cancer screening among the general population. Deep Learning can be used to make this possible despite the medical expertise required for early detection of oral cancers. A bottleneck of Deep Learning is the large amount of data required to train a good model. This project investigates two topics: certainty calibration, which aims to make a machine learning model produce more reliable predictions, and Active Learning, which aims to reduce the amount of data that needs to be labeled for Deep Learning to be effective. In the investigation of certainty calibration, five different methods are compared, and the best method is found to be Dirichlet calibration. The Active Learning investigation studies a single method, Cost-Effective Active Learning, but it is found to produce poor results with the given experiment setting. These two topics inspire the further development of the cytological annotation tool CytoBrowser, which is designed with oral cancer data labeling in mind. The proposed evolution integrates into the existing tool a Deep Learning-assisted annotation workflow that supports multiple users.
APA, Harvard, Vancouver, ISO, and other styles
5

Williams, Paul James. "Near infrared (NIR) hyperspectral imaging for evaluation of whole maize kernels: chemometrics for exploration and classification." Thesis, Stellenbosch : University of Stellenbosch, 2009. http://hdl.handle.net/10019.1/1696.

Full text
Abstract:
Thesis (Msc Food Sc (Food Science))--University of Stellenbosch, 2009.
The use of near infrared (NIR) hyperspectral imaging and hyperspectral image analysis for distinguishing between whole maize kernels of varying degrees of hardness and fungal infected and non-infected kernels have been investigated. Near infrared hyperspectral images of whole maize kernels of varying degrees of hardness were acquired using a Spectral Dimensions MatrixNIR camera with a spectral range of 960-1662 nm as well as a sisuChema SWIR (short wave infrared) hyperspectral pushbroom imaging system with a spectral range of 1000-2498 nm. Exploratory principal component analysis (PCA) on absorbance images was used to remove background, bad pixels and shading. On the cleaned images, PCA could be used effectively to find histological classes including glassy (hard) and floury (soft) endosperm. PCA illustrated a distinct difference between floury and glassy endosperm along principal component (PC) three. Interpreting the PC loading line plots important absorbance peaks responsible for the variation were 1215, 1395 and 1450 nm, associated with starch and moisture for both MatrixNIR images (12 and 24 kernels). The loading line plots for the sisuChema (24 kernels) illustrated peaks of importance at the aforementioned wavelengths as well as 1695, 1900 and 1940 nm, also associated with starch and moisture. Partial least squares-discriminant analysis (PLS-DA) was applied as a means to predict whether the different endosperm types observed, were glassy or floury. For the MatrixNIR image (12 kernels), the PLS-DA model exhibited a classification rate of up to 99% for the discrimination of both floury and glassy endosperm. The PLS-DA model for the second MatrixNIR image (24 kernels) yielded a classification rate of 82% for the discrimination of glassy and 73% for floury endosperm. The sisuChema image (24 kernels) yielded a classification rate of 95% for the discrimination of floury and 92% for glassy endosperm. 
The fungal infected and sound whole maize kernels were imaged using the same instruments. Background, bad pixels and shading were removed by applying PCA on absorbance images. On the cleaned images, PCA could be used effectively to find the infected regions, pedicle as well as non-infected regions. A distinct difference between infected and sound kernels was illustrated along PC1. Interpreting the PC loading line plots showed important absorbance peaks responsible for the variation and predominantly associated with starch and moisture: 1215, 1450, 1480, 1690, 1940 and 2136 nm for both MatrixNIR images (15 and 21 kernels). The MatrixNIR image (15 kernels) exhibited a PLS-DA classification rate of up to 96.1% for the discrimination of infected kernels and the sisuChema had a classification rate of 99% for the same region of interest. The sisuChema image (21 kernels) had a classification rate for infected kernels of 97.6% without pre-processing, 97.7% with multiplicative scatter correction (MSC) and 97.4% with standard normal variate (SNV). Near infrared hyperspectral imaging is a promising technique, capable of distinguishing between maize kernels of varying hardness and between fungal infected and sound kernels. While there are still limitations with hardware and software, these results provide the platform which would greatly assist with the determination of maize kernel hardness in breeding programmes without having to destroy the kernel. Further, NIR hyperspectral imaging could serve as an objective, rapid tool for identification of fungal infected kernels.
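The exploratory PCA step described above unfolds a hyperspectral cube into one spectrum per pixel, decomposes it, and folds the scores back into per-component images. A minimal sketch of that mechanic is given below; the cube here is random noise standing in for real NIR absorbance data, and the dimensions are arbitrary assumptions.

```python
# Toy sketch of PCA on a hyperspectral cube: unfold (rows, cols, bands)
# to (pixels, bands), decompose, refold scores into PC score images.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
rows, cols, bands = 20, 20, 50
cube = rng.normal(size=(rows, cols, bands))   # synthetic absorbance cube

X = cube.reshape(-1, bands)                   # one spectrum per pixel
pca = PCA(n_components=3).fit(X)
scores = pca.transform(X).reshape(rows, cols, 3)  # PC score images
explained = pca.explained_variance_ratio_     # variance captured per PC
```

On real data, inspecting the score images reveals spatial classes (e.g. glassy vs. floury endosperm), while the loading line plots (`pca.components_`) point to the wavelengths driving the variation.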
APA, Harvard, Vancouver, ISO, and other styles
6

Khire, Sourabh Mohan. "Time-sensitive communication of digital images, with applications in telepathology." Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/29761.

Full text
Abstract:
Thesis (M. S.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2010.
Committee Chair: Jayant, Nikil; Committee Member: Anderson, David; Committee Member: Lee, Chin-Hui. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
7

Venâncio, Rui Miguel Morgado. "Micrometastasis detection guidance by whole-slide image texture analysis in colorectal lymph nodes correlated with QUS parameters." Master's thesis, 2016. http://hdl.handle.net/10316/32150.

Full text
Abstract:
Master's dissertation in Biomedical Engineering presented to the Faculdade de Ciências e Tecnologia of the Universidade de Coimbra.
Cancer is a disease that affects millions worldwide, and accurate determination of whether lymph nodes (LNs) near the primary tumor contain metastatic foci is of critical importance for proper patient management. Histopathological evaluation is the only accepted method for making that determination. Emerging techniques such as quantitative ultrasound (QUS) may help by detecting metastatic regions in the LN before it is sectioned. We propose and evaluate two methods to automatically analyze and identify regions suspicious for metastatic foci in high-resolution digitized histopathological slides (whole-slide images, WSIs), guiding the pathologist towards cancer-suspicious regions and classifying LNs as metastatic or non-metastatic. The first is a conventional texture-based method; the second is based on deep convolutional neural networks (DCNNs). We participated in the CAMELYON16 challenge with the conventional method. The texture features are based on gray-level co-occurrence matrices (GLCM) and Laws' texture energy measures, whose parameters were also used to search for correlations with QUS. As the DCNN we used the well-documented VGG16 network. WSIs of 44 LNs were used. To evaluate both methods, receiver operating characteristic (ROC) curves were drawn and F-scores computed. The conventional method obtained an area under the curve (AUC) of 0.986 and an F-score of 91.67; the DCNN-based method obtained an AUC and an F-score of 1.0. The challenge had two evaluations; we came last in one and second-to-last in the other. No correlation could be found between the ultrasound measurements and the histology.
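The GLCM features named above count how often pairs of gray levels co-occur at a fixed pixel offset; Haralick-style statistics of that matrix then serve as texture descriptors. A self-contained numpy sketch follows (scikit-image's `graycomatrix`/`graycoprops` provide a full implementation; the tiny image and single offset here are illustrative assumptions).

```python
# Minimal numpy sketch of a gray-level co-occurrence matrix (GLCM) and two
# Haralick-style texture statistics of the kind used as WSI texture features.
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Normalized co-occurrence counts of gray-level pairs at offset (dy, dx)."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
P = glcm(img, levels=4)            # horizontal neighbor pairs
i, j = np.indices(P.shape)
contrast = np.sum(P * (i - j) ** 2)  # local intensity variation
energy = np.sum(P ** 2)              # texture uniformity
```

In a WSI pipeline such statistics are computed per tile (typically at several offsets and angles) and fed to a classifier, with ROC curves drawn over the tile or node-level scores.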
APA, Harvard, Vancouver, ISO, and other styles
8

Rosenbloom, Raymond. "Multiplex immunohistochemical analysis of granulomatous inflammation in lung tissue sections using a mouse model of M. avium infection." Thesis, 2020. https://hdl.handle.net/2144/41719.

Full text
Abstract:
INTRODUCTION: Investigating mechanisms of how intracellular bacterial pathogens such as Mycobacterium avium (M. avium) evade the host immune response and replicate within macrophages is crucial to devising rational targets for host-directed therapies (HDT) against the associated diseases. This study utilized the congenic mouse strain B6.Sst1S, which carries the super-susceptibility to tuberculosis (TB) allele. Among murine models of TB, this strain uniquely replicates human disease because the mice develop granulomas with central caseous necrosis. Using a susceptible model of M. avium infection, this study investigated the effect of mycobacterial pathogenesis on macrophage phenotypes and T-cell distribution in areas of pulmonary granulomatous inflammation. METHODS: 12 formalin-fixed paraffin-embedded (FFPE) lung sections from M. avium-infected B6.Sst1S and B6 mice were examined microscopically (12 weeks post infection (wpi) n=5, 16 wpi n=7). A targeted histology approach used MRI coordinates to dictate the depths at which FFPE lung samples were sectioned. Since interpretation of the MRI images showed no evidence of discrete necrotizing granulomas, lungs were cut at sections representative of diffuse pathology 2 mm into the FFPE blocks. Using the Opal Method (Akoya Biosciences), 6-plex immunohistochemical staining was performed with Arginase-1 (Arg1), inducible nitric oxide synthase (iNOS), CD68, CD3, M. tuberculosis antigen (cross-reactive with M. avium), and DAPI to segment nuclei. Slides were digitized on a Vectra Polaris fluorescent whole-slide scanner. Autofluorescence was removed with InForm, and image analysis (IA) was conducted using Halo IA software. Statistical analysis was conducted in GraphPad Prism 8.0. RESULTS: Sst1-mediated susceptibility was statistically evident at 16 wpi but not at 12 wpi. B6.Sst1S mice showed a statistically significant (P < 0.05) increase in M. avium+ cell expression in the non-inoculated lung lobes, but not the inoculated lung lobes. Pulmonary lesions within the inoculated and non-inoculated lung lobes contained different immune signatures. The predominantly primary lesions of the inoculated lung lobes were associated with increased CD3+, M. avium+, and iNOS+ cell levels. When controlling for level of infection, there were lower levels of CD3+ cells within granulomatous lesions of B6.Sst1S mice, especially in the non-inoculated lung lobe. Controlling for level of infection also revealed elevated iNOS+ M. avium- cell expression in B6 mice. We observed elevated Arg1+ cell expression near iNOS+ M. avium+ cells and, qualitatively, around larger lesions. T-cell proximity analysis was contradictory and offers lessons for the development of future IA modules. CONCLUSIONS: Sst1-mediated susceptibility was evident at 16 wpi and predominantly mediated through secondary, metastatic lesions. Sst1-mediated susceptibility was also associated with fewer supportive cells (T cells and iNOS+ M. avium- cells) within granulomatous lesions. Future studies are necessary to evaluate to what degree granulomatous-lesion Arg1+ cell expression and CD3+ proximity correlate with susceptibility.
APA, Harvard, Vancouver, ISO, and other styles
9

Donner, Ralf. "Die visuelle Interpretation von Fernerkundungsdaten." Doctoral thesis, 2007. https://tubaf.qucosa.de/id/qucosa%3A22626.

Full text
Abstract:
The ability to recognize objects in aerial and satellite images can be explained as follows: from knowledge of a landscape and its depiction in the image, interpretation rules are developed that assign fixed meanings to particular combinations of image features such as color, shape, size, texture, or context. When recognition with fixed perceptual patterns is not the point, the hitherto open question arises of a method, satisfying scientific criteria, by which the conceptual connection between sense perceptions can be grasped. The experience of reversible figures and optical illusions leads to the question of the fixed element in the visual interpretation of remote sensing data. Galileo's answer, to take measured values as the starting point of scientific knowledge, does not resolve the ambiguities and uncertainties of conceptual interpretation, for every number already carries a conceptual determination of what it means: apples, pears, height differences … A different starting point must therefore be found for the conceptual interpretation of perceptions: a human being can neither observe nor perceive without grasping and ordering his experiences conceptually, nor is sensation a subjective, merely personal experience split off from its object. Perception realizes itself as a unity of perceiver and perceived. Consequently, there is no binary distinction between objective fact and subjective interpretation. Perception takes place between the poles of pure sensation of something given and sense-free thinking. The pure experiential qualities of the sensations (warm, cold, bright, rough) prove to be the elements of the cognitive process least dependent on subjective conceptual interpretation. A phenomenological approach to investigation corresponds to this relation of observation and conceptual interpretation. 
With it, experiences as absolute elements of perception acquire primary importance, while conceptual interpretations become their dependents. The investigation therefore gives preference to the results of phenomenological works, and the author's own treatment of the topic likewise proceeds from a consistently empirical position. To understand a matter, it is not enough to stop at the sensations, for the conceptual context needed to understand them is missing; the experiences must be interpreted conceptually. Here the double role of concepts is of decisive importance: in analysis they delimit partial aspects within the field of experience, which in synthesis are conceptually connected by the same concepts. This function of concepts is exploited to differentiate recognition from the formation of understanding: interpreting experience according to a priori given patterns aims at recognition. In contrast, understanding emerges from observations in the process of concept formation: one first seeks a structuring that makes a conceptual synthesis appear plausible. The concept of self-organization has largely replaced mechanistic notions in ecology and in the last decade has also found its way into technology. In the terms of this concept, concept formation can be described as a cognitive process in which conceptual and non-conceptual perceptions organize themselves. Sensations also hold a dominant position in other approaches to nature; Goetheanism, scientific aesthetics, and art can therefore contribute to a presuppositionless exploration of nature. The close kinship of phenomenology, aesthetics, and art makes artistic creation appear as a perfecting of what is latent in nature. 
Further cross-connections arise from the interpretation of topographic or thematic maps and other visualized spatial data; parallels and differences are worked out. Modern natural science is quantitative, so it must be clarified what mathematical modeling contributes to the formation of understanding. In this part of the work, the following thought goes beyond the well-known usefulness of mathematical modeling, prediction, and simulation: mathematics convinces through its logical rigor in derivation and proof: from major premise and minor premise follows the conclusion. A method of observation in which one observation is joined to the next, so that the one follows from the other with no leap interrupting the sequence, would be equivalent in necessity to a mathematical proof. This strict following of one thing from another takes the place, in scientific argumentation, of spontaneous intuition with verification, falsification, and confirming example. In this way, applying the mathematical method can achieve concept formation close to reality. The aspects of perception, aesthetics, art, and mathematics presented up to this point are brought together in the method of a presuppositionless development of concepts. With this, the main goal of the investigation, the development of an experience-based observation method directed at understanding, is reached. The treatise continues by applying the developed method to a fundamental concept of geoinformatics, as follows: the concept of space is of fundamental importance for geoinformatics, so it is natural to examine this concept using the developed method. From a phenomenological point of view, space and time are closely connected; both are experienced in movement. 
If movement is interpreted with the concepts of juxtaposition and succession, knowledge of space and time arises. In other words: movement experienced in the body becomes, through interpretation with the concept of space, the mental image of traversed places. Depending on which sense experiences, or more generally observations, are taken as a basis, space has different geometric properties; the experiences of the sense of touch ground Euclidean observations. The concepts of space and time are of fundamental importance for the formation of understanding: with their help, experiences can be ordered as simultaneous and side by side or as successive, enabling cognition through analysis and synthesis. An essential motive of the investigation is the question of how understanding forms within visual interpretation. The recognition of objects presents itself as a synaesthetic synthesis of sensations and conceptual contents. Different weightings of the conceptual contents allow two procedures to be distinguished: 1. For the recognition of something new, it is fundamentally important that the conceptual contents are subordinate to the non-conceptual sensations, that is, can be modified by them; pre-known concepts take on the role of hypotheses. 2. In re-recognition, conceptual contents play a dominant role: on the basis of interpretation features, meanings are to be assigned to image contents. To this end, the image is searched for areas with combinations of colors, shapes, patterns, textures, and spatial arrangements that can correspond to the concept sought. Alternatively, using a kind of example key, the image is searched for matches with complete image patterns; that too is a form of re-recognition. 
To understand a phenomenon, what matters is discovering, in the regularity of the outer form, the connection that underlies the various configurations as a regulating element. For this, one must go beyond a classification of interpretation features. Intellectual engagement with the visualized representations of the phenomena supports the formation of such an ideal connection, which represents what remains constant and at rest amid all the manifoldness of natural appearances, and from which the individual phenomena could have arisen. The functions describe the interactions between the spatial elements, which express themselves in exchange processes of energy and materials. Wherever spatial arrangement is an expression of functional relations, visual perception of the spatial relations supports insight into the substantive ones. In the conclusions, the visualization of geodata is characterized as a means of making visible the connection between phenomena. The reference to remote sensing leads to the finding that the proposed research strategy can be applied only to a limited extent in the field of remote sensing of the earth.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Whole slide image classification"

1

Karapapa, Stavroula. Defences to Copyright Infringement. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780198795636.001.0001.

Full text
Abstract:
Defences to copyright infringement have gained increased significance over the past twenty years. The fourth industrial revolution emerged with the development of innovative copy-reliant services and business models, transforming the way in which copyright works can be used, from digital learning methods to mass digitization initiatives, media monitoring services, image transformation tools, and content mining technologies. The lawfulness of such innovative services and business methods, which arguably have the potential to enhance public welfare, is dubious and challenges copyright law. EU copyright contains specifically enumerated, narrowly drafted, and strictly interpreted defensive rules, often taking the form of the so-called exceptions and limitations to copyright. Because the fourth industrial revolution promises innovation and business growth—stated objectives of EU copyright—it invites an examination of defensive rules as a whole. The book adopts a holistic approach in its exploration of the limits of permissibility under EU copyright, including legislatively mentioned exceptions and limitations, doctrinal principles, and rules external to copyright, with a view to unveiling possible gaps and overlaps, offering a novel classification of defensive rules, and evaluating the adaptability of the law towards technological change. Discussing recent legislative developments, such as the provisions of the Digital Single Market Directive, Court of Justice of the European Union case law, and insights from national laws and cases, the book tells the story of copyright from the perspective of copyright defences, offering positivist and normative insights into law and doctrine and arguing towards a principle-based understanding of the scope of defences that could inform future law and policy making.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Whole slide image classification"

1

Rymarczyk, Dawid, Adam Pardyl, Jarosław Kraus, Aneta Kaczyńska, Marek Skomorowski, and Bartosz Zieliński. "ProtoMIL: Multiple Instance Learning with Prototypical Parts for Whole-Slide Image Classification." In Machine Learning and Knowledge Discovery in Databases, 421–36. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-26387-3_26.

Full text
Abstract:
The rapid development of histopathology scanners has allowed the digital transformation of pathology. Current devices quickly and accurately digitize histology slides at many magnifications, resulting in whole slide images (WSI). However, direct application of supervised deep learning methods to the highest magnification of a WSI is impossible due to hardware limitations. That is why WSI classification is usually performed with standard Multiple Instance Learning (MIL) approaches, which do not explain their predictions, something crucial for medical applications. In this work, we fill this gap by introducing ProtoMIL, a novel self-explainable MIL method inspired by the case-based reasoning process that operates on visual prototypes. Thanks to incorporating prototypical features into object descriptions, ProtoMIL unprecedentedly joins model accuracy and fine-grained interpretability, as confirmed by experiments conducted on five recognized whole-slide image datasets.
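In MIL approaches of the kind discussed here, each WSI is a "bag" of patch embeddings pooled into a single slide-level representation. The numpy sketch below illustrates generic attention-based MIL pooling, the family ProtoMIL builds on; it is not the paper's method, and the untrained random parameters and dimensions are assumptions for illustration.

```python
# Generic sketch of attention-based MIL pooling for one WSI "bag":
# instance embeddings are weighted by attention scores and summed.
import numpy as np

rng = np.random.default_rng(3)
n_instances, d, d_attn = 100, 32, 16   # patches per bag, embed dim, attn dim

H = rng.normal(size=(n_instances, d))          # instance (patch) embeddings
V = rng.normal(size=(d, d_attn)) * 0.1         # attention params (untrained)
w = rng.normal(size=(d_attn, 1)) * 0.1

scores = np.tanh(H @ V) @ w                    # one score per instance
a = np.exp(scores) / np.exp(scores).sum()      # softmax attention weights
z = (a * H).sum(axis=0)                        # bag-level embedding, shape (d,)
```

A classifier head then scores `z` to produce the slide-level prediction; the attention weights `a` double as a coarse explanation of which patches drove it.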
APA, Harvard, Vancouver, ISO, and other styles
2

Li, Jiahui, Wen Chen, Xiaodi Huang, Shuang Yang, Zhiqiang Hu, Qi Duan, Dimitris N. Metaxas, Hongsheng Li, and Shaoting Zhang. "Hybrid Supervision Learning for Pathology Whole Slide Image Classification." In Medical Image Computing and Computer Assisted Intervention – MICCAI 2021, 309–18. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-87237-3_30.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Ding, Saisai, Jun Wang, Juncheng Li, and Jun Shi. "Multi-scale Prototypical Transformer for Whole Slide Image Classification." In Lecture Notes in Computer Science, 602–11. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-43987-2_58.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Shen, Yiqing, and Jing Ke. "A Deformable CRF Model for Histopathology Whole-Slide Image Classification." In Medical Image Computing and Computer Assisted Intervention – MICCAI 2020, 500–508. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-59722-1_48.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Zheng, Yushan, Jun Li, Jun Shi, Fengying Xie, and Zhiguo Jiang. "Kernel Attention Transformer (KAT) for Histopathology Whole Slide Image Classification." In Lecture Notes in Computer Science, 283–92. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-16434-7_28.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Gavade, Anil B., Rajendra B. Nerli, Shridhar Ghagane, Priyanka A. Gavade, and Venkata Siva Prasad Bhagavatula. "Cancer Cell Detection and Classification from Digital Whole Slide Image." In Smart Technologies in Data Science and Communication, 289–99. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-19-6880-8_31.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Apou, Grégory, Benoît Naegel, Germain Forestier, Friedrich Feuerhake, and Cédric Wemmert. "Efficient Region-based Classification for Whole Slide Images." In Communications in Computer and Information Science, 239–56. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-25117-2_15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Ren, Jian, Ilker Hacihaliloglu, Eric A. Singer, David J. Foran, and Xin Qi. "Adversarial Domain Adaptation for Classification of Prostate Histopathology Whole-Slide Images." In Medical Image Computing and Computer Assisted Intervention – MICCAI 2018, 201–9. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-00934-2_23.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Qu, Linhao, Xiaoyuan Luo, Shaolei Liu, Manning Wang, and Zhijian Song. "DGMIL: Distribution Guided Multiple Instance Learning for Whole Slide Image Classification." In Lecture Notes in Computer Science, 24–34. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-16434-7_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Wu, Wei, Zhonghang Zhu, Baptiste Magnier, and Liansheng Wang. "Clustering-Based Multi-instance Learning Network for Whole Slide Image Classification." In Computational Mathematics Modeling in Cancer Analysis, 100–109. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-17266-3_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Whole slide image classification"

1

Zhang, Chaoyi, Yang Song, Donghao Zhang, Sidong Liu, Mei Chen, and Weidong Cai. "Whole Slide Image Classification via Iterative Patch Labelling." In 2018 25th IEEE International Conference on Image Processing (ICIP). IEEE, 2018. http://dx.doi.org/10.1109/icip.2018.8451551.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Akbarnejad, Amir, Nilanjan Ray, and Gilbert Bigras. "Deep Fisher Vector Coding For Whole Slide Image Classification." In 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI). IEEE, 2021. http://dx.doi.org/10.1109/isbi48211.2021.9433836.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Poudel, Sahadev, and Sang-Woong Lee. "Whole Slide Image Classification and Segmentation using Deep Learning." In ACM ICEA '20: 2020 ACM International Conference on Intelligent Computing and its Emerging Applications. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3440943.3444357.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Zhou, Yuanpin, and Yao Lu. "Deep Hierarchical Multiple Instance Learning for Whole Slide Image Classification." In 2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI). IEEE, 2022. http://dx.doi.org/10.1109/isbi52829.2022.9761678.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Maksoud, Sam, Kun Zhao, Peter Hobson, Anthony Jennings, and Brian C. Lovell. "SOS: Selective Objective Switch for Rapid Immunofluorescence Whole Slide Image Classification." In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2020. http://dx.doi.org/10.1109/cvpr42600.2020.00392.

Full text

APA, Harvard, Vancouver, ISO, and other styles

6

Hou, Le, Dimitris Samaras, Tahsin M. Kurc, Yi Gao, James E. Davis, and Joel H. Saltz. "Patch-Based Convolutional Neural Network for Whole Slide Tissue Image Classification." In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2016. http://dx.doi.org/10.1109/cvpr.2016.266.

Full text

APA, Harvard, Vancouver, ISO, and other styles

7

Ye, Zehua, Yonghong He, and Tian Guan. "Semantic-Similarity Collaborative Knowledge Distillation Framework for Whole Slide Image Classification." In 2023 IEEE International Conference on Bioinformatics and Biomedicine (BIBM). IEEE, 2023. http://dx.doi.org/10.1109/bibm58861.2023.10385681.

Full text

APA, Harvard, Vancouver, ISO, and other styles

8

Song, Hongjian, Jie Tang, Hongzhao Xiao, and Juncheng Hu. "Rethinking Overfitting of Multiple Instance Learning for Whole Slide Image Classification." In 2023 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2023. http://dx.doi.org/10.1109/icme55011.2023.00100.

Full text

APA, Harvard, Vancouver, ISO, and other styles