Journal articles on the topic "HISTOPATHOLOGY IMAGE"

To see other types of publications on this topic, follow the link: HISTOPATHOLOGY IMAGE.

Format your source according to APA, MLA, Chicago, Harvard, and other citation styles.

Consult the top 50 journal articles for your research on the topic "HISTOPATHOLOGY IMAGE".

Next to every source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication as a .pdf file and read the online abstract, if these details are available in the metadata.

Browse journal articles on a wide variety of disciplines and compile your bibliography correctly.

1

Chen, Jia-Mei, Yan Li, Jun Xu, Lei Gong, Lin-Wei Wang, Wen-Lou Liu, and Juan Liu. "Computer-aided prognosis on breast cancer with hematoxylin and eosin histopathology images: A review." Tumor Biology 39, no. 3 (March 2017): 101042831769455. http://dx.doi.org/10.1177/1010428317694550.

Full text of the source
Abstract:
With the advance of digital pathology, image analysis has begun to show its advantages in information analysis of hematoxylin and eosin histopathology images. Generally, histological features in hematoxylin and eosin images are measured to evaluate tumor grade and prognosis for breast cancer. This review summarized recent works in image analysis of hematoxylin and eosin histopathology images for breast cancer prognosis. First, prognostic factors for breast cancer based on hematoxylin and eosin histopathology images were summarized. Then, usual procedures of image analysis for breast cancer prognosis were systematically reviewed, including image acquisition, image preprocessing, image detection and segmentation, and feature extraction. Finally, the prognostic value of image features and image feature–based prognostic models was evaluated. Moreover, we discussed the issues of current analysis, and some directions for future research.
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Arevalo, John, Angel Cruz-Roa, and Fabio A. González O. "Representación de imágenes de histopatología utilizada en tareas de análisis automático: estado del arte." Revista Med 22, no. 2 (December 1, 2014): 79. http://dx.doi.org/10.18359/rmed.1184.

Full text of the source
Abstract:
This paper presents a review of the state-of-the-art in histopathology image representation used in automatic image analysis tasks. Automatic analysis of histopathology images is important for building computer-assisted diagnosis tools, automatic image enhancing systems and virtual microscopy systems, among other applications. Histopathology images have a rich mix of visual patterns with particularities that make them difficult to analyze. The paper discusses these particularities, the acquisition process and the challenges found when doing automatic analysis. Second, an overview of recent works and methods aimed at visual content representation in different automatic image analysis tasks is presented. Third, an overview of applications of image representation methods in several medical domains and tasks is presented. Finally, the paper concludes with current trends in the automatic analysis of histopathology images, such as digital pathology.
Styles: APA, Harvard, Vancouver, ISO, etc.
3

Wang, Pin, Shanshan Lv, Yongming Li, Qi Song, Linyu Li, Jiaxin Wang, and Hehua Zhang. "Hybrid Deep Transfer Network and Rotational Sample Subspace Ensemble Learning for Early Cancer Detection." Journal of Medical Imaging and Health Informatics 10, no. 10 (October 1, 2020): 2289–96. http://dx.doi.org/10.1166/jmihi.2020.3172.

Full text of the source
Abstract:
Accurate histopathology cell image classification plays an important role in early cancer detection and diagnosis. Currently, Convolutional Neural Networks are used to assist pathologists with histopathology image classification. In this paper, a Min mouse model was applied to evaluate the capability of Convolutional Neural Network features for detecting early-stage carcinogenesis. However, the limited number of histopathology images from the mouse model may cause overfitting in the classification. Hence, hybrid deep transfer network and rotational sample subspace ensemble learning is proposed for histopathology image classification. First, deep features are obtained by a deep transfer network based on regularized loss functions. Then, rotational sample subspace sampling is applied to increase the diversity between training sets. Subsequently, subspace projection learning is introduced to achieve dimensionality reduction. Finally, ensemble learning is used for histopathology image classification. The proposed method was tested on 126 histopathology images of the mouse model. The experimental results demonstrate that the proposed method achieved remarkable classification accuracy (99.39%, 99.74%, 100%), showing that the proposed approach is promising for early cancer diagnosis.
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Tawfeeq, Furat Nidhal, Nada A. S. Alwan, and Basim M. Khashman. "Optimization of Digital Histopathology Image Quality." IAES International Journal of Artificial Intelligence (IJ-AI) 7, no. 2 (April 20, 2018): 71. http://dx.doi.org/10.11591/ijai.v7.i2.pp71-77.

Full text of the source
Abstract:
One of the biomedical image problems is the appearance of bubbles in the slide, which can occur when air passes through the slide during the preparation process. These bubbles may complicate the process of analysing histopathological images. The objective of this study is to remove the bubble noise from histopathology images and then predict the tissues that underlie it using a fuzzy controller, in cases of remote pathological diagnosis. Fuzzy logic uses linguistic definitions to recognize the relationship between the input and the activity, rather than using difficult numerical equations. There are mainly five parts, starting with accepting the image, passing through removing the bubbles, and ending with predicting the tissues. These were implemented by defining membership functions between colour ranges using MATLAB. Results: 50 histopathological images were tested on four types of membership functions (MF); the results show that the nine-triangular MF achieved 75.4% correctly predicted pixels versus 69.1%, 72.31% and 72% for the five-triangular, five-Gaussian and nine-Gaussian MFs, respectively. Conclusions: In line with the era of digitally driven e-pathology, this process is essentially recommended to ensure quality interpretation and analysis of the processed slides, thus overcoming relevant limitations.
Styles: APA, Harvard, Vancouver, ISO, etc.
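To make the fuzzy-controller idea in the entry above concrete, here is a minimal sketch of triangular membership functions deciding, per pixel, whether a grey value looks more like tissue or like a bright bubble region. The linguistic terms, breakpoints and the `classify_pixel` helper are illustrative assumptions, not the MATLAB membership functions used in the paper.

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    x = np.asarray(x, dtype=float)
    left = np.clip((x - a) / (b - a + 1e-9), 0.0, 1.0)
    right = np.clip((c - x) / (c - b + 1e-9), 0.0, 1.0)
    return np.minimum(left, right)

# Assumed linguistic terms over an 8-bit grey channel (0..255); breakpoints are illustrative.
TERMS = {
    "tissue_dark":    (0, 60, 120),
    "tissue_stained": (80, 150, 210),
    "bubble_bright":  (180, 235, 255),
}

def classify_pixel(value):
    """Return the linguistic term with the highest membership for one pixel value."""
    memberships = {name: triangular(value, *abc) for name, abc in TERMS.items()}
    return max(memberships, key=memberships.get)

if __name__ == "__main__":
    for v in (30, 140, 240):
        print(v, classify_pixel(v))
```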
6

Gupta, Rachit Kumar, Jatinder Manhas, and Mandeep Kour. "Hybrid Feature Extraction Based Ensemble Classification Model to Diagnose Oral Carcinoma Using Histopathological Images." JOURNAL OF SCIENTIFIC RESEARCH 66, no. 03 (2022): 219–26. http://dx.doi.org/10.37398/jsr.2022.660327.

Full text of the source
Abstract:
Detection and classification of cancerous tissue from histopathologic images is quite a challenging task for pathologists and computer-assisted medical diagnosis systems because of the complexity of the histopathology image. For a good diagnostic system, feature extraction from the medical images plays a crucial role in better classification of images. Using inappropriate or redundant features leads to poor classification results because the classification algorithm learns a lot of unimportant information from the images. We propose a hybrid feature extractor using different feature extraction algorithms that can extract various types of features from a histopathological image. For this study, feature-fused Convolutional Neural Network, Gray Level Co-occurrence Matrix, and Local Binary Pattern algorithms are used. The texture and deep features obtained from these methods are used as an input vector to classifiers: Support Vector Machine, K-Nearest Neighbor, Naïve Bayes and Boosted Tree. The prediction results of these classifiers are combined using a soft majority voting algorithm to predict the final output. The proposed method achieved an accuracy of 98.71%, which is quite high compared to previous similar research works. The proposed method was capable of identifying most of the cancerous histopathology images. The combination of deep and textural features can potentially be used for creating a computer-assisted medical imaging diagnosis system that can detect cancer from histopathology images in a timely and accurate manner.
Styles: APA, Harvard, Vancouver, ISO, etc.
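A rough sketch of the handcrafted half of the pipeline described above: GLCM and LBP texture features feeding a soft-voting ensemble of the four named classifiers. The deep (CNN) feature branch is omitted, GradientBoostingClassifier stands in for the "Boosted Tree" learner, and all parameter values are assumptions rather than the paper's settings (the `graycomatrix` spelling assumes a recent scikit-image).

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import GradientBoostingClassifier, VotingClassifier

def texture_features(gray_patch):
    """GLCM statistics plus a uniform-LBP histogram for one 8-bit (uint8) grey patch."""
    glcm = graycomatrix(gray_patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    glcm_feats = [graycoprops(glcm, p).mean()
                  for p in ("contrast", "homogeneity", "energy", "correlation")]
    lbp = local_binary_pattern(gray_patch, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([glcm_feats, lbp_hist])

def build_soft_voting_ensemble():
    """Soft majority voting over the four base classifiers named in the abstract."""
    return VotingClassifier(
        estimators=[("svm", SVC(probability=True)),
                    ("knn", KNeighborsClassifier(n_neighbors=5)),
                    ("nb", GaussianNB()),
                    ("boost", GradientBoostingClassifier())],
        voting="soft")

# Usage sketch: X = np.stack([texture_features(p) for p in patches]); y = labels
# build_soft_voting_ensemble().fit(X, y)
```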
7

Rani V, Sudha, and M. Jogendra Kumar. "Histopathological Image Classification Methods and Techniques in Deep Learning Field." International Journal on Recent and Innovation Trends in Computing and Communication 10, no. 2s (December 31, 2022): 158–65. http://dx.doi.org/10.17762/ijritcc.v10i2s.5923.

Full text of the source
Abstract:
Histopathology detects breast cancer, a cancerous tumour in a woman's breast. Histopathological images are a hotspot for medical study since they are difficult to judge manually. In addition to helping doctors identify and treat patients, this image classification can boost patient survival. This research addresses the merits and downsides of deep learning methods for histopathology imaging of breast cancer. The study reviews histopathology image classification and future directions. Automatic histopathological image analysis often uses fully supervised learning, where a labeled dataset is fed to the model for classification. The research methods frequently rely on feature extraction techniques tailored to specific challenges, such as texture, spatial, graph-based, and morphological features. Many deep learning models have also been created for image classification. There are various deep learning methods for classifying histopathology images.
Styles: APA, Harvard, Vancouver, ISO, etc.
8

Tellez, David, Geert Litjens, Jeroen van der Laak, and Francesco Ciompi. "Neural Image Compression for Gigapixel Histopathology Image Analysis." IEEE Transactions on Pattern Analysis and Machine Intelligence 43, no. 2 (February 1, 2021): 567–78. http://dx.doi.org/10.1109/tpami.2019.2936841.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
9

Kwak, Deawon, Jiwoo Choi, and Sungjin Lee. "Rethinking Breast Cancer Diagnosis through Deep Learning Based Image Recognition." Sensors 23, no. 4 (February 19, 2023): 2307. http://dx.doi.org/10.3390/s23042307.

Full text of the source
Abstract:
This paper explored techniques for diagnosing breast cancer using deep learning based medical image recognition. X-ray (mammography) images, ultrasound images, and histopathology images are used to improve the accuracy of the process by classifying breast cancer and inferring the affected location. For this goal, the image recognition strategies for maximal diagnosis accuracy on each type of medical image data are investigated in terms of various image classification models (VGGNet19, ResNet50, DenseNet121, EfficientNet v2), image segmentation models (UNet, ResUNet++, DeepLab v3), related loss functions (binary cross entropy, Dice loss, Tversky loss), and data augmentation. As a result of evaluations through the presented methods, when using filter-based data augmentation, ResNet50 showed the best performance in image classification, and UNet showed the best performance in both X-ray and ultrasound image segmentation. When applying the proposed image recognition strategies for maximal diagnosis accuracy on each type of medical image data, the accuracy can be improved by 33.3% for image segmentation in X-ray images, 29.9% for image segmentation in ultrasound images, and 22.8% for image classification in histopathology images.
Styles: APA, Harvard, Vancouver, ISO, etc.
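The segmentation losses named in the abstract above (Dice loss and Tversky loss) are easy to express directly; the sketch below shows hedged PyTorch versions for binary segmentation probabilities. The alpha/beta weights are common defaults, not values taken from the paper.

```python
import torch

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for binary segmentation; pred holds probabilities in [0, 1]."""
    pred, target = pred.flatten(1), target.flatten(1)
    inter = (pred * target).sum(dim=1)
    denom = pred.sum(dim=1) + target.sum(dim=1)
    return 1.0 - ((2.0 * inter + eps) / (denom + eps)).mean()

def tversky_loss(pred, target, alpha=0.3, beta=0.7, eps=1e-6):
    """Tversky loss: alpha weights false positives, beta weights false negatives."""
    pred, target = pred.flatten(1), target.flatten(1)
    tp = (pred * target).sum(dim=1)
    fp = (pred * (1.0 - target)).sum(dim=1)
    fn = ((1.0 - pred) * target).sum(dim=1)
    return 1.0 - ((tp + eps) / (tp + alpha * fp + beta * fn + eps)).mean()

# Example: probabilities from a sigmoid-activated segmentation head.
pred = torch.rand(2, 1, 64, 64)
mask = (torch.rand(2, 1, 64, 64) > 0.5).float()
print(dice_loss(pred, mask).item(), tversky_loss(pred, mask).item())
```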
10

Kandel, Ibrahem, Mauro Castelli, and Aleš Popovič. "Comparative Study of First Order Optimizers for Image Classification Using Convolutional Neural Networks on Histopathology Images." Journal of Imaging 6, no. 9 (September 8, 2020): 92. http://dx.doi.org/10.3390/jimaging6090092.

Full text of the source
Abstract:
The classification of histopathology images requires a physician with years of experience to classify them accurately. In this study, an algorithm was developed to assist physicians in classifying histopathology images; the algorithm receives a histopathology image as input and produces the percentage of cancer presence. The primary classifier used in this algorithm is the convolutional neural network, which is a state-of-the-art classifier used in image classification as it can classify images without relying on manual selection of features from each image. The main aim of this research is to improve the robustness of the classifier by comparing six different first-order stochastic gradient-based optimizers to select the best one for this particular dataset. The dataset used to train the classifier is the public PatchCamelyon dataset, which consists of 220,025 images to train the classifier (60% positive and 40% negative images) and 57,458 images to test its performance. The classifier was trained on 80% of the images and validated on the remaining 20%; then, it was tested on the test set. The optimizers were evaluated based on the AUC of the ROC curve. The results show that the adaptive optimizers achieved the highest results, except for AdaGrad, which achieved the lowest results.
Styles: APA, Harvard, Vancouver, ISO, etc.
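The core experimental loop of the study above, comparing first-order optimizers by validation AUC, can be sketched as follows. Synthetic tabular data and a small fully connected network stand in for the PatchCamelyon images and the CNN, so only the comparison procedure is illustrated, not the reported results; learning rates and epoch counts are assumptions.

```python
import torch
from torch import nn
from sklearn.metrics import roc_auc_score

def make_model():
    return nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 1))

# Synthetic stand-in data; in the study this would be PatchCamelyon image batches.
X = torch.randn(2000, 20)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).float()
X_tr, y_tr, X_va, y_va = X[:1600], y[:1600], X[1600:], y[1600:]

optimizers = {
    "SGD":     lambda p: torch.optim.SGD(p, lr=0.05, momentum=0.9),
    "Adam":    lambda p: torch.optim.Adam(p, lr=1e-3),
    "RMSprop": lambda p: torch.optim.RMSprop(p, lr=1e-3),
    "Adagrad": lambda p: torch.optim.Adagrad(p, lr=1e-2),
}

loss_fn = nn.BCEWithLogitsLoss()
for name, make_opt in optimizers.items():
    torch.manual_seed(0)                     # same initialisation for a fair comparison
    model = make_model()
    opt = make_opt(model.parameters())
    for _ in range(50):                      # a few full-batch epochs
        opt.zero_grad()
        loss = loss_fn(model(X_tr).squeeze(1), y_tr)
        loss.backward()
        opt.step()
    with torch.no_grad():
        scores = torch.sigmoid(model(X_va).squeeze(1)).numpy()
    print(f"{name:8s} validation AUC = {roc_auc_score(y_va.numpy(), scores):.3f}")
```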
11

Kausar, Tasleem, Adeeba Kausar, Muhammad Adnan Ashraf, Muhammad Farhan Siddique, Mingjiang Wang, Muhammad Sajid, Muhammad Zeeshan Siddique, Anwar Ul Haq, and Imran Riaz. "SA-GAN: Stain Acclimation Generative Adversarial Network for Histopathology Image Analysis." Applied Sciences 12, no. 1 (December 29, 2021): 288. http://dx.doi.org/10.3390/app12010288.

Full text of the source
Abstract:
Histopathological image analysis is an examination of tissue under a light microscope for cancerous disease diagnosis. Computer-assisted diagnosis (CAD) systems work well by diagnosing cancer from histopathology images. However, stain variability in histopathology images is inevitable due to the use of different staining processes, operator ability, and scanner specifications. These stain variations present in histopathology images affect the accuracy of CAD systems. Various stain normalization techniques have been developed to cope with inter-variability issues, allowing the appearance of images to be standardized. However, in stain normalization, these methods rely on a single reference image rather than incorporating the color distributions of the entire dataset. In this paper, we design a novel machine learning-based model that takes advantage of whole-dataset distributions as well as the color statistics of a single target image instead of relying only on a single target image. The proposed deep model, called stain acclimation generative adversarial network (SA-GAN), consists of one generator and two discriminators. The generator maps the input images from the source domain to the target domain. The first discriminator forces the generated images to maintain the color patterns of the target domain, while the second discriminator forces the generated images to preserve the structural contents of the source domain. The proposed model is trained using a color attribute metric extracted from a selected template image. Therefore, the designed model learns not only dataset-specific staining properties but also image-specific textural contents. Evaluation results on four different histopathology datasets show the efficacy of SA-GAN in acclimating stain contents and enhancing the quality of normalization by obtaining the highest values of the performance metrics. Additionally, the proposed method is also evaluated for a multiclass cancer type classification task, showing a 6.9% improvement in accuracy on the ICIAR 2018 hidden test data.
Styles: APA, Harvard, Vancouver, ISO, etc.
12

Bagchi, Arnab, Payel Pramanik, and Ram Sarkar. "A Multi-Stage Approach to Breast Cancer Classification Using Histopathology Images." Diagnostics 13, no. 1 (December 30, 2022): 126. http://dx.doi.org/10.3390/diagnostics13010126.

Full text of the source
Abstract:
Breast cancer is one of the deadliest diseases worldwide among women. Early diagnosis and proper treatment can save many lives. Breast image analysis is a popular method for detecting breast cancer. Computer-aided diagnosis of breast images helps radiologists do the task more efficiently and appropriately. Histopathological image analysis is an important diagnostic method for breast cancer, which is basically microscopic imaging of breast tissue. In this work, we developed a deep learning-based method to classify breast cancer using histopathological images. We propose a patch-classification model to classify the image patches, where we divide the images into patches and pre-process these patches with stain normalization, regularization, and augmentation methods. We use machine-learning-based classifiers and ensembling methods to classify the image patches into four categories: normal, benign, in situ, and invasive. Next, we use the patch information from this model to classify the images into two classes (cancerous and non-cancerous) and four other classes (normal, benign, in situ, and invasive). We introduce a model to utilize the 2-class classification probabilities and classify the images into a 4-class classification. The proposed method yields promising results and achieves a classification accuracy of 97.50% for 4-class image classification and 98.6% for 2-class image classification on the ICIAR BACH dataset.
Styles: APA, Harvard, Vancouver, ISO, etc.
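One simple way to turn patch-level probabilities into image-level 2-class and 4-class decisions, in the spirit of the multi-stage approach above, is to average the patch probabilities and group the cancerous classes. The class grouping and the 0.5 threshold below are assumptions, and the paper's dedicated aggregation model is not reproduced here.

```python
import numpy as np

CLASSES = ["normal", "benign", "in_situ", "invasive"]

def image_labels_from_patches(patch_probs):
    """Aggregate per-patch 4-class probabilities (n_patches x 4) to image-level labels.

    Averaging patch probabilities is one simple aggregation rule; the paper builds a
    dedicated model on top of the patch outputs, which this sketch does not reproduce.
    """
    mean_probs = np.asarray(patch_probs, dtype=float).mean(axis=0)
    label_4class = CLASSES[int(mean_probs.argmax())]
    # 2-class decision: cancerous = in situ + invasive mass vs. normal + benign.
    p_cancerous = mean_probs[2] + mean_probs[3]
    label_2class = "cancerous" if p_cancerous >= 0.5 else "non-cancerous"
    return label_2class, label_4class

# Example with three hypothetical patch predictions from one image.
patches = [[0.05, 0.10, 0.25, 0.60],
           [0.10, 0.20, 0.30, 0.40],
           [0.02, 0.08, 0.40, 0.50]]
print(image_labels_from_patches(patches))
```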
13

Levine, AB, J. Peng, SJM Jones, A. Bashashati, and S. Yip. "Synthesis of glioma histopathology images using generative adversarial networks." Canadian Journal of Neurological Sciences / Journal Canadien des Sciences Neurologiques 48, s1 (May 2021): S3. http://dx.doi.org/10.1017/cjn.2021.91.

Full text of the source
Abstract:
Deep learning, a subset of artificial intelligence, has shown great potential in several recent applications to pathology. These have mainly involved the use of classifiers to diagnose disease, while generative modelling techniques have been less frequently used. Generative adversarial networks (GANs) are a type of deep learning model that has been used to synthesize realistic images in a range of domains, both general purpose and medical. In the GAN framework, a generator network is trained to synthesize fake images, while a dueling discriminator network aims to distinguish between the fake images and a set of real training images. As GAN training progresses, the generator network ideally learns the important features of a dataset, allowing it to create images that the discriminator cannot distinguish from the real ones. We report on our use of GANs to synthesize high resolution, realistic histopathology images of gliomas. The well-known Progressive GAN framework was trained on a set of image patches extracted from digital slides in the Cancer Genome Atlas repository, and was able to generate fake images that were visually indistinguishable from the real training images. Generative modelling in pathology has numerous potential applications, including dataset augmentation for training deep learning classifiers, image processing, and expanding educational material. LEARNING OBJECTIVES: This presentation will enable the learner to: 1. Explain basic principles of generative modelling in deep learning. 2. Discuss applications of deep learning to neuropathology image synthesis.
Styles: APA, Harvard, Vancouver, ISO, etc.
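The adversarial setup described above (a generator trained against a discriminator) can be sketched with a deliberately tiny fully connected GAN. The real study uses the Progressive GAN framework, which grows convolutional generators progressively; this toy example only illustrates the generator/discriminator training step, and all sizes and learning rates are assumptions.

```python
import torch
from torch import nn

LATENT, IMG = 64, 32 * 32   # latent size and flattened patch size (toy scale)

generator = nn.Sequential(
    nn.Linear(LATENT, 256), nn.ReLU(),
    nn.Linear(256, IMG), nn.Tanh())          # fake patch with values in [-1, 1]

discriminator = nn.Sequential(
    nn.Linear(IMG, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1))                        # real/fake logit

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_batch):
    """One adversarial update: discriminator on real + fake, then generator."""
    b = real_batch.size(0)
    fake = generator(torch.randn(b, LATENT))

    # Discriminator: real patches -> 1, generated patches -> 0.
    opt_d.zero_grad()
    loss_d = bce(discriminator(real_batch), torch.ones(b, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(b, 1))
    loss_d.backward()
    opt_d.step()

    # Generator: fool the discriminator into predicting 1 on fakes.
    opt_g.zero_grad()
    loss_g = bce(discriminator(fake), torch.ones(b, 1))
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()

print(train_step(torch.rand(16, IMG) * 2 - 1))   # stand-in "real" patches
```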
14

Anjum, Sunila, Imran Ahmed, Muhammad Asif, Hanan Aljuaid, Fahad Alturise, Yazeed Yasin Ghadi, and Rashad Elhabob. "Lung Cancer Classification in Histopathology Images Using Multiresolution Efficient Nets." Computational Intelligence and Neuroscience 2023 (October 16, 2023): 1–12. http://dx.doi.org/10.1155/2023/7282944.

Full text of the source
Abstract:
Histopathological images are very effective for investigating the status of various biological structures and diagnosing diseases like cancer. In addition, digital histopathology increases diagnosis precision and provides better image quality and more detail for the pathologist, with multiple viewing options and team annotations. As a result of the benefits above, faster treatment is available, increasing therapy success rates and patient recovery and survival chances. However, the present manual examination of these images is tedious and time-consuming for pathologists. Therefore, reliable automated techniques are needed to effectively classify normal and malignant cancer images. This paper applied a deep learning approach, namely, EfficientNet and its variants from B0 to B7. We used different image resolutions for each model, from 224 × 224 pixels to 600 × 600 pixels. We also applied transfer learning and parameter tuning techniques to improve the results and overcome the overfitting problem. We collected the data from the Lung and Colon Cancer Histopathological Image (LC25000) dataset, which consists of 25,000 histopathology images of five classes (lung adenocarcinoma, lung squamous cell carcinoma, benign lung tissue, colon adenocarcinoma, and colon benign tissue). Then, we performed preprocessing on the dataset to remove the noisy images and bring them into a standard format. The model's performance was evaluated in terms of classification accuracy and loss. We have achieved good accuracy results for all variants; however, the results of EfficientNetB2 stand out, with an accuracy of 97% for 260 × 260 pixel resolution images.
Styles: APA, Harvard, Vancouver, ISO, etc.
15

Jin, Xu, Teng Huang, Ke Wen, Mengxian Chi, and Hong An. "HistoSSL: Self-Supervised Representation Learning for Classifying Histopathology Images." Mathematics 11, no. 1 (December 26, 2022): 110. http://dx.doi.org/10.3390/math11010110.

Full text of the source
Abstract:
The success of image classification depends on copious annotated images for training. Annotating histopathology images is costly and laborious. Although several successful self-supervised representation learning approaches have been introduced, they are still insufficient to consider the unique characteristics of histopathology images. In this work, we propose the novel histopathology-oriented self-supervised representation learning framework (HistoSSL) to efficiently extract representations from unlabeled histopathology images at three levels: global, cell, and stain. The model transfers remarkably to downstream tasks: colorectal tissue phenotyping on the NCTCRC dataset and breast cancer metastasis recognition on the CAMELYON16 dataset. HistoSSL achieved higher accuracies than state-of-the-art self-supervised learning approaches, which proved the robustness of the learned representations.
Styles: APA, Harvard, Vancouver, ISO, etc.
16

Elazab, Naira, Hassan Soliman, Shaker El-Sappagh, S. M. Riazul Islam, and Mohammed Elmogy. "Objective Diagnosis for Histopathological Images Based on Machine Learning Techniques: Classical Approaches and New Trends." Mathematics 8, no. 11 (October 24, 2020): 1863. http://dx.doi.org/10.3390/math8111863.

Full text of the source
Abstract:
Histopathology refers to the examination by a pathologist of biopsy samples. Histopathology images are captured by a microscope to locate, examine, and classify many diseases, such as different cancer types. They provide a detailed view of different types of diseases and their tissue status. These images are an essential resource with which to define biological compositions or analyze cell and tissue structures. This imaging modality is very important for diagnostic applications. The analysis of histopathology images is a prolific and relevant research area supporting disease diagnosis. In this paper, the challenges of histopathology image analysis are evaluated. An extensive review of conventional and deep learning techniques which have been applied in histological image analyses is presented. This review summarizes many current datasets and highlights important challenges and constraints with recent deep learning techniques, alongside possible future research avenues. Despite the progress made in this research area so far, it is still a significant area of open research because of the variety of imaging techniques and disease-specific characteristics.
Styles: APA, Harvard, Vancouver, ISO, etc.
17

Park, Hyun-Cheol, Raman Ghimire, Sahadev Poudel, and Sang-Woong Lee. "Deep Learning for Joint Classification and Segmentation of Histopathology Image." 網際網路技術學刊 23, no. 4 (July 2022): 903–10. http://dx.doi.org/10.53106/160792642022072304025.

Full text of the source
Abstract:
Liver cancer is one of the most prevalent causes of cancer death worldwide. Thus, early detection and diagnosis of possible liver cancer help in reducing cancer deaths. Histopathological Image Analysis (HIA) has traditionally been carried out manually, but this is time-consuming and requires expert knowledge. We propose a patch-based deep learning method for liver cell classification and segmentation. In this work, a two-step approach for the classification and segmentation of whole-slide images (WSI) is proposed. Since WSIs are too large to be fed into convolutional neural networks (CNN) directly, we first extract patches from them. The patches are fed into a modified version of U-Net with its equivalent mask for precise segmentation. In classification tasks, the WSIs are scaled 4 times, 16 times, and 64 times, respectively. Patches extracted from each scale are then fed into the convolutional network with their corresponding labels. During inference, we perform majority voting on the results obtained from the convolutional network. The proposed method has demonstrated better results in both classification and segmentation of liver cancer cells.
Styles: APA, Harvard, Vancouver, ISO, etc.
18

Naga Raju, Mallela Siva, and Battula Srinivasa Rao. "Colorectal multi-class image classification using deep learning models." Bulletin of Electrical Engineering and Informatics 11, no. 1 (February 1, 2022): 195–200. http://dx.doi.org/10.11591/eei.v11i1.3299.

Full text of the source
Abstract:
Colorectal image classification is a novel application area in medical image processing. Colorectal cancer is one of the most prevalent malignant tumour diseases in the world. However, due to the complexity of histopathological imaging, the most accurate and effective classification still needs to be addressed. In this work we propose a novel convolutional neural network architecture with deep learning models for the multiclass classification of histopathology images. We achieved the findings using three deep learning models, including VGG16 with 96.16% and a modified version of ResNet50 with 97.08%; however, the proposed Adaptive ResNet152 model generated the best accuracy of 98.38%. The publicly available colorectal multiclass dataset has 5000 images in 8 classes. In this study all classes were increased equally; a total of 15,000 images were generated using an image augmentation technique. The dataset consists of 60% training images and 40% testing images. The method suggested in this paper produced better results than existing histopathology image categorization methods, with the lowest error rate. For histopathological image categorization, it is a straightforward, effective, and efficient method. We were able to attain state-of-the-art outcomes by efficiently utilizing the resourced dataset.
Styles: APA, Harvard, Vancouver, ISO, etc.
19

Wu, Yawen, Michael Cheng, Shuo Huang, Zongxiang Pei, Yingli Zuo, Jianxin Liu, Kai Yang, et al. "Recent Advances of Deep Learning for Computational Histopathology: Principles and Applications." Cancers 14, no. 5 (February 25, 2022): 1199. http://dx.doi.org/10.3390/cancers14051199.

Full text of the source
Abstract:
With the remarkable success of digital histopathology, we have witnessed a rapid expansion of the use of computational methods for the analysis of digital pathology and biopsy image patches. However, the unprecedented scale and heterogeneous patterns of histopathological images have presented critical computational bottlenecks requiring new computational histopathology tools. Recently, deep learning technology has been extremely successful in the field of computer vision, which has also boosted considerable interest in digital pathology applications. Deep learning and its extensions have opened several avenues to tackle many challenging histopathological image analysis problems including color normalization, image segmentation, and the diagnosis/prognosis of human cancers. In this paper, we provide a comprehensive up-to-date review of the deep learning methods for digital H&E-stained pathology image analysis. Specifically, we first describe recent literature that uses deep learning for color normalization, which is one essential research direction for H&E-stained histopathological image analysis. Followed by the discussion of color normalization, we review applications of the deep learning method for various H&E-stained image analysis tasks such as nuclei and tissue segmentation. We also summarize several key clinical studies that use deep learning for the diagnosis and prognosis of human cancers from H&E-stained histopathological images. Finally, online resources and open research problems on pathological image analysis are also provided in this review for the convenience of researchers who are interested in this exciting field.
Styles: APA, Harvard, Vancouver, ISO, etc.
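As a point of reference for the colour-normalization methods reviewed above, here is a classical (non-deep) Reinhard-style baseline that matches the LAB statistics of an H&E patch to a reference patch. It is a generic baseline for comparison, not one of the deep models discussed in the review, and the inputs are assumed to be float RGB arrays in [0, 1].

```python
import numpy as np
from skimage.color import rgb2lab, lab2rgb

def reinhard_normalise(source_rgb, target_rgb):
    """Match the LAB channel means/stds of a source H&E image to a target image.

    Classical Reinhard colour transfer; inputs are float RGB images in [0, 1].
    """
    src, tgt = rgb2lab(source_rgb), rgb2lab(target_rgb)
    out = np.empty_like(src)
    for c in range(3):
        s_mean, s_std = src[..., c].mean(), src[..., c].std() + 1e-6
        t_mean, t_std = tgt[..., c].mean(), tgt[..., c].std()
        out[..., c] = (src[..., c] - s_mean) / s_std * t_std + t_mean
    return np.clip(lab2rgb(out), 0.0, 1.0)

# Usage sketch: normalised = reinhard_normalise(slide_patch, reference_patch)
```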
20

Park, Youngjin, Mujin Kim, Murtaza Ashraf, Young Sin Ko, and Mun Yong Yi. "MixPatch: A New Method for Training Histopathology Image Classifiers." Diagnostics 12, no. 6 (June 18, 2022): 1493. http://dx.doi.org/10.3390/diagnostics12061493.

Full text of the source
Abstract:
CNN-based image processing has been actively applied to histopathological analysis to detect and classify cancerous tumors automatically. However, CNN-based classifiers generally predict a label with overconfidence, which becomes a serious problem in the medical domain. The objective of this study is to propose a new training method, called MixPatch, designed to improve a CNN-based classifier by specifically addressing the prediction uncertainty problem, and to examine its effectiveness in improving diagnosis performance in the context of histopathological image analysis. MixPatch generates and uses a new sub-training dataset, which consists of mixed-patches and their predefined ground-truth labels, for every single mini-batch. Mixed-patches are generated using small clean patches confirmed by pathologists, while their ground-truth labels are defined using a proportion-based soft labeling method. Our results obtained using a large histopathological image dataset show that the proposed method performs better and alleviates overconfidence more effectively than any other method examined in the study. More specifically, our model showed 97.06% accuracy, an increase of 1.6% to 12.18%, while achieving an expected calibration error of 0.76%, a decrease of 0.6% to 6.3%, compared with the other models. By specifically considering the mixed-region variation characteristics of histopathology images, MixPatch augments the extant mixed-image methods for medical image analysis, in which prediction uncertainty is a crucial issue. The proposed method provides a new way to systematically alleviate the overconfidence problem of CNN-based classifiers and improve their prediction accuracy, contributing toward more calibrated and reliable histopathology image analysis.
Styles: APA, Harvard, Vancouver, ISO, etc.
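A minimal sketch of the mixed-patch construction described above: small clean patches are tiled into one larger patch, and the ground truth becomes a proportion-based soft label. The grid size, patch sizes and mixing policy here are assumptions, not the exact MixPatch recipe.

```python
import numpy as np

def make_mixed_patch(patches, labels, n_classes, grid=2):
    """Tile grid x grid clean patches into one mixed patch and build a soft label.

    `patches` is a list of grid*grid HxWxC arrays with hard integer `labels`;
    the soft label is the class proportion among the tiles.
    """
    rows = [np.concatenate(patches[r * grid:(r + 1) * grid], axis=1)
            for r in range(grid)]
    mixed = np.concatenate(rows, axis=0)
    soft_label = np.bincount(labels, minlength=n_classes) / float(len(labels))
    return mixed, soft_label

# Example: four 32x32 RGB patches, two benign (0) and two malignant (1).
patches = [np.random.rand(32, 32, 3) for _ in range(4)]
mixed, soft = make_mixed_patch(patches, np.array([0, 0, 1, 1]), n_classes=2)
print(mixed.shape, soft)   # (64, 64, 3) [0.5 0.5]
```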
21

Vallez, Noelia, Jose Luis Espinosa-Aranda, Anibal Pedraza, Oscar Deniz, and Gloria Bueno. "Deep Learning within a DICOM WSI Viewer for Histopathology." Applied Sciences 13, no. 17 (August 23, 2023): 9527. http://dx.doi.org/10.3390/app13179527.

Full text of the source
Abstract:
Microscopy scanners and artificial intelligence (AI) techniques have facilitated remarkable advancements in biomedicine. Incorporating these advancements into clinical practice is, however, hampered by the variety of digital file formats used, which poses a significant challenge for data processing. Open-source and commercial software solutions have attempted to address proprietary formats, but they fall short of providing comprehensive access to vital clinical information beyond image pixel data. The proliferation of competing proprietary formats makes the lack of interoperability even worse. DICOM stands out as a standard that transcends internal image formats via metadata-driven image exchange in this context. DICOM defines imaging workflow information objects for images, patients' studies, reports, etc. DICOM promises standards-based pathology imaging, but its clinical use is limited. No FDA-approved digital pathology system natively generates DICOM, and only one high-performance whole-slide imaging (WSI) device has been approved for diagnostic use in Asia and Europe. In a recent series of Digital Pathology Connectathons, the interoperability of our solution was demonstrated by integrating DICOM digital pathology imaging, i.e., WSI, into PACS and enabling their visualisation. However, no system that incorporates state-of-the-art AI methods and directly applies them to DICOM images has been presented. In this paper, we present the first web viewer system that employs WSI DICOM images and AI models. This approach aims to bridge the gap by integrating AI methods with DICOM images in a seamless manner, marking a significant step towards more effective CAD WSI processing tasks. Within this innovative framework, convolutional neural networks, including well-known architectures such as AlexNet and VGG, have been successfully integrated and evaluated.
Styles: APA, Harvard, Vancouver, ISO, etc.
22

Bentaieb, Aicha, and Ghassan Hamarneh. "Adversarial Stain Transfer for Histopathology Image Analysis." IEEE Transactions on Medical Imaging 37, no. 3 (March 2018): 792–802. http://dx.doi.org/10.1109/tmi.2017.2781228.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
23

Veta, Mitko, Josien P. W. Pluim, Paul J. van Diest, and Max A. Viergever. "Breast Cancer Histopathology Image Analysis: A Review." IEEE Transactions on Biomedical Engineering 61, no. 5 (May 2014): 1400–1411. http://dx.doi.org/10.1109/tbme.2014.2303852.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
24

Wied, George L., Peter H. Bartels, Marluce Bibbo, and Harvey E. Dytch. "Image analysis in quantitative cytopathology and histopathology." Human Pathology 20, no. 6 (June 1989): 549–71. http://dx.doi.org/10.1016/0046-8177(89)90245-1.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
25

Ukwuoma, Chiagoziem C., Md Altab Hossain, Jehoiada K. Jackson, Grace U. Nneji, Happy N. Monday, and Zhiguang Qin. "Multi-Classification of Breast Cancer Lesions in Histopathological Images Using DEEP_Pachi: Multiple Self-Attention Head." Diagnostics 12, no. 5 (May 5, 2022): 1152. http://dx.doi.org/10.3390/diagnostics12051152.

Full text of the source
Abstract:
Introduction and Background: Despite fast developments in the medical field, histological diagnosis is still regarded as the benchmark in cancer diagnosis. However, the input image feature extraction that is used to determine the severity of cancer at various magnifications is harrowing, since manual procedures are biased, time-consuming, labor-intensive, and error-prone. Current state-of-the-art deep learning approaches for breast histopathology image classification take features from entire images (generic features). Thus, they are likely to overlook essential image features in favor of unnecessary ones, resulting in an incorrect diagnosis of breast histopathology imaging and leading to mortality. Methods: This discrepancy prompted us to develop DEEP_Pachi for classifying breast histopathology images at various magnifications. The suggested DEEP_Pachi collects global and regional features that are essential for effective breast histopathology image classification. The proposed model backbone is an ensemble of DenseNet201 and VGG16 architectures. The ensemble model extracts global features (generic image information), whereas DEEP_Pachi extracts spatial information (regions of interest). Statistically, the evaluation of the proposed model was performed on publicly available datasets: the BreakHis and ICIAR 2018 Challenge datasets. Result: A detailed evaluation of the proposed model's accuracy, sensitivity, precision, specificity, and f1-score metrics revealed the usefulness of the backbone model and the DEEP_Pachi model for image classification. The suggested technique outperformed state-of-the-art classifiers, achieving an accuracy of 1.0 for the benign class and 0.99 for the malignant class in all magnifications of the BreakHis dataset and an accuracy of 1.0 on the ICIAR 2018 Challenge dataset. Conclusion: The acquired findings were significantly resilient and proved helpful for the suggested system to assist experts at big medical institutions, resulting in early breast cancer diagnosis and a reduction in the death rate.
Styles: APA, Harvard, Vancouver, ISO, etc.
26

Zhang, Jianxin, Xiangguo Wei, Jing Dong, and Bin Liu. "Aggregated Deep Global Feature Representation for Breast Cancer Histopathology Image Classification." Journal of Medical Imaging and Health Informatics 10, no. 11 (November 1, 2020): 2778–83. http://dx.doi.org/10.1166/jmihi.2020.3215.

Full text of the source
Abstract:
Convolutional neural networks (CNNs), successfully used in a great number of medical image analysis applications, have also recently achieved state-of-the-art performance on the breast cancer histopathology image (BCHI) classification problem. However, due to the large variety among within-class images and insufficient data volume, it is still a challenge to obtain more competitive results by using deep CNN models alone. In this paper, we aim to explore the combination of CNN models with a milestone feature representation method in visual tasks, i.e., the vector of locally aggregated descriptors (VLAD), for BCHI classification, and further propose a novel aggregated deep global feature representation (ADGFR) for this problem. ADGFR adopts the deep features that are extracted from the fully connected layer to form an individual descriptor vector, and augments input images to generate different descriptors for achieving the final aggregated descriptor vector. The individual descriptor vector can effectively keep the global features of benign and malignant images, whose discriminability is further reinforced by the aggregation operation, leading to the more discriminant capability of ADGFR for BCHI. Extensive experiments on the public BreakHis dataset show that our ADGFR can achieve optimal classification accuracies of 95.05% at the image level and 95.50% at the patient level, respectively.
Styles: APA, Harvard, Vancouver, ISO, etc.
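The VLAD aggregation at the heart of ADGFR can be illustrated with a generic sketch: a KMeans codebook over local deep descriptors, followed by residual accumulation and normalisation. The descriptor source (fully connected CNN features of augmented images) and the codebook size are assumptions here, not the paper's settings.

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_codebook(all_descriptors, k=8, seed=0):
    """Learn the VLAD visual vocabulary (cluster centres) from pooled descriptors."""
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit(all_descriptors)

def vlad_encode(descriptors, codebook):
    """Aggregate one image's descriptors into a VLAD vector: per-cluster sums of
    residuals to the assigned centre, then power- and L2-normalised."""
    centres = codebook.cluster_centers_
    assign = codebook.predict(descriptors)
    vlad = np.zeros_like(centres)
    for i, c in enumerate(assign):
        vlad[c] += descriptors[i] - centres[c]
    vlad = vlad.ravel()
    vlad = np.sign(vlad) * np.sqrt(np.abs(vlad))       # power normalisation
    return vlad / (np.linalg.norm(vlad) + 1e-12)        # L2 normalisation

# Example with random stand-ins for CNN fully-connected descriptors.
rng = np.random.default_rng(0)
bank = rng.normal(size=(500, 64))          # descriptors pooled over many images
codebook = fit_codebook(bank, k=8)
print(vlad_encode(rng.normal(size=(20, 64)), codebook).shape)   # (512,)
```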
27

Connolly, Laura, Amoon Jamzad, Martin Kaufmann, Catriona E. Farquharson, Kevin Ren, John F. Rudan, Gabor Fichtinger, and Parvin Mousavi. "Combined Mass Spectrometry and Histopathology Imaging for Perioperative Tissue Assessment in Cancer Surgery." Journal of Imaging 7, no. 10 (October 4, 2021): 203. http://dx.doi.org/10.3390/jimaging7100203.

Full text of the source
Abstract:
Mass spectrometry is an effective imaging tool for evaluating biological tissue to detect cancer. With the assistance of deep learning, this technology can be used as a perioperative tissue assessment tool that will facilitate informed surgical decisions. To achieve such a system requires the development of a database of mass spectrometry signals and their corresponding pathology labels. Assigning correct labels, in turn, necessitates precise spatial registration of histopathology and mass spectrometry data. This is a challenging task due to the domain differences and noisy nature of images. In this study, we create a registration framework for mass spectrometry and pathology images as a contribution to the development of perioperative tissue assessment. In doing so, we explore two opportunities in deep learning for medical image registration, namely, unsupervised, multi-modal deformable image registration and evaluation of the registration. We test this system on prostate needle biopsy cores that were imaged with desorption electrospray ionization mass spectrometry (DESI) and show that we can successfully register DESI and histology images to achieve accurate alignment and, consequently, labelling for future training. This automation is expected to improve the efficiency and development of a deep learning architecture that will benefit the use of mass spectrometry imaging for cancer diagnosis.
Styles: APA, Harvard, Vancouver, ISO, etc.
28

Kanadath, Anusree, J. Angel Arul Jothi, and Siddhaling Urolagin. "Multilevel Multiobjective Particle Swarm Optimization Guided Superpixel Algorithm for Histopathology Image Detection and Segmentation." Journal of Imaging 9, no. 4 (March 29, 2023): 78. http://dx.doi.org/10.3390/jimaging9040078.

Full text of the source
Abstract:
Histopathology image analysis is considered as a gold standard for the early diagnosis of serious diseases such as cancer. The advancements in the field of computer-aided diagnosis (CAD) have led to the development of several algorithms for accurately segmenting histopathology images. However, the application of swarm intelligence for segmenting histopathology images is less explored. In this study, we introduce a Multilevel Multiobjective Particle Swarm Optimization guided Superpixel algorithm (MMPSO-S) for the effective detection and segmentation of various regions of interest (ROIs) from Hematoxylin and Eosin (H&E)-stained histopathology images. Several experiments are conducted on four different datasets such as TNBC, MoNuSeg, MoNuSAC, and LD to ascertain the performance of the proposed algorithm. For the TNBC dataset, the algorithm achieves a Jaccard coefficient of 0.49, a Dice coefficient of 0.65, and an F-measure of 0.65. For the MoNuSeg dataset, the algorithm achieves a Jaccard coefficient of 0.56, a Dice coefficient of 0.72, and an F-measure of 0.72. Finally, for the LD dataset, the algorithm achieves a precision of 0.96, a recall of 0.99, and an F-measure of 0.98. The comparative results demonstrate the superiority of the proposed method over the simple Particle Swarm Optimization (PSO) algorithm, its variants (Darwinian particle swarm optimization (DPSO), fractional order Darwinian particle swarm optimization (FODPSO)), Multiobjective Evolutionary Algorithm based on Decomposition (MOEA/D), non-dominated sorting genetic algorithm 2 (NSGA2), and other state-of-the-art traditional image processing methods.
Styles: APA, Harvard, Vancouver, ISO, etc.
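The evaluation metrics quoted above (Jaccard coefficient, Dice coefficient, F-measure) can be computed directly from binary masks, for example:

```python
import numpy as np

def segmentation_scores(pred_mask, true_mask, eps=1e-9):
    """Jaccard, Dice and F-measure for a pair of binary masks.

    For binary masks the Dice coefficient and the F-measure (F1) coincide; both are
    computed here only to mirror the reporting in the abstract above.
    """
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    tp = np.logical_and(pred, true).sum()
    fp = np.logical_and(pred, ~true).sum()
    fn = np.logical_and(~pred, true).sum()
    jaccard = tp / (tp + fp + fn + eps)
    dice = 2 * tp / (2 * tp + fp + fn + eps)
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    f_measure = 2 * precision * recall / (precision + recall + eps)
    return {"jaccard": jaccard, "dice": dice, "f_measure": f_measure}

pred = np.zeros((64, 64), dtype=bool); pred[10:40, 10:40] = True
true = np.zeros((64, 64), dtype=bool); true[15:45, 15:45] = True
print(segmentation_scores(pred, true))
```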
29

Kakimoto, Tetsuhiro, Hirotaka Kimata, Satoshi Iwasaki, Atsushi Fukunari, and Hiroyuki Utsumi. "Automated recognition and quantification of pancreatic islets in Zucker diabetic fatty rats treated with exendin-4." Journal of Endocrinology 216, no. 1 (October 22, 2012): 13–20. http://dx.doi.org/10.1530/joe-12-0456.

Full text of the source
Abstract:
Type 2 diabetes is characterized by impaired insulin secretion from pancreatic β-cells. Quantification of the islet area in addition to the insulin-positive area is important for detailed understanding of pancreatic islet histopathology. Here we show computerized automatic recognition of the islets of Langerhans as a novel high-throughput method to quantify islet histopathology. We utilized state-of-the-art tissue pattern recognition software to enable automatic recognition of islets, eliminating the need to laboriously trace islet borders by hand. After training by a histologist, the software successfully recognized even irregularly shaped islets with depleted insulin immunostaining, which were quite difficult to automatically recognize. The results from automated image analysis were highly correlated with those from manual image analysis. To establish whether this automated, rapid, and objective determination of islet area will facilitate studies of islet histopathology, we showed the beneficial effect of chronic exendin-4, a glucagon-like peptide-1 analog, treatment on islet histopathology in Zucker diabetic fatty (ZDF) rats. Automated image analysis provided qualitative and quantitative evidence that exendin-4 treatment ameliorated the loss of pancreatic insulin content and gave rise to islet hypertrophy. We also showed that glucagon-positive α-cell area was decreased significantly in ZDF rat islets with disorganized structure. This study is the first to demonstrate the utility of automatic quantification of digital images to study pancreatic islet histopathology. The proposed method will facilitate evaluations in preclinical drug efficacy studies as well as elucidation of the pathophysiology of diabetes.
Styles: APA, Harvard, Vancouver, ISO, etc.
30

Lor, Kuo-Lung, and Chung-Ming Chen. "FAST INTERACTIVE REGIONAL PATTERN MERGING FOR GENERIC TISSUE SEGMENTATION IN HISTOPATHOLOGY IMAGES." Biomedical Engineering: Applications, Basis and Communications 33, no. 02 (March 9, 2021): 2150012. http://dx.doi.org/10.4015/s1016237221500125.

Full text of the source
Abstract:
The image segmentation of histopathological tissue images has always been a challenge due to the overlapping of tissue color distributions, the complexity of extracellular texture and the large image size. In this paper, we introduce a new region-merging algorithm, namely, the Regional Pattern Merging (RPM) for interactive color image segmentation and annotation, by efficiently retrieving and applying the user’s prior knowledge of stroke-based interaction. Low-level color/texture features of each region are used to compose a regional pattern adapted to differentiating a foreground object from the background scene. This iterative region-merging is based on a modified Region Adjacency Graph (RAG) model built from initial segmented results of the mean shift to speed up the merging process. The foreground region of interest (ROI) is segmented by the reduction of the background region and discrimination of uncertain regions. We then compare our method against state-of-the-art interactive image segmentation algorithms in both natural images and histological images. Taking into account the homogeneity of both color and texture, the resulting semi-supervised classification and interactive segmentation capture histological structures more completely than other intensity or color-based methods. Experimental results show that the merging of the RAG model runs in a linear time according to the number of graph edges, which is essentially faster than both traditional graph-based and region-based methods.
Styles: APA, Harvard, Vancouver, ISO, etc.
31

Xiao, Shuomin, Aiping Qu, Penghui He, and Han Hong. "CA-Net: Context Aggregation Network for Nuclei Classification in Histopathology Image." Journal of Physics: Conference Series 2504, no. 1 (May 1, 2023): 012031. http://dx.doi.org/10.1088/1742-6596/2504/1/012031.

Full text of the source
Abstract:
Accurately classifying nuclei in histopathology images is essential for cancer diagnosis and prognosis. However, due to touching nuclei, nucleus shape variation, background complexity, and image artifacts, end-to-end nucleus classification is still difficult and challenging. In this manuscript, we propose a context aggregation network (CA-Net) for nuclei classification by fusing global contextual information, which is critical for classifying nuclei in histopathology images. Specifically, we propose a multi-level semantic supervision (MSS) module focusing on extracting multi-scale context information by varying three different kernel sizes, and dynamically aggregating the context information from high to low level. Furthermore, we employ the GPG and SAPF modules in the encoder and decoder networks to extract and aggregate global context information. Finally, the proposed network is verified on a mainstream nuclei classification dataset (PanNuke) and achieves an improved global accuracy of 0.816. Our proposed MSS module can be easily transferred into any U-Net-like architecture as a deep supervision mechanism.
Styles: APA, Harvard, Vancouver, ISO, etc.
32

Atupelage, Chamidu, Hiroshi Nagahashi, Masahiro Yamaguchi, Michiie Sakamoto, and Akinori Hashiguchi. "Multifractal Feature Descriptor for Histopathology." Analytical Cellular Pathology 35, no. 2 (2012): 123–26. http://dx.doi.org/10.1155/2012/912956.

Full text of the source
Abstract:
Background: Histologic image analysis plays an important role in cancer diagnosis. It describes the structure of the body tissues, and an abnormal structure raises the suspicion of cancer or some other diseases. Observing the structural changes of these chaotic textures with the human eye is a challenging process. However, the challenge can be overcome by forming a mathematical descriptor to represent the histologic texture and classifying the structural changes via a sophisticated computational method. Objective: In this paper, we propose a texture descriptor to map the histologic texture into a highly discriminative feature space. Methods: Fractal dimension describes self-similar structures in a different and more accurate manner than topological dimension. Further, the fractal phenomenon has been extended to natural structures (images) as multifractal dimension. We exploited multifractal analysis to represent the histologic texture, which derives a more discriminative feature space for classification. Results: We utilized a set of histologic images (belonging to liver and prostate specimens) to assess the discriminative power of the multifractal features. The experiment was organized to classify the given histologic texture as cancer or non-cancer. The results show the discrimination capability of multifractal features by achieving approximately 95% correct classification rate. Conclusion: Multifractal features are more effective for describing the histologic texture. The proposed feature descriptor showed a high classification rate for both the liver and prostate sample datasets.
Styles: APA, Harvard, Vancouver, ISO, etc.
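The fractal idea underlying the descriptor above can be illustrated with the mono-fractal box-counting estimate below. The paper's multifractal spectrum generalises this single exponent to a family of exponents, which this sketch does not reproduce.

```python
import numpy as np

def box_counting_dimension(mask):
    """Estimate the box-counting (fractal) dimension of a binary structure."""
    mask = np.asarray(mask, dtype=bool)
    scales, counts = [], []
    box = min(mask.shape) // 2
    while box >= 2:
        count = 0
        # Count boxes of side `box` that contain at least one foreground pixel.
        for i in range(0, mask.shape[0] - mask.shape[0] % box, box):
            for j in range(0, mask.shape[1] - mask.shape[1] % box, box):
                if mask[i:i + box, j:j + box].any():
                    count += 1
        scales.append(box)
        counts.append(max(count, 1))
        box //= 2
    # Slope of log(count) against log(1 / box size) gives the dimension estimate.
    slope, _ = np.polyfit(np.log(1.0 / np.array(scales)), np.log(counts), 1)
    return slope

disc = np.fromfunction(lambda i, j: (i - 64) ** 2 + (j - 64) ** 2 < 40 ** 2, (128, 128))
print(round(box_counting_dimension(disc), 2))   # close to 2 for a filled disc
```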
33

He, PengHui, AiPing Qu, ShuoMin Xiao, and MeiDan Ding. "A GNN-based Network for Tissue Semantic Segmentation in Histopathology Image." Journal of Physics: Conference Series 2504, no. 1 (May 1, 2023): 012047. http://dx.doi.org/10.1088/1742-6596/2504/1/012047.

Full text of the source
Abstract:
Segmentation of different tissue regions in pathological images holds a significant position in the diagnosis and prognosis of cancer. Although convolutional neural networks (CNNs) and transformers, which treat the image as a grid or sequence structure, have been widely employed in this task, they find it difficult to capture irregular and complex targets flexibly, so it is still a challenging task. In this paper, we employ a GNN-based segmentation method for pathological images which adopts an encoding-decoding structure. We first represent the input image as a graph structure which consists of a number of patches viewed as nodes, and connect the nearest neighbors to build a graph. We also introduce the ViG block to build a hierarchical pyramid architecture which consists of a grapher module with graph convolution and an FFN module with two linear layers. In addition, we utilize a pyramid CNN architecture decoder to aggregate graph information at multiple scales. The proposed method reaches 75.68% and 87.72% mean Dice on the BCSS and LAUD-HistoSeg datasets respectively, demonstrating its effectiveness.
Styles: APA, Harvard, Vancouver, ISO, etc.
34

Park, Joon-Hyeon, and Myung-Hoon Sunwoo. "Histopathology Image Super Resolution using Generative Adversarial Network." Journal of the Institute of Electronics and Information Engineers 59, no. 8 (August 31, 2022): 55–60. http://dx.doi.org/10.5573/ieie.2022.59.8.55.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
35

Kurmi, Yashwant, Vijayshri Chaurasia, and Neelkamal Kapoor. "Histopathology image segmentation and classification for cancer revelation." Signal, Image and Video Processing 15, no. 6 (June 6, 2021): 1341–49. http://dx.doi.org/10.1007/s11760-021-01865-x.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
36

Jia, Zhipeng, Xingyi Huang, Eric I.-Chao Chang, and Yan Xu. "Constrained Deep Weak Supervision for Histopathology Image Segmentation." IEEE Transactions on Medical Imaging 36, no. 11 (November 2017): 2376–88. http://dx.doi.org/10.1109/tmi.2017.2724070.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
37

Xu, Yan, Jun-Yan Zhu, Eric I.-Chao Chang, Maode Lai, and Zhuowen Tu. "Weakly supervised histopathology cancer image segmentation and classification." Medical Image Analysis 18, no. 3 (April 2014): 591–604. http://dx.doi.org/10.1016/j.media.2014.01.010.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
38

Abmayr, W. "Advances in Computer-Aided Image Analysis in Histopathology." American Journal of Dermatopathology 14, no. 1 (February 1992): 73. http://dx.doi.org/10.1097/00000372-199202000-00058.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
39

King, Thomas S., Ramaswamy Sharma, Jeff Jackson, and Kristin R. Fiebelkorn. "Clinical Case-Based Image Portfolios in Medical Histopathology." Anatomical Sciences Education 12, no. 2 (August 17, 2018): 200–209. http://dx.doi.org/10.1002/ase.1794.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
40

Li, Kailu, Ziniu Qian, Yingnan Han, Eric I.-Chao Chang, Bingzheng Wei, Maode Lai, Jing Liao, Yubo Fan, and Yan Xu. "Weakly supervised histopathology image segmentation with self-attention." Medical Image Analysis 86 (May 2023): 102791. http://dx.doi.org/10.1016/j.media.2023.102791.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
41

Kandel, Ibrahem, and Mauro Castelli. "A Novel Architecture to Classify Histopathology Images Using Convolutional Neural Networks." Applied Sciences 10, no. 8 (April 23, 2020): 2929. http://dx.doi.org/10.3390/app10082929.

Повний текст джерела
Анотація:
Histopathology is the study of tissue structure under the microscope to determine if the cells are normal or abnormal. Histopathology is a very important exam that is used to determine the patients’ treatment plan. The classification of histopathology images is very difficult to even an experienced pathologist, and a second opinion is often needed. Convolutional neural network (CNN), a particular type of deep learning architecture, obtained outstanding results in computer vision tasks like image classification. In this paper, we propose a novel CNN architecture to classify histopathology images. The proposed model consists of 15 convolution layers and two fully connected layers. A comparison between different activation functions was performed to detect the most efficient one, taking into account two different optimizers. To train and evaluate the proposed model, the publicly available PatchCamelyon dataset was used. The dataset consists of 220,000 annotated images for training and 57,000 unannotated images for testing. The proposed model achieved higher performance compared to the state-of-the-art architectures with an AUC of 95.46%.
42

Alali, Mohammed H., Arman Roohi, Shaahin Angizi, and Jitender S. Deogun. "Enabling Intelligent IoTs for Histopathology Image Analysis Using Convolutional Neural Networks." Micromachines 13, no. 8 (August 22, 2022): 1364. http://dx.doi.org/10.3390/mi13081364.

Abstract:
Medical imaging is an essential data source that has been leveraged worldwide in healthcare systems. In pathology, histopathology images are used for cancer diagnosis, but these images are very complex, and their analysis by pathologists requires large amounts of time and effort. On the other hand, although convolutional neural networks (CNNs) have produced near-human results in image processing tasks, their processing times are becoming longer and they need more computational power. In this paper, we implement a quantized ResNet model on two histopathology image datasets to optimize inference power consumption. We analyze classification accuracy, energy estimation, and hardware utilization metrics to evaluate our method. First, the original RGB-colored images are used for the training phase, and then compression methods such as channel reduction and sparsity are applied. Our results show an accuracy increase of 6% from RGB on 32-bit (baseline) to the optimized representation of sparsity on RGB with a lower bit-width, i.e., <8:8>. For energy estimation on the CNN model used, we found that the energy used in 32-bit RGB color mode is considerably higher than in the lower bit-width and compressed color modes. Moreover, we show that lower bit-width implementations yield higher resource utilization and a lower memory bottleneck ratio. This work is suitable for inference on energy-limited devices, which are increasingly being used in the Internet of Things (IoT) systems that support healthcare.
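As a rough illustration of the inference-side idea, the sketch below applies standard eager-mode int8 post-training quantization to a quantization-ready ResNet-18 in recent PyTorch/torchvision; it does not reproduce the paper's custom <8:8> bit-width, channel-reduction or sparsity scheme, and the calibration batch is a placeholder.

```python
# Sketch: generic int8 post-training quantization of a quantization-ready
# ResNet-18 (not the paper's custom bit-width / sparsity scheme).
import torch
from torchvision.models.quantization import resnet18

model = resnet18(weights=None, quantize=False)
model.eval()
model.fuse_model()                                     # fuse conv + bn + relu modules
model.qconfig = torch.quantization.get_default_qconfig("fbgemm")
torch.quantization.prepare(model, inplace=True)

with torch.no_grad():                                  # calibration with placeholder patches
    model(torch.randn(8, 3, 224, 224))

model_int8 = torch.quantization.convert(model)
with torch.no_grad():
    logits = model_int8(torch.randn(1, 3, 224, 224))   # int8 inference
print(logits.shape)                                    # torch.Size([1, 1000])
```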
43

Awoyelu, I. O., Ojo, B. R., Aregbesola, S. B., and Soyele, O. O. "Performance Evaluation of a Classification Model for Oral Tumor Diagnosis." Computer and Information Science 13, no. 1 (December 20, 2019): 1. http://dx.doi.org/10.5539/cis.v13n1p1.

Abstract:
This paper extracted features from regions of interest in histopathology images, formulated a classification model for diagnosis, simulated the model, and evaluated its performance, with a view to developing a histopathology image classification model for oral tumor diagnosis. The inputs for classification are oral histopathology images obtained from the Obafemi Awolowo University Dental Clinic histopathology archive. The model for oral tumor diagnosis was formulated using a multilayer perceptron artificial neural network. Image preprocessing was done using Contrast Limited Adaptive Histogram Equalization (CLAHE), features were extracted using the Gray Level Co-occurrence Matrix (GLCM), and the important features were identified using the Sequential Forward Selection (SFS) algorithm. The model classified oral tumors into five classes: Ameloblastoma, Giant Cell Lesions, Pleomorphic Adenoma, Mucoepidermoid Carcinoma and Squamous Cell Carcinoma. The performance of the model was evaluated using specificity and sensitivity. The results showed that the model yielded an average accuracy of 82.14%. The sensitivity and specificity values were 85.71% and 89.4% for Ameloblastoma, 83.33% and 94.74% for Giant Cell Lesions, 75% and 95.24% for Pleomorphic Adenoma, 100% and 100% for Mucoepidermoid Carcinoma, and 71.43% and 94.74% for Squamous Cell Carcinoma, respectively. The model is capable of assisting pathologists in making consistent and accurate diagnoses and can be considered a second opinion to augment a pathologist's diagnostic decision.
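A hedged sketch of such a CLAHE + GLCM + SFS + MLP pipeline with standard Python libraries is shown below; the GLCM distances and angles, the network sizes, the number of selected features, and the `train_oral_tumor_classifier` helper are illustrative assumptions rather than the authors' exact settings.

```python
# Sketch of the described pipeline with standard Python libraries; parameter
# choices are illustrative, not the authors' configuration.
import numpy as np
from skimage import io, color, exposure, img_as_ubyte
from skimage.feature import graycomatrix, graycoprops
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neural_network import MLPClassifier

def glcm_features(path):
    gray = img_as_ubyte(color.rgb2gray(io.imread(path)))
    gray = img_as_ubyte(exposure.equalize_adapthist(gray))      # CLAHE preprocessing
    glcm = graycomatrix(gray, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation", "dissimilarity"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

def train_oral_tumor_classifier(image_paths, labels):
    """image_paths: list of image files; labels: one of the five tumor classes per image."""
    X = np.array([glcm_features(p) for p in image_paths])
    y = np.array(labels)
    mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0)
    sfs = SequentialFeatureSelector(mlp, n_features_to_select=8, direction="forward")
    X_sel = sfs.fit_transform(X, y)                              # sequential forward selection
    mlp.fit(X_sel, y)
    return sfs, mlp
```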
44

Nye, Logan, Hamid Ghaednia, and Joseph H. Schwab. "Generating synthetic samples of chondrosarcoma histopathology with a denoising diffusion probabilistic model." Journal of Clinical Oncology 41, no. 16_suppl (June 1, 2023): e13592-e13592. http://dx.doi.org/10.1200/jco.2023.41.16_suppl.e13592.

Abstract:
e13592 Background: The emergence of digital pathology, an image-based environment for the acquisition, management and interpretation of pathology information supported by computational techniques for data extraction and analysis, is changing the pathology ecosystem. The development of machine-learning approaches for the extraction of information from image data allows for tissue interrogation in a way that was not previously possible. However, creating digital pathology algorithms requires large volumes of training data, often on the order of thousands of histopathology slides. This becomes problematic for rare diseases, where imaging datasets of such size do not exist, making it impossible to train digital pathology models for these rare conditions. Recent advances in generative deep learning models may provide a way to overcome this lack of histology data: pre-trained diffusion-based probabilistic models can be used to create photorealistic variations of existing images. In this study, we explored the potential of using a deep generative model created by OpenAI to produce synthetic histopathology images, using chondrosarcoma as our rare tumor of interest. Methods: Our team compiled a dataset of 55 chondrosarcoma histopathology images from the annotated records of Dr. Henry Jaffe, a pioneering authority in musculoskeletal pathology. We built a deep learning image-generation application in a Jupyter notebook environment, iterating upon OpenAI's DALL-E application programming interface (API) with the Python programming language. Using the chondrosarcoma histology dataset and NVIDIA GPUs, we trained the deep learning application to generate multiple synthetic variations of each real chondrosarcoma image. Results: After several hours, the deep learning model successfully generated 1,000 images of chondrosarcoma from the 55 original images. The synthetic histology images retained photorealistic quality and displayed characteristic cellular features of chondrosarcoma tumor tissue. Conclusions: Deep generative models may be useful in addressing data scarcity in rare diseases such as chondrosarcoma. For example, where existing imaging data are insufficient for training diagnostic computer vision models, diffusion-based generative models could be applied to create training datasets. However, further exploration of ethical considerations and qualitative analyses of these generated data are needed.
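The snippet below is only a rough sketch of how synthetic variations of an existing histology image can be requested through the image-variation endpoint of the OpenAI Python client; the file name, count and size are placeholders, and it does not reproduce the authors' Jupyter-based pipeline or their exact model.

```python
# Sketch: requesting synthetic variations of one histology image through an
# image-variation endpoint; file name, count and size are placeholders.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

with open("chondrosarcoma_example.png", "rb") as fh:
    response = client.images.create_variation(image=fh, n=4, size="512x512")

for i, item in enumerate(response.data):
    print(f"variation {i}: {item.url}")  # URLs of the generated images
```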
45

Gonçalves, Wanderson Gonçalves e., Marcelo Henrique Paula dos Santos, Leonardo Miranda Brito, Helber Gonzales Almeida Palheta, Fábio Manoel França Lobato, Samia Demachki, Ândrea Ribeiro-dos-Santos, and Gilderlanio Santana de Araújo. "DeepHP: A New Gastric Mucosa Histopathology Dataset for Helicobacter pylori Infection Diagnosis." International Journal of Molecular Sciences 23, no. 23 (November 23, 2022): 14581. http://dx.doi.org/10.3390/ijms232314581.

Abstract:
Emerging deep learning-based applications in precision medicine include computational histopathological analysis. However, there is a lack of the training image datasets required to build classification and detection models. This phenomenon occurs mainly due to human factors that make it difficult to obtain well-annotated data. The present study provides a curated public collection of histopathological images (DeepHP) and a convolutional neural network model for diagnosing gastritis. Images from gastric biopsy histopathological exams were used to investigate the performance of the proposed model in detecting gastric mucosa with Helicobacter pylori infection. The DeepHP database comprises 394,926 histopathological images, of which 111K were labeled Helicobacter pylori positive and 283K Helicobacter pylori negative. We investigated the classification performance of three convolutional neural network architectures. The models were tested and validated with two distinct image sets of 15% (59K patches) chosen randomly. The VGG16 architecture showed the best results, with an Area Under the Curve of 0.998. The results showed that CNNs can classify histopathological images of gastric mucosa with marked precision. Our model demonstrated high potential for application in the computational pathology field.
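As a hedged illustration of the VGG16-based approach, the following PyTorch sketch fine-tunes an ImageNet-pretrained VGG16 for binary Helicobacter pylori-positive versus -negative patch classification; the frozen-feature strategy, learning rate and synthetic batch are assumptions, not the DeepHP training setup.

```python
# Sketch: fine-tuning an ImageNet-pretrained VGG16 for a binary
# histopathology patch classification task (hyperparameters are assumptions).
import torch
import torch.nn as nn
from torchvision import models

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False                  # keep convolutional features frozen
model.classifier[6] = nn.Linear(4096, 2)     # replace the final 1000-way layer

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)

patches = torch.randn(8, 3, 224, 224)        # stand-in for a batch of biopsy patches
labels = torch.randint(0, 2, (8,))           # 1 = H. pylori positive, 0 = negative
optimizer.zero_grad()
loss = criterion(model(patches), labels)
loss.backward()
optimizer.step()
```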
46

Popovici, Vlad, Aleš Křenek, and Eva Budinská. "Identification of “BRAF-Positive” Cases Based on Whole-Slide Image Analysis." BioMed Research International 2017 (2017): 1–7. http://dx.doi.org/10.1155/2017/3926498.

Abstract:
A key requirement for precision medicine is the accurate identification of patients that would respond to a specific treatment or those that represent a high-risk group, and a plethora of molecular biomarkers have been proposed for this purpose during the last decade. Their application in clinical settings, however, is not always straightforward due to relatively high costs of some tests, limited availability of the biological material and time, and procedural constraints. Hence, there is an increasing interest in constructing tissue-based surrogate biomarkers that could be applied with minimal overhead directly to histopathology images and which could be used for guiding the selection of eventual further molecular tests. In the context of colorectal cancer, we present a method for constructing a surrogate biomarker that is able to predict with high accuracy whether a sample belongs to the “BRAF-positive” group, a high-risk group comprising V600E BRAF mutants and BRAF-mutant-like tumors. Our model is trained to mimic the predictions of a 64-gene signature, the current definition of BRAF-positive group, thus effectively identifying histopathology image features that can be linked to a molecular score. Since the only required input is the routine histopathology image, the model can easily be integrated in the diagnostic workflow.
47

Ai, Shiliang, Chen Li, Xiaoyan Li, Tao Jiang, Marcin Grzegorzek, Changhao Sun, Md Mamunur Rahaman, Jinghua Zhang, Yudong Yao, and Hong Li. "A State-of-the-Art Review for Gastric Histopathology Image Analysis Approaches and Future Development." BioMed Research International 2021 (June 26, 2021): 1–19. http://dx.doi.org/10.1155/2021/6671417.

Abstract:
Gastric cancer is a common and deadly cancer in the world. The gold standard for the detection of gastric cancer is the histological examination by pathologists, where Gastric Histopathological Image Analysis (GHIA) contributes significant diagnostic information. The histopathological images of gastric cancer contain sufficient characterization information, which plays a crucial role in the diagnosis and treatment of gastric cancer. In order to improve the accuracy and objectivity of GHIA, Computer-Aided Diagnosis (CAD) has been widely used in histological image analysis of gastric cancer. In this review, the CAD technique on pathological images of gastric cancer is summarized. Firstly, the paper summarizes the image preprocessing methods, then introduces the methods of feature extraction, and then generalizes the existing segmentation and classification techniques. Finally, these techniques are systematically introduced and analyzed for the convenience of future researchers.
48

Kalra, Shivam, H. R. Tizhoosh, Charles Choi, Sultaan Shah, Phedias Diamandis, Clinton J. V. Campbell, and Liron Pantanowitz. "Yottixel – An Image Search Engine for Large Archives of Histopathology Whole Slide Images." Medical Image Analysis 65 (October 2020): 101757. http://dx.doi.org/10.1016/j.media.2020.101757.

49

Kate, Vandana, and Pragya Shukla. "Breast Cancer Image Multi-Classification Using Random Patch Aggregation and Depth-Wise Convolution based Deep-Net Model." International Journal of Online and Biomedical Engineering (iJOE) 17, no. 01 (January 19, 2021): 83. http://dx.doi.org/10.3991/ijoe.v17i01.18513.

Abstract:
Adapting deep convolutional neural network models for large-image classification can result in network architectures with a large number of learnable parameters, and tuning those parameters can considerably increase the complexity of the model. To address this problem, a convolutional Deep-Net model based on the extraction of random patches and depth-wise convolutions is proposed for training on and classification of the widely known benchmark breast cancer histopathology images. The classification results of these patches are aggregated by majority vote to decide the final image class. The proposed Deep-Net model outperforms classification based on VGG Net (16 layers) learned features in terms of accuracy when applied to breast tumor histopathology images. The objective of this work is to examine and comprehensively analyze the sub-class classification performance of the proposed model across all optical magnification levels.
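The sketch below illustrates the three ingredients named in the abstract, i.e. random patch extraction, a depth-wise separable convolution block, and majority-vote aggregation of patch predictions, in PyTorch; the patch size, channel widths and four-class head are illustrative assumptions, not the paper's Deep-Net configuration.

```python
# Sketch: random patches, depth-wise separable convolutions, majority vote.
import torch
import torch.nn as nn

def random_patches(image, patch=64, n=16):
    # image: (C, H, W); sample n random square patches of side `patch`.
    _, h, w = image.shape
    ys = torch.randint(0, h - patch + 1, (n,))
    xs = torch.randint(0, w - patch + 1, (n,))
    return torch.stack([image[:, y:y + patch, x:x + patch] for y, x in zip(ys, xs)])

class DepthwiseSeparableBlock(nn.Module):
    def __init__(self, c_in, c_out):
        super().__init__()
        self.depthwise = nn.Conv2d(c_in, c_in, 3, padding=1, groups=c_in)
        self.pointwise = nn.Conv2d(c_in, c_out, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.pointwise(self.depthwise(x)))

classifier = nn.Sequential(
    DepthwiseSeparableBlock(3, 32), nn.MaxPool2d(2),
    DepthwiseSeparableBlock(32, 64), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(64, 4),                  # e.g. four tumor sub-classes
)

patches = random_patches(torch.randn(3, 460, 700))   # one histology image
votes = classifier(patches).argmax(dim=1)            # per-patch class predictions
image_label = votes.mode().values.item()             # majority vote for the image
print(image_label)
```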
50

Ruan, Jun, Zhikui Zhu, Chenchen Wu, Guanglu Ye, Jingfan Zhou, and Junqiu Yue. "A fast and effective detection framework for whole-slide histopathology image analysis." PLOS ONE 16, no. 5 (May 12, 2021): e0251521. http://dx.doi.org/10.1371/journal.pone.0251521.

Abstract:
Pathologists generally pan, focus, zoom and scan tissue biopsies either under microscopes or on digital images for diagnosis. With the rapid development of whole-slide digital scanners for histopathology, computer-assisted digital pathology image analysis has attracted increasing clinical attention. Thus, the working style of pathologists is also beginning to change. Computer-assisted image analysis systems have been developed to help pathologists perform basic examinations. This paper presents a novel lightweight detection framework for automatic tumor detection in whole-slide histopathology images. We develop the Double Magnification Combination (DMC) classifier, which is a modified DenseNet-40 to make patch-level predictions with only 0.3 million parameters. To improve the detection performance of multiple instances, we propose an improved adaptive sampling method with superpixel segmentation and introduce a new heuristic factor, local sampling density, as the convergence condition of iterations. In postprocessing, we use a CNN model with 4 convolutional layers to regulate the patch-level predictions based on the predictions of adjacent sampling points and use linear interpolation to generate a tumor probability heatmap. The entire framework was trained and validated using the dataset from the Camelyon16 Grand Challenge and Hubei Cancer Hospital. In our experiments, the average AUC was 0.95 in the test set for pixel-level detection.
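As a small illustration of the final heatmap step, the sketch below linearly interpolates sparse patch-level tumor probabilities at sampled slide coordinates into a dense probability map with SciPy; the coordinates and scores are synthetic, and the DMC classifier and adaptive superpixel sampling stages are not reproduced here.

```python
# Sketch: turning sparse patch-level tumor probabilities at sampled slide
# coordinates into a dense heatmap via linear interpolation (synthetic data).
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
points = rng.uniform(0, 1000, size=(200, 2))         # (x, y) of sampled patches
scores = rng.uniform(0, 1, size=200)                 # patch-level tumor probabilities

xs, ys = np.mgrid[0:1000:256j, 0:1000:256j]          # dense grid over the slide region
heatmap = griddata(points, scores, (xs, ys), method="linear", fill_value=0.0)
print(heatmap.shape)                                 # (256, 256) probability map
```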