To view other types of publications on this topic, follow the link: BREAKHIS DATASET.

Journal articles on the topic "BREAKHIS DATASET"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles.

Consult the top 50 journal articles for your research on the topic "BREAKHIS DATASET."

Next to every work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, whenever these details are available in the source's metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Joshi, Shubhangi A., Anupkumar M. Bongale, P. Olof Olsson, Siddhaling Urolagin, Deepak Dharrao, and Arunkumar Bongale. "Enhanced Pre-Trained Xception Model Transfer Learned for Breast Cancer Detection." Computation 11, no. 3 (March 13, 2023): 59. http://dx.doi.org/10.3390/computation11030059.

Abstract:
Early detection and timely breast cancer treatment improve survival rates and patients’ quality of life. Hence, many computer-assisted techniques based on artificial intelligence are being introduced into the traditional diagnostic workflow. This inclusion of automatic diagnostic systems speeds up diagnosis and helps medical professionals by relieving their work pressure. This study proposes a breast cancer detection framework based on a deep convolutional neural network. To mine useful information about breast cancer through publicly available breast histopathology images at the 40× magnification factor, the BreakHis dataset and the IDC (invasive ductal carcinoma) dataset are used. Pre-trained convolutional neural network (CNN) models EfficientNetB0, ResNet50, and Xception are tested in this study. The top layers of these architectures are replaced by custom layers to make the whole architecture specific to the breast cancer detection task. The customized Xception model outperformed the other frameworks, giving an accuracy of 93.33% on the 40× images of the BreakHis dataset. The networks are trained on 70% of the BreakHis 40× histopathological images and validated on the remaining 30% as unseen testing and validation data. The histopathology image set is augmented by performing various image transforms, and dropout and batch normalization are used as regularization techniques. Further, the proposed model with the enhanced pre-trained Xception CNN is fine-tuned and tested on a part of the IDC dataset. For the IDC dataset, the training, validation, and testing percentages are 60%, 20%, and 20%, respectively. The model obtained an accuracy of 88.08% on the IDC dataset for recognizing invasive ductal carcinoma from H&E-stained histopathological tissue samples of breast tissue. Weights learned during training on the BreakHis dataset are kept the same while training the model on the IDC dataset.
Thus, this study enhances and customizes the functionality of a pre-trained model for the classification task on the BreakHis and IDC datasets, and applies a transfer learning approach to carry the designed model over to another, similar classification task.
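The stratified 70/30 train/validation split described in this abstract can be illustrated with a short, hypothetical sketch in pure Python (made-up file names, not the actual BreakHis images):

```python
import random

def stratified_split(items, labels, train_frac=0.7, seed=42):
    """Split items 70/30 while preserving each label's proportion."""
    rng = random.Random(seed)
    by_label = {}
    for item, label in zip(items, labels):
        by_label.setdefault(label, []).append(item)
    train, test = [], []
    for group in by_label.values():
        rng.shuffle(group)
        cut = int(round(len(group) * train_frac))
        train.extend(group[:cut])
        test.extend(group[cut:])
    return train, test

# Toy usage with hypothetical file names and a 40/60 class balance
files = [f"img_{i}.png" for i in range(100)]
labels = ["benign"] * 40 + ["malignant"] * 60
train, test = stratified_split(files, labels)
print(len(train), len(test))  # 70 30
```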
2

Xu, Xuebin, Meijuan An, Jiada Zhang, Wei Liu, and Longbin Lu. "A High-Precision Classification Method of Mammary Cancer Based on Improved DenseNet Driven by an Attention Mechanism." Computational and Mathematical Methods in Medicine 2022 (May 14, 2022): 1–14. http://dx.doi.org/10.1155/2022/8585036.

Abstract:
Cancer is one of the major causes of human disease and death worldwide, and mammary cancer is one of the most common cancer types among women today. In this paper, we conducted a preliminary deep learning experiment on the Breast Cancer Histopathological Database (BreakHis), an open dataset. We propose a high-precision classification method for mammary cancer based on an improved convolutional neural network evaluated on the BreakHis dataset. We designed three different MFSCNet models, which differ in the insertion positions and the number of SE modules: MFSCNet A, MFSCNet B, and MFSCNet C. Through experimental comparison on the BreakHis dataset, the MFSCNet A model obtained the best performance in the high-precision classification experiments for mammary cancer. The accuracy of binary classification was 99.05% to 99.89%, and the accuracy of multi-class classification ranged from 94.36% to approximately 98.41%. This demonstrates that MFSCNet can accurately classify mammary histological images and has great application prospects in predicting the degree of a tumor. Code will be made available at http://github.com/xiaoan-maker/MFSCNet.
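The SE (squeeze-and-excitation) modules this abstract refers to recalibrate channel responses; a generic NumPy sketch with random weights (an illustration of the standard SE block, not the authors' MFSCNet code):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(feature_map, w1, w2):
    """Squeeze-and-Excitation: recalibrate channels of an (H, W, C) map.

    w1: (C, C//r) reduction weights; w2: (C//r, C) expansion weights.
    """
    # Squeeze: global average pooling over spatial dims -> (C,)
    z = feature_map.mean(axis=(0, 1))
    # Excitation: bottleneck MLP with sigmoid gating -> per-channel scale
    s = sigmoid(np.maximum(z @ w1, 0.0) @ w2)
    # Scale: reweight each channel of the input map
    return feature_map * s

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8, 16))   # toy feature map, 16 channels
w1 = rng.standard_normal((16, 4)) * 0.1
w2 = rng.standard_normal((4, 16)) * 0.1
y = se_block(x, w1, w2)
print(y.shape)  # (8, 8, 16)
```

Each output channel is the input channel multiplied by a learned gate in (0, 1), so spatial structure is untouched while channel importance is reweighted.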
3

Ogundokun, Roseline Oluwaseun, Sanjay Misra, Akinyemi Omololu Akinrotimi, and Hasan Ogul. "MobileNet-SVM: A Lightweight Deep Transfer Learning Model to Diagnose BCH Scans for IoMT-Based Imaging Sensors." Sensors 23, no. 2 (January 6, 2023): 656. http://dx.doi.org/10.3390/s23020656.

Abstract:
Many individuals worldwide pass away as a result of inadequate procedures for prompt illness identification and subsequent treatment. A valuable life can be saved, or at least extended, with the early identification of serious illnesses such as various cancers and other life-threatening conditions. The development of the Internet of Medical Things (IoMT) has made it possible for healthcare technology to offer the general public efficient medical services and make a significant contribution to patients’ recoveries. By using IoMT to diagnose and examine BreakHis v1 400× breast cancer histology (BCH) scans, disorders may be quickly identified and appropriate treatment given to a patient. Imaging equipment capable of auto-analyzing acquired pictures can be used to achieve this. However, the majority of deep learning (DL)-based image classification approaches have a large number of parameters and are unsuitable for application in IoMT-centered imaging sensors. The goal of this study is to create a lightweight deep transfer learning (DTL) model suited for BCH scan examination with a good level of accuracy. A lightweight DTL-based model, “MobileNet-SVM”, a hybridization of MobileNet and a Support Vector Machine (SVM), for auto-classifying BreakHis v1 400× BCH images is presented. When tested against a real dataset of BreakHis v1 400× BCH images, the suggested technique achieved a training accuracy of 100% on the training dataset, and an accuracy of 91% and an F1-score of 91.35 on the test dataset. Considering how complicated BCH scans are, the findings are encouraging. The MobileNet-SVM model is well suited to IoMT imaging equipment in addition to having a high degree of precision. According to the simulation findings, the suggested model has a low computational cost.
4

Ukwuoma, Chiagoziem C., Md Altab Hossain, Jehoiada K. Jackson, Grace U. Nneji, Happy N. Monday, and Zhiguang Qin. "Multi-Classification of Breast Cancer Lesions in Histopathological Images Using DEEP_Pachi: Multiple Self-Attention Head." Diagnostics 12, no. 5 (May 5, 2022): 1152. http://dx.doi.org/10.3390/diagnostics12051152.

Abstract:
Introduction and Background: Despite fast developments in the medical field, histological diagnosis is still regarded as the benchmark in cancer diagnosis. However, the input image feature extraction used to determine the severity of cancer at various magnifications is challenging, since manual procedures are biased, time-consuming, labor-intensive, and error-prone. Current state-of-the-art deep learning approaches for breast histopathology image classification take features from entire images (generic features). Thus, they are likely to overlook essential image features in favor of unnecessary ones, resulting in incorrect diagnosis of breast histopathology imaging and leading to mortality. Methods: This discrepancy prompted us to develop DEEP_Pachi for classifying breast histopathology images at various magnifications. The suggested DEEP_Pachi collects global and regional features that are essential for effective breast histopathology image classification. The proposed model backbone is an ensemble of the DenseNet201 and VGG16 architectures. The ensemble model extracts global features (generic image information), whereas DEEP_Pachi extracts spatial information (regions of interest). The evaluation of the proposed model was performed on publicly available datasets: the BreakHis and ICIAR 2018 Challenge datasets. Results: A detailed evaluation of the proposed model’s accuracy, sensitivity, precision, specificity, and F1-score revealed the usefulness of both the backbone model and the DEEP_Pachi model for image classification. The suggested technique outperformed state-of-the-art classifiers, achieving an accuracy of 1.0 for the benign class and 0.99 for the malignant class at all magnifications of the BreakHis dataset, and an accuracy of 1.0 on the ICIAR 2018 Challenge dataset.
Conclusion: The findings were significantly resilient and suggest that the proposed system can assist experts at large medical institutions, enabling early breast cancer diagnosis and a reduction in the death rate.
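The self-attention heads this abstract mentions build on scaled dot-product attention over patch embeddings; a minimal single-head NumPy sketch (illustrative random weights, not the DEEP_Pachi implementation):

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Single-head scaled dot-product self-attention over n tokens."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])          # (n, n) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ v, weights

rng = np.random.default_rng(1)
x = rng.standard_normal((6, 32))   # 6 toy patch embeddings of size 32
wq, wk, wv = (rng.standard_normal((32, 32)) * 0.1 for _ in range(3))
out, attn = self_attention(x, wq, wk, wv)
print(out.shape, attn.shape)  # (6, 32) (6, 6)
```

Multiple heads simply run this in parallel with separate weight sets and concatenate the outputs; the attention map `attn` is what lets such models focus on regions of interest.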
5

Mohanakurup, Vinodkumar, Syam Machinathu Parambil Gangadharan, Pallavi Goel, Devvret Verma, Sameer Alshehri, Ramgopal Kashyap, and Baitullah Malakhil. "Breast Cancer Detection on Histopathological Images Using a Composite Dilated Backbone Network." Computational Intelligence and Neuroscience 2022 (July 6, 2022): 1–10. http://dx.doi.org/10.1155/2022/8517706.

Abstract:
Breast cancer is a lethal illness that has a high mortality rate. In treatment, the accuracy of diagnosis is crucial, and machine learning and deep learning may be beneficial to doctors. The backbone network is critical to the performance of CNN-based detectors, and integrating dilated convolution, ResNet, and AlexNet increases detection performance. The composite dilated backbone network (CDBN) is an innovative method for integrating many identical backbones into a single robust backbone: it feeds the high-level output features of each backbone into the next backbone in a stepwise way and uses the lead backbone’s feature maps to identify objects. We show that most contemporary detectors can easily incorporate CDBN to improve performance, achieving mAP improvements ranging from 1.5 to 3.0 percent on the breast cancer histopathological image classification (BreakHis) dataset. Experiments have also shown that instance segmentation may be improved: on the BreakHis dataset, CDBN enhances the baseline detector Cascade Mask R-CNN (mAP = 53.3). The proposed CDBN detector does not need pretraining; it creates high-level features by combining low-level elements across the several identical, linked backbones that make up the composite dilated backbone.
6

Nahid, Abdullah-Al, Mohamad Ali Mehrabi, and Yinan Kong. "Histopathological Breast Cancer Image Classification by Deep Neural Network Techniques Guided by Local Clustering." BioMed Research International 2018 (2018): 1–20. http://dx.doi.org/10.1155/2018/2362108.

Abstract:
Breast cancer is a serious threat and one of the largest causes of death of women throughout the world. The identification of cancer largely depends on digital biomedical photography analysis, such as the analysis of histopathological images, by doctors and physicians. Analyzing histopathological images is a nontrivial task, and decisions from the investigation of these kinds of images always require specialised knowledge. However, Computer-Aided Diagnosis (CAD) techniques can help the doctor make more reliable decisions. State-of-the-art Deep Neural Network (DNN) techniques have recently been introduced for biomedical image analysis. Normally each image contains structural and statistical information. This paper classifies a set of biomedical breast cancer images (the BreakHis dataset) using novel DNN techniques guided by structural and statistical information derived from the images. Specifically, a Convolutional Neural Network (CNN), a Long Short-Term Memory (LSTM) network, and a combination of CNN and LSTM are proposed for breast cancer image classification. Softmax and Support Vector Machine (SVM) layers have been used for the decision-making stage after extracting features with the proposed novel DNN models. In this experiment, the best accuracy of 91.00% is achieved on the 200× dataset, the best precision of 96.00% is achieved on the 40× dataset, and the best F-measure is achieved on both the 40× and 100× datasets.
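The accuracy, precision, and F-measure values quoted in this abstract follow the standard confusion-matrix definitions; a small pure-Python helper (a generic sketch, not the paper's evaluation code):

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall and F-measure for a binary task."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return accuracy, precision, recall, f_measure

# Toy example: 8 predictions against ground truth (1 = malignant)
acc, prec, rec, f1 = binary_metrics([1, 1, 1, 1, 0, 0, 0, 0],
                                    [1, 1, 1, 0, 0, 0, 1, 0])
print(acc, prec, rec, f1)  # 0.75 0.75 0.75 0.75
```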
7

Sun, Yixin, Lei Wu, Peng Chen, Feng Zhang, and Lifeng Xu. "Using deep learning in pathology image analysis: A novel active learning strategy based on latent representation." Electronic Research Archive 31, no. 9 (2023): 5340–61. http://dx.doi.org/10.3934/era.2023271.

Abstract:
Most countries worldwide continue to encounter a pathologist shortage, significantly impeding the timely diagnosis and effective treatment of cancer patients. Deep learning techniques have performed remarkably well in pathology image analysis; however, they require expert pathologists to annotate substantial amounts of pathology image data. This study aims to minimize the need for data annotation in analyzing pathology images. Active learning (AL) is an iterative approach that searches for a few high-quality samples to train a model. We propose an active learning framework which first learns latent representations of all pathology images with an auto-encoder to train a binary classification model, and then selects samples through a novel ALHS (Active Learning Hybrid Sampling) strategy. This strategy can effectively alleviate the sample-redundancy problem and allows more informative and diverse examples to be selected. We validate the effectiveness of our method on classification tasks over two cancer pathology image datasets. We achieve the target performance of 90% accuracy using 25% labeled samples in Kather's dataset and reach 88% accuracy using 65% labeled data in the BreakHis dataset, which means our method can save 75% and 35% of the annotation budget in the two datasets, respectively.
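The exact ALHS strategy is not spelled out in this abstract; a generic hybrid selection of the kind it describes (uncertainty shortlisting followed by diversity in the latent space) might look like this sketch (toy data, hypothetical pool size):

```python
import numpy as np

def hybrid_select(latents, probs, n_select, pool_factor=3):
    """Shortlist the most uncertain samples, then pick a diverse subset."""
    # Uncertainty: a predicted probability near 0.5 is most uncertain
    shortlist = np.argsort(np.abs(probs - 0.5))[:n_select * pool_factor]
    # Diversity: greedy farthest-point selection within the shortlist
    chosen = [int(shortlist[0])]
    while len(chosen) < n_select:
        d = np.linalg.norm(
            latents[shortlist][:, None, :] - latents[chosen][None, :, :],
            axis=-1)
        nearest = d.min(axis=1)  # each candidate's distance to chosen set
        chosen.append(int(shortlist[int(nearest.argmax())]))
    return chosen

rng = np.random.default_rng(2)
latents = rng.standard_normal((30, 2))   # toy auto-encoder latent codes
probs = rng.uniform(0, 1, 30)            # toy classifier probabilities
selected = hybrid_select(latents, probs, 5)
print(len(selected))  # 5
```

Already-chosen points have zero distance to the chosen set, so the greedy step never re-selects them and each round adds the candidate farthest from everything picked so far.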
8

Istighosah, Maie, Andi Sunyoto, and Tonny Hidayat. "Breast Cancer Detection in Histopathology Images using ResNet101 Architecture." sinkron 8, no. 4 (October 1, 2023): 2138–49. http://dx.doi.org/10.33395/sinkron.v8i4.12948.

Abstract:
Cancer is a significant challenge in many fields, especially health and medicine. Breast cancer is among the most common and frequent cancers in women worldwide. Early detection of cancer is the main step toward early treatment and increasing the chances of patient survival. As convolutional neural network methods have grown in popularity, breast cancer can be identified without the help of experts. Using BreaKHis histopathology data, this study assesses the efficacy of the CNN architecture ResNet101 for breast cancer image classification. The dataset is divided into two classes: 1146 malignant and 547 benign images. Careful data preprocessing was applied: data augmentation was performed on the benign class to balance the two classes and prevent overfitting, and since the BreaKHis dataset has noise and an uneven color distribution, approaches such as bilateral filtering, image enhancement, and color normalization were chosen to enhance image quality. Flatten, dense, and dropout layers were added to the ResNet101 architecture to improve model performance. Parameters were tuned during the training stage to achieve optimal performance: the Adam optimizer was used with a learning rate of 0.0001 and a batch size of 32, and the model was trained for 100 epochs. The accuracy, precision, recall, and f1-score are 98.7%, 98.73%, 98.7%, and 98.7%, respectively. According to these results, the proposed ResNet101 model outperforms the standard technique as well as other architectures.
9

Li, Lingxiao, Niantao Xie, and Sha Yuan. "A Federated Learning Framework for Breast Cancer Histopathological Image Classification." Electronics 11, no. 22 (November 16, 2022): 3767. http://dx.doi.org/10.3390/electronics11223767.

Abstract:
The quantity and diversity of datasets are vital to model training in a variety of medical image diagnosis applications. However, real-world scenarios present the following problems: the required data may not be available in a single institution due to the number of patients or the type of pathology, and it is often not feasible to share patient data due to medical data privacy regulations. Keeping private data safe is therefore required and has become an obstacle to fusing data from multiple parties to train a medical model. To solve these problems, we propose a federated learning framework, which achieves knowledge fusion by sharing the model parameters of each client through federated training rather than by sharing data. On the breast cancer histopathological dataset (BreakHis), our federated learning experiments achieve results similar to the performance of centralized learning, verifying the feasibility and efficiency of the proposed framework.
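The parameter-sharing scheme described here is commonly realized as FedAvg-style weighted averaging of client models; a minimal pure-Python sketch (toy parameter vectors and made-up client sizes, not the paper's model):

```python
def fed_avg(client_params, client_sizes):
    """Weighted average of per-client parameter vectors (FedAvg-style).

    Only parameters travel between clients and the server -- never images.
    """
    total = sum(client_sizes)
    n_params = len(client_params[0])
    return [
        sum(p[i] * s for p, s in zip(client_params, client_sizes)) / total
        for i in range(n_params)
    ]

# Three hypothetical hospitals with different amounts of local data
params = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [100, 100, 200]
print(fed_avg(params, sizes))  # [3.5, 4.5]
```

Each federated round, clients train locally, send updated parameters, and receive this weighted average back as the new global model.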
10

Burrai, Giovanni P., Andrea Gabrieli, Marta Polinas, Claudio Murgia, Maria Paola Becchere, Pierfranco Demontis, and Elisabetta Antuofermo. "Canine Mammary Tumor Histopathological Image Classification via Computer-Aided Pathology: An Available Dataset for Imaging Analysis." Animals 13, no. 9 (May 6, 2023): 1563. http://dx.doi.org/10.3390/ani13091563.

Abstract:
Histopathology, the gold-standard technique in classifying canine mammary tumors (CMTs), is a time-consuming process affected by high inter-observer variability. Digital pathology (DP) and computer-aided pathology (CAD) are emergent fields that will improve overall classification accuracy. In this study, the ability of CAD systems to distinguish benign from malignant CMTs was explored on a dataset (namely, CMTD) of 1056 hematoxylin and eosin JPEG images from 20 benign and 24 malignant CMTs, with three different CAD systems based on the combination of a convolutional neural network (VGG16, Inception v3, EfficientNet), which acts as a feature extractor, and a classifier (support vector machines (SVM) or stochastic gradient boosting (SGB)) placed on top of the neural net. After training on a human breast cancer dataset (i.e., BreakHis; accuracy from 0.86 to 0.91), our models were applied to the CMT dataset, showing accuracy from 0.63 to 0.85 across all architectures. The EfficientNet framework coupled with SVM achieved the best performance, with an accuracy from 0.82 to 0.85. The encouraging results obtained with DP and CAD systems in CMTs provide an interesting perspective on the integration of artificial intelligence and machine learning technologies in cancer-related research.
11

Minarno, Agus Eko, Lulita Ria Wandani, and Yufis Azhar. "Classification of Breast Cancer Based on Histopathological Image Using EfficientNet-B0 on Convolutional Neural Network." International Journal of Emerging Technology and Advanced Engineering 12, no. 8 (August 2, 2022): 70–77. http://dx.doi.org/10.46338/ijetae0822_09.

Abstract:
Breast cancer has been identified as a leading cause of cancer-related death in women. Biopsy is still the most accurate way to detect cancer cells; however, it is time-consuming, relatively expensive, and requires a pathologist. Advances in machine learning make it possible to detect and diagnose breast cancer using the histopathological images that result from a biopsy. The BreakHis dataset provides such histopathological images. This study proposes using this dataset for cancer classification based on histopathological images with EfficientNet-B0 in a Convolutional Neural Network (CNN). The purpose of this study is to improve on the performance of previous studies and to determine the effect of augmentation and dropout layers on the proposed model. This study also applies cross-validation to get more accurate results. The results showed that the EfficientNet-B0 model, combined with augmentation and dropout layers and k-fold cross-validation with k = 7, was able to improve on previous studies with an accuracy of 98.90%. The application of data augmentation and dropout-layer techniques has also been shown to increase accuracy and reduce overfitting of the proposed model.
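The k-fold cross-validation with k = 7 used in this study can be illustrated with a small index-generation helper (a generic sketch in pure Python, not the authors' code):

```python
import random

def kfold_indices(n_samples, k=7, seed=0):
    """Yield (train_idx, val_idx) index pairs for k-fold cross-validation."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    # Spread the remainder over the first folds so sizes differ by at most 1
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    start = 0
    for size in fold_sizes:
        yield idx[:start] + idx[start + size:], idx[start:start + size]
        start += size

folds = list(kfold_indices(100, k=7))
print(len(folds))        # 7
print(len(folds[0][1]))  # 15  (100 = 15 + 15 + 14*5)
```

Every sample lands in exactly one validation fold, so the k accuracy scores can be averaged into a single cross-validated estimate.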
12

Agbley, Bless Lord Y., Jianping Li, Md Altab Hossin, Grace Ugochi Nneji, Jehoiada Jackson, Happy Nkanta Monday, and Edidiong Christopher James. "Federated Learning-Based Detection of Invasive Carcinoma of No Special Type with Histopathological Images." Diagnostics 12, no. 7 (July 9, 2022): 1669. http://dx.doi.org/10.3390/diagnostics12071669.

Abstract:
Invasive carcinoma of no special type (IC-NST) is known to be one of the most prevalent kinds of breast cancer, hence the growing research interest in automated systems that can detect the presence of breast tumors and appropriately classify them into subtypes. Machine learning (ML) and, more specifically, deep learning (DL) techniques have been used to approach this problem. However, such techniques usually require massive amounts of data to obtain competitive results. This requirement makes their application in specific areas such as health problematic, as privacy concerns regarding the public release of patients’ data result in a limited number of publicly available datasets for the research community. This paper proposes an approach that leverages federated learning (FL) to securely train mathematical models over multiple clients with local IC-NST images partitioned from the breast histopathology image (BHI) dataset to obtain a global model. First, we used residual neural networks for automatic feature extraction. Then, we proposed a second network consisting of Gabor kernels to extract another set of features from the IC-NST dataset. After that, we performed a late fusion of the two sets of features and passed the output through a custom classifier. Experiments were conducted for the federated learning (FL) and centralized learning (CL) scenarios, and the results were compared. Competitive results were obtained, indicating the positive prospects of adopting FL for IC-NST detection. Additionally, fusing the Gabor features with the residual neural network features resulted in the best performance in terms of accuracy, F1 score, and area under the receiver operating characteristic curve (AUC-ROC). The models show good generalization by performing well on another domain dataset, the breast cancer histopathological (BreakHis) image dataset. Our method also outperformed other methods from the literature.
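The Gabor kernels used here as a second feature extractor follow the standard Gabor filter formula (a Gaussian envelope modulated by a sinusoid); a NumPy sketch with hypothetical parameter values, not the paper's configuration:

```python
import numpy as np

def gabor_kernel(size=15, sigma=3.0, theta=0.0, lam=6.0, psi=0.0, gamma=0.5):
    """Real-valued Gabor kernel: Gaussian envelope times a cosine wave."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)     # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + gamma ** 2 * yr ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / lam + psi)

kernel = gabor_kernel()
print(kernel.shape)  # (15, 15)
```

A bank of such kernels at several orientations (`theta`) and wavelengths (`lam`) is convolved with the image to pick out oriented texture, which is then fused with the CNN features.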
13

Mewada, Hiren K., Amit V. Patel, Mahmoud Hassaballah, Monagi H. Alkinani, and Keyur Mahant. "Spectral–Spatial Features Integrated Convolution Neural Network for Breast Cancer Classification." Sensors 20, no. 17 (August 22, 2020): 4747. http://dx.doi.org/10.3390/s20174747.

Abstract:
Cancer identification and classification from histopathological images of the breast depends greatly on experts, and computer-aided diagnosis can play an important role in cases of disagreement among experts. This automated process has increased the accuracy of classification at a reduced cost. Advances in Convolution Neural Network (CNN) structures have outperformed traditional approaches in biomedical imaging applications. One limiting factor of CNNs is that they use spatial image features only for classification, while spectral features from the transform domain have equivalent importance in complex image classification algorithms. This paper proposes a new CNN structure to classify histopathological cancer images by integrating spectral features, obtained using a multi-resolution wavelet transform, with the spatial features of the CNN. In addition, batch normalization is used after every layer in the convolution network to improve the poor convergence of CNNs, and the deep layers of the CNN are trained with spectral–spatial features. The proposed structure is tested on malignant histology images of the breast for both binary and multi-class tissue classification using the BreaKHis Dataset and the Breast Cancer Classification Challenge 2015 Dataset. Experimental results show that the combination of spectral–spatial features improves the classification accuracy of the CNN and requires fewer training parameters than well-known models (i.e., VGG16 and AlexNet). The proposed structure achieves an average accuracy of 97.58% and 97.45%, with 7.6 million training parameters, on the two datasets, respectively.
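The multi-resolution wavelet features described above can be illustrated with one level of the 2D Haar transform, the simplest wavelet (the abstract does not specify which wavelet family the authors use):

```python
import numpy as np

def haar2d(img):
    """One level of the 2D Haar wavelet: approximation + 3 detail bands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0      # low-low (approximation)
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return ll, lh, hl, hh

rng = np.random.default_rng(3)
img = rng.uniform(0, 1, (8, 8))   # toy grayscale patch
ll, lh, hl, hh = haar2d(img)
print(ll.shape)  # (4, 4)
```

Applying the transform again to `ll` gives the next resolution level; the detail bands are the spectral features that can be fed to the deep layers alongside spatial ones.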
14

Li, Xin, HongBo Li, WenSheng Cui, ZhaoHui Cai, and MeiJuan Jia. "Classification on Digital Pathological Images of Breast Cancer Based on Deep Features of Different Levels." Mathematical Problems in Engineering 2021 (December 30, 2021): 1–13. http://dx.doi.org/10.1155/2021/8403025.

Abstract:
Breast cancer is one of the primary causes of cancer death in the world and has a great impact on women’s health. Generally, the majority of classification methods rely on high-level features. However, different levels of features may not be positively correlated with the final classification results. Inspired by the recent widespread use of deep learning, this study proposes a novel method for classifying benign and malignant breast cancer based on deep features. First, we design Sliding + Random and Sliding + Class Balance Random window slicing strategies for data preprocessing. The two strategies enhance the generalization of the model and improve classification performance on minority classes. Second, feature extraction is based on the AlexNet model, and we discuss the influence of intermediate- and high-level features on classification results. Third, different levels of features are input into different machine-learning models for classification, and the best combination is chosen. The experimental results show that data preprocessing with the Sliding + Class Balance Random window slicing strategy is effective on the BreaKHis dataset, with classification accuracy ranging from 83.57% to 88.69% at different magnifications. On this basis, combining intermediate- and high-level features with an SVM gives the best classification effect, with accuracy ranging from 85.30% to 88.76% at different magnifications. Compared with the latest results of F. A. Spanhol’s team, who provide the BreaKHis data, the presented method shows better classification performance on image-level accuracy. We believe that the proposed method has promising practical value and research significance.
15

Amato, Domenico, Salvatore Calderaro, Giosué Lo Bosco, Riccardo Rizzo, and Filippo Vella. "Metric Learning in Histopathological Image Classification: Opening the Black Box." Sensors 23, no. 13 (June 28, 2023): 6003. http://dx.doi.org/10.3390/s23136003.

Abstract:
The application of machine learning techniques to histopathology images enables advances in the field, providing valuable tools that can speed up and facilitate the diagnosis process. The classification of these images is a relevant aid for physicians who have to process a large number of images in long and repetitive tasks. This work proposes the adoption of metric learning which, beyond the task of classifying images, can provide additional information able to support the decision of the classification system. In particular, triplet networks have been employed to create a representation in the embedding space that gathers together images of the same class while tending to separate images with different labels. The obtained representation shows an evident separation of the classes, with the possibility of evaluating the similarity and the dissimilarity among input images according to distance criteria. The model has been tested on the BreakHis dataset, a widely used reference dataset that collects breast cancer images with eight pathology labels and four magnification levels. Our proposed classification model achieves relevant performance at the patient level, with the advantage of providing interpretable information for the obtained results, a specific feature missed by all the recent methodologies proposed for the same purpose.
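The triplet networks mentioned here are trained with a triplet loss; in its standard hinge form (a generic sketch of the loss, not the authors' training code):

```python
import math

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge triplet loss: pull same-class pairs together, push others apart.

    Zero when the anchor is already closer to the positive than to the
    negative by at least `margin`; positive otherwise.
    """
    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    return max(0.0, dist(anchor, positive) - dist(anchor, negative) + margin)

# Anchor close to the positive, far from the negative -> zero loss
print(triplet_loss([0, 0], [0.1, 0], [5, 5]))   # 0.0
# Anchor closer to the negative -> positive loss
print(triplet_loss([0, 0], [3, 0], [1, 0]))     # 3.0
```

Minimizing this over many (anchor, positive, negative) triplets is what produces the class-separated embedding space the abstract describes.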
16

Liu, Min, Yu He, Minghu Wu, and Chunyan Zeng. "Breast Histopathological Image Classification Method Based on Autoencoder and Siamese Framework." Information 13, no. 3 (February 24, 2022): 107. http://dx.doi.org/10.3390/info13030107.

Abstract:
The automated classification of breast cancer histopathological images is one of the important tasks in computer-aided diagnosis systems (CADs). Due to the small inter-class and large intra-class variances characteristic of breast cancer histopathological images, extracting features for breast cancer classification is difficult. To address this problem, an improved autoencoder (AE) network using a Siamese framework, which can learn effective features from histopathological images for CAD breast cancer classification tasks, was designed. First, the input image is processed at multiple scales using a Gaussian pyramid to obtain multi-scale features. Second, in the feature extraction stage, a Siamese framework is used to constrain the pre-trained AE so that the extracted features have smaller intra-class variance and larger inter-class variance. Experimental results show that the proposed method’s classification accuracy was as high as 97.8% on the BreakHis dataset. Compared with algorithms commonly used in breast cancer histopathological classification, this method achieves superior and faster performance.
17

Umer, Muhammad Junaid, Muhammad Sharif, Seifedine Kadry, and Abdullah Alharbi. "Multi-Class Classification of Breast Cancer Using 6B-Net with Deep Feature Fusion and Selection Method." Journal of Personalized Medicine 12, no. 5 (April 26, 2022): 683. http://dx.doi.org/10.3390/jpm12050683.

Abstract:
Breast cancer has now overtaken lung cancer as the world’s most commonly diagnosed cancer, with thousands of new cases per year. Early detection and classification of breast cancer are necessary to reduce the death rate. Recently, many deep learning-based studies have been proposed for automatic diagnosis and classification of this deadly disease using histopathology images. This study proposes a novel solution for multi-class breast cancer classification from histopathology images using deep learning. For this purpose, a novel 6B-Net deep CNN model, with a feature fusion and selection mechanism, was developed. For the evaluation of the proposed method, two large, publicly available datasets were used: BreaKHis, with eight classes containing 7909 images, and a breast cancer histopathology dataset containing 3771 images of four classes. The proposed method achieves a multi-class average accuracy of 94.20%, with a classification training time of 226 s, for four classes of breast cancer, and a multi-class average accuracy of 90.10%, with a classification training time of 147 s, for eight classes of breast cancer. The experimental outcomes show that the proposed method achieves the highest multi-class average accuracy for breast cancer classification and hence can effectively be applied for early detection and classification of breast cancer, assisting pathologists in early and accurate diagnosis.
19

Sarker, Md Mostafa Kamal, Farhan Akram, Mohammad Alsharid, Vivek Kumar Singh, Robail Yasrab, and Eyad Elyan. "Efficient Breast Cancer Classification Network with Dual Squeeze and Excitation in Histopathological Images." Diagnostics 13, no. 1 (December 29, 2022): 103. http://dx.doi.org/10.3390/diagnostics13010103.

Full text of the source
Abstract:
Medical image analysis methods for mammograms, ultrasound, and magnetic resonance imaging (MRI) cannot provide the underlying features at the cellular level needed to understand the cancer microenvironment, which makes them unsuitable for breast cancer subtype classification studies. In this paper, we propose a convolutional neural network (CNN)-based breast cancer classification method for hematoxylin and eosin (H&E) whole slide images (WSIs). The proposed method incorporates fused mobile inverted bottleneck convolutions (FMB-Conv) and mobile inverted bottleneck convolutions (MBConv) with a dual squeeze and excitation (DSE) network to accurately classify breast cancer tissue into binary (benign and malignant) and eight subtype classes using histopathology images. For that, a pre-trained EfficientNetV2 network is used as a backbone with a modified DSE block that combines the spatial and channel-wise squeeze and excitation layers to highlight important low-level and high-level abstract features. Our method outperformed the ResNet101, InceptionResNetV2, and EfficientNetV2 networks on the publicly available BreakHis dataset for binary and multi-class breast cancer classification in terms of precision, recall, and F1-score at multiple magnification levels.
20

Chandranegara, Didih Rizki, Faras Haidar Pratama, Sidiq Fajrianur, Moch Rizky Eka Putra, and Zamah Sari. "Automated Detection of Breast Cancer Histopathology Image Using Convolutional Neural Network and Transfer Learning." MATRIK : Jurnal Manajemen, Teknik Informatika dan Rekayasa Komputer 22, no. 3 (July 3, 2023): 455–68. http://dx.doi.org/10.30812/matrik.v22i3.2803.

Full text of the source
Abstract:
Breast cancer caused 2.3 million cases and 685,000 deaths in 2020. Histopathology analysis is one of the tests used to determine a patient's prognosis. However, histopathology analysis is a time-consuming and stressful process. With advances in deep learning methods, computer vision can be used to detect cancer in medical images, which is expected to improve the accuracy of prognosis. This study aimed to apply Convolutional Neural Network (CNN) and Transfer Learning methods to classify breast cancer histopathology images in order to diagnose breast tumors. The methods used were a CNN and Transfer Learning models (Visual Geometry Group (VGG16) and Residual Network (ResNet50)). These models underwent data augmentation, with class balancing applied through undersampling. The dataset used for this study was "The BreakHis Database of microscopic biopsy images of breast tumors (benign and malignant)," with 1693 images classified into two categories: Benign and Malignant. The results of this study were based on recall, precision, and accuracy values. CNN accuracy was 94%, VGG16 accuracy was 88%, and ResNet50 accuracy was 72%. The conclusion was that the CNN method is recommended for detecting breast cancer.
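The undersampling-based class balancing mentioned above can be illustrated with a minimal sketch (the function name and seed are hypothetical; the paper's exact procedure may differ):

```python
import random

def undersample(samples, labels, seed=0):
    """Balance classes by randomly keeping only as many samples per class
    as the smallest class has (here: binary benign/malignant labels)."""
    rng = random.Random(seed)
    by_class = {}
    for s, y in zip(samples, labels):
        by_class.setdefault(y, []).append(s)
    n_min = min(len(v) for v in by_class.values())
    pairs = []
    for y, items in by_class.items():
        pairs.extend((s, y) for s in rng.sample(items, n_min))
    rng.shuffle(pairs)
    xs, ys = zip(*pairs)
    return list(xs), list(ys)

# 30 "benign" (0) vs 70 "malignant" (1) dummy image ids
xs, ys = undersample(list(range(100)), [0] * 30 + [1] * 70)
```

The trade-off of undersampling is that it discards majority-class data, which is why it is usually paired with augmentation, as in this study.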
21

Wakili, Musa Adamu, Harisu Abdullahi Shehu, Md Haidar Sharif, Md Haris Uddin Sharif, Abubakar Umar, Huseyin Kusetogullari, Ibrahim Furkan Ince, and Sahin Uyaver. "Classification of Breast Cancer Histopathological Images Using DenseNet and Transfer Learning." Computational Intelligence and Neuroscience 2022 (October 10, 2022): 1–31. http://dx.doi.org/10.1155/2022/8904768.

Full text of the source
Abstract:
Breast cancer is one of the most common invasive cancers in women. Analyzing breast cancer is nontrivial and may lead to disagreements among experts. Although deep learning methods have achieved excellent performance in classification tasks including breast cancer histopathological images, the existing state-of-the-art methods are computationally expensive and may overfit due to extracting features from in-distribution images. In this paper, our contribution is mainly twofold. First, we perform a short survey on deep-learning-based models for classifying histopathological images to investigate the most popular and optimized training-testing ratios. Our findings reveal that the most popular training-testing ratio for histopathological image classification is 70%:30%, whereas the best performance (e.g., accuracy) is achieved by using a training-testing ratio of 80%:20% on an identical dataset. Second, we propose a method named DenTnet to classify breast cancer histopathological images. DenTnet utilizes the principle of transfer learning to solve the problem of extracting features from the same distribution, using DenseNet as a backbone model. The proposed DenTnet method is shown to be superior to a number of leading deep learning methods in terms of detection accuracy (up to 99.28% on the BreaKHis dataset with a training-testing ratio of 80%:20%), with good generalization ability and computational speed. DenTnet mitigates the limitations of existing methods, including high computational cost and reliance on a single feature distribution.
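The training-testing ratios surveyed above are typically realized as a stratified split, so the test set keeps the dataset's class proportions. A minimal sketch (function and parameter names are illustrative):

```python
import random

def stratified_split(items, labels, test_frac=0.2, seed=42):
    """Shuffle and split each class separately so the test set keeps the
    class proportions of the whole dataset (an 80%:20% split here)."""
    rng = random.Random(seed)
    by_class = {}
    for x, y in zip(items, labels):
        by_class.setdefault(y, []).append(x)
    train, test = [], []
    for y, xs in by_class.items():
        xs = xs[:]
        rng.shuffle(xs)
        n_test = round(len(xs) * test_frac)
        test.extend((x, y) for x in xs[:n_test])
        train.extend((x, y) for x in xs[n_test:])
    return train, test

train, test = stratified_split(list(range(100)), [0] * 40 + [1] * 60)
```

With 40 samples of class 0 and 60 of class 1, a 20% test fraction yields 8 and 12 test samples respectively, preserving the 40:60 ratio.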
22

Alirezazadeh, Pendar, Fadi Dornaika, and Abdelmalik Moujahid. "Chasing a Better Decision Margin for Discriminative Histopathological Breast Cancer Image Classification." Electronics 12, no. 20 (October 20, 2023): 4356. http://dx.doi.org/10.3390/electronics12204356.

Full text of the source
Abstract:
When considering a large dataset of histopathologic breast images captured at various magnification levels, the process of distinguishing between benign and malignant cancer from these images can be time-intensive. The automation of histopathological breast cancer image classification holds significant promise for expediting pathology diagnoses and reducing the analysis time. Convolutional neural networks (CNNs) have recently gained traction for their ability to more accurately classify histopathological breast cancer images. CNNs excel at extracting distinctive features that emphasize semantic information. However, traditional CNNs employing the softmax loss function often struggle to achieve the necessary discriminatory power for this task. To address this challenge, a set of angular margin-based softmax loss functions have emerged, including angular softmax (A-Softmax), large margin cosine loss (CosFace), and additive angular margin (ArcFace), each sharing a common objective: maximizing inter-class variation while minimizing intra-class variation. This study delves into these three loss functions and their potential to extract distinguishing features while expanding the decision boundary between classes. Rigorous experimentation on a well-established histopathological breast cancer image dataset, BreakHis, has been conducted. As per the results, it is evident that CosFace focuses on augmenting the differences between classes, while A-Softmax and ArcFace tend to emphasize augmenting within-class variations. These observations underscore the efficacy of margin penalties on angular softmax losses in enhancing feature discrimination within the embedding space. These loss functions consistently outperform softmax-based techniques, either by widening the gaps among classes or enhancing the compactness of individual classes.
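The three margin-based losses compared above all modify the target-class cosine logit before the softmax. A numpy sketch of the CosFace and ArcFace variants (the scale s and margin m values are common defaults assumed here, not taken from the paper):

```python
import numpy as np

def margin_logits(cos_theta, y, s=30.0, m=0.35, kind="cosface"):
    """Apply a margin penalty to the target-class cosine logit.
    cos_theta: (n_classes,) cosine similarities; y: index of the true class."""
    out = s * cos_theta.astype(float)          # scaled plain softmax logits
    if kind == "cosface":                      # large margin cosine: s*(cos(theta) - m)
        out[y] = s * (cos_theta[y] - m)
    elif kind == "arcface":                    # additive angular margin: s*cos(theta + m)
        theta = np.arccos(np.clip(cos_theta[y], -1.0, 1.0))
        out[y] = s * np.cos(theta + m)
    return out

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

cos = np.array([0.8, 0.3, -0.1])               # similarity to 3 class centers
p_plain = softmax(30.0 * cos)
p_cosface = softmax(margin_logits(cos, y=0, kind="cosface"))
p_arcface = softmax(margin_logits(cos, y=0, kind="arcface"))
# Both margins shrink the true-class probability, so training must push
# features closer to their class center to recover it, which widens the
# decision boundary between classes.
```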
23

Zaalouk, Ahmed M., Gamal A. Ebrahim, Hoda K. Mohamed, Hoda Mamdouh Hassan, and Mohamed M. A. Zaalouk. "A Deep Learning Computer-Aided Diagnosis Approach for Breast Cancer." Bioengineering 9, no. 8 (August 15, 2022): 391. http://dx.doi.org/10.3390/bioengineering9080391.

Full text of the source
Abstract:
Breast cancer is a gigantic burden on humanity, causing the loss of enormous numbers of lives and amounts of money. It is the world’s leading type of cancer among women and a leading cause of mortality and morbidity. The histopathological examination of breast tissue biopsies is the gold standard for diagnosis. In this paper, a computer-aided diagnosis (CAD) system based on deep learning is developed to ease the pathologist’s mission. For this target, five pre-trained convolutional neural network (CNN) models are analyzed and tested—Xception, DenseNet201, InceptionResNetV2, VGG19, and ResNet152—with the help of data augmentation techniques, and a new approach is introduced for transfer learning. These models are trained and tested with histopathological images obtained from the BreakHis dataset. Multiple experiments are performed to analyze the performance of these models through carrying out magnification-dependent and magnification-independent binary and eight-class classifications. The Xception model has shown promising performance through achieving the highest classification accuracies for all the experiments. It has achieved a range of classification accuracies from 93.32% to 98.99% for magnification-independent experiments and from 90.22% to 100% for magnification-dependent experiments.
24

Li, Jia, Jingwen Shi, Hexing Su, and Le Gao. "Breast Cancer Histopathological Image Recognition Based on Pyramid Gray Level Co-Occurrence Matrix and Incremental Broad Learning." Electronics 11, no. 15 (July 26, 2022): 2322. http://dx.doi.org/10.3390/electronics11152322.

Full text of the source
Abstract:
In order to recognize breast cancer histopathological images, this article proposed a combined model consisting of a pyramid gray level co-occurrence matrix (PGLCM) feature extraction model and an incremental broad learning (IBL) classification model. The PGLCM model is designed to extract the fusion features of breast cancer histopathological images, which can reflect the multiresolution useful information of the images and facilitate the improvement of the classification effect in the later stage. The IBL model is used to improve the classification accuracy by increasing the number of network enhancement nodes horizontally. Unlike deep neural networks, the IBL model compresses the training and testing time cost greatly by making full use of its single-hidden-layer structure. To our knowledge, it is the first attempt for the IBL model to be introduced into the breast cancer histopathological image recognition task. The experimental results in four magnifications of the BreaKHis dataset show that the accuracy of binary classification and eight-class classification outperforms the existing algorithms. The accuracy of binary classification reaches 91.45%, 90.17%, 90.90% and 90.73%, indicating the effectiveness of the established combined model and demonstrating the advantages in breast cancer histopathological image recognition.
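The gray level co-occurrence matrix at the core of the PGLCM model counts how often pairs of gray levels co-occur at a fixed pixel offset. A minimal single-offset sketch with two classic Haralick-style features (the pyramid part of PGLCM is omitted):

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Normalized co-occurrence matrix for one offset: entry (i, j) is the
    probability that a pixel with gray level i has a neighbor with level j."""
    m = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    for r in range(max(0, -dy), h - max(0, dy)):
        for c in range(max(0, -dx), w - max(0, dx)):
            m[img[r, c], img[r + dy, c + dx]] += 1
    return m / m.sum()

def glcm_features(p):
    """Two classic Haralick-style texture features of a normalized GLCM."""
    i, j = np.indices(p.shape)
    contrast = float(((i - j) ** 2 * p).sum())
    energy = float((p ** 2).sum())
    return contrast, energy

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
p = glcm(img, levels=4)
contrast, energy = glcm_features(p)
```

Computing such statistics at several offsets and at several pyramid resolutions gives the kind of multiresolution fusion feature vector the abstract describes.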
25

Jae Lim, Myung, Da Eun Kim, Dong Kun Chung, Hoon Lim, and Young Man Kwon. "Deep Convolution Neural Networks for Medical Image Analysis." International Journal of Engineering & Technology 7, no. 3.33 (August 29, 2018): 115. http://dx.doi.org/10.14419/ijet.v7i3.33.18588.

Full text of the source
Abstract:
Breast cancer is a highly prevalent disease that has killed many people all over the world. Full recovery is possible with early detection. To enable the early detection of breast cancer, it is very important to classify accurately whether a sample is cancerous or not. Recently, deep learning approaches applied to medical images, such as histopathologic images of breast cancer, have shown higher accuracy and efficiency compared to conventional methods. In this paper, breast cancer histopathological images that are difficult to distinguish were analyzed visually. Among deep learning algorithms, the CNN (Convolutional Neural Network), specialized for images, was used to perform a comparative analysis of whether a sample is breast cancer or not. Among the CNN algorithms, VGG16 and InceptionV3 were used, and transfer learning was applied for the effective use of these algorithms. The data used in this paper is the BreakHis breast cancer histopathological image dataset, classifying images as benign or malignant. In the 2-class classification task, InceptionV3 achieved 98% accuracy. It is expected that this deep learning approach will support the development of disease diagnosis through medical images.
26

Kode, Hepseeba, and Buket D. Barkana. "Deep Learning- and Expert Knowledge-Based Feature Extraction and Performance Evaluation in Breast Histopathology Images." Cancers 15, no. 12 (June 6, 2023): 3075. http://dx.doi.org/10.3390/cancers15123075.

Full text of the source
Abstract:
Cancer develops when a single or a group of cells grows and spreads uncontrollably. Histopathology images are used in cancer diagnosis since they show tissue and cell structures under a microscope. Knowledge-based and deep learning-based computer-aided detection is an ongoing research field in cancer diagnosis using histopathology images. Feature extraction is vital in both approaches since the feature set is fed to a classifier and determines the performance. This paper evaluates three feature extraction methods and their performance in breast cancer diagnosis. Features are extracted by (1) a Convolutional Neural Network, (2) a transfer learning architecture VGG16, and (3) a knowledge-based system. The feature sets are tested by seven classifiers, including Neural Network (64 units), Random Forest, Multilayer Perceptron, Decision Tree, Support Vector Machines, K-Nearest Neighbors, and Narrow Neural Network (10 units) on the BreakHis 400× image dataset. The CNN achieved up to 85% for the Neural Network and Random Forest, the VGG16 method achieved up to 86% for the Neural Network, and the knowledge-based features achieved up to 98% for Neural Network, Random Forest, Multilayer Perceptron classifiers.
27

Leow, Jia Rong, Wee How Khoh, Ying Han Pang, and Hui Yen Yap. "Breast cancer classification with histopathological image based on machine learning." International Journal of Electrical and Computer Engineering (IJECE) 13, no. 5 (October 1, 2023): 5885. http://dx.doi.org/10.11591/ijece.v13i5.pp5885-5897.

Full text of the source
Abstract:
Breast cancer represents one of the most common causes of death worldwide. It has a substantially higher death rate than other types of cancer. Early detection can enhance the chances of receiving proper treatment and survival. To address this problem, this work provides a convolutional neural network (CNN) deep learning (DL) based classification model that may be used to differentiate breast cancer histopathology images as benign or malignant. In addition, different pre-trained CNN architectures were used to investigate the performance of the model on this problem: residual neural network-50 (ResNet-50), visual geometry group-19 (VGG-19), Inception-V3, and AlexNet, while ResNet-50 also functions as a feature extractor to retrieve information from images and pass it to machine learning algorithms; in this case, a random forest (RF) and k-nearest neighbors (KNN) are employed for classification. In this paper, experiments are done using the BreakHis public dataset. As a result, the ResNet-50 network achieved the highest test accuracy of 97% in classifying breast cancer images.
28

Tummala, Sudhakar, Jungeun Kim, and Seifedine Kadry. "BreaST-Net: Multi-Class Classification of Breast Cancer from Histopathological Images Using Ensemble of Swin Transformers." Mathematics 10, no. 21 (November 4, 2022): 4109. http://dx.doi.org/10.3390/math10214109.

Full text of the source
Abstract:
Breast cancer (BC) is one of the deadly forms of cancer, causing mortality worldwide in the female population. The standard imaging procedures for screening BC involve mammography and ultrasonography. However, these imaging procedures cannot differentiate subtypes of benign and malignant cancers. Here, histopathology images could provide better sensitivity toward benign and malignant cancer subtypes. Recently, vision transformers have been gaining attention in medical imaging due to their success in various computer vision tasks. The Swin transformer (SwinT) is a variant of the vision transformer that works on the concept of non-overlapping shifted windows and is a proven method for various vision detection tasks. Thus, in this study, we investigated the ability of an ensemble of SwinTs in the two-class classification of benign vs. malignant and the eight-class classification of four benign and four malignant subtypes, using the openly available BreaKHis dataset containing 7909 histopathology images acquired at different zoom factors of 40×, 100×, 200×, and 400×. The ensemble of SwinTs (including tiny, small, base, and large) demonstrated an average test accuracy of 96.0% for the eight-class and 99.6% for the two-class classification, outperforming all previous works. Thus, an ensemble of SwinTs could identify BC subtypes using histopathological images and may help relieve pathologists' workload.
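The ensembling step described above can be sketched as simple probability averaging across models (soft voting); this assumes the ensemble combines softmax outputs, which may differ from the paper's exact combination rule:

```python
import numpy as np

def ensemble_predict(prob_list):
    """Average the per-class probabilities of several models, then take
    the argmax per sample (soft voting)."""
    avg = np.mean(prob_list, axis=0)
    return avg, avg.argmax(axis=1)

# softmax outputs of three hypothetical models for two samples, three classes
p1 = np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]])
p2 = np.array([[0.5, 0.4, 0.1], [0.1, 0.7, 0.2]])
p3 = np.array([[0.7, 0.2, 0.1], [0.3, 0.3, 0.4]])
avg, preds = ensemble_predict([p1, p2, p3])
```

Averaging probabilities lets a confident model outvote uncertain ones on the second sample, which is why ensembles of differently sized models (tiny through large) often beat any single member.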
29

Kaplun, Dmitry, Alexander Krasichkov, Petr Chetyrbok, Nikolay Oleinikov, Anupam Garg, and Husanbir Singh Pannu. "Cancer Cell Profiling Using Image Moments and Neural Networks with Model Agnostic Explainability: A Case Study of Breast Cancer Histopathological (BreakHis) Database." Mathematics 9, no. 20 (October 17, 2021): 2616. http://dx.doi.org/10.3390/math9202616.

Full text of the source
Abstract:
With the evolution of modern digital pathology, examining cancer cell tissues has paved the way to quantify subtle symptoms, for example, by means of image staining procedures using Eosin and Hematoxylin. Cancer tissues in the case of breast and lung cancer are quite challenging to examine by manual expert analysis of patients suffering from cancer. Merely relying on the observable characteristics by histopathologists for cell profiling may under-constrain the scale and diagnostic quality due to tedious repetition with constant concentration. Thus, automatic analysis of cancer cells has been proposed with algorithmic and soft-computing techniques to leverage speed and reliability. The paper’s novelty lies in the utility of Zernike image moments to extract complex features from cancer cell images and using simple neural networks for classification, followed by explainability on the test results using the Local Interpretable Model-Agnostic Explanations (LIME) technique and Explainable Artificial Intelligence (XAI). The general workflow of the proposed high throughput strategy involves acquiring the BreakHis public dataset, which consists of microscopic images, followed by the application of image processing and machine learning techniques. The recommended technique has been mathematically substantiated and compared with the state-of-the-art to justify the empirical basis in the pursuit of our algorithmic discovery. The proposed system is able to classify malignant and benign cancer cell images of 40× resolution with 100% recognition rate. XAI interprets and reasons the test results obtained from the machine learning model, making it reliable and transparent for analysis and parameter tuning.
30

Chopra, Pooja, N. Junath, Sitesh Kumar Singh, Shakir Khan, R. Sugumar, and Mithun Bhowmick. "Cyclic GAN Model to Classify Breast Cancer Data for Pathological Healthcare Task." BioMed Research International 2022 (July 21, 2022): 1–12. http://dx.doi.org/10.1155/2022/6336700.

Full text of the source
Abstract:
An algorithm framework based on CycleGAN and an upgraded dual-path network (DPN) is suggested to address the difficulties of uneven staining in pathological images and of discriminating benign from malignant cells. CycleGAN is used for color normalization in pathological images to tackle the problem of uneven staining; however, color normalization alone does not yield an effective detection model. By overlapping the images, the DPN uses the addition of small convolutions, deconvolution, and attention mechanisms to enhance the model's ability to classify the texture features of pathological images on the BreaKHis dataset. The parameters considered for measuring the accuracy of the proposed model are the false-positive rate, false-negative rate, recall, precision, and F1 score. Several experiments are carried out over the selected parameters, such as comparing benign and malignant classification accuracy under different normalization methods, comparing image-level and patient-level accuracy using different CNN models, and correlating the correctness of the DPN68-A network with different deep learning models and other classification algorithms at all magnifications. The results thus obtained prove that the proposed DPN68-A network can effectively classify benign and malignant breast cancer pathological images at various magnifications. The proposed model is also able to better assist pathologists in diagnosing patients by synthesizing images of different magnifications in the clinical stage.
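The evaluation metrics listed above all follow directly from the four confusion-matrix counts of a binary classifier; a small self-contained sketch:

```python
def binary_metrics(tp, fp, fn, tn):
    """Precision, recall, F1, false-positive and false-negative rates
    from the four confusion-matrix counts of a binary classifier."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)        # recall = 1 - FNR
    f1 = 2 * precision * recall / (precision + recall)
    return {
        "precision": precision,
        "recall": recall,
        "f1": f1,
        "fpr": fp / (fp + tn),     # benign samples wrongly called malignant
        "fnr": fn / (fn + tp),     # malignant samples missed
    }

m = binary_metrics(tp=90, fp=10, fn=10, tn=90)
```

In a diagnostic setting the false-negative rate is usually the most critical of the five, since it counts missed malignancies.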
31

Elshafey, Mohamed Abdelmoneim, and Tarek Elsaid Ghoniemy. "A hybrid ensemble deep learning approach for reliable breast cancer detection." International Journal of Advances in Intelligent Informatics 7, no. 2 (April 19, 2021): 112. http://dx.doi.org/10.26555/ijain.v7i2.615.

Full text of the source
Abstract:
Among the cancer diseases, breast cancer is considered one of the most prevalent threats requiring early detection for a higher recovery rate. Meanwhile, the manual evaluation of malignant tissue regions in histopathology images is a critical and challenging task. Nowadays, deep learning becomes a leading technology for automatic tumor feature extraction and classification as malignant or benign. This paper presents a proposed hybrid deep learning-based approach, for reliable breast cancer detection, in three consecutive stages: 1) fine-tuning the pre-trained Xception-based classification model, 2) merging the extracted features with the predictions of a two-layer stacked LSTM-based regression model, and finally, 3) applying the support vector machine, in the classification phase, to the merged features. For the three stages of the proposed approach, training and testing phases are performed on the BreakHis dataset with nine adopted different augmentation techniques to ensure generalization of the proposed approach. A comprehensive performance evaluation of the proposed approach, with diverse metrics, shows that employing the LSTM-based regression model improves accuracy and precision metrics of the fine-tuned Xception-based model by 10.65% and 11.6%, respectively. Additionally, as a classifier, implementing the support vector machine further boosts the model by 3.43% and 5.22% for both metrics, respectively. Experimental results exploit the efficiency of the proposed approach with outstanding reliability in comparison with the recent state-of-the-art approaches.
32

Yang, Yunfeng, and Chen Guan. "Classification of histopathological images of breast cancer using an improved convolutional neural network model." Journal of X-Ray Science and Technology 30, no. 1 (January 22, 2022): 33–44. http://dx.doi.org/10.3233/xst-210982.

Full text of the source
Abstract:
The accurate automatic classification of medical pathological images has always been an important problem in the field of deep learning. However, traditional manual feature extraction and image classification usually require in-depth knowledge and highly specialized researchers to extract and compute high-quality image features. This kind of operation generally takes a lot of time, and the classification results are not ideal. In order to solve these problems, this study proposes and tests an improved network model, DenseNet-201-MSD, to accomplish the classification of medical pathological images of breast cancer. First, the image is preprocessed, and the traditional pooling layer is replaced by multiple scaling decomposition to prevent overfitting due to the large dimension of the image dataset. Second, the BN algorithm is added before the activation function Softmax, and Adam is used as the optimizer to improve the performance and image recognition accuracy of the network model. Verifying the model on the BreakHis dataset, the new deep learning model yields image classification accuracies of 99.4%, 98.8%, 98.2% and 99.4% for the four magnifications of pathological images, respectively. The study results demonstrate that this new classification method and deep learning model can effectively improve the accuracy of pathological image classification, which indicates its potential value in future clinical application.
33

Saha, Priya, Puja Das, Niharika Nath, and Mrinal Kanti Bhowmik. "Estimation of Abnormal Cell Growth and MCG-Based Discriminative Feature Analysis of Histopathological Breast Images." International Journal of Intelligent Systems 2023 (June 30, 2023): 1–12. http://dx.doi.org/10.1155/2023/6318127.

Full text of the source
Abstract:
The accurate prediction of cancer from microscopic biopsy images has always been a major challenge for medical practitioners and pathologists who manually observe the shape and structure of the cells from tissues under a microscope. Mathematical modelling of cell proliferation helps to predict tumour sizes and optimizes the treatment procedure. This paper introduces a cell growth estimation function that uncovers the growth behaviour of benign and malignant cells. To analyse the cellular level information from tissue images, we propose a minimized cellular graph (MCG) development method. The method extracts cells and produces different features that are useful in classifying benign and malignant tissues. The method’s graphical features enable a precise and timely exploration of huge amounts of data and can help in making predictions and informed decisions. This paper introduces an algorithm for constructing a minimized cellular graph which reduces the computational complexity. A comparative study is performed based on the state-of-the-art classifiers, SVM, decision tree, random forest, nearest neighbor, LDA, Naive Bayes, and ANN. The experimental data are obtained from the BreakHis dataset, which contains 2480 benign and 5429 malignant histopathological images. The proposed technique achieves a 97.7% classification accuracy which is 7% higher than that of the other graph feature-based classification methods. A comparative study reveals a performance improvement for breast cancer classification compared to the state-of-the-art techniques.
34

Hao, Yan, Li Zhang, Shichang Qiao, Yanping Bai, Rong Cheng, Hongxin Xue, Yuchao Hou, Wendong Zhang, and Guojun Zhang. "Breast cancer histopathological images classification based on deep semantic features and gray level co-occurrence matrix." PLOS ONE 17, no. 5 (May 5, 2022): e0267955. http://dx.doi.org/10.1371/journal.pone.0267955.

Full text of the source
Abstract:
Breast cancer is regarded as the leading killer of women today. The early diagnosis and treatment of breast cancer is the key to improving the survival rate of patients. A method of breast cancer histopathological images recognition based on deep semantic features and gray level co-occurrence matrix (GLCM) features is proposed in this paper. Taking the pre-trained DenseNet201 as the basic model, part of the convolutional layer features of the last dense block are extracted as the deep semantic features, which are then fused with the three-channel GLCM features, and the support vector machine (SVM) is used for classification. For the BreaKHis dataset, we explore the classification problems of magnification specific binary (MSB) classification and magnification independent binary (MIB) classification, and compared the performance with the seven baseline models of AlexNet, VGG16, ResNet50, GoogLeNet, DenseNet201, SqueezeNet and Inception-ResNet-V2. The experimental results show that the method proposed in this paper performs better than the pre-trained baseline models in MSB and MIB classification problems. The highest image-level recognition accuracy of 40×, 100×, 200×, 400× is 96.75%, 95.21%, 96.57%, and 93.15%, respectively. And the highest patient-level recognition accuracy of the four magnifications is 96.33%, 95.26%, 96.09%, and 92.99%, respectively. The image-level and patient-level recognition accuracy for MIB classification is 95.56% and 95.54%, respectively. In addition, the recognition accuracy of the method in this paper is comparable to some state-of-the-art methods.
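The fusion of deep semantic features with GLCM texture features can be sketched as normalized concatenation before the SVM; the feature dimensions and the per-block L2 normalization below are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

def fuse_features(deep_feats, texture_feats):
    """Concatenate L2-normalized deep features with handcrafted texture
    features so that neither block dominates the classifier by sheer scale."""
    def l2norm(x):
        n = np.linalg.norm(x, axis=1, keepdims=True)
        return x / np.where(n == 0, 1.0, n)
    return np.hstack([l2norm(deep_feats), l2norm(texture_feats)])

rng = np.random.default_rng(0)
deep = rng.random((4, 128))   # e.g. pooled activations from a CNN backbone
tex = rng.random((4, 12))     # e.g. GLCM statistics per color channel
fused = fuse_features(deep, tex)
```

The fused matrix (one row per image) is what a kernel SVM would then be trained on.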
35

Lee, Jiann-Shu, and Wen-Kai Wu. "Breast Tumor Tissue Image Classification Using DIU-Net." Sensors 22, no. 24 (December 14, 2022): 9838. http://dx.doi.org/10.3390/s22249838.

Full text of the source
Abstract:
Inspired by the observation that pathologists pay more attention to nuclei regions when analyzing pathological images, this study utilized soft segmentation to imitate the visual focus mechanism and proposed a new segmentation–classification joint model to achieve superior classification performance on breast cancer pathology images. Aiming at the characteristic of differently sized nuclei in pathological images, this study developed a new segmentation network with excellent cross-scale description ability called DIU-Net. To enhance the generalization ability of the segmentation network, that is, to keep it from learning only low-level features, we proposed the Complementary Color Conversion Scheme in the training phase. In addition, due to the disparity between the area of the nucleus and the background in pathology images, there is an inherent data imbalance; dice loss and focal loss were used to overcome this problem. To further strengthen classification performance, this study adopted a joint training scheme, so that the output of the classification network can be used to optimize not only the classification network itself but also the segmentation network. The model can also show the pathologist its attention area, increasing the model's interpretability. Classification performance was verified on the BreaKHis dataset. Our method obtains binary/multi-class classification accuracies of 97.24%/93.75% and 98.19%/94.43% for 200× and 400× images, respectively, outperforming existing methods.
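The dice and focal losses used above to counter nucleus/background imbalance can be written compactly; a numpy sketch on soft 1-D masks (the gamma value is a common default assumed here):

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """1 - Dice coefficient on soft masks; scores region overlap, so a tiny
    foreground class is not swamped by the many easy background pixels."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def focal_loss(pred, target, gamma=2.0, eps=1e-6):
    """Binary focal loss: down-weights already-easy pixels by (1 - p_t)^gamma."""
    p = np.clip(pred, eps, 1.0 - eps)
    pt = np.where(target == 1, p, 1.0 - p)
    return float((-((1.0 - pt) ** gamma) * np.log(pt)).mean())

target = np.array([1.0, 1.0, 0.0, 0.0])
dice_perfect = dice_loss(np.array([1.0, 1.0, 0.0, 0.0]), target)
dice_uncertain = dice_loss(np.array([0.5, 0.5, 0.5, 0.5]), target)
focal_easy = focal_loss(np.array([0.99, 0.99, 0.01, 0.01]), target)
focal_hard = focal_loss(np.array([0.6, 0.6, 0.4, 0.4]), target)
```

Dice handles the area imbalance while focal loss concentrates the gradient on hard pixels; segmentation networks often optimize a weighted sum of the two.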
36

Ashurov, Asadulla, Samia Allaoua Chelloug, Alexey Tselykh, Mohammed Saleh Ali Muthanna, Ammar Muthanna, and Mehdhar S. A. M. Al-Gaashani. "Improved Breast Cancer Classification through Combining Transfer Learning and Attention Mechanism." Life 13, no. 9 (September 21, 2023): 1945. http://dx.doi.org/10.3390/life13091945.

Full text of the source
Abstract:
Breast cancer, a leading cause of female mortality worldwide, poses a significant health challenge. Recent advancements in deep learning techniques have revolutionized breast cancer pathology by enabling accurate image classification. Various imaging methods, such as mammography, CT, MRI, ultrasound, and biopsies, aid in breast cancer detection. Computer-assisted pathological image classification is of paramount importance for breast cancer diagnosis. This study introduces a novel approach to breast cancer histopathological image classification. It leverages modified pre-trained CNN models and attention mechanisms to enhance model interpretability and robustness, emphasizing localized features and enabling accurate discrimination of complex cases. Our method involves transfer learning with deep CNN models, namely Xception, VGG16, ResNet50, MobileNet, and DenseNet121, augmented with the convolutional block attention module (CBAM). The pre-trained models are fine-tuned, and CBAM modules are incorporated at the end of the pre-trained models. The models are compared to state-of-the-art breast cancer diagnosis approaches and tested for accuracy, precision, recall, and F1 score. Confusion matrices are used to evaluate and visualize the results of the compared models and help in assessing their performance. The test accuracy rates for the attention mechanism (AM) using the Xception model on the BreakHis breast cancer dataset are encouraging at 99.2% and 99.5%. The test accuracy for DenseNet121 with AMs is 99.6%. The proposed approaches also performed better than previous approaches examined in the related studies.
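The convolutional block attention module (CBAM) referenced above refines features along channel and spatial axes. Below is a simplified NumPy sketch assuming a (C, H, W) feature map; the spatial branch's 7×7 convolution is replaced by a plain additive gate for brevity, so this is illustrative rather than a faithful CBAM implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(fmap, W1, W2):
    """CBAM channel attention: shared two-layer MLP over avg- and max-pooled
    channel descriptors. fmap: (C, H, W); W1: (C//r, C); W2: (C, C//r)."""
    avg = fmap.mean(axis=(1, 2))                  # (C,) average-pooled descriptor
    mx = fmap.max(axis=(1, 2))                    # (C,) max-pooled descriptor
    mlp = lambda v: W2 @ np.maximum(W1 @ v, 0)    # shared MLP with ReLU bottleneck
    scale = sigmoid(mlp(avg) + mlp(mx))           # channel weights in (0, 1)
    return fmap * scale[:, None, None]

def spatial_attention(fmap):
    """Simplified spatial attention: gate built from channel-wise avg and max
    (the paper's 7x7 convolution over these maps is omitted here)."""
    gate = sigmoid(fmap.mean(axis=0) + fmap.max(axis=0))
    return fmap * gate[None, :, :]
```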
37

Alqahtani, Yahya, Umakant Mandawkar, Aditi Sharma, Mohammad Najmus Saquib Hasan, Mrunalini Harish Kulkarni, and R. Sugumar. "Breast Cancer Pathological Image Classification Based on the Multiscale CNN Squeeze Model." Computational Intelligence and Neuroscience 2022 (August 29, 2022): 1–11. http://dx.doi.org/10.1155/2022/7075408.

Full text of the source
Abstract:
The use of an automatic histopathological image identification system is essential for expediting diagnoses and lowering mistake rates. Although it is of enormous clinical importance, computerized breast cancer multiclassification using histological images has rarely been investigated. A deep learning-based classification strategy is suggested to solve the challenge of automated categorization of breast cancer pathology images. The channel recalibration model is an attention model that acts on the feature channels: the learned channel weights can be used to suppress superfluous features, and recalibration of the channels is necessary to increase classification accuracy. To increase the accuracy of channel recalibration, a multiscale channel recalibration model is provided and the msSE-ResNet convolutional neural network is built. The multiscale features flow through the network's highest pooling layer; the channel weights obtained at different scales are fused and used as input to the next channel recalibration model, which improves the results of channel recalibration. The experimental findings reveal that the spatial recalibration model, although suited to tasks such as semantic segmentation of brain MRI images, fares poorly on the task of classifying breast cancer pathology images. The public BreakHis dataset is used to conduct the experiment. According to the experimental data, the network classifies benign/malignant breast pathology images collected at various magnifications with an accuracy of 88.87 percent and is robust on the pathological images. Experiments on pathological images at various magnifications show that msSE-ResNet34 performs well when used to classify pathological images at various magnifications.
38

Burçak, Kadir Can, and Harun Uğuz. "A New Hybrid Breast Cancer Diagnosis Model Using Deep Learning Model and ReliefF." Traitement du Signal 39, no. 2 (April 30, 2022): 521–29. http://dx.doi.org/10.18280/ts.390214.

Full text of the source
Abstract:
Breast cancer is a dangerous type of cancer usually found in women and is a significant research topic in medical science. In patients who are diagnosed and not treated early, cancer spreads to other organs, making treatment difficult. In breast cancer diagnosis, the accuracy of the pathological diagnosis is of great importance to shorten the decision-making process, minimize unnoticed cancer cells and obtain a faster diagnosis. However, the similarity of images in histopathological breast cancer image analysis is a sensitive and difficult process that requires high competence for field experts. In recent years, researchers have been seeking solutions to this process using machine learning and deep learning methods, which have contributed to significant developments in medical diagnosis and image analysis. In this study, a hybrid DCNN + ReliefF is proposed for the classification of breast cancer histopathological images, utilizing the activation properties of pre-trained deep convolutional neural network (DCNN) models, and the dimension-reduction-based ReliefF feature selective algorithm. The model is based on a fine-tuned transfer-learning technique for fully connected layers. In addition, the models were compared to the k-nearest neighbor (kNN), naive Bayes (NB), and support vector machine (SVM) machine learning approaches. The performance of each feature extractor and classifier combination was analyzed using the sensitivity, precision, F1-Score, and ROC curves. The proposed hybrid model was trained separately at different magnifications using the BreakHis dataset. The results show that the model is an efficient classification model with up to 97.8% (AUC) accuracy.
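The ReliefF feature-selection step above can be illustrated with a simplified Relief variant (single nearest neighbour, two classes) in NumPy; the paper's exact configuration is not specified here, so `relieff` below is a hedged sketch of the idea, not the authors' code:

```python
import numpy as np

def relieff(X, y, n_iters=None):
    """Simplified ReliefF (k=1 neighbours, two classes): weight features by how
    well they separate each sample's nearest miss from its nearest hit."""
    n, d = X.shape
    span = X.max(axis=0) - X.min(axis=0) + 1e-12   # normalise per-feature diffs
    w = np.zeros(d)
    idx = range(n) if n_iters is None else np.random.default_rng(0).choice(n, n_iters)
    for i in idx:
        diff = np.abs(X - X[i]) / span
        dist = diff.sum(axis=1)
        dist[i] = np.inf                            # exclude the sample itself
        same, other = (y == y[i]), (y != y[i])
        hit = np.argmin(np.where(same, dist, np.inf))    # nearest same-class sample
        miss = np.argmin(np.where(other, dist, np.inf))  # nearest other-class sample
        w += diff[miss] - diff[hit]                 # reward miss-separating features
    return w / n
```

Features with the highest weights would then be kept before training the downstream classifier.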
39

Asare, Sarpong Kwadwo, Fei You, and Obed Tettey Nartey. "A Semisupervised Learning Scheme with Self-Paced Learning for Classifying Breast Cancer Histopathological Images." Computational Intelligence and Neuroscience 2020 (December 8, 2020): 1–16. http://dx.doi.org/10.1155/2020/8826568.

Full text of the source
Abstract:
The unavailability of large amounts of well-labeled data poses a significant challenge in many medical imaging tasks. Even in the likelihood of having access to sufficient data, the process of accurately labeling the data is an arduous and time-consuming one, requiring expertise skills. Again, the issue of unbalanced data further compounds the abovementioned problems and presents a considerable challenge for many machine learning algorithms. In lieu of this, the ability to develop algorithms that can exploit large amounts of unlabeled data together with a small amount of labeled data, while demonstrating robustness to data imbalance, can offer promising prospects in building highly efficient classifiers. This work proposes a semisupervised learning method that integrates self-training and self-paced learning to generate and select pseudolabeled samples for classifying breast cancer histopathological images. A novel pseudolabel generation and selection algorithm is introduced in the learning scheme to generate and select highly confident pseudolabeled samples from both well-represented classes to less-represented classes. Such a learning approach improves the performance by jointly learning a model and optimizing the generation of pseudolabels on unlabeled-target data to augment the training data and retraining the model with the generated labels. A class balancing framework that normalizes the class-wise confidence scores is also proposed to prevent the model from ignoring samples from less represented classes (hard-to-learn samples), hence effectively handling the issue of data imbalance. Extensive experimental evaluation of the proposed method on the BreakHis dataset demonstrates the effectiveness of the proposed method.
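Class-balanced pseudolabel selection of the kind described, where class-wise confidence handling keeps less-represented classes in play, might look like the following NumPy sketch. The per-class top-k rule and the `frac` parameter are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def select_pseudolabels(probs, frac=0.2):
    """Pick the most confident unlabeled samples per *predicted* class.
    Selecting class-wise (rather than globally) keeps rare classes represented."""
    preds = probs.argmax(axis=1)          # pseudolabels
    conf = probs.max(axis=1)              # confidence of each pseudolabel
    selected = []
    for c in np.unique(preds):
        members = np.flatnonzero(preds == c)
        k = max(1, int(np.ceil(frac * len(members))))
        top = members[np.argsort(conf[members])[::-1][:k]]   # top-k of this class
        selected.extend(top.tolist())
    return np.array(sorted(selected)), preds
```

The selected samples and their pseudolabels would be added to the training set and the model retrained, with `frac` raised over self-paced rounds.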
40

Tangsakul, Surasak, and Sartra Wongthanavasu. "Deep Cellular Automata-Based Feature Extraction for Classification of the Breast Cancer Image." Applied Sciences 13, no. 10 (May 15, 2023): 6081. http://dx.doi.org/10.3390/app13106081.

Full text of the source
Abstract:
Feature extraction is an important step in classification. It directly results in an improvement of classification performance. Recent successes of convolutional neural networks (CNN) have revolutionized image classification in computer vision. The outstanding convolution layer of a CNN performs feature extraction to obtain promising features from images. However, it faces the overfitting problem and computational complexity due to the complicated structure of the convolution layer and deep computation, making this research problem challenging. This paper proposes a novel deep feature extraction method based on a cellular automata (CA) model for image classification. It is established on the basis of a deep learning approach and multilayer CA, with two main processes. First, in the feature extraction process, a multilayer CA with rules is built as the deep feature extraction model, based on CA theory. The model aims at extracting multilayer features, called feature matrices, from images. These feature matrices are then used to generate score matrices for the deep feature model trained by the CA rules. Second, in the decision process, the score matrices are flattened and fed into the fully connected layer of an artificial neural network (ANN) for classification. For performance evaluation, the proposed method is empirically tested on BreaKHis, a popular public breast cancer image dataset used in several promising studies, in comparison with the state-of-the-art methods. The experimental results show that the proposed method achieves better results, with up to a 7.95% improvement on average over the state-of-the-art methods.
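A multilayer CA feature extractor in the spirit of the above might binarize the image at several thresholds and evolve each layer under a simple totalistic rule. The majority rule and threshold choices below are illustrative assumptions, not the paper's learned CA rules:

```python
import numpy as np

def ca_step(grid):
    """One totalistic CA step: a cell turns on iff at least 5 of its 9-cell
    neighbourhood (itself included) are on, a smoothing 'majority' rule."""
    p = np.pad(grid, 1)
    h, w = grid.shape
    neigh = sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))
    return (neigh >= 5).astype(np.uint8)

def ca_features(gray, thresholds=(64, 128, 192), steps=3):
    """Multilayer CA features: one evolved binary layer per threshold."""
    layers = []
    for t in thresholds:
        g = (gray >= t).astype(np.uint8)
        for _ in range(steps):
            g = ca_step(g)
        layers.append(g)
    return np.stack(layers)    # (len(thresholds), H, W) feature matrices
```

The stacked layers would be flattened and fed to a fully connected classifier, as the abstract describes.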
41

Boumaraf, Said, Xiabi Liu, Yuchai Wan, Zhongshu Zheng, Chokri Ferkous, Xiaohong Ma, Zhuo Li, and Dalal Bardou. "Conventional Machine Learning versus Deep Learning for Magnification Dependent Histopathological Breast Cancer Image Classification: A Comparative Study with Visual Explanation." Diagnostics 11, no. 3 (March 16, 2021): 528. http://dx.doi.org/10.3390/diagnostics11030528.

Full text of the source
Abstract:
Breast cancer is a serious threat to women. Many machine learning-based computer-aided diagnosis (CAD) methods have been proposed for the early diagnosis of breast cancer based on histopathological images. Even though many such classification methods achieved high accuracy, many of them lack the explanation of the classification process. In this paper, we compare the performance of conventional machine learning (CML) against deep learning (DL)-based methods. We also provide a visual interpretation for the task of classifying breast cancer in histopathological images. For CML-based methods, we extract a set of handcrafted features using three feature extractors and fuse them to get image representation that would act as an input to train five classical classifiers. For DL-based methods, we adopt the transfer learning approach to the well-known VGG-19 deep learning architecture, where its pre-trained version on the large scale ImageNet, is block-wise fine-tuned on histopathological images. The evaluation of the proposed methods is carried out on the publicly available BreaKHis dataset for the magnification dependent classification of benign and malignant breast cancer and their eight sub-classes, and a further validation on KIMIA Path960, a magnification-free histopathological dataset with 20 image classes, is also performed. After providing the classification results of CML and DL methods, and to better explain the difference in the classification performance, we visualize the learned features. For the DL-based method, we intuitively visualize the areas of interest of the best fine-tuned deep neural networks using attention maps to explain the decision-making process and improve the clinical interpretability of the proposed models. The visual explanation can inherently improve the pathologist’s trust in automated DL methods as a credible and trustworthy support tool for breast cancer diagnosis. 
The achieved results show that DL methods outperform CML approaches: with DL we reached an accuracy between 94.05% and 98.13% for the binary classification and between 76.77% and 88.95% for the eight-class classification, while for CML approaches, the accuracies range from 85.65% to 89.32% for the binary classification and from 63.55% to 69.69% for the eight-class classification.
42

Wang, Jiatong, Tiantian Zhu, Shan Liang, R. Karthiga, K. Narasimhan, and V. Elamaran. "Binary and Multiclass Classification of Histopathological Images Using Machine Learning Techniques." Journal of Medical Imaging and Health Informatics 10, no. 9 (August 1, 2020): 2252–58. http://dx.doi.org/10.1166/jmihi.2020.3124.

Full text of the source
Abstract:
Background and Objective: Breast cancer is a fairly common and widespread form of cancer among women. Digital mammograms, thermal images of the breast, and digital histopathological images serve as major tools for the diagnosis and grading of cancer. In this paper, an automated system for the diagnosis and grading of cancer is developed using image analysis and machine learning algorithms. Methods: The BreaKHis dataset is employed for the present work, where images are available at different magnification factors, namely 40×, 100×, 200×, and 400×; the 200× magnification factor is utilized here. Accurate preprocessing and precise segmentation of nuclei in histopathology images are necessary prerequisites for building an automated system. In this work, 103 benign and 103 malignant images are used. Initially, the color image is converted to grayscale and Otsu thresholding is applied, followed by top-hat and bottom-hat transforms in the preprocessing stage. The threshold value is selected based on the Ridler and Calvard algorithm, and the extended-minima transform and median filtering are applied in further preprocessing steps. For segmentation of nuclei, the distance transform and watershed are used. Finally, for feature extraction, two different methods are explored. Result: In binary classification, benign versus malignant classification is performed with a highest accuracy rate of 89.7% using an ensemble bagged tree classifier. In multiclass classification, five classes are considered, namely adenosis, fibroadenoma, tubular adenoma, mucinous carcinoma, and papillary carcinoma; this classification achieves an accuracy of 88.1% using an ensemble subspace discriminant classifier. To the best of the authors' knowledge, this is the first such attempt at binary and multiclass classification of these histopathology images.
Conclusion: Using ensemble bagged tree and ensemble subspace discriminant classifiers, the proposed method is efficient and outperforms state-of-the-art methods in the literature.
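Otsu thresholding, the first preprocessing step above, picks the gray level that maximises the between-class variance of the resulting foreground/background split. A self-contained NumPy sketch (the later steps, Ridler and Calvard thresholding, extended minima, and watershed, are not reproduced here):

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: return the threshold maximising between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # probability of class 0 up to t
    mu = np.cumsum(p * np.arange(256))      # cumulative mean up to t
    mu_t = mu[-1]                           # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0.0    # empty classes contribute nothing
    return int(np.argmax(sigma_b))
```

Pixels above the returned threshold would be treated as foreground in the subsequent morphological steps.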
43

Jakkaladiki, Sudha Prathyusha, and Filip Maly. "An efficient transfer learning based cross model classification (TLBCM) technique for the prediction of breast cancer." PeerJ Computer Science 9 (March 21, 2023): e1281. http://dx.doi.org/10.7717/peerj-cs.1281.

Full text of the source
Abstract:
Breast cancer has been the most life-threatening disease in women in the last few decades. The high mortality rate among women due to breast cancer results from low awareness and a minimal number of medical facilities able to detect the disease in its early stages. In the recent era, the situation has changed with the help of many technological advancements and medical equipment for observing breast cancer development. Machine learning techniques such as support vector machines (SVM), logistic regression, and random forests have been used to analyze images of cancer cells on different datasets. Although particular techniques have performed well on smaller datasets, accuracy on most data still falls short of what is required for real-world medical environments. In the proposed research, state-of-the-art deep learning techniques, namely transfer learning based cross model classification (TLBCM) built on convolutional neural networks (CNN), residual networks (ResNet), and DenseNet, are proposed for efficient prediction of breast cancer with a minimized error rate. The convolutional neural network and transfer learning are the most prominent techniques for learning the main features in the dataset. Sensitive data are protected using a cyber-physical system (CPS) while the images travel over the network: the CPS acts as a virtual connection between humans and networks, and the data are monitored by the CPS while in transit. ResNet transforms the data across many layers without compromising the minimal error rate, and DenseNet mitigates the vanishing-gradient problem. The experiments are carried out on the Breast Cancer Wisconsin (Diagnostic) dataset and the Breast Cancer Histopathological Dataset (BreakHis). The convolutional neural network with transfer learning achieved a validation accuracy of 98.3%.
The results of these proposed methods show the highest classification rate between the benign and the malignant data. The proposed method improves the efficiency and speed of classification, which is more convenient for discovering breast cancer in earlier stages than the previously proposed methodologies.
44

Clement, David, Emmanuel Agu, Muhammad A. Suleiman, John Obayemi, Steve Adeshina, and Wole Soboyejo. "Multi-Class Breast Cancer Histopathological Image Classification Using Multi-Scale Pooled Image Feature Representation (MPIFR) and One-Versus-One Support Vector Machines." Applied Sciences 13, no. 1 (December 22, 2022): 156. http://dx.doi.org/10.3390/app13010156.

Full text of the source
Abstract:
Breast cancer (BC) is currently the most common form of cancer diagnosed worldwide, with an incidence estimated at 2.26 million in 2020. Additionally, BC is the leading cause of cancer death. Many subtypes of breast cancer exist, with distinct biological features, which respond differently to various treatment modalities and have different clinical outcomes. To ensure that sufferers receive lifesaving, patient-tailored treatment early, it is crucial to accurately distinguish the dangerous malignant subtypes of tumors (ductal carcinoma, lobular carcinoma, mucinous carcinoma, and papillary carcinoma) from the harmless benign subtypes (adenosis, fibroadenoma, phyllodes tumor, and tubular adenoma). An excellent automated method for detecting malignant subtypes of tumors is desirable, since doctors do not identify 10% to 30% of breast cancers during regular examinations. While several computerized methods for breast cancer classification have been proposed, deep convolutional neural networks (DCNNs) have demonstrated superior performance. In this work, we proposed an ensemble of four variants of DCNNs combined with a support vector machine classifier to classify breast cancer histopathological images into eight subtype classes: four benign and four malignant. The proposed method utilizes the power of DCNNs to extract a highly predictive multi-scale pooled image feature representation (MPIFR) from four resolutions (40×, 100×, 200×, and 400×) of BC images, which is then classified using SVM. Eight pre-trained DCNN architectures (Inceptionv3, InceptionResNetv2, ResNet18, ResNet50, DenseNet201, EfficientNetb0, ShuffleNet, and SqueezeNet) were individually trained, and an ensemble of the four best-performing models (ResNet50, ResNet18, DenseNet201, and EfficientNetb0) was utilized for feature extraction. One-versus-one SVM classification was then utilized to model an 8-class breast cancer image classifier.
Our work is novel because while some prior work has utilized CNNs for 2- and 4-class breast cancer classification, only one other prior work proposed a solution for 8-class BC histopathological image classification. A 6B-Net deep CNN model was utilized, achieving an accuracy of 90% for 8-class BC classification. In rigorous evaluation, the proposed MPIFR method achieved an average accuracy of 97.77%, with 97.48% sensitivity, and 98.45% precision on the BreakHis histopathological BC image dataset, outperforming the prior state-of-the-art for histopathological breast cancer multi-class classification and a comprehensive set of DCNN baseline models.
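The one-versus-one SVM stage above reduces the 8-class problem to pairwise decisions combined by voting. Below is a sketch of just the voting step, with the pairwise classifier outputs supplied as arrays; training the SVMs themselves is out of scope here, and the data layout is an illustrative assumption:

```python
import numpy as np

def ovo_vote(pairwise_preds, n_classes):
    """Combine one-versus-one decisions by majority vote.
    pairwise_preds maps each class pair (i, j) with i < j to an array of
    per-sample winners, each drawn from {i, j}."""
    n = len(next(iter(pairwise_preds.values())))
    votes = np.zeros((n, n_classes), dtype=int)
    for (i, j), winners in pairwise_preds.items():
        for s, w in enumerate(winners):
            votes[s, w] += 1          # the pairwise winner gets one vote
    return votes.argmax(axis=1)       # class with most votes per sample
```

With 8 classes, 28 pairwise classifiers would contribute votes per sample.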
45

Clement, David, Emmanuel Agu, John Obayemi, Steve Adeshina, and Wole Soboyejo. "Breast Cancer Tumor Classification Using a Bag of Deep Multi-Resolution Convolutional Features." Informatics 9, no. 4 (October 28, 2022): 91. http://dx.doi.org/10.3390/informatics9040091.

Full text of the source
Abstract:
Breast cancer accounts for 30% of all female cancers. Accurately distinguishing dangerous malignant tumors from benign harmless ones is key to ensuring patients receive lifesaving treatments on time. However, as doctors currently do not identify 10% to 30% of breast cancers during regular assessment, automated methods to detect malignant tumors are desirable. Although several computerized methods for breast cancer classification have been proposed, convolutional neural networks (CNNs) have demonstrably outperformed other approaches. In this paper, we propose an automated method for the binary classification of breast cancer tumors as either malignant or benign that utilizes a bag of deep multi-resolution convolutional features (BoDMCF) extracted from histopathological images at four resolutions (40×, 100×, 200× and 400×) by three pre-trained state-of-the-art deep CNN models: ResNet-50, EfficientNetb0, and Inception-v3. The BoDMCF extracted by the pre-trained CNNs were pooled using global average pooling and classified using the support vector machine (SVM) classifier. While some prior work has utilized CNNs for breast cancer classification, it did not explore using CNNs to extract and pool a bag of deep multi-resolution features. Other prior work utilized CNNs for deep multi-resolution feature extraction from chest X-ray radiographs to detect conditions such as pneumoconiosis, but not for breast cancer detection from histopathological images. In rigorous evaluation experiments, our deep BoDMCF feature approach with global pooling achieved an average accuracy of 99.92%, sensitivity (recall) of 0.9987, specificity of 0.9797, positive predictive value (PPV, i.e., precision) of 0.9987, F1-Score of 0.9987, MCC of 0.9980, Kappa of 0.8368, and AUC of 0.9990 on the publicly available BreaKHis breast cancer image dataset.
The proposed approach outperforms the prior state of the art for histopathological breast cancer classification as well as a comprehensive set of CNN baselines, including ResNet18, InceptionV3, DenseNet201, EfficientNetb0, SqueezeNet, and ShuffleNet, when classifying images at any individual resolutions (40×, 100×, 200× or 400×) or when SVM is used to classify a BoDMCF extracted using any single pre-trained CNN model. We also demonstrate through a carefully constructed set of experiments that each component of our approach contributes non-trivially to its superior performance including transfer learning (pre-training and fine-tuning), deep feature extraction at multiple resolutions, global pooling of deep multiresolution features into a powerful BoDMCF representation, and classification using SVM.
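Building the BoDMCF, global-average-pooling each backbone's feature map and concatenating the results, is compact in NumPy. The maps below are random stand-ins for the ResNet-50, EfficientNetb0, and Inception-v3 outputs at different resolutions; real feature extraction would of course run the pre-trained CNNs:

```python
import numpy as np

def bag_of_pooled_features(feature_maps):
    """Global-average-pool each (C, H, W) feature map (one per backbone and/or
    resolution) and concatenate into a single descriptor for the SVM."""
    return np.concatenate([f.mean(axis=(1, 2)) for f in feature_maps])
```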
46

Lu, Shida, Kai Huang, Talha Meraj, and Hafiz Tayyab Rauf. "A novel CAPTCHA solver framework using deep skipping Convolutional Neural Networks." PeerJ Computer Science 8 (April 6, 2022): e879. http://dx.doi.org/10.7717/peerj-cs.879.

Full text of the source
Abstract:
A Completely Automated Public Turing Test to tell Computers and Humans Apart (CAPTCHA) is used in web systems for secure authentication; it may be broken using Optical Character Recognition (OCR)-type methods. CAPTCHA breakers make web systems highly insecure. However, techniques for breaking CAPTCHAs also suggest to CAPTCHA designers that their designs need improvement to withstand computer vision-based malicious attacks. This research primarily used deep learning methods to break state-of-the-art CAPTCHA codes; however, existing validation schemes and conventional Convolutional Neural Network (CNN) designs still need more rigorous validation and feature schemes that cover multiple aspects. Several public datasets of text-based CAPTCHAs are available, including Kaggle and other dataset repositories, and CAPTCHA datasets can also be self-generated. Previous studies are dataset-specific and do not perform well on other CAPTCHAs. Therefore, the proposed study uses two publicly available datasets of 4- and 5-character text-based CAPTCHA images to propose a CAPTCHA solver. Furthermore, the proposed study used a skip-connection-based CNN model to solve CAPTCHAs. The proposed research employed 5-fold validation, delivering 10 different CNN models on the two datasets, with promising results compared to other studies.
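The skip connections the study relies on can be reduced to a toy residual block: the input bypasses the learned transform and is added back before the final activation. A NumPy sketch with dense layers standing in for convolutions, illustrative only:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0)

def residual_block(x, W1, W2):
    """Skip connection: the input bypasses two learned transforms and is added
    back before the final activation, easing gradient flow in deep networks."""
    h = relu(W1 @ x)
    return relu(W2 @ h + x)    # identity shortcut added to the transformed path
```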
47

Tao, Ran, Zhaoya Gong, Qiwei Ma, and Jean-Claude Thill. "Boosting Computational Effectiveness in Big Spatial Flow Data Analysis with Intelligent Data Reduction." ISPRS International Journal of Geo-Information 9, no. 5 (May 6, 2020): 299. http://dx.doi.org/10.3390/ijgi9050299.

Full text of the source
Abstract:
One of the enduring issues of spatial origin-destination (OD) flow data analysis is the computational inefficiency or even the impossibility to handle large datasets. Despite the recent advancements in high performance computing (HPC) and the ready availability of powerful computing infrastructure, we argue that the best solutions are based on a thorough understanding of the fundamental properties of the data. This paper focuses on overcoming the computational challenge through data reduction that intelligently takes advantage of the heavy-tailed distributional property of most flow datasets. We specifically propose the classification technique of head/tail breaks to this end. We test this approach with representative algorithms from three common method families, namely flowAMOEBA from flow clustering, Louvain from network community detection, and PageRank from network centrality algorithms. A variety of flow datasets are adopted for the experiments, including inter-city travel flows, cellphone call flows, and synthetic flows. We propose a standard evaluation framework to evaluate the applicability of not only the selected three algorithms, but any given method in a systematic way. The results prove that head/tail breaks can significantly improve the computational capability and efficiency of flow data analyses while preserving result quality, on condition that the analysis emphasizes the “head” part of the dataset or the flows with high absolute values. We recommend considering this easy-to-implement data reduction technique before analyzing a large flow dataset.
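Head/tail breaks itself is a short algorithm: split the data at the mean, keep the above-mean "head", and recurse while the head remains a minority of the current set. A NumPy sketch; the 40% head-ratio stopping rule is one common convention and an assumption here:

```python
import numpy as np

def head_tail_breaks(values, head_ratio=0.4):
    """Head/tail breaks for heavy-tailed data: each break is the mean of the
    current subset; recursion continues into the above-mean 'head' while that
    head stays a minority (<= head_ratio) of the subset."""
    breaks = []
    part = np.asarray(values, dtype=float)
    while len(part) > 1:
        m = part.mean()
        head = part[part > m]
        if len(head) == 0 or len(head) / len(part) > head_ratio:
            break                       # head no longer a clear minority: stop
        breaks.append(m)
        part = head                     # recurse into the head only
    return breaks
```

For flow-data reduction as in the paper, the analysis would then keep only the flows above one of the upper breaks.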
48

Tang, Yansong, Xingyu Liu, Xumin Yu, Danyang Zhang, Jiwen Lu, and Jie Zhou. "Learning from Temporal Spatial Cubism for Cross-Dataset Skeleton-based Action Recognition." ACM Transactions on Multimedia Computing, Communications, and Applications 18, no. 2 (May 31, 2022): 1–24. http://dx.doi.org/10.1145/3472722.

Full text of the source
Abstract:
Rapid progress and superior performance have been achieved for skeleton-based action recognition recently. In this article, we investigate this problem under a cross-dataset setting, which is a new, pragmatic, and challenging task in real-world scenarios. Following the unsupervised domain adaptation (UDA) paradigm, the action labels are only available on a source dataset, but unavailable on a target dataset in the training stage. Different from the conventional adversarial learning-based approaches for UDA, we utilize a self-supervision scheme to reduce the domain shift between two skeleton-based action datasets. Our inspiration is drawn from Cubism, an art genre from the early 20th century, which breaks and reassembles the objects to convey a greater context. By segmenting and permuting temporal segments or human body parts, we design two self-supervised learning classification tasks to explore the temporal and spatial dependency of a skeleton-based action and improve the generalization ability of the model. We conduct experiments on six datasets for skeleton-based action recognition, including three large-scale datasets (NTU RGB+D, PKU-MMD, and Kinetics) where new cross-dataset settings and benchmarks are established. Extensive results demonstrate that our method outperforms state-of-the-art approaches. The source codes of our model and all the compared methods are available at https://github.com/shanice-l/st-cubism.
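The temporal "Cubism" pretext task above, segmenting a sequence and permuting the segments so a self-supervised classifier must recover the order, can be sketched as follows; the segment count and the way the permutation is returned are illustrative choices:

```python
import numpy as np

def permute_segments(seq, n_segments, rng):
    """Temporal 'Cubism' pretext task: split a skeleton sequence of shape
    (T, ...) into segments, shuffle them, and return the permuted clip plus
    the permutation a self-supervised classifier must recover."""
    parts = np.array_split(seq, n_segments)
    order = rng.permutation(n_segments)
    return np.concatenate([parts[i] for i in order]), order
```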
49

Isthigosah, Maie, Andi Sunyoto, and Tonny Hidayat. "Image Augmentation for BreaKHis Medical Data using Convolutional Neural Networks." sinkron 8, no. 4 (October 1, 2023): 2381–92. http://dx.doi.org/10.33395/sinkron.v8i4.12878.

Full text of the source
Abstract:
When applying Convolutional Neural Networks (CNN) to computer vision tasks in the medical domain, sufficient data are needed to train models with high accuracy and a good ability to generalize when identifying important patterns in medical data; with limited data, models overfit. This overfitting is exacerbated by data imbalance, where some classes may have a smaller sample size than others, leading to biased predictions. The purpose of augmentation is to create variation in the training data, which in turn can help reduce overfitting and increase the model's ability to generalize. Therefore, comparing augmentation techniques is essential to assess and understand the relative effectiveness of each method in addressing the challenges of overfitting and data imbalance in the medical domain. In the context of the research described, namely a comparative analysis of augmentation performance on CNN models using the ResNet101 architecture, a comparison of augmentation techniques such as the Image Generator, SMOTE, and ADASYN provides insight into which technique is most suitable for improving model performance on limited medical data. By comparing these techniques' accuracy, recall, and overall performance, the research can identify the most effective and relevant techniques for addressing the challenges of complex medical datasets. This provides a valuable guide for developing better CNN models in the future and may encourage further research into more innovative augmentation methods suitable for the medical domain.
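SMOTE, one of the compared techniques, synthesises minority-class samples by interpolating toward nearest minority-class neighbours. A minimal NumPy sketch of the core idea; real pipelines would typically use a library implementation such as imbalanced-learn:

```python
import numpy as np

def smote(X_minority, n_new, k=5, rng=None):
    """SMOTE sketch: create synthetic minority samples by interpolating between
    a random minority sample and one of its k nearest minority neighbours."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(X_minority)
    d2 = ((X_minority[:, None, :] - X_minority[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)                 # a point is not its own neighbour
    nbrs = np.argsort(d2, axis=1)[:, :k]         # k nearest neighbours per sample
    new = []
    for _ in range(n_new):
        i = rng.integers(n)
        j = nbrs[i, rng.integers(min(k, n - 1))]
        lam = rng.random()                       # interpolation factor in [0, 1)
        new.append(X_minority[i] + lam * (X_minority[j] - X_minority[i]))
    return np.array(new)
```

ADASYN works similarly but allocates more synthetic samples to harder-to-learn minority points.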
50

Laporte, Matias, Martin Gjoreski, and Marc Langheinrich. "LAUREATE." Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 7, no. 3 (September 27, 2023): 1–41. http://dx.doi.org/10.1145/3610892.

Full text of the source
Abstract:
The latest developments in wearable sensors have resulted in a wide range of devices available to consumers, allowing users to monitor and improve their physical activity, sleep patterns, cognitive load, and stress levels. However, the lack of out-of-the-lab labelled data hinders the development of advanced machine learning models for predicting affective states. Furthermore, to the best of our knowledge, there are no publicly available datasets in the area of Human Memory Augmentation. This paper presents a dataset we collected during a 13-week study in a university setting. The dataset, named LAUREATE, contains the physiological data of 42 students during 26 classes (including exams), daily self-reports asking the students about their lifestyle habits (e.g., studying hours, physical activity, and sleep quality), and their performance across multiple examinations. In addition to the raw data, we provide expert features from the physiological data, baseline machine learning models for estimating self-reported affect, models for recognising classes vs. breaks, and models for user identification. Besides the use cases presented in this paper, among which is Human Memory Augmentation, the dataset represents a rich resource for the UbiComp community in various domains, including affect recognition, behaviour modelling, user privacy, and activity and context recognition.
