Journal articles on the topic 'Deep learning segmentation'

Consult the top 50 journal articles for your research on the topic 'Deep learning segmentation.'

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Noori, Amani Y., Shaimaa H. Shaker, and Raghad Abdulaali Azeez. "Semantic Segmentation of Urban Street Scenes Using Deep Learning." Webology 19, no. 1 (January 20, 2022): 2294–306. http://dx.doi.org/10.14704/web/v19i1/web19156.

Full text
Abstract:
Scene classification is an essential perception task used by robots to understand their environment. Outdoor scenes such as urban streets comprise images with depth and far greater variety than iconic object images. Semantic segmentation is an important task for autonomous driving and mobile robotics applications because it provides the rich information needed for safe navigation and complex reasoning. This paper introduces a model that classifies every pixel of an image and predicts the object to which each pixel belongs. The model adapts the well-known VGG16 image-classification network into a fully convolutional network (FCN-8) and transfers the learned representation by fine-tuning it for segmentation. Skip connections are added between layers to combine coarse semantic information with local appearance information and generate accurate segmentations. The model is robust and efficient, consuming little memory and offering fast inference when trained and tested on the CamVid dataset. The experiments were run on a computer equipped with an NVIDIA GeForce RTX 2060 GPU with 6 GB of memory and programmed in Python 3.7. The proposed system reached an accuracy of 0.8804 and a mean IoU (MIoU) of 73% on the CamVid dataset.
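The accuracy and mean IoU figures reported above are standard semantic-segmentation metrics; a minimal NumPy sketch of how they are computed from per-pixel label maps (the arrays and class count here are illustrative, not the paper's data):

```python
import numpy as np

def pixel_accuracy(pred, gt):
    """Fraction of pixels whose predicted class matches the ground truth."""
    return np.mean(pred == gt)

def mean_iou(pred, gt, num_classes):
    """Mean intersection-over-union, averaged over classes present in gt or pred."""
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (gt == c))
        union = np.sum((pred == c) | (gt == c))
        if union > 0:                      # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2x3 label maps with 2 classes
gt   = np.array([[0, 0, 1], [1, 1, 0]])
pred = np.array([[0, 1, 1], [1, 1, 0]])
acc = pixel_accuracy(pred, gt)   # 5 of 6 pixels agree
```

In practice these metrics are evaluated over the whole test set, not a single image, and MIoU penalizes rare classes much more than plain pixel accuracy does.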
APA, Harvard, Vancouver, ISO, and other styles
2

AL-Oudat, Mohammad, Mohammad Azzeh, Hazem Qattous, Ahmad Altamimi, and Saleh Alomari. "Image Segmentation based Deep Learning for Biliary Tree Diagnosis." Webology 19, no. 1 (January 20, 2022): 1834–49. http://dx.doi.org/10.14704/web/v19i1/web19123.

Full text
Abstract:
Dilation of the biliary tree can be an indicator of several conditions such as stones, tumors, benign strictures, and, in some cases, cancer. The dilation can have many causes, including gallstones, inflammation of the bile ducts, trauma, injury, and severe liver damage. Automatic measurement of the biliary tree in magnetic resonance images (MRI) can assist hepatobiliary surgeons in minimally invasive surgery. In this paper, we propose a model that segments the biliary tree in MRI images using a fully convolutional network (FCN). From the extracted area, seven features are computed: entropy, standard deviation, RMS, kurtosis, skewness, energy, and maximum. A database of 800 MRI images from King Hussein Medical Center (KHMC) is used in this work: 400 cases with a normal biliary tree and 400 with a dilated biliary tree, labeled by surgeons. Once the features are extracted, four classifiers (multi-layer perceptron neural network, support vector machine, k-NN, and decision tree) are applied to predict the status of the patient’s biliary tree (normal or dilated). All classifiers except the support vector machine show high accuracy in terms of area under the curve. The contributions of this work are a fully convolutional network for biliary tree segmentation and a correlation of the extracted features with biliary tree status (normal or dilated), which had not previously been investigated in the literature for MRI-based biliary tree assessment.
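The seven features named in the abstract are first-order statistics of the segmented region's intensities. A hedged sketch of how such features might be computed (the histogram bin count and the entropy/kurtosis conventions are assumptions, not the authors' implementation):

```python
import numpy as np

def region_features(intensities):
    """Seven first-order statistics of the pixel intensities in a segmented
    region: entropy, standard deviation, RMS, kurtosis, skewness, energy, max."""
    x = np.asarray(intensities, dtype=float)
    mu, sd = x.mean(), x.std()
    # Shannon entropy of a 16-bin intensity histogram (bin count is arbitrary)
    counts, _ = np.histogram(x, bins=16)
    p = counts[counts > 0] / x.size
    entropy = -np.sum(p * np.log2(p))
    rms = np.sqrt(np.mean(x ** 2))
    kurtosis = np.mean(((x - mu) / sd) ** 4)   # non-excess convention
    skewness = np.mean(((x - mu) / sd) ** 3)
    energy = np.sum(x ** 2)
    return {"entropy": entropy, "std": sd, "rms": rms, "kurtosis": kurtosis,
            "skewness": skewness, "energy": energy, "max": x.max()}
```

These feature vectors would then be fed to the four classifiers the abstract lists (MLP, SVM, k-NN, decision tree) to predict normal versus dilated.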
APA, Harvard, Vancouver, ISO, and other styles
3

Sri, S. Vinitha, and S. P. Kavya. "Lung Segmentation Using Deep Learning." Asian Journal of Applied Science and Technology 05, no. 02 (2021): 10–19. http://dx.doi.org/10.38177/ajast.2021.5202.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Vogt, Nina. "Neuron segmentation with deep learning." Nature Methods 16, no. 6 (May 30, 2019): 460. http://dx.doi.org/10.1038/s41592-019-0450-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Park, Hyun-Cheol, Raman Ghimire, Sahadev Poudel, and Sang-Woong Lee. "Deep Learning for Joint Classification and Segmentation of Histopathology Image." Journal of Internet Technology 23, no. 4 (July 2022): 903–10. http://dx.doi.org/10.53106/160792642022072304025.

Full text
Abstract:
Liver cancer is one of the most prevalent causes of cancer death worldwide, so early detection and diagnosis help reduce mortality. Histopathological image analysis (HIA) has traditionally been carried out manually, which is time-consuming and requires expert knowledge. We propose a patch-based deep learning method for liver cell classification and segmentation: a two-step approach to classifying and segmenting whole-slide images (WSIs). Since WSIs are too large to be fed into convolutional neural networks (CNNs) directly, we first extract patches from them. For segmentation, the patches and their corresponding masks are fed into a modified version of U-Net for precise segmentation. For classification, the WSIs are downscaled by factors of 4, 16, and 64, and patches extracted at each scale are fed into the convolutional network with their corresponding labels. During inference, we perform majority voting on the results obtained from the network. The proposed method demonstrates better results in both classification and segmentation of liver cancer cells.
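The patch-extraction and majority-voting steps described above can be sketched in a few lines (a simplified illustration; the patch size and toy labels are assumptions, not the authors' pipeline):

```python
import numpy as np

def extract_patches(image, patch=4):
    """Tile a large image into non-overlapping patch x patch tiles
    (WSIs are too big for a CNN, so they are processed patchwise)."""
    h, w = image.shape[:2]
    return [image[r:r + patch, c:c + patch]
            for r in range(0, h - patch + 1, patch)
            for c in range(0, w - patch + 1, patch)]

def majority_vote(patch_labels):
    """Slide-level label = most frequent label among its patch predictions."""
    values, counts = np.unique(np.asarray(patch_labels), return_counts=True)
    return values[np.argmax(counts)]

# Toy 8x8 "slide" split into four 4x4 patches
slide = np.zeros((8, 8))
patches = extract_patches(slide, patch=4)
label = majority_vote([1, 0, 1, 1])
```

In a real WSI pipeline the per-patch labels would come from the CNN's predictions at each scale rather than being supplied by hand.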
APA, Harvard, Vancouver, ISO, and other styles
6

Weishaupt, L. L., T. Vuong, A. Thibodeau-Antonacci, A. Garant, K. S. Singh, C. Miller, A. Martin, and S. Enger. "A121 QUANTIFYING INTER-OBSERVER VARIABILITY IN THE SEGMENTATION OF RECTAL TUMORS IN ENDOSCOPY IMAGES AND ITS EFFECTS ON DEEP LEARNING." Journal of the Canadian Association of Gastroenterology 5, Supplement_1 (February 21, 2022): 140–42. http://dx.doi.org/10.1093/jcag/gwab049.120.

Full text
Abstract:
Abstract Background Tumor delineation in endoscopy images is a crucial part of clinical diagnoses and treatment planning for rectal cancer patients. However, it is challenging to detect and adequately determine the size of tumors in these images, especially for inexperienced clinicians. This motivates the need for a standardized, automated segmentation method. While deep learning has proven to be a powerful tool for medical image segmentation, it requires a large quantity of high-quality annotated training data. Since the annotation of endoscopy images is prone to high inter-observer variability, creating a robust unbiased deep learning model for this task is challenging. Aims To quantify the inter-observer variability in the manual segmentation of tumors in endoscopy images of rectal cancer patients and investigate an automated approach using deep learning. Methods Three gastrointestinal physicians and radiation oncologists (G1, G2, and G3) segmented 2833 endoscopy images into tumor and non-tumor regions. The whole image classifications and the pixelwise classifications into tumor and non-tumor were compared to quantify the inter-observer variability. Each manual annotator is from a different institution. Three different deep learning architectures (FCN32, U-Net, and SegNet) were trained on the binary contours created by G2. This naive approach investigates the effectiveness of neglecting any information about the uncertainty associated with the task of tumor delineation. Finally, segmentations from G2 and the deep learning models’ predictions were compared against ground truth labels from G1 and G3, and accuracy, sensitivity, specificity, precision, and F1 scores were computed for images where both segmentations contained tumors. Results The deep-learning segmentation took less than 1 second, while manual segmentation took approximately 10 seconds per image. 
There was significant inter-observer variability for the whole-image classifications made by the manual annotators (Figure 1A). The segmentation scores achieved by the deep learning models (SegNet F1: 0.80±0.08) were comparable to the inter-observer variability for the pixel-wise image classification (Figure 1B). Conclusions The large inter-observer variability observed in this study indicates a need for an automated segmentation tool for tumors in endoscopy images of rectal cancer patients. While deep learning models trained on a single observer’s labels can segment tumors with an accuracy similar to the inter-observer variability, these models do not accurately reflect the intrinsic uncertainty associated with tumor delineation. In our ongoing studies, we investigate training a model with all observers’ contours to reflect the uncertainty associated with the tumor segmentations. Funding Agencies: CIHR, NSERC
APA, Harvard, Vancouver, ISO, and other styles
7

Iwaszenko, Sebastian, and Leokadia Róg. "Application of Deep Learning in Petrographic Coal Images Segmentation." Minerals 11, no. 11 (November 13, 2021): 1265. http://dx.doi.org/10.3390/min11111265.

Full text
Abstract:
The study of the petrographic structure of medium- and high-rank coals is important from both a cognitive and a utilitarian point of view. The petrographic constituents and their individual characteristics and features are responsible for the properties of coal and the way it behaves in various technological processes. This paper considers the application of convolutional neural networks to coal petrographic image segmentation. A U-Net-based segmentation model is proposed and trained to segment inertinite, liptinite, and vitrinite, with segmentations prepared manually by a domain expert serving as ground truth. The results show that inertinite and vitrinite can be successfully segmented with minimal difference from the ground truth. Liptinite turned out to be much more difficult to segment; after applying transfer learning, moderate results were obtained. Nevertheless, the application of the U-Net-based network to petrographic image segmentation was successful, and the results are good enough to consider the method a supporting tool for domain experts in everyday work.
APA, Harvard, Vancouver, ISO, and other styles
8

Yang, Zi, Mingli Chen, Mahdieh Kazemimoghadam, Lin Ma, Strahinja Stojadinovic, Robert Timmerman, Tu Dan, Zabi Wardak, Weiguo Lu, and Xuejun Gu. "Deep-learning and radiomics ensemble classifier for false positive reduction in brain metastases segmentation." Physics in Medicine & Biology 67, no. 2 (January 19, 2022): 025004. http://dx.doi.org/10.1088/1361-6560/ac4667.

Full text
Abstract:
Abstract Stereotactic radiosurgery (SRS) is now the standard of care for brain metastases (BMs) patients. The SRS treatment planning process requires precise target delineation, which in clinical workflow for patients with multiple (>4) BMs (mBMs) could become a pronounced time bottleneck. Our group has developed an automated BMs segmentation platform to assist in this process. The accuracy of the auto-segmentation, however, is influenced by the presence of false-positive segmentations, mainly caused by the injected contrast during MRI acquisition. To address this problem and further improve the segmentation performance, a deep-learning and radiomics ensemble classifier was developed to reduce the false-positive rate in segmentations. The proposed model consists of a Siamese network and a radiomic-based support vector machine (SVM) classifier. The 2D-based Siamese network contains a pair of parallel feature extractors with shared weights followed by a single classifier. This architecture is designed to identify the inter-class difference. On the other hand, the SVM model takes the radiomic features extracted from 3D segmentation volumes as the input for twofold classification, either a false-positive segmentation or a true BM. Lastly, the outputs from both models create an ensemble to generate the final label. The performance of the proposed model in the segmented mBMs testing dataset reached the accuracy (ACC), sensitivity (SEN), specificity (SPE) and area under the curve of 0.91, 0.96, 0.90 and 0.93, respectively. After integrating the proposed model into the original segmentation platform, the average segmentation false negative rate (FNR) and the false positive over the union (FPoU) were 0.13 and 0.09, respectively, which preserved the initial FNR (0.07) and significantly improved the FPoU (0.55). 
The proposed method effectively reduced the false-positive rate in the BMs raw segmentations indicating that the integration of the proposed ensemble classifier into the BMs segmentation platform provides a beneficial tool for mBMs SRS management.
APA, Harvard, Vancouver, ISO, and other styles
9

Xue, Jie, Bao Wang, Yang Ming, Xuejun Liu, Zekun Jiang, Chengwei Wang, Xiyu Liu, et al. "Deep learning–based detection and segmentation-assisted management of brain metastases." Neuro-Oncology 22, no. 4 (December 23, 2019): 505–14. http://dx.doi.org/10.1093/neuonc/noz234.

Full text
Abstract:
Abstract Background Three-dimensional T1 magnetization prepared rapid acquisition gradient echo (3D-T1-MPRAGE) is preferred in detecting brain metastases (BM) among MRI. We developed an automatic deep learning–based detection and segmentation method for BM (named BMDS net) on 3D-T1-MPRAGE images and evaluated its performance. Methods The BMDS net is a cascaded 3D fully convolution network (FCN) to automatically detect and segment BM. In total, 1652 patients with 3D-T1-MPRAGE images from 3 hospitals (n = 1201, 231, and 220, respectively) were retrospectively included. Manual segmentations were obtained by a neuroradiologist and a radiation oncologist in a consensus reading in 3D-T1-MPRAGE images. Sensitivity, specificity, and dice ratio of the segmentation were evaluated. Specificity and sensitivity measure the fractions of relevant segmented voxels. Dice ratio was used to quantitatively measure the overlap between automatic and manual segmentation results. Paired samples t-tests and analysis of variance were employed for statistical analysis. Results The BMDS net can detect all BM, providing a detection result with an accuracy of 100%. Automatic segmentations correlated strongly with manual segmentations through 4-fold cross-validation of the dataset with 1201 patients: the sensitivity was 0.96 ± 0.03 (range, 0.84–0.99), the specificity was 0.99 ± 0.0002 (range, 0.99–1.00), and the dice ratio was 0.85 ± 0.08 (range, 0.62–0.95) for total tumor volume. Similar performances on the other 2 datasets also demonstrate the robustness of BMDS net in correctly detecting and segmenting BM in various settings. Conclusions The BMDS net yields accurate detection and segmentation of BM automatically and could assist stereotactic radiotherapy management for diagnosis, therapy planning, and follow-up.
APA, Harvard, Vancouver, ISO, and other styles
10

Napte, Kiran, and Anurag Mahajan. "Deep Learning based Liver Segmentation: A Review." Revue d'Intelligence Artificielle 36, no. 6 (December 31, 2022): 979–84. http://dx.doi.org/10.18280/ria.360620.

Full text
Abstract:
Tremendous advances are taking place in medical science, making it possible to support diagnosis and treatment planning for various diseases of the abdominal organs. The liver, one of these abdominal organs, is a common site for tumors, and liver disease is one of the leading causes of death. Owing to the liver’s complex and heterogeneous nature and shape, segmenting the liver and its tumors is challenging. Numerous liver segmentation methods exist: handcrafted, semi-automatic, and fully automatic. Image segmentation using deep learning has become a very robust tool, and many liver segmentation methods now rely on it. This article surveys liver segmentation schemes based on artificial neural networks (ANN), convolutional neural networks (CNN), deep belief networks (DBN), autoencoders, deep feed-forward neural networks (DFNN), etc., covering architecture details, methodology, performance metrics, and dataset details. Researchers are continuously working to improve these techniques, so this article provides a comprehensive review of deep learning-based liver segmentation and highlights the advantages of deep learning segmentation schemes over traditional segmentation techniques.
APA, Harvard, Vancouver, ISO, and other styles
11

Hussain, Dildar, Rizwan Ali Naqvi, Woong-Kee Loh, and Jooyoung Lee. "Deep Learning in DXA Image Segmentation." Computers, Materials & Continua 66, no. 3 (2021): 2587–98. http://dx.doi.org/10.32604/cmc.2021.013031.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Akiyama, T. S., J. Marcato Junior, W. N. Gonçalves, P. O. Bressan, A. Eltner, F. Binder, and T. Singer. "DEEP LEARNING APPLIED TO WATER SEGMENTATION." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B2-2020 (August 14, 2020): 1189–93. http://dx.doi.org/10.5194/isprs-archives-xliii-b2-2020-1189-2020.

Full text
Abstract:
Abstract. The use of deep learning (DL) with convolutional neural networks (CNN) to monitor surface water can be a valuable supplement to costly and labour-intense standard gauging stations. This paper presents the application of a recent CNN semantic segmentation method (SegNet) to automatically segment river water in imagery acquired by RGB sensors. This approach can be used as a new supporting tool because there are only a few studies using DL techniques to monitor water resources. The study area is a medium-scale river (Wesenitz) located in the East of Germany. The captured images reflect different periods of the day over a period of approximately 50 days, allowing for the analysis of the river in different environmental conditions and situations. In the experiments, we evaluated the input image resolutions of 256 × 256 and 512 × 512 pixels to assess their influence on the performance of river segmentation. The performance of the CNN was measured with the pixel accuracy and IoU metrics revealing an accuracy of 98% and 97%, respectively, for both resolutions, indicating that our approach is efficient to segment water in RGB imagery.
APA, Harvard, Vancouver, ISO, and other styles
13

Benbrahim Ansari, Oussama. "Geo-Marketing Segmentation with Deep Learning." Businesses 1, no. 1 (June 16, 2021): 51–71. http://dx.doi.org/10.3390/businesses1010005.

Full text
Abstract:
Spatial clustering is a fundamental instrument in modern geo-marketing. The complexity of handling of high-dimensional and geo-referenced data in the context of distribution networks imposes important challenges for marketers to catch the right customer segments with useful pattern similarities. The increasing availability of the geo-referenced data also places more pressure on the existing geo-marketing methods and makes it more difficult to detect hidden or non-linear relationships between the variables. In recent years, artificial neural networks have been established in different disciplines such as engineering, medical diagnosis, or finance, to solve complex problems due to their high performance and accuracy. The purpose of this paper is to perform a market segmentation by using unsupervised deep learning with self-organizing maps in the B2B industrial automation market across the United States. The results of this study demonstrate a high clustering performance (4 × 4 neurons) as well as a significant dimensionality reduction by using self-organizing maps. The high level of visualization of the maps out of the initially unorganized data set allows a comprehensive interpretation of the different clusters and patterns across space. The centroids of the clusters have been identified as footprints for assigning new marketing channels to ensure a better market coverage.
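A self-organizing map of the kind used above can be trained with a short NumPy loop; the grid size, learning rate, neighbourhood width, and toy data below are illustrative, not the study's configuration:

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=20, lr=0.5, sigma=1.0, seed=0):
    """Train a small self-organizing map: each grid node holds a weight vector
    that is pulled toward samples for which it (or a grid neighbour) wins."""
    rng = np.random.default_rng(seed)
    n_nodes = grid[0] * grid[1]
    weights = rng.random((n_nodes, data.shape[1]))
    # (row, col) coordinate of every node, for the neighbourhood function
    coords = np.array([(i, j) for i in range(grid[0])
                              for j in range(grid[1])], dtype=float)
    for _ in range(epochs):
        for x in data:
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best-matching unit
            d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))                    # Gaussian neighbourhood
            weights += lr * h[:, None] * (x - weights)
    return weights

def assign_cluster(weights, x):
    """Map a sample to the index of its best-matching node (its segment)."""
    return int(np.argmin(np.linalg.norm(weights - x, axis=1)))

# Two well-separated toy customer groups in a 2-D feature space
data = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.0]])
w = train_som(data)
```

Production SOMs typically decay `lr` and `sigma` over time; the fixed values here keep the sketch short. Each node's final weight vector acts as a cluster centroid, which is how the study's "footprints" for marketing channels arise.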
APA, Harvard, Vancouver, ISO, and other styles
14

Junior, Gerivan Santos, Janderson Ferreira, Cristian Millán-Arias, Ramiro Daniel, Alberto Casado Junior, and Bruno J. T. Fernandes. "Ceramic Cracks Segmentation with Deep Learning." Applied Sciences 11, no. 13 (June 28, 2021): 6017. http://dx.doi.org/10.3390/app11136017.

Full text
Abstract:
Cracks are pathologies whose appearance in ceramic tiles can cause various kinds of damage, since the coating system loses its water-tightness and impermeability. Moreover, a detached ceramic plate not only exposes the building structure but can also strike people moving around the building. Manual inspection is the most common method for addressing this problem; however, it depends on the knowledge and experience of the inspector and demands a long time and a high cost to map the entire area. This work focuses on automated optical inspection to find faults in ceramic tiles, using deep learning to segment cracks in ceramic images. We propose a deep learning architecture for segmenting cracks in facades that includes an image pre-processing step, and we introduce the Ceramic Crack Database, a set of images for segmenting defects in ceramic tiles. The proposed model can adequately identify a crack even when it is close to or within the grout.
APA, Harvard, Vancouver, ISO, and other styles
15

V., Pattabiraman, and Harshit Singh. "Deep Learning based Brain Tumour Segmentation." WSEAS TRANSACTIONS ON COMPUTERS 19 (January 4, 2021): 234–41. http://dx.doi.org/10.37394/23205.2020.19.29.

Full text
Abstract:
Artificial intelligence has changed our outlook on the world and is regularly used to make sense of the data and information that surround us in everyday life. One application of artificial intelligence in real-world scenarios is extracting information from images and interpreting it in different ways, including object detection, image segmentation, and image restoration. While every technique has its own area of application, image segmentation in particular has uses ranging from the complex medical field to routine pattern identification. The aim of this paper is to study several FCNN-based semantic segmentation techniques and develop a deep learning model that can segment tumors in brain MRI images with a high degree of precision and accuracy. We experiment with several architectures and several loss functions, including newer ones such as the Dice loss, hierarchical Dice loss, and cross-entropy, to improve the accuracy of our model and obtain the best model for the task.
APA, Harvard, Vancouver, ISO, and other styles
16

Kurama, Vihar, Samhita Alla, and Rohith Vishnu K. "Image Semantic Segmentation Using Deep Learning." International Journal of Image, Graphics and Signal Processing 10, no. 12 (December 8, 2018): 1–10. http://dx.doi.org/10.5815/ijigsp.2018.12.01.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Sardar, Mousumi, Subhashis Banerjee, and Sushmita Mitra. "Iris Segmentation Using Interactive Deep Learning." IEEE Access 8 (2020): 219322–30. http://dx.doi.org/10.1109/access.2020.3041519.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Pandian, Asha, Bala Gopi Sai Kumar, Venkata Revanth, and Kanipakam Sai Kiran. "Brain Tumor Segmentation Using Deep Learning." Journal of Computational and Theoretical Nanoscience 17, no. 8 (August 1, 2020): 3648–52. http://dx.doi.org/10.1166/jctn.2020.9247.

Full text
Abstract:
Manual detection of brain tumors for cancer diagnosis from MRI images is a difficult, tedious, and time-consuming task. The accuracy and robustness of brain tumor detection are therefore important for diagnosis, treatment planning, and treatment outcome evaluation. Most automatic brain tumor detection methods use hand-crafted features, and conventional deep learning approaches such as standard neural networks require a large amount of annotated data to learn from, which is often hard to obtain in the clinical domain. Here, we describe a new two-pathway group CNN (convolutional neural network) architecture for brain tumor detection that exploits local features and global contextual features simultaneously. The model enforces equivariance in the two-pathway CNN through parameter sharing to reduce the risk of overfitting. Finally, we embed a cascade architecture into the two-pathway group CNN, in which the output of a basic CNN is treated as an additional source and concatenated at the last layer. Validation of the model on the BRATS2013 and BRATS2015 datasets revealed that embedding a group CNN into a two-pathway architecture improved overall performance over the previously published state of the art while keeping computational complexity attractive.
APA, Harvard, Vancouver, ISO, and other styles
19

Zhang, Xiaoqing. "Melanoma segmentation based on deep learning." Computer Assisted Surgery 22, sup1 (October 18, 2017): 267–77. http://dx.doi.org/10.1080/24699322.2017.1389405.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Singh, Abhineet, Hayden Kalke, Mark Loewen, and Nilanjan Ray. "River Ice Segmentation With Deep Learning." IEEE Transactions on Geoscience and Remote Sensing 58, no. 11 (November 2020): 7570–79. http://dx.doi.org/10.1109/tgrs.2020.2981082.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Pekala, M., N. Joshi, T. Y. Alvin Liu, N. M. Bressler, D. Cabrera DeBuc, and P. Burlina. "Deep learning based retinal OCT segmentation." Computers in Biology and Medicine 114 (November 2019): 103445. http://dx.doi.org/10.1016/j.compbiomed.2019.103445.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Srinivas, Kalyanapu, and B. R. S. Reddy. "Deep Learning based CNN Optimization Model for MR Brain Image Segmentation." Journal of Advanced Research in Dynamical and Control Systems 11, no. 11 (November 20, 2019): 213–20. http://dx.doi.org/10.5373/jardcs/v11i11/20193190.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Hu, Guangdong, Fengyuan Qian, Longgui Sha, and Zilong Wei. "Application of Deep Learning Technology in Glioma." Journal of Healthcare Engineering 2022 (February 18, 2022): 1–9. http://dx.doi.org/10.1155/2022/8507773.

Full text
Abstract:
Glioma is one of the most common primary brain tumors and is exceptionally dangerous to patients’ health. Glioma segmentation, primarily based on magnetic resonance imaging (MRI), is a common tool that doctors use to examine, analyze, and diagnose the tumor’s appearance in both inpatients and outpatients, and the most widely used approach in the literature is deep learning. To this end, a glioma segmentation approach based on convolutional neural networks is developed in this manuscript: a DM-DA-enabled cascading 2DResUnet model that addresses both the insufficient 3D spatial information captured by 2D fully convolutional networks and the high memory consumption of 3D fully convolutional networks. For segmenting gliomas at various stages, we use multiscale fusion, attention, and DenseBlocks. Moreover, fixed-region sampling over multisequence glioma images reduces the dimensionality of the U-Net model, so the CNN can produce better tumor segmentations with minimal memory. The proposed model uses the BraTS18 and BraTS17 benchmark datasets for local fivefold cross-validation and official online evaluation, respectively. The evaluation shows that DM-DA-Unet performs exceptionally well on the BraTS17 validation set in terms of the average Dice score for the edema, enhancing, and core regions of the glioma segmentation; its average sensitivity is also high, close to that of the best segmentation models, and it segments gliomas accurately.
APA, Harvard, Vancouver, ISO, and other styles
24

Iyer, Aditi, Maria Thor, Ifeanyirochukwu Onochie, Jennifer Hesse, Kaveh Zakeri, Eve LoCastro, Jue Jiang, et al. "Prospectively-validated deep learning model for segmenting swallowing and chewing structures in CT." Physics in Medicine & Biology 67, no. 2 (January 17, 2022): 024001. http://dx.doi.org/10.1088/1361-6560/ac4000.

Full text
Abstract:
Abstract Objective. Delineating swallowing and chewing structures aids in radiotherapy (RT) treatment planning to limit dysphagia, trismus, and speech dysfunction. We aim to develop an accurate and efficient method to automate this process. Approach. CT scans of 242 head and neck (H&N) cancer patients acquired from 2004 to 2009 at our institution were used to develop auto-segmentation models for the masseters, medial pterygoids, larynx, and pharyngeal constrictor muscle using DeepLabV3+. A cascaded framework was used, wherein models were trained sequentially to spatially constrain each structure group based on prior segmentations. Additionally, an ensemble of models, combining contextual information from axial, coronal, and sagittal views was used to improve segmentation accuracy. Prospective evaluation was conducted by measuring the amount of manual editing required in 91 H&N CT scans acquired February-May 2021. Main results. Medians and inter-quartile ranges of Dice similarity coefficients (DSC) computed on the retrospective testing set (N = 24) were 0.87 (0.85–0.89) for the masseters, 0.80 (0.79–0.81) for the medial pterygoids, 0.81 (0.79–0.84) for the larynx, and 0.69 (0.67–0.71) for the constrictor. Auto-segmentations, when compared to two sets of manual segmentations in 10 randomly selected scans, showed better agreement (DSC) with each observer than inter-observer DSC. Prospective analysis showed most manual modifications needed for clinical use were minor, suggesting auto-contouring could increase clinical efficiency. Trained segmentation models are available for research use upon request via https://github.com/cerr/CERR/wiki/Auto-Segmentation-models. Significance. We developed deep learning-based auto-segmentation models for swallowing and chewing structures in CT and demonstrated its potential for use in treatment planning to limit complications post-RT. 
To the best of our knowledge, this is the only prospectively-validated deep learning-based model for segmenting chewing and swallowing structures in CT. Segmentation models have been made open-source to facilitate reproducibility and multi-institutional research.
APA, Harvard, Vancouver, ISO, and other styles
25

Huang, Caiyun, and Changhua Yin. "A coronary artery CTA segmentation approach based on deep learning." Journal of X-Ray Science and Technology 30, no. 2 (March 15, 2022): 245–59. http://dx.doi.org/10.3233/xst-211063.

Full text
Abstract:
Presence of plaque and coronary artery stenosis are the main causes of coronary heart disease. Plaque detection and coronary artery segmentation have become the first choice in detecting coronary artery disease. The purpose of this study is to investigate a new method for plaque detection and automatic segmentation and diagnosis of coronary arteries and to test its feasibility for clinical medical image diagnosis. A multi-model fusion coronary CT angiography (CTA) vessel segmentation method is proposed based on deep learning. The method includes three network models, namely an original 3-dimensional fully convolutional network (3D FCN) and two networks that embed the attention gating (AG) model in the original 3D FCN. The prediction results of the three networks are then merged using the majority voting algorithm to obtain the final prediction. In the post-processing stage, the level set function is used to further iteratively optimize the results of the network fusion prediction. The JI (Jaccard index) and DSC (Dice similarity coefficient) scores are calculated to evaluate the accuracy of the blood vessel segmentations. Applied to a CTA dataset of 20 patients, the accuracy of coronary blood vessel segmentation using the FCN, FCN-AG1, and FCN-AG2 networks and the fusion method was tested. The average values of JI and DSC using the first three networks are (0.7962, 0.8843), (0.8154, 0.8966) and (0.8119, 0.8936), respectively. Using the new fusion method, the average JI and DSC of the segmentation results increase to (0.8214, 0.9005), better than the best result obtained using the FCN, FCN-AG1, and FCN-AG2 models independently.
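The majority-voting fusion of the three networks' predictions can be sketched for binary masks as follows; this is an illustrative reconstruction of the general technique, not the authors' code:

```python
import numpy as np

def majority_vote(masks):
    """Fuse binary predictions: a voxel is foreground if most models agree."""
    stack = np.stack([m.astype(int) for m in masks])
    return stack.sum(axis=0) > (len(masks) / 2)

# Toy per-voxel predictions from three models
m1 = np.array([1, 1, 0, 0])
m2 = np.array([1, 0, 0, 1])
m3 = np.array([1, 1, 1, 0])
fused = majority_vote([m1, m2, m3])
print(fused.astype(int))  # [1 1 0 0]
```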
APA, Harvard, Vancouver, ISO, and other styles
26

Buser, Myrthe A. D., Alida F. W. van der Steeg, Marc H. W. A. Wijnen, Matthijs Fitski, Harm van Tinteren, Marry M. van den Heuvel-Eibrink, Annemieke S. Littooij, and Bas H. M. van der Velden. "Radiologic versus Segmentation Measurements to Quantify Wilms Tumor Volume on MRI in Pediatric Patients." Cancers 15, no. 7 (April 1, 2023): 2115. http://dx.doi.org/10.3390/cancers15072115.

Full text
Abstract:
Wilms tumor is a common pediatric solid tumor. To evaluate tumor response to chemotherapy and decide whether nephron-sparing surgery is possible, tumor volume measurements based on magnetic resonance imaging (MRI) are important. Currently, radiological volume measurements are based on measuring tumor dimensions in three directions. Manual segmentation-based volume measurements might be more accurate, but this process is time-consuming and user-dependent. The aim of this study was to investigate whether manual segmentation-based volume measurements are more accurate and to explore whether these segmentations can be automated using deep learning. We included the MRI images of 45 Wilms tumor patients (age 0–18 years). First, we compared radiological tumor volumes with manual segmentation-based tumor volume measurements. Next, we created an automated segmentation method by training a nnU-Net in a five-fold cross-validation. Segmentation quality was validated by comparing the automated segmentation with the manually created ground truth segmentations, using Dice scores and the 95th percentile of the Hausdorff distances (HD95). On average, manual tumor segmentations result in larger tumor volumes. For automated segmentation, the median dice was 0.90. The median HD95 was 7.2 mm. We showed that radiological volume measurements underestimated tumor volume by about 10% when compared to manual segmentation-based volume measurements. Deep learning can potentially be used to replace manual segmentation to benefit from accurate volume measurements without time and observer constraints.
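The HD95 metric used above is the 95th percentile of the symmetric nearest-neighbour distances between two segmentation boundaries. A minimal sketch operating on boundary point sets (extracting boundary coordinates from the masks is assumed to happen beforehand; the function name is illustrative):

```python
import numpy as np

def hd95(points_a, points_b):
    """95th-percentile symmetric Hausdorff distance between two point sets."""
    a = np.asarray(points_a, float)
    b = np.asarray(points_b, float)
    # pairwise Euclidean distances between all points in a and b
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    d_ab = d.min(axis=1)   # each point in a to its nearest point in b
    d_ba = d.min(axis=0)   # each point in b to its nearest point in a
    return np.percentile(np.concatenate([d_ab, d_ba]), 95)
```

Taking the 95th percentile rather than the maximum makes the metric robust to a few outlier boundary points.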
APA, Harvard, Vancouver, ISO, and other styles
27

Rehman, Aasia, Muheet Ahmed Butt, and Majid Zaman. "Liver Lesion Segmentation Using Deep Learning Models." Acadlore Transactions on AI and Machine Learning 1, no. 1 (November 20, 2022): 61–67. http://dx.doi.org/10.56578/ataiml010108.

Full text
Abstract:
An estimated 9.6 million deaths, or one in every six deaths, were attributed to cancer in 2018, making it the second highest cause of death worldwide. Men are more likely to develop lung, prostate, colorectal, stomach, and liver cancer than women, who are more likely to develop breast, colorectal, lung, cervical, and thyroid cancer. The primary goals of medical image segmentation include studying anatomical structure, identifying regions of interest (RoI), and measuring tissue volume to track tumor growth. It is crucial to diagnose and treat liver lesions quickly in order to stop the tumor from spreading further. Deep learning model-based liver segmentation has become very popular in the field of medical image analysis. This study explores various deep learning-based liver lesion segmentation algorithms and methodologies. The performance and limitations of these methodologies are contrasted based on the developed models. In the end, it was concluded that small-lesion segmentation, in particular, is still an open research subject for computer-aided liver lesion segmentation systems, as a number of technical issues remain to be resolved.
APA, Harvard, Vancouver, ISO, and other styles
28

Zhao, Yang. "Development of Semantic Segmentation Based on Deep Learning." Highlights in Science, Engineering and Technology 34 (February 28, 2023): 281–88. http://dx.doi.org/10.54097/hset.v34i.5485.

Full text
Abstract:
In recent years, the discipline of computer vision has seen a lot of interest in the study of image semantic segmentation. As deep learning has grown in popularity, it has been combined with image segmentation to improve results. These technologies are now widely employed in autonomous vehicles, intelligent robots, and other devices. Early semantic segmentation techniques were based on Fully Convolutional Networks (FCN) or U-Net; FCN realized an end-to-end training network and effectively applied Convolutional Neural Networks (CNN) to the semantic segmentation domain. To improve outcomes, the encoder-decoder structure from the FCN approach was later adopted, and the atrous convolution approach was also proposed. Transformer-based semantic segmentation techniques are another recent trend, in addition to CNN-based networks. The Transformer model was first proposed in 2017, and subsequent Transformer-based semantic segmentation methods have also achieved good results. In this paper, these various methods are compared and discussed to provide guidance for this field.
APA, Harvard, Vancouver, ISO, and other styles
29

Moorthy, Jayashree, and Usha Devi Gandhi. "A Survey on Medical Image Segmentation Based on Deep Learning Techniques." Big Data and Cognitive Computing 6, no. 4 (October 17, 2022): 117. http://dx.doi.org/10.3390/bdcc6040117.

Full text
Abstract:
Deep learning techniques have rapidly become important as a preferred method for evaluating medical image segmentation. This survey analyses different contributions in the deep learning medical field, including the major common issues published in recent years, and also discusses the fundamentals of deep learning concepts applicable to medical image segmentation. Deep learning can be applied to image categorization, object recognition, segmentation, registration, and other tasks. First, the basic ideas of deep learning techniques, applications, and frameworks are introduced, and the techniques best suited to each application are briefly explained. The paper then reviews prior experience with the different classes of techniques in medical image segmentation. Deep learning has been designed to describe and respond to various challenges in medical image analysis, such as low accuracy of image classification, low segmentation resolution, and poor image enhancement, where inaccurate segmentation results are unable to meet actual clinical requirements. Aiming to solve these issues and advance medical image segmentation, we provide a comprehensive review of current deep learning-based medical image segmentation methods to help researchers solve existing problems, along with suggestions for future research.
APA, Harvard, Vancouver, ISO, and other styles
30

Pécot, Thierry, Alexander Alekseyenko, and Kristin Wallace. "A deep learning segmentation strategy that minimizes the amount of manually annotated images." F1000Research 10 (March 30, 2021): 256. http://dx.doi.org/10.12688/f1000research.52026.1.

Full text
Abstract:
Deep learning has revolutionized the automatic processing of images. While deep convolutional neural networks have demonstrated astonishing segmentation results for many biological objects acquired with microscopy, this technology's good performance relies on large training datasets. In this paper, we present a strategy to minimize the amount of time spent in manually annotating images for segmentation. It involves using an efficient and open source annotation tool, the artificial increase of the training data set with data augmentation, the creation of an artificial data set with a conditional generative adversarial network and the combination of semantic and instance segmentations. We evaluate the impact of each of these approaches for the segmentation of nuclei in 2D widefield images of human precancerous polyp biopsies in order to define an optimal strategy.
APA, Harvard, Vancouver, ISO, and other styles
31

Pécot, Thierry, Alexander Alekseyenko, and Kristin Wallace. "A deep learning segmentation strategy that minimizes the amount of manually annotated images." F1000Research 10 (January 17, 2022): 256. http://dx.doi.org/10.12688/f1000research.52026.2.

Full text
Abstract:
Deep learning has revolutionized the automatic processing of images. While deep convolutional neural networks have demonstrated astonishing segmentation results for many biological objects acquired with microscopy, this technology's good performance relies on large training datasets. In this paper, we present a strategy to minimize the amount of time spent in manually annotating images for segmentation. It involves using an efficient and open source annotation tool, the artificial increase of the training dataset with data augmentation, the creation of an artificial dataset with a conditional generative adversarial network and the combination of semantic and instance segmentations. We evaluate the impact of each of these approaches for the segmentation of nuclei in 2D widefield images of human precancerous polyp biopsies in order to define an optimal strategy.
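The data-augmentation component of such a strategy is commonly implemented as random geometric transforms applied jointly to the image and its label mask; a generic sketch of that idea (not the authors' exact pipeline):

```python
import numpy as np

def augment(image, mask, rng):
    """Random 90-degree rotations and horizontal flips applied jointly to
    an image and its segmentation mask, so labels stay aligned."""
    k = rng.integers(4)                      # 0..3 quarter-turns
    image, mask = np.rot90(image, k), np.rot90(mask, k)
    if rng.random() < 0.5:                   # random horizontal flip
        image, mask = np.fliplr(image), np.fliplr(mask)
    return image.copy(), mask.copy()

rng = np.random.default_rng(0)
img = np.arange(16).reshape(4, 4)
msk = (img % 2 == 0)
aug_img, aug_msk = augment(img, msk, rng)
```

For microscopy images, elastic deformations and intensity jitter are often added on top of these flips and rotations.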
APA, Harvard, Vancouver, ISO, and other styles
32

Ayad, Hayder, Ikhlas Watan Ghindawi, and Mustafa Salam Kadhm. "Lung Segmentation Using Proposed Deep Learning Architecture." International Journal of Online and Biomedical Engineering (iJOE) 16, no. 15 (December 15, 2020): 141. http://dx.doi.org/10.3991/ijoe.v16i15.17115.

Full text
Abstract:
Predicting and detecting disease in human lungs is a very critical task. It depends on giving doctors a clear view of the CT images; a clear view that allows the disease to be identified precisely depends on segmentation, which may save people's lives. Therefore, an accurate lung segmentation system for CT images based on a proposed CNN architecture is presented. The system uses a weighted softmax function that improved the segmentation accuracy. In experiments, the system achieved a high segmentation accuracy of 98.9% using the LIDC-IDRI CT lung image database.
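A weighted softmax is typically realized as a class-weighted cross-entropy over per-pixel logits, up-weighting rare classes such as lung tissue. A minimal NumPy sketch of that general idea (the weights and shapes are illustrative, not the paper's exact formulation):

```python
import numpy as np

def weighted_cross_entropy(logits, labels, class_weights):
    """Softmax cross-entropy with per-class weights.
    logits: (N, C) raw scores, labels: (N,) ints, class_weights: (C,)."""
    z = logits - logits.max(axis=1, keepdims=True)          # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    w = np.asarray(class_weights)[labels]                   # weight of each pixel's true class
    return -(w * log_probs[np.arange(len(labels)), labels]).mean()

# Uniform logits over 2 classes give loss = ln(2) when weights are 1
loss = weighted_cross_entropy(np.array([[0.0, 0.0]]), np.array([0]), [1.0, 1.0])
```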
APA, Harvard, Vancouver, ISO, and other styles
33

A. C., Anitha, R. ,. Dhanesha, Shrinivasa Naika C. L., Krishna A. N., Parinith S. Kumar, and Parikshith P. Sharma. "Arecanut Bunch Segmentation Using Deep Learning Techniques." International Journal of Circuits, Systems and Signal Processing 16 (July 26, 2022): 1064–73. http://dx.doi.org/10.46300/9106.2022.16.129.

Full text
Abstract:
Agriculture and farming, as the backbone of many developing countries, provide food safety and security. Arecanut, a major plantation crop in India, plays an important role in the lives of farmers. Arecanut growth monitoring and harvesting need skilled labor and are very risky, since arecanut trees are very thin and tall. Vision-based systems for agriculture and farming have gained popularity in recent years. Segmentation is a fundamental task in any vision-based system. Very few attempts have been made at segmenting arecanut bunches, and these are based on hand-crafted features with limited performance. The aim of our research is to propose and develop an efficient and accurate technique for segmenting arecanut bunches by eliminating unwanted background information. This paper presents two deep-learning approaches, Mask Region-Based Convolutional Neural Network (Mask R-CNN) and U-Net, for segmenting arecanut bunches from tree images without any pre-processing. Experiments were conducted to estimate and evaluate the performance of both methods, showing that Mask R-CNN performs better than U-Net and than methods that apply segmentation to other commodities, although there were no benchmarks for arecanut.
APA, Harvard, Vancouver, ISO, and other styles
34

Kushwah, Chandra Pal, and Kuruna Markam. "Semantic Segmentation of Satellite Images using Deep Learning." Regular issue 10, no. 8 (June 30, 2021): 33–37. http://dx.doi.org/10.35940/ijitee.h9186.0610821.

Full text
Abstract:
In recent years, deep learning's performance in natural scene image processing has increased its use in remote sensing image analysis. In this paper, we apply deep neural networks (DNNs) to the semantic segmentation of remote sensing images. To make it suitable for multi-target semantic segmentation of remote sensing images, we enhance the SegNet encoder-decoder CNN structure with index pooling and U-Net. The findings reveal that each model has its own benefits and drawbacks for segmenting various objects. Furthermore, we provide an integrated algorithm that incorporates the two models. The test results indicate that the proposed integrated algorithm takes advantage of both multi-target segmentation models and obtains improved segmentation relative to either model alone.
APA, Harvard, Vancouver, ISO, and other styles
35

Shu, Zhenyu, Chengwu Qi, Shiqing Xin, Chao Hu, Li Wang, Yu Zhang, and Ligang Liu. "Unsupervised 3D shape segmentation and co-segmentation via deep learning." Computer Aided Geometric Design 43 (March 2016): 39–52. http://dx.doi.org/10.1016/j.cagd.2016.02.015.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

M N, Rajesh, and Chandrasekar B S. "Deep Learning-Based Semantic Segmentation Models for Prostate Gland Segmentation." International Journal of Electrical and Electronics Engineering 10, no. 2 (February 28, 2023): 157–71. http://dx.doi.org/10.14445/23488379/ijeee-v10i2p115.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Liu, Xiangbin, Liping Song, Shuai Liu, and Yudong Zhang. "A Review of Deep-Learning-Based Medical Image Segmentation Methods." Sustainability 13, no. 3 (January 25, 2021): 1224. http://dx.doi.org/10.3390/su13031224.

Full text
Abstract:
As an emerging biomedical image processing technology, medical image segmentation has made great contributions to sustainable medical care. Now it has become an important research direction in the field of computer vision. With the rapid development of deep learning, medical image processing based on deep convolutional neural networks has become a research hotspot. This paper focuses on the research of medical image segmentation based on deep learning. First, the basic ideas and characteristics of medical image segmentation based on deep learning are introduced. By explaining its research status and summarizing the three main methods of medical image segmentation and their own limitations, the future development direction is expanded. Based on the discussion of different pathological tissues and organs, the specificity between them and their classic segmentation algorithms are summarized. Despite the great achievements of medical image segmentation in recent years, medical image segmentation based on deep learning has still encountered difficulties in research. For example, the segmentation accuracy is not high, the number of medical images in the data set is small and the resolution is low. The inaccurate segmentation results are unable to meet the actual clinical requirements. Aiming at the above problems, a comprehensive review of current medical image segmentation methods based on deep learning is provided to help researchers solve existing problems.
APA, Harvard, Vancouver, ISO, and other styles
38

Çevik, Kerim Kürşat, Hasan Erdinç Koçer, and Mustafa Boğa. "Deep Learning Based Egg Fertility Detection." Veterinary Sciences 9, no. 10 (October 17, 2022): 574. http://dx.doi.org/10.3390/vetsci9100574.

Full text
Abstract:
This study investigates the implementation of deep learning (DL) approaches to the fertile egg-recognition problem, based on incubator images. In this study, we aimed to classify chicken eggs according to both segmentation and fertility status with a Mask R-CNN-based approach. In this manner, images can be handled by a single DL model to successfully perform detection, classification and segmentation of fertile and infertile eggs. Two different test processes were used in this study. In the first test application, a data set containing five fertile eggs was used. In the second, testing was carried out on the data set containing 18 fertile eggs. For evaluating this study, we used AP, one of the most important metrics for evaluating object detection and segmentation models in computer vision. When the results obtained were examined, the optimum threshold value (IoU) value was determined as 0.7. According to the IoU of 0.7, it was observed that all fertile eggs in the incubator were determined correctly on the third day of both test periods. Considering the methods used and the ease of the designed system, it can be said that a very successful system has been designed according to the studies in the literature. In order to increase the segmentation performance, it is necessary to carry out an experimental study to improve the camera and lighting setup prepared for taking the images.
APA, Harvard, Vancouver, ISO, and other styles
39

Lim, Chee Chin, Norhanis Ayunie Ahmad Khairudin, Siew Wen Loke, Aimi Salihah Abdul Nasir, Yen Fook Chong, and Zeehaida Mohamed. "Comparison of Human Intestinal Parasite Ova Segmentation Using Machine Learning and Deep Learning Techniques." Applied Sciences 12, no. 15 (July 27, 2022): 7542. http://dx.doi.org/10.3390/app12157542.

Full text
Abstract:
Helminthiasis disease is one of the most serious health problems in the world and frequently occurs in children, especially in unhygienic conditions. The manual diagnosis method is time consuming and challenging, especially when there are a large number of samples. An automated system is acknowledged as a quick and easy technique to assess helminth sample images by offering direct visibility on the computer monitor without the requirement for examination under a microscope. Thus, this paper aims to compare human intestinal parasite ova segmentation performance between machine learning and deep learning techniques. Four types of helminth ova are tested: Ascaris Lumbricoides Ova (ALO), Enterobious Vermicularis Ova (EVO), Hookworm Ova (HWO), and Trichuris Trichiura Ova (TTO). In this paper, the fuzzy c-means (FCM) segmentation technique is used for machine learning segmentation, while a convolutional neural network (CNN) segmentation technique is used for deep learning. The performance of segmentation algorithms based on the FCM and CNN segmentation techniques is investigated and compared to select the best segmentation procedure for helminth ova detection. The results reveal that the accuracy obtained for each helminth species is in the range of 97% to 100% for both techniques. However, IoU analysis showed that the CNN based on the ResNet technique performed better than FCM for ALO, EVO, and TTO, with values of 75.80%, 55.48%, and 77.06%, respectively. Therefore, segmentation through deep learning is more suitable for segmenting human intestinal parasite ova.
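The IoU (Jaccard index) analysis used in this comparison can be computed directly from binary masks; an illustrative sketch (names and toy masks are not from the paper):

```python
import numpy as np

def iou(pred, truth):
    """Intersection over Union (Jaccard index) of two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return np.logical_and(pred, truth).sum() / union

# Toy example: 4 shared pixels, 6 pixels in the union
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True
```

IoU penalizes over- and under-segmentation more sharply than plain pixel accuracy, which is why the two metrics disagree in the results above.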
APA, Harvard, Vancouver, ISO, and other styles
40

Kim, Yong-Woon, Yung-Cheol Byun, and Addapalli V. N. Krishna. "Portrait Segmentation Using Ensemble of Heterogeneous Deep-Learning Models." Entropy 23, no. 2 (February 5, 2021): 197. http://dx.doi.org/10.3390/e23020197.

Full text
Abstract:
Image segmentation plays a central role in a broad range of applications, such as medical image analysis, autonomous vehicles, video surveillance and augmented reality. Portrait segmentation, which is a subset of semantic image segmentation, is widely used as a preprocessing step in multiple applications such as security systems, entertainment applications, video conferences, etc. A substantial number of deep learning-based portrait segmentation approaches have been developed, since the performance and accuracy of semantic image segmentation have improved significantly due to the recent introduction of deep learning technology. However, these approaches are limited to a single portrait segmentation model. In this paper, we propose a novel approach using an ensemble method by combining multiple heterogeneous deep-learning based portrait segmentation models to improve the segmentation performance. Two-model and three-model ensembles, using a simple soft voting method and a weighted soft voting method, were evaluated. The Intersection over Union (IoU) metric, IoU standard deviation and false prediction rate were used to evaluate the performance. Cost efficiency was calculated to analyze the efficiency of segmentation. The experiment results show that the proposed ensemble approach can perform with higher accuracy and lower errors than single deep-learning-based portrait segmentation models. The results also show that while an ensemble of deep-learning models typically increases memory use and computing power, it can also perform more efficiently than a single model, achieving higher accuracy with less memory and less computing power.
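Weighted soft voting averages the per-pixel foreground probabilities of the member models before thresholding. A minimal sketch of that fusion step (the weights and toy probabilities are illustrative):

```python
import numpy as np

def weighted_soft_vote(prob_maps, weights=None):
    """Fuse per-pixel foreground probabilities from several models by a
    (weighted) average, then threshold at 0.5 to get a binary mask."""
    probs = np.stack(prob_maps)                      # (n_models, n_pixels)
    if weights is None:
        weights = np.ones(len(prob_maps))
    w = np.asarray(weights, float) / np.sum(weights) # normalize weights
    fused = np.tensordot(w, probs, axes=1)           # weighted average
    return fused >= 0.5

p1 = np.array([0.9, 0.4, 0.2])
p2 = np.array([0.8, 0.7, 0.1])
print(weighted_soft_vote([p1, p2]).astype(int))  # [1 1 0]
```

With unequal weights the better-performing model dominates borderline pixels, which is the intuition behind the weighted variant above.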
APA, Harvard, Vancouver, ISO, and other styles
41

Mauricaite, Radvile, Ella Mi, Jiarong Chen, Andrew Ho, Lillie Pakzad-Shahabi, and Matthew Williams. "Fully automated deep learning system for detecting sarcopenia on brain MRI in glioblastoma." Neuro-Oncology 23, Supplement_4 (October 1, 2021): iv13. http://dx.doi.org/10.1093/neuonc/noab195.031.

Full text
Abstract:
Abstract Aims Glioblastoma multiforme (GBM) is an aggressive brain malignancy. Performance status is an important prognostic factor but is subjectively evaluated, resulting in inaccuracy. Objective markers of frailty/physical condition, such as measures of skeletal muscle mass can be evaluated on cross-sectional imaging and is associated with cancer survival. In GBM, temporalis muscle has been identified as a skeletal muscle mass surrogate and a prognostic factor. However, current manual muscle quantification is time consuming, limiting clinical adoption. We previously developed a deep learning system for automated temporalis muscle quantification, with high accuracy (Dice coefficient 0.912), and showed muscle cross-sectional area is independently significantly associated with survival in GBM (HR 0.380). However, it required manual selection of the temporalis muscle-containing MRI slice. Thus, in this work we aimed to develop a fully automatic deep-learning system, using the eyeball as an anatomic landmark for automatic slice selection, to quantify temporalis and validate on independent datasets. Method 3D brain MRI scans were obtained from four datasets: our in-house glioblastoma patient dataset, TCGA-GBM, IVY-GAP and REMBRANDT. Manual eyeball and temporalis segmentations were performed on 2D MRI images by two experienced readers. Two neural networks (2D U-Nets) were trained, one to automatically segment the eyeball and the other to segment the temporalis muscle on 2D MRI images using Dice loss function. The cross sectional area of eyeball segmentations were quantified and thresholded, to select the superior orbital MRI slice from each scan. This slice underwent temporalis segmentation, whose cross sectional area was then quantified. Accuracy of automatically predicted eyeball and temporalis segmentations were compared to manual ground truth segmentations on metrics of Dice coefficient, precision, recall and Hausdorff distance. 
Accuracy of MRI slice selection (by the eyeball segmentation model) for temporalis segmentation was determined by comparing automatically selected slices to slices selected manually by a trained neuro-oncologist. Results 398 images from 185 patients and 366 images from 145 patients were used for the eyeball and temporalis segmentation models, respectively. 61 independent TCGA-GBM scans formed a validation cohort to assess the performance of the full pipeline. The model achieved high accuracy in eyeball segmentation, with test set Dice coefficient of 0.9029 ± 0.0894, precision of 0.8842 ± 0.0992, recall of 0.9297 ± 0.6020 and Hausdorff distance of 2.8847 ± 0.6020. High segmentation accuracy was also achieved by the temporalis segmentation model, with Dice coefficient of 0.8968 ± 0.0375, precision of 0.8877 ± 0.0679, recall of 0.9118 ± 0.0505 and Hausdorff distance of 1.8232 ± 0.3263 in the test set. 96.1% of automatically selected slices for temporalis segmentation were within 2 slices of the manually selected slice. Conclusion Temporalis muscle cross-sectional area can be rapidly and accurately assessed from 3D MRI brain scans using a deep learning-based system in a fully automated pipeline. Combined with our and others’ previous results that demonstrate the prognostic significance of temporalis cross-sectional area and muscle width, our findings suggest a role for deep learning in muscle mass and sarcopenia screening in GBM, with the potential to add significant value to routine imaging. Possible clinical applications include risk profiling, treatment stratification and informing interventions for muscle preservation. Further work will be to validate the prognostic value of temporalis muscle cross sectional area measurements generated by our fully automatic deep learning system in the multiple in-house and external datasets.
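The Dice loss used to train the 2D U-Nets above is commonly formulated as a "soft" Dice over predicted probabilities; a generic sketch of that formulation, not necessarily the authors' exact implementation:

```python
import numpy as np

def soft_dice_loss(probs, truth, eps=1e-6):
    """Soft Dice loss for segmentation training: 1 - Dice computed on
    predicted foreground probabilities rather than hard binary masks."""
    probs = np.asarray(probs, float).ravel()
    truth = np.asarray(truth, float).ravel()
    intersection = (probs * truth).sum()
    return 1.0 - (2.0 * intersection + eps) / (probs.sum() + truth.sum() + eps)
```

Because the loss is differentiable in the probabilities, it can be minimized directly by gradient descent, and it handles the class imbalance between small structures (eyeball, temporalis) and background better than plain cross-entropy.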
APA, Harvard, Vancouver, ISO, and other styles
42

Trojahn, Tiago Henrique, and Rudinei Goularte. "Temporal video scene segmentation using deep-learning." Multimedia Tools and Applications 80, no. 12 (February 8, 2021): 17487–513. http://dx.doi.org/10.1007/s11042-020-10450-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Lattisi, Tiziano, Davide Farina, and Marco Ronchetti. "Semantic Segmentation of Text Using Deep Learning." Computing and Informatics 41, no. 1 (2022): 78–97. http://dx.doi.org/10.31577/cai_2022_1_78.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Anagnostis, Athanasios, Aristotelis C. Tagarakis, Dimitrios Kateris, Vasileios Moysiadis, Claus Grøn Sørensen, Simon Pearson, and Dionysis Bochtis. "Orchard Mapping with Deep Learning Semantic Segmentation." Sensors 21, no. 11 (May 31, 2021): 3813. http://dx.doi.org/10.3390/s21113813.

Full text
Abstract:
This study aimed to propose an approach for orchard tree segmentation using aerial images based on a deep learning convolutional neural network variant, namely the U-net network. The purpose was the automated detection and localization of the canopy of orchard trees under various conditions (i.e., different seasons, different tree ages, different levels of weed coverage). The implemented dataset was composed of images from three different walnut orchards. The achieved variability of the dataset resulted in obtaining images that fell under seven different use cases. The best-trained model achieved 91%, 90%, and 87% accuracy for training, validation, and testing, respectively. The trained model was also tested on never-before-seen orthomosaic images of orchards based on two methods (oversampling and undersampling) in order to tackle issues with out-of-the-field boundary transparent pixels from the image. Even though the training dataset did not contain orthomosaic images, it achieved performance levels that reached up to 99%, demonstrating the robustness of the proposed approach.
APA, Harvard, Vancouver, ISO, and other styles
45

Solanki, Abhishek, Rajan Kumar Singh, and Brinsley Demeneze. "Aerial pictures semantic segmentation applying deep learning." International Journal Of Trendy Research In Engineering And Technology 05, no. 01 (2021): 42–48. http://dx.doi.org/10.54473/ijtret.2021.5107.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Inik, Özkan, and Erkan Ülker. "Optimization of deep learning based segmentation method." Soft Computing 26, no. 7 (March 2, 2022): 3329–44. http://dx.doi.org/10.1007/s00500-021-06711-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Jung, Sunguk, Hyeonbeom Heo, Sangheon Park, Sung-Uk Jung, and Kyungjae Lee. "Benchmarking Deep Learning Models for Instance Segmentation." Applied Sciences 12, no. 17 (September 3, 2022): 8856. http://dx.doi.org/10.3390/app12178856.

Full text
Abstract:
Instance segmentation has gained attention in various computer vision fields, such as autonomous driving, drone control, and sports analysis. Recently, many successful models have been developed, which can be classified into two categories: accuracy- and speed-focused. Accuracy and inference time are important for real-time applications of this task. However, these models just present inference time measured on different hardware, which makes their comparison difficult. This study is the first to evaluate and compare the performances of state-of-the-art instance segmentation models by focusing on their inference time in a fixed experimental environment. For precise comparison, the test hardware and environment should be identical; hence, we present the accuracy and speed of the models in a fixed hardware environment for quantitative and qualitative analyses. Although speed-focused models run in real-time on high-end GPUs, there is a trade-off between speed and accuracy when the computing power is insufficient. The experimental results show that a feature pyramid network structure may be considered when designing a real-time model, and a balance between the speed and accuracy must be achieved for real-time application.
APA, Harvard, Vancouver, ISO, and other styles
48

Zheng, Ke, and Hasan Abdullah Hasan Naji. "Road Scene Segmentation Based on Deep Learning." IEEE Access 8 (2020): 140964–71. http://dx.doi.org/10.1109/access.2020.3009782.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Suprunenko, V. V. "Ore particles segmentation using deep learning methods." Journal of Physics: Conference Series 1679 (November 2020): 042089. http://dx.doi.org/10.1088/1742-6596/1679/4/042089.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Zhang, Xiangfu, Jian Liu, Zhangsong Shi, Zhonghong Wu, and Zhi Wang. "Review of Deep Learning-Based Semantic Segmentation." Laser & Optoelectronics Progress 56, no. 15 (2019): 150003. http://dx.doi.org/10.3788/lop56.150003.

Full text
APA, Harvard, Vancouver, ISO, and other styles
