Academic literature on the topic 'Deep learning segmentation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Deep learning segmentation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Deep learning segmentation"

1

Noori, Amani Y., Dr Shaimaa H. Shaker, and Dr Raghad Abdulaali Azeez. "Semantic Segmentation of Urban Street Scenes Using Deep Learning." Webology 19, no. 1 (January 20, 2022): 2294–306. http://dx.doi.org/10.14704/web/v19i1/web19156.

Abstract:
Scene classification is an essential perception task used by robots to understand their environment. Outdoor scenes such as urban streets consist of images with depth and show far greater variety than iconic object images. Semantic segmentation is an important task for autonomous driving and mobile robotics applications because it provides the rich information needed for safe navigation and complex reasoning. This paper introduces a model that classifies every pixel of an image and predicts the object to which each pixel belongs. The model adapts the well-known VGG16 image-classification network into a fully convolutional network (FCN-8) and transfers the learned representation by fine-tuning it for segmentation. Skip connections are added between layers to combine coarse semantic information with local appearance information and generate accurate segmentations. The model is robust and efficient: it consumes little memory and offers fast inference for training and testing on the CamVid dataset. The system was implemented on a computer equipped with an NVIDIA GeForce RTX 2060 6 GB GPU and programmed in Python 3.7. The proposed system reached an accuracy of 0.8804 and a mean IoU of 73% on the CamVid dataset.
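The reported accuracy and mean-IoU figures are standard segmentation metrics. As a rough illustration (not the authors' code), they can be computed from a pair of integer label masks like this:

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union across classes, skipping classes
    absent from both masks."""
    ious = []
    for c in range(num_classes):
        p, t = pred == c, target == c
        union = np.logical_or(p, t).sum()
        if union == 0:  # class appears in neither mask
            continue
        inter = np.logical_and(p, t).sum()
        ious.append(inter / union)
    return float(np.mean(ious))

def pixel_accuracy(pred, target):
    """Fraction of pixels whose predicted label matches the target."""
    return float((pred == target).mean())
```

Averaging per-class IoU rather than pooling all pixels keeps rare classes from being swamped by the dominant ones, which is why mIoU (73%) can sit well below pixel accuracy (0.8804).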
2

AL-Oudat, Mohammad, Mohammad Azzeh, Hazem Qattous, Ahmad Altamimi, and Saleh Alomari. "Image Segmentation based Deep Learning for Biliary Tree Diagnosis." Webology 19, no. 1 (January 20, 2022): 1834–49. http://dx.doi.org/10.14704/web/v19i1/web19123.

Abstract:
Dilation of the biliary tree can be an indicator of several conditions such as stones, tumors, benign strictures, and, in some cases, cancer. The dilation can have many causes, including gallstones, inflammation of the bile ducts, trauma, injury, and severe liver damage. Automatic measurement of the biliary tree in magnetic resonance (MRI) images can assist hepatobiliary surgeons in minimally invasive surgery. In this paper, we propose a model that segments biliary tree MRI images using a fully convolutional network (FCN). From the extracted area, seven features are computed: entropy, standard deviation, RMS, kurtosis, skewness, energy, and maximum. A database of 800 MRI images from King Hussein Medical Center (KHMC) is used in this work: 400 cases with a normal biliary tree and 400 with a dilated biliary tree, labeled by surgeons. Once the features are extracted, four classifiers (multi-layer perceptron neural network, support vector machine, k-NN, and decision tree) are applied to predict the status of the patient's biliary tree (normal or dilated). All classifiers except the support vector machine show high accuracy in terms of area under the curve. The contributions of this work are a fully convolutional network for biliary tree segmentation and a systematic correlation of the extracted features with biliary tree status (normal or dilated), which had not previously been investigated in the literature on MRI-based biliary tree assessment.
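The seven first-order features named above are simple statistics of the segmented region's intensities. A NumPy sketch, where the histogram bin count and the raw (non-excess) moment conventions are illustrative assumptions rather than the paper's specification:

```python
import numpy as np

def region_features(pixels):
    """Seven first-order statistics of the intensities inside a segmented
    region: entropy, standard deviation, RMS, kurtosis, skewness, energy,
    and maximum. Assumes a non-constant region (std > 0)."""
    x = np.asarray(pixels, dtype=float).ravel()
    hist, _ = np.histogram(x, bins=32)       # bin count is an assumption
    p = hist / hist.sum()
    p = p[p > 0]                             # drop empty bins for entropy
    mu, sigma = x.mean(), x.std()
    z = (x - mu) / sigma
    return {
        "entropy": float(-(p * np.log2(p)).sum()),
        "std": float(sigma),
        "rms": float(np.sqrt((x ** 2).mean())),
        "kurtosis": float((z ** 4).mean()),  # raw moment, not excess
        "skewness": float((z ** 3).mean()),
        "energy": float((x ** 2).sum()),
        "max": float(x.max()),
    }
```

Each feature is a single scalar per region, so the four downstream classifiers operate on 7-dimensional vectors.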
3

Sri, S. Vinitha, and S. P. Kavya. "Lung Segmentation Using Deep Learning." Asian Journal of Applied Science and Technology 05, no. 02 (2021): 10–19. http://dx.doi.org/10.38177/ajast.2021.5202.

4

Vogt, Nina. "Neuron segmentation with deep learning." Nature Methods 16, no. 6 (May 30, 2019): 460. http://dx.doi.org/10.1038/s41592-019-0450-7.

5

Park, Hyun-Cheol, Raman Ghimire, Sahadev Poudel, and Sang-Woong Lee. "Deep Learning for Joint Classification and Segmentation of Histopathology Image." 網際網路技術學刊 23, no. 4 (July 2022): 903–10. http://dx.doi.org/10.53106/160792642022072304025.

Abstract:
Liver cancer is one of the most prevalent causes of cancer death worldwide; thus, early detection and diagnosis help reduce cancer deaths. Histopathological Image Analysis (HIA) used to be carried out manually, which is time-consuming and requires expert knowledge. We propose a patch-based deep learning method for liver-cell classification and segmentation: a two-step approach for the classification and segmentation of whole-slide images (WSIs). Since WSIs are too large to be fed into convolutional neural networks (CNNs) directly, we first extract patches from them. The patches are fed, with their corresponding masks, into a modified version of U-Net for precise segmentation. For the classification task, the WSIs are scaled down 4, 16, and 64 times, and patches extracted from each scale are fed into the convolutional network with their corresponding labels. During inference, we perform majority voting on the results obtained from the convolutional network. The proposed method demonstrates better results in both classification and segmentation of liver cancer cells.
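The patch-extraction and majority-voting steps described above can be sketched generically; the patch size, stride, and voting rule here are illustrative assumptions, not the authors' settings:

```python
import numpy as np
from collections import Counter

def extract_patches(image, patch_size, stride):
    """Slide a window over a 2D image and collect fixed-size patches,
    so each small patch can be fed to a CNN instead of the whole slide."""
    h, w = image.shape
    patches = []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(image[y:y + patch_size, x:x + patch_size])
    return patches

def majority_vote(patch_labels):
    """Slide-level label from per-patch predictions."""
    return Counter(patch_labels).most_common(1)[0][0]
```

Voting over many patch predictions makes the slide-level decision robust to individual patch misclassifications.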
6

Weishaupt, L. L., T. Vuong, A. Thibodeau-Antonacci, A. Garant, K. S. Singh, C. Miller, A. Martin, and S. Enger. "A121 QUANTIFYING INTER-OBSERVER VARIABILITY IN THE SEGMENTATION OF RECTAL TUMORS IN ENDOSCOPY IMAGES AND ITS EFFECTS ON DEEP LEARNING." Journal of the Canadian Association of Gastroenterology 5, Supplement_1 (February 21, 2022): 140–42. http://dx.doi.org/10.1093/jcag/gwab049.120.

Abstract:
Abstract Background Tumor delineation in endoscopy images is a crucial part of clinical diagnoses and treatment planning for rectal cancer patients. However, it is challenging to detect and adequately determine the size of tumors in these images, especially for inexperienced clinicians. This motivates the need for a standardized, automated segmentation method. While deep learning has proven to be a powerful tool for medical image segmentation, it requires a large quantity of high-quality annotated training data. Since the annotation of endoscopy images is prone to high inter-observer variability, creating a robust unbiased deep learning model for this task is challenging. Aims To quantify the inter-observer variability in the manual segmentation of tumors in endoscopy images of rectal cancer patients and investigate an automated approach using deep learning. Methods Three gastrointestinal physicians and radiation oncologists (G1, G2, and G3) segmented 2833 endoscopy images into tumor and non-tumor regions. The whole image classifications and the pixelwise classifications into tumor and non-tumor were compared to quantify the inter-observer variability. Each manual annotator is from a different institution. Three different deep learning architectures (FCN32, U-Net, and SegNet) were trained on the binary contours created by G2. This naive approach investigates the effectiveness of neglecting any information about the uncertainty associated with the task of tumor delineation. Finally, segmentations from G2 and the deep learning models’ predictions were compared against ground truth labels from G1 and G3, and accuracy, sensitivity, specificity, precision, and F1 scores were computed for images where both segmentations contained tumors. Results The deep-learning segmentation took less than 1 second, while manual segmentation took approximately 10 seconds per image. 
There was significant inter-observer variability in the whole-image classifications made by the manual annotators (Figure 1A). The segmentation scores achieved by the deep learning models (SegNet F1: 0.80±0.08) were comparable to the inter-observer variability for the pixel-wise image classification (Figure 1B). Conclusions The large inter-observer variability observed in this study indicates a need for an automated segmentation tool for tumors in endoscopy images of rectal cancer patients. While deep learning models trained on a single observer's labels can segment tumors with an accuracy similar to the inter-observer variability, these models do not accurately reflect the intrinsic uncertainty associated with tumor delineation. In our ongoing studies, we investigate training a model with all observers' contours to reflect the uncertainty associated with the tumor segmentations. Funding Agencies: CIHR, NSERC
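The agreement scores listed above (accuracy, sensitivity, specificity, precision, F1) follow directly from the pixelwise confusion counts between two binary masks. A minimal sketch, assuming, as in the study, that only images where both segmentations contain tumor pixels are scored (so the denominators are non-zero):

```python
import numpy as np

def agreement_scores(pred, ref):
    """Pixelwise agreement between a predicted and a reference binary
    mask. Assumes both masks contain foreground pixels."""
    pred = np.asarray(pred, bool).ravel()
    ref = np.asarray(ref, bool).ravel()
    tp = int(np.sum(pred & ref))
    tn = int(np.sum(~pred & ~ref))
    fp = int(np.sum(pred & ~ref))
    fn = int(np.sum(~pred & ref))
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)  # a.k.a. recall
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": sensitivity,
        "specificity": tn / (tn + fp),
        "precision": precision,
        "f1": 2 * precision * sensitivity / (precision + sensitivity),
    }
```

The same function applies whether "pred" comes from a model or from a second annotator, which is how inter-observer variability and model performance end up on a common scale.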
7

Iwaszenko, Sebastian, and Leokadia Róg. "Application of Deep Learning in Petrographic Coal Images Segmentation." Minerals 11, no. 11 (November 13, 2021): 1265. http://dx.doi.org/10.3390/min11111265.

Abstract:
The study of the petrographic structure of medium- and high-rank coals is important from both a cognitive and a utilitarian point of view. The petrographic constituents and their individual characteristics and features are responsible for the properties of coal and the way it behaves in various technological processes. This paper considers the application of convolutional neural networks to coal petrographic image segmentation. A U-Net-based model for segmentation is proposed. The network was trained to segment inertinite, liptinite, and vitrinite, with segmentations prepared manually by a domain expert used as the ground truth. The results show that inertinite and vitrinite can be successfully segmented with minimal difference from the ground truth. Liptinite turned out to be much more difficult to segment; after applying transfer learning, moderate results were obtained. Nevertheless, the application of the U-Net-based network to petrographic image segmentation was successful, and the results are good enough to consider the method a supporting tool for domain experts in their everyday work.
8

Yang, Zi, Mingli Chen, Mahdieh Kazemimoghadam, Lin Ma, Strahinja Stojadinovic, Robert Timmerman, Tu Dan, Zabi Wardak, Weiguo Lu, and Xuejun Gu. "Deep-learning and radiomics ensemble classifier for false positive reduction in brain metastases segmentation." Physics in Medicine & Biology 67, no. 2 (January 19, 2022): 025004. http://dx.doi.org/10.1088/1361-6560/ac4667.

Abstract:
Abstract Stereotactic radiosurgery (SRS) is now the standard of care for brain metastases (BMs) patients. The SRS treatment planning process requires precise target delineation, which in clinical workflow for patients with multiple (>4) BMs (mBMs) could become a pronounced time bottleneck. Our group has developed an automated BMs segmentation platform to assist in this process. The accuracy of the auto-segmentation, however, is influenced by the presence of false-positive segmentations, mainly caused by the injected contrast during MRI acquisition. To address this problem and further improve the segmentation performance, a deep-learning and radiomics ensemble classifier was developed to reduce the false-positive rate in segmentations. The proposed model consists of a Siamese network and a radiomic-based support vector machine (SVM) classifier. The 2D-based Siamese network contains a pair of parallel feature extractors with shared weights followed by a single classifier. This architecture is designed to identify the inter-class difference. On the other hand, the SVM model takes the radiomic features extracted from 3D segmentation volumes as the input for twofold classification, either a false-positive segmentation or a true BM. Lastly, the outputs from both models create an ensemble to generate the final label. The performance of the proposed model in the segmented mBMs testing dataset reached the accuracy (ACC), sensitivity (SEN), specificity (SPE) and area under the curve of 0.91, 0.96, 0.90 and 0.93, respectively. After integrating the proposed model into the original segmentation platform, the average segmentation false negative rate (FNR) and the false positive over the union (FPoU) were 0.13 and 0.09, respectively, which preserved the initial FNR (0.07) and significantly improved the FPoU (0.55). 
The proposed method effectively reduced the false-positive rate in the BMs raw segmentations indicating that the integration of the proposed ensemble classifier into the BMs segmentation platform provides a beneficial tool for mBMs SRS management.
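The FNR and FPoU figures can be computed from binary masks; the definitions below are one plausible reading of the paper's metrics (false negatives over ground-truth positives, false positives over the union), not a quotation of its formulas:

```python
import numpy as np

def fnr_and_fpou(pred, ref):
    """False-negative rate and false-positives-over-union for binary
    segmentation masks. Assumes the reference mask is non-empty."""
    pred = np.asarray(pred, bool)
    ref = np.asarray(ref, bool)
    fn = np.sum(~pred & ref)       # missed true-positive voxels
    fp = np.sum(pred & ~ref)       # spurious segmented voxels
    union = np.sum(pred | ref)
    return fn / np.sum(ref), fp / union
```

Under this reading, the ensemble classifier's job is to push FPoU down (0.55 to 0.09) while barely disturbing FNR (0.07 to 0.13), i.e. to discard false-positive candidate segmentations without rejecting true metastases.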
9

Xue, Jie, Bao Wang, Yang Ming, Xuejun Liu, Zekun Jiang, Chengwei Wang, Xiyu Liu, et al. "Deep learning–based detection and segmentation-assisted management of brain metastases." Neuro-Oncology 22, no. 4 (December 23, 2019): 505–14. http://dx.doi.org/10.1093/neuonc/noz234.

Abstract:
Abstract Background Three-dimensional T1 magnetization prepared rapid acquisition gradient echo (3D-T1-MPRAGE) is preferred in detecting brain metastases (BM) among MRI. We developed an automatic deep learning–based detection and segmentation method for BM (named BMDS net) on 3D-T1-MPRAGE images and evaluated its performance. Methods The BMDS net is a cascaded 3D fully convolution network (FCN) to automatically detect and segment BM. In total, 1652 patients with 3D-T1-MPRAGE images from 3 hospitals (n = 1201, 231, and 220, respectively) were retrospectively included. Manual segmentations were obtained by a neuroradiologist and a radiation oncologist in a consensus reading in 3D-T1-MPRAGE images. Sensitivity, specificity, and dice ratio of the segmentation were evaluated. Specificity and sensitivity measure the fractions of relevant segmented voxels. Dice ratio was used to quantitatively measure the overlap between automatic and manual segmentation results. Paired samples t-tests and analysis of variance were employed for statistical analysis. Results The BMDS net can detect all BM, providing a detection result with an accuracy of 100%. Automatic segmentations correlated strongly with manual segmentations through 4-fold cross-validation of the dataset with 1201 patients: the sensitivity was 0.96 ± 0.03 (range, 0.84–0.99), the specificity was 0.99 ± 0.0002 (range, 0.99–1.00), and the dice ratio was 0.85 ± 0.08 (range, 0.62–0.95) for total tumor volume. Similar performances on the other 2 datasets also demonstrate the robustness of BMDS net in correctly detecting and segmenting BM in various settings. Conclusions The BMDS net yields accurate detection and segmentation of BM automatically and could assist stereotactic radiotherapy management for diagnosis, therapy planning, and follow-up.
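The dice ratio used above to quantify overlap between automatic and manual segmentations has a standard form; a minimal sketch (assuming non-empty masks):

```python
import numpy as np

def dice_ratio(a, b):
    """Dice similarity coefficient between two binary segmentation
    masks: twice the overlap divided by the total mask size."""
    a = np.asarray(a, bool)
    b = np.asarray(b, bool)
    inter = np.sum(a & b)
    return 2 * inter / (np.sum(a) + np.sum(b))
```

Dice rewards overlap symmetrically, which is why it is preferred over raw accuracy when the tumor occupies only a tiny fraction of the voxels.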
10

Napte, Kiran, and Anurag Mahajan. "Deep Learning based Liver Segmentation: A Review." Revue d'Intelligence Artificielle 36, no. 6 (December 31, 2022): 979–84. http://dx.doi.org/10.18280/ria.360620.

Abstract:
Tremendous advances are taking place in medical science, making it possible to support diagnosis and treatment planning for various diseases of the abdominal organs. The liver, one of the abdominal organs, is a common site for developing tumors, and liver disease is one of the main causes of death. Due to its complex and heterogeneous nature and shape, segmenting the liver and its tumors is challenging. Numerous methods are available for liver segmentation: handcrafted, semi-automatic, and fully automatic. Image segmentation using deep learning techniques has become a very robust tool, and many liver-segmentation methods use deep learning. This article surveys liver-segmentation schemes based on artificial neural networks (ANN), convolutional neural networks (CNN), deep belief networks (DBN), autoencoders, deep feed-forward neural networks (DFNN), etc., covering architecture details, methodology, performance metrics, and datasets. Researchers are continuously working to improve these segmentation techniques, so this article gives a comprehensive review of deep learning-based liver segmentation techniques and highlights their advantages over traditional segmentation techniques.

Dissertations / Theses on the topic "Deep learning segmentation"

1

Favia, Federico. "Real-time hand segmentation using deep learning." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-292930.

Abstract:
Hand segmentation is a fundamental part of many computer vision systems aimed at gesture recognition or hand tracking. In particular, augmented reality solutions need a very accurate gesture-analysis system to satisfy end consumers, so the hand-segmentation step is critical. Segmentation is a well-known problem in image processing: the process of dividing a digital image into multiple regions of pixels with similar qualities. Classifying which pixels belong to the hand and which belong to the background must be performed in real time and at reasonable computational complexity. While in the past mainly lightweight probabilistic and machine learning approaches were used, this work investigates the challenges of real-time hand segmentation achieved through several deep learning techniques. Is it possible to improve current state-of-the-art segmentation systems for smartphone applications? Several models are tested and compared based on accuracy and processing speed. A transfer-learning-like approach guides this work, since many architectures were built only for generic semantic segmentation or for particular applications such as autonomous driving. Great effort is spent on organizing a solid and generalized dataset of hands, exploiting existing datasets and data collected by ManoMotion AB. Since the first aim was to obtain a highly accurate hand segmentation, the RefineNet architecture is ultimately selected, and both quantitative and qualitative evaluations are performed, weighing its advantages and analyzing the problems related to its computational time, which could be improved in the future.
2

Sarpangala, Kishan. "Semantic Segmentation Using Deep Learning Neural Architectures." University of Cincinnati / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ucin157106185092304.

3

Wen, Shuangyue. "Automatic Tongue Contour Segmentation using Deep Learning." Thesis, Université d'Ottawa / University of Ottawa, 2018. http://hdl.handle.net/10393/38343.

Abstract:
Ultrasound is one of the primary technologies used for clinical purposes. Ultrasound systems have favorable real-time capabilities and are fast, relatively inexpensive, portable, and non-invasive. Recent interest in ultrasound imaging of tongue motion has applications in linguistic study and speech therapy, as well as in foreign-language education, where visual feedback of tongue motion complements conventional audio feedback. Ultrasound images are notoriously difficult to interpret: their anatomical structure, the rapidity of tongue movements, segments missing in some frames, and the limited frame rate of ultrasound systems make automatic tongue-contour extraction and tracking very challenging, especially for real-time applications. Traditional image-processing approaches have many practical limitations in terms of automation, speed, and accuracy. Recent progress in deep convolutional neural networks has been successfully exploited in a variety of computer vision problems such as detection, classification, and segmentation. In the past few years, deep belief networks for tongue segmentation and convolutional neural networks for classifying tongue motion have been proposed; however, none of these claims fully automatic or real-time performance. U-Net, one of the most popular deep learning architectures for image segmentation, is composed of several convolution and deconvolution layers. In this thesis, we propose a fully automatic system that extracts the tongue dorsum from ultrasound videos in real time using a simplified version of U-Net, which we call sU-Net. Two databases from different machines were collected, and different training schemes were applied to test the learning capability of the model. Our experiments on ultrasound video data demonstrate that the proposed method is very competitive with other methods in terms of performance and accuracy.
4

Ananya. "Deep Learning Methods for Crop and Weed Segmentation." Case Western Reserve University School of Graduate Studies / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=case1528372119706623.

5

Tosteberg, Patrik. "Semantic Segmentation of Point Clouds Using Deep Learning." Thesis, Linköpings universitet, Datorseende, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-136793.

Abstract:
In computer vision, it has in recent years become more popular to use point clouds to represent 3D data. To understand what a point cloud contains, methods like semantic segmentation can be used. Semantic segmentation is the problem of segmenting images or point clouds and understanding what the different segments are. One application of semantic segmentation of point clouds is autonomous driving, where the car needs information about the objects in its surroundings. Our approach to the problem is to project the point clouds into 2D virtual images using the Katz projection, then use pre-trained convolutional neural networks to semantically segment the images. To obtain the semantically segmented point clouds, we project the scores from the segmentation back into the point cloud. Our approach is evaluated on the Semantic3D dataset, and we find our method is comparable to the state of the art without any fine-tuning on Semantic3D.
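The project-segment-back-project pipeline can be illustrated with a toy orthographic projection standing in for the Katz projection used in the thesis (the grid resolution and the dense label image are illustrative assumptions):

```python
import numpy as np

def project_to_grid(points, resolution):
    """Orthographically project 3D points (N x 3) onto an XY pixel grid;
    a toy stand-in for the Katz projection used in the thesis. Returns
    integer (col, row) pixel coordinates for each point."""
    xy = points[:, :2]
    mins = xy.min(axis=0)
    return np.floor((xy - mins) / resolution).astype(int)

def back_project(pix, label_image):
    """Transfer per-pixel semantic labels back onto the original points."""
    return label_image[pix[:, 1], pix[:, 0]]
```

In the real pipeline, the label image would come from a pre-trained 2D segmentation CNN; the back-projection step is what turns 2D scores into per-point labels.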
6

Kolhatkar, Dhanvin. "Real-Time Instance and Semantic Segmentation Using Deep Learning." Thesis, Université d'Ottawa / University of Ottawa, 2020. http://hdl.handle.net/10393/40616.

Abstract:
In this thesis, we explore the use of Convolutional Neural Networks for semantic and instance segmentation, with a focus on studying the application of existing methods with cheaper neural networks. We modify a fast object detection architecture for the instance segmentation task, and study the concepts behind these modifications both in the simpler context of semantic segmentation and the more difficult context of instance segmentation. Various instance segmentation branch architectures are implemented in parallel with a box prediction branch, using its results to crop each instance's features. We negate the imprecision of the final box predictions and eliminate the need for bounding box alignment by using an enlarged bounding box for cropping. We report and study the performance, advantages, and disadvantages of each. We achieve fast speeds with all of our methods.
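The enlarged-bounding-box cropping idea can be sketched as follows; the margin parameter and the (x0, y0, x1, y1) box format are illustrative assumptions, not the thesis' exact design:

```python
import numpy as np

def enlarged_crop(features, box, margin, shape):
    """Crop an instance's feature map with an enlarged bounding box,
    tolerating imprecise box predictions by padding each side by
    `margin` pixels (clamped to the image bounds)."""
    x0, y0, x1, y1 = box
    h, w = shape
    x0, y0 = max(0, x0 - margin), max(0, y0 - margin)
    x1, y1 = min(w, x1 + margin), min(h, y1 + margin)
    return features[y0:y1, x0:x1]
```

Enlarging the crop means a slightly misplaced predicted box still covers the whole instance, removing the need for exact bounding-box alignment before the mask branch.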
7

Wang, Wei. "Image Segmentation Using Deep Learning Regulated by Shape Context." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-227261.

Abstract:
In recent years, image segmentation using deep neural networks has made great progress. However, reaching a good result when training with a small amount of data remains a challenge. To find a good way to improve segmentation accuracy with limited datasets, we implemented a new automatic chest radiograph segmentation experiment, based on preliminary work by Chunliang, using a deep learning neural network combined with shape context information. The datasets were first fed into the original U-Net; after this preliminary step, the segmented images were repaired through a new network incorporating shape context information. In this experiment, we created a new network structure by rebuilding the U-Net into a two-input structure and refined the processing pipeline, in which the datasets and shape context were trained together through the new network model by iteration. The proposed method was evaluated on 247 posterior-anterior chest radiographs from public datasets, using n-fold cross-validation. The outcome shows that, compared with the original U-Net, the proposed pipeline reaches higher accuracy when trained with limited datasets, where "limited" refers to 1-20 images in the medical imaging field. A better outcome with higher accuracy could be reached if the second structure were further refined and the shape context generator's parameters fine-tuned in the future.
8

Chen, Yani. "Deep Learning based 3D Image Segmentation Methods and Applications." Ohio University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1547066297047003.

9

Liu, Dongnan. "Supervised and Unsupervised Deep Learning-based Biomedical Image Segmentation." Thesis, The University of Sydney, 2021. https://hdl.handle.net/2123/24744.

Abstract:
Biomedical image analysis plays a crucial role in the development of healthcare, with a wide scope of applications including the disease diagnosis, clinical treatment, and future prognosis. Among various biomedical image analysis techniques, segmentation is an essential step, which aims at assigning each pixel with labels of interest on the category and instance. At the early stage, the segmentation results were obtained via manual annotation, which is time-consuming and error-prone. Over the past few decades, hand-craft feature based methods have been proposed to segment the biomedical images automatically. However, these methods heavily rely on prior knowledge, which limits their generalization ability on various biomedical images. With the recent advance of the deep learning technique, convolutional neural network (CNN) based methods have achieved state-of-the-art performance on various nature and biomedical image segmentation tasks. The great success of the CNN based segmentation methods results from the ability to learn contextual and local information from the high dimensional feature space. However, the biomedical image segmentation tasks are particularly challenging, due to the complicated background components, the high variability of object appearances, numerous overlapping objects, and ambiguous object boundaries. To this end, it is necessary to establish automated deep learning-based segmentation paradigms, which are capable of processing the complicated semantic and morphological relationships in various biomedical images. In this thesis, we propose novel deep learning-based methods for fully supervised and unsupervised biomedical image segmentation tasks. For the first part of the thesis, we introduce fully supervised deep learning-based segmentation methods on various biomedical image analysis scenarios. 
First, we design a panoptic structure paradigm for nuclei instance segmentation in the histopathology images, and cell instance segmentation in the fluorescence microscopy images. Traditional proposal-based and proposal-free instance segmentation methods are only capable to leverage either global contextual or local instance information. However, our panoptic paradigm integrates both of them and therefore achieves better performance. Second, we propose a multi-level feature fusion architecture for semantic neuron membrane segmentation in the electron microscopy (EM) images. Third, we propose a 3D anisotropic paradigm for brain tumor segmentation in magnetic resonance images, which enlarges the model receptive field while maintaining the memory efficiency. Although our fully supervised methods achieve competitive performance on several biomedical image segmentation tasks, they heavily rely on the annotations of the training images. However, labeling pixel-level segmentation ground truth for biomedical images is expensive and labor-intensive. Subsequently, exploring unsupervised segmentation methods without accessing annotations is an important topic for biomedical image analysis. In the second part of the thesis, we focus on the unsupervised biomedical image segmentation methods. First, we proposed a panoptic feature alignment paradigm for unsupervised nuclei instance segmentation in the histopathology images, and mitochondria instance segmentation in EM images. To the best of our knowledge, we are for the first time to design an unsupervised deep learning-based method for various biomedical image instance segmentation tasks. Second, we design a feature disentanglement architecture for unsupervised object recognition. 
In addition to unsupervised instance segmentation for biomedical images, our method also achieves state-of-the-art performance on unsupervised object detection for natural images, which further demonstrates its effectiveness and generalization ability.
APA, Harvard, Vancouver, ISO, and other styles
10

Granli, Petter. "Semantic segmentation of seabed sonar imagery using deep learning." Thesis, Linköpings universitet, Programvara och system, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-160561.

Full text
Abstract:
For investigating the large parts of the ocean that have yet to be mapped, autonomous underwater vehicles are needed. Current state-of-the-art underwater positioning often relies on external data from other vessels or beacons. Processing seabed image data could potentially improve autonomy for underwater vehicles. In this thesis, image data from a synthetic aperture sonar (SAS) were manually segmented into two classes: sand and gravel. Two different convolutional neural networks (CNNs) were trained using different loss functions, and the results were examined. The best performing network, U-Net trained with the IoU loss function, achieved Dice coefficient and IoU scores of 0.645 and 0.476, respectively. It was concluded that CNNs are a viable approach for segmenting SAS image data, but there is much room for improvement.
APA, Harvard, Vancouver, ISO, and other styles
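The Dice coefficient and IoU scores reported in the abstract above are standard overlap metrics for binary segmentation masks. As a minimal sketch (the function name and epsilon smoothing are illustrative assumptions, not from the thesis), they can be computed as:

```python
import numpy as np

def dice_and_iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7):
    """Dice coefficient and IoU for binary segmentation masks.

    `pred` and `target` are 0/1 arrays of the same shape, e.g. one mask
    per sonar image with classes such as sand vs. gravel.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = 2.0 * intersection / (pred.sum() + target.sum() + eps)
    iou = intersection / (union + eps)
    return dice, iou
```

The two metrics are monotonically related (Dice = 2·IoU / (1 + IoU)), so a network ranked best by one is typically ranked best by the other, as with the U-Net scores of 0.645 and 0.476 above.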

Books on the topic "Deep learning segmentation"

1

Wang, Xiaogang. Deep Learning in Object Recognition, Detection, and Segmentation. Now Publishers, 2016.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Brain Tumor MRI Image Segmentation Using Deep Learning Techniques. Elsevier, 2022. http://dx.doi.org/10.1016/c2021-0-00056-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Chaki, Jyotismita. Brain Tumor MRI Image Segmentation Using Deep Learning Techniques. Elsevier Science & Technology, 2021.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Chaki, Jyotismita. Brain Tumor MRI Image Segmentation Using Deep Learning Techniques. Elsevier Science & Technology Books, 2021.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Advanced Deep Learning with TensorFlow 2 and Keras: Apply DL, GANs, VAEs, Deep RL, Unsupervised Learning, Object Detection and Segmentation, and More, 2nd Edition. Packt Publishing, Limited, 2020.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Deep learning segmentation"

1

Hatamizadeh, Ali, Assaf Hoogi, Debleena Sengupta, Wuyue Lu, Brian Wilcox, Daniel Rubin, and Demetri Terzopoulos. "Deep Active Lesion Segmentation." In Machine Learning in Medical Imaging, 98–105. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-32692-0_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Benoit, Alexandre, Badih Ghattas, Emna Amri, Joris Fournel, and Patrick Lambert. "Deep Learning for Semantic Segmentation." In Multi-faceted Deep Learning, 39–72. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-74478-6_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Moskalenko, Viktor, Nikolai Zolotykh, and Grigory Osipov. "Deep Learning for ECG Segmentation." In Studies in Computational Intelligence, 246–54. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30425-6_29.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Yang, Hao-Yu. "Deep Learning in Brain Segmentation." In Handbook of Artificial Intelligence in Biomedical Engineering, 261–88. Apple Academic Press (Biomedical Engineering: Techniques and Applications series), 2020. http://dx.doi.org/10.1201/9781003045564-12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Kaur, Prabhjot, and Anand Muni Mishra. "Segmentation of Deep Learning Models." In Machine Learning for Edge Computing, 115–26. Boca Raton: CRC Press, 2022. http://dx.doi.org/10.1201/9781003143468-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Keydana, Sigrid. "Image Segmentation." In Deep Learning and Scientific Computing with R torch, 181–200. Boca Raton: Chapman and Hall/CRC, 2023. http://dx.doi.org/10.1201/9781003275923-19.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Ezeobiejesi, Jude, and Bir Bhanu. "Latent Fingerprint Image Segmentation Using Deep Neural Network." In Deep Learning for Biometrics, 83–107. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-61657-5_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Jalilian, Ehsaneddin, and Andreas Uhl. "Iris Segmentation Using Fully Convolutional Encoder–Decoder Networks." In Deep Learning for Biometrics, 133–55. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-61657-5_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Rashed, Hazem, Senthil Yogamani, Ahmad El-Sallab, Mohamed Elhelw, and Mahmoud Hassaballah. "Deep Semantic Segmentation in Autonomous Driving." In Deep Learning in Computer Vision, 151–82. First edition. Boca Raton, FL: CRC Press/Taylor and Francis, 2020. http://dx.doi.org/10.1201/9781351003827-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Munir, Khushboo, Fabrizio Frezza, and Antonello Rizzi. "Deep Learning for Brain Tumor Segmentation." In Studies in Computational Intelligence, 189–201. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-6321-8_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Deep learning segmentation"

1

Kamkova, Yuliia, Hemin Ali Qadir, Ole Jakob, and Rahul Prasanna Kumar. "Kidney and tumor segmentation using combined Deep learning method." In 2019 Kidney Tumor Segmentation Challenge: KiTS19. University of Minnesota Libraries Publishing, 2019. http://dx.doi.org/10.24926/548719.091.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Raein Hashemi, Seyed, Boris Gershman, and Vladimir I. Valtchinov. "Development of a Deep Learning Algorithm for Segmentation of Kidney Tumor Imaging." In 2019 Kidney Tumor Segmentation Challenge: KiTS19. University of Minnesota Libraries Publishing, 2019. http://dx.doi.org/10.24926/548719.083.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Souza, Alan, Wilson Leao, Daniel Miranda, Nelson Hargreaves, Bruno Dias, and Erick Talarico. "Salt segmentation using deep learning." In International Congress of the Brazilian Geophysical Society & Expogef. Brazilian Geophysical Society, 2019. http://dx.doi.org/10.22564/16cisbgf2019.219.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Zhou, Xueting, Yan Chen, and Shoushan Liu. "Deep learning for image segmentation." In ICAIP 2022: 2022 6th International Conference on Advances in Image Processing. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3577117.3577144.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Junior, Gerivan, Janderson Ferreira, Cristian Millan-Aria, Ramiro Daniel, Alberto Casado, and Bruno Fernandes. "Ceramic Cracks Segmentation with Deep Learning." In LatinX in AI at International Conference on Machine Learning 2021. Journal of LatinX in AI Research, 2021. http://dx.doi.org/10.52591/202107245.

Full text
Abstract:
Cracks are pathologies whose appearance in ceramic tiles can cause various types of damage, as the coating system loses its water-tightness and impermeability functions. Moreover, a detached ceramic plate not only exposes the building structure but can also strike people moving around the building. Manual inspection is the most common method for this problem; however, it depends on the knowledge and experience of those performing the analysis, demands a long time to map the entire area, and incurs high costs. These inspections require special equipment when performed at high altitudes, putting the inspector's safety at risk. Thus, there is a need for automated optical inspection to find faults in ceramic tiles. This work focuses on the segmentation of cracks in ceramic images using deep learning. We propose an architecture for segmenting cracks in facades with deep learning that includes a pre-processing step. We also propose the Ceramic Crack Database, a set of images for segmenting defects in ceramic tiles. The results show that the proposed architecture for ceramic crack segmentation achieves promising performance.
APA, Harvard, Vancouver, ISO, and other styles
6

Junior, Gerivan, Janderson Ferreira, Cristian Millan-Aria, Ramiro Daniel, Alberto Casado, and Bruno Fernandes. "Ceramic Cracks Segmentation with Deep Learning." In LatinX in AI at International Conference on Machine Learning 2021. Journal of LatinX in AI Research, 2021. http://dx.doi.org/10.52591/lxai202107245.

Full text
Abstract:
Cracks are pathologies whose appearance in ceramic tiles can cause various types of damage, as the coating system loses its water-tightness and impermeability functions. Moreover, a detached ceramic plate not only exposes the building structure but can also strike people moving around the building. Manual inspection is the most common method for this problem; however, it depends on the knowledge and experience of those performing the analysis, demands a long time to map the entire area, and incurs high costs. These inspections require special equipment when performed at high altitudes, putting the inspector's safety at risk. Thus, there is a need for automated optical inspection to find faults in ceramic tiles. This work focuses on the segmentation of cracks in ceramic images using deep learning. We propose an architecture for segmenting cracks in facades with deep learning that includes a pre-processing step. We also propose the Ceramic Crack Database, a set of images for segmenting defects in ceramic tiles. The results show that the proposed architecture for ceramic crack segmentation achieves promising performance.
APA, Harvard, Vancouver, ISO, and other styles
7

Müller, Dominik, and Frank Kramer. "MIScnn: A Framework for Medical Image Segmentation with Convolutional Neural Networks and Deep Learning." In 2019 Kidney Tumor Segmentation Challenge: KiTS19. University of Minnesota Libraries Publishing, 2019. http://dx.doi.org/10.24926/548719.074.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Mehrubeoglu, Mehrube, Isaac Vargas, Chi Huang, and Kirk Cammarata. "Segmentation of seagrass blade images using deep learning." In Real-Time Image Processing and Deep Learning 2021, edited by Nasser Kehtarnavaz and Matthias F. Carlsohn. SPIE, 2021. http://dx.doi.org/10.1117/12.2587057.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Vijay, Amishi, Jasleen Saini, and B. S. Saini. "A Review of Brain Tumor Image Segmentation of MR Images Using Deep Learning Methods." In International Conference on Women Researchers in Electronics and Computing. AIJR Publisher, 2021. http://dx.doi.org/10.21467/proceedings.114.19.

Full text
Abstract:
Careful analysis is routine for brain tumor patients, and it depends on accurate segmentation of the region of interest. In the field of automatic segmentation, deep learning algorithms have attracted interest after performing very well in various ImageNet competitions. This review focuses on state-of-the-art deep learning algorithms applied to brain tumor segmentation. First, we review methods of brain tumor segmentation; next, the different deep learning algorithms and their performance measures, such as sensitivity, specificity, and Dice similarity coefficient (DSC), are discussed; finally, we summarize the current deep learning techniques and identify future scope and trends.
APA, Harvard, Vancouver, ISO, and other styles
10

Liu, Yun, Peng-Tao Jiang, Vahan Petrosyan, Shi-Jie Li, Jiawang Bian, Le Zhang, and Ming-Ming Cheng. "DEL: Deep Embedding Learning for Efficient Image Segmentation." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/120.

Full text
Abstract:
Image segmentation has been explored for many years and remains a crucial vision problem. Efficient or accurate segmentation algorithms have been widely used in many vision applications, yet it is difficult to design an image segmenter that is both efficient and accurate. In this paper, we propose a novel method called DEL (deep embedding learning) which can efficiently transform superpixels into an image segmentation. Starting from SLIC superpixels, we train a fully convolutional network to learn a feature embedding space for each superpixel. The learned feature embedding corresponds to a similarity measure between two adjacent superpixels. With these deep similarities, we can directly merge the superpixels into large segments. Evaluation results on BSDS500 and PASCAL Context demonstrate that our approach achieves a good trade-off between efficiency and effectiveness. Specifically, our DEL algorithm achieves segments comparable to MCG while being much faster (11.4 fps vs. 0.07 fps).
APA, Harvard, Vancouver, ISO, and other styles
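The merging step described in the DEL abstract above can be sketched as a greedy union of adjacent superpixels whose learned embeddings are similar. In this illustrative sketch, the cosine similarity measure, the threshold, and the union-find relabeling are assumptions for exposition, not the paper's exact formulation:

```python
import numpy as np

def merge_superpixels(embeddings, adjacency, threshold=0.9):
    """Merge adjacent superpixels with similar learned embeddings.

    `embeddings` is an (n, d) array of per-superpixel feature vectors
    (e.g. pooled FCN features); `adjacency` is a list of (i, j) pairs of
    neighbouring superpixels. Returns an array of merged segment ids.
    """
    n = len(embeddings)
    parent = list(range(n))  # union-find forest over superpixels

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    # Normalize so the dot product is a cosine similarity.
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    unit = embeddings / np.clip(norms, 1e-12, None)

    for i, j in adjacency:
        if unit[i] @ unit[j] >= threshold:  # similar neighbours -> same segment
            parent[find(i)] = find(j)

    # Relabel each superpixel by the root of its merged segment.
    return np.array([find(i) for i in range(n)])
```

Because merging operates on the superpixel adjacency graph rather than on individual pixels, this step is cheap compared with the network forward pass, which is the source of the efficiency the abstract reports.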

Reports on the topic "Deep learning segmentation"

1

Chang, Ke-Vin. Deep Learning Algorithm for Automatic Localization and Segmentation of the Median Nerve: a Protocol for Systematic Review and Meta-analysis. INPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, May 2022. http://dx.doi.org/10.37766/inplasy2022.5.0074.

Full text
Abstract:
Review question / Objective: To explore and summarize the performance of deep learning in automatic localization and segmentation of the median nerve at the carpal tunnel level. Condition being studied: Participants with and without carpal tunnel syndrome. Information sources: The following electronic databases will be searched: PubMed, Medline, Embase, and Web of Science. We target studies investigating the utility of deep neural networks in the evaluation of the median nerve in the carpal tunnel.
APA, Harvard, Vancouver, ISO, and other styles
2

Huang, Haohang, Erol Tutumluer, Jiayi Luo, Kelin Ding, Issam Qamhia, and John Hart. 3D Image Analysis Using Deep Learning for Size and Shape Characterization of Stockpile Riprap Aggregates—Phase 2. Illinois Center for Transportation, September 2022. http://dx.doi.org/10.36501/0197-9191/22-017.

Full text
Abstract:
Riprap rock and aggregates are extensively used in structural, transportation, geotechnical, and hydraulic engineering applications. Field determination of morphological properties of aggregates such as size and shape can greatly facilitate the quality assurance/quality control (QA/QC) process for proper aggregate material selection and engineering use. Many aggregate imaging approaches have been developed to characterize the size and morphology of individual aggregates by computer vision. However, 3D field characterization of aggregate particle morphology is challenging both during the quarry production process and at construction sites, particularly for aggregates in stockpile form. This research study presents a 3D reconstruction-segmentation-completion approach based on deep learning techniques by combining three developed research components: field 3D reconstruction procedures, 3D stockpile instance segmentation, and 3D shape completion. The approach was designed to reconstruct aggregate stockpiles from multi-view images, segment the stockpile into individual instances, and predict the unseen side of each instance (particle) based on the partial visible shapes. Based on the dataset constructed from individual aggregate models, a state-of-the-art 3D instance segmentation network and a 3D shape completion network were implemented and trained, respectively. The application of the integrated approach was demonstrated on re-engineered stockpiles and field stockpiles. The validation of results using ground-truth measurements showed satisfactory algorithm performance in capturing and predicting the unseen sides of aggregates. The algorithms are integrated into a software application with a user-friendly graphical user interface. 
Based on the findings of this study, this stockpile aggregate analysis approach is envisioned to provide efficient field evaluation of aggregate stockpiles by offering convenient and reliable solutions for on-site QA/QC tasks of riprap rock and aggregate stockpiles.
APA, Harvard, Vancouver, ISO, and other styles
3

Alhasson, Haifa F., and Shuaa S. Alharbi. New Trends in Image-Based Diabetic Foot Ulcer Diagnosis Using Machine Learning Approaches: A Systematic Review. INPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, November 2022. http://dx.doi.org/10.37766/inplasy2022.11.0128.

Full text
Abstract:
Review question / Objective: A significant amount of research has been conducted to detect and recognize diabetic foot ulcers (DFUs) using computer vision methods, but a number of challenges remain, and DFU detection frameworks based on machine learning/deep learning lack systematic reviews. Machine Learning (ML) and Deep Learning (DL) can improve care for individuals at risk of DFUs; this review identifies and synthesizes evidence about their use in the interventional care and management of DFUs and suggests future research directions. Information sources: A thorough search of electronic databases such as Science Direct, PubMed (MEDLINE), arXiv.org, MDPI, Nature, Google Scholar, Scopus, and Wiley Online Library was conducted to identify and select the literature for this study (January 2010–January 01, 2023). It was based on the most popular image-based diagnosis targets in DFU, such as segmentation, detection, and classification. Various keywords were used during the identification process, including artificial intelligence in DFU, deep learning, machine learning, ANNs, CNNs, DFU detection, DFU segmentation, DFU classification, and computer-aided diagnosis.
APA, Harvard, Vancouver, ISO, and other styles
4

Patwa, B., P. L. St-Charles, G. Bellefleur, and B. Rousseau. Predictive models for first arrivals on seismic reflection data, Manitoba, New Brunswick, and Ontario. Natural Resources Canada/CMSS/Information Management, 2022. http://dx.doi.org/10.4095/329758.

Full text
Abstract:
First arrivals are the primary waves picked and analyzed by seismologists to infer properties of the subsurface. Here we address a problem in a small subsection of the seismic processing workflow: first-break picking of seismic reflection data. We formulate this problem as an image segmentation task. The data are preprocessed, cleaned of outliers, and extrapolated to make the training of deep learning models feasible. We use fully convolutional networks (specifically U-Nets) to train initial models and explore their performance across losses, layer depths, and numbers of classes. We propose using residual connections to improve each U-Net block, and residual paths to bridge the semantic gap between the U-Net encoder and decoder, which improves the performance of the model. Adding spatial information as an extra channel helped improve the RMSE of the first-break predictions. Other techniques, such as data augmentation, multitask loss, and normalization methods, were further explored to evaluate model improvement.
APA, Harvard, Vancouver, ISO, and other styles
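Formulating first-break picking as image segmentation, as in the report above, means the network outputs a per-pixel score over a (time samples × traces) image, which must then be reduced to one pick per trace. A minimal sketch of that reduction step (the argmax-per-trace post-processing here is an illustrative assumption, not necessarily the authors' exact procedure):

```python
import numpy as np

def pick_first_breaks(prob_map: np.ndarray) -> np.ndarray:
    """Reduce a first-break probability map to one pick per trace.

    `prob_map` has shape (n_samples, n_traces): for each trace (column),
    the segmentation network assigns every time sample a score for being
    the first arrival. The pick is the highest-scoring sample per trace.
    """
    return np.argmax(prob_map, axis=0)  # sample index of the pick, per trace
```

An RMSE such as the one discussed in the abstract can then be computed directly between these predicted sample indices and the analyst-picked ground truth.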