Dissertations / Theses on the topic 'Deep Learning Imaging'




Consult the top 50 dissertations / theses for your research on the topic 'Deep Learning Imaging.'


You can also download the full text of each academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Li, Shuai. "Computational imaging through deep learning." Thesis (Ph. D.), Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122070.

Full text
Abstract:
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 143-154).
Computational imaging (CI) is a class of imaging systems that use inverse algorithms to recover an unknown object from physical measurements. Traditional inverse algorithms in CI obtain an estimate of the object by minimizing the Tikhonov functional, which requires an explicit formulation of the forward operator of the physical system, as well as prior knowledge about the class of objects being imaged. In recent years, machine learning architectures, and deep learning (DL) in particular, have attracted increasing attention from CI researchers. Unlike traditional inverse algorithms in CI, the DL approach learns both the forward operator and the objects' prior implicitly from training examples. It is therefore especially attractive when the forward imaging model is uncertain (e.g. imaging through random scattering media), or when the prior on the class of objects is difficult to express analytically (e.g. natural images).
In this thesis, the application of DL approaches in two different CI scenarios is investigated: imaging through a glass diffuser and quantitative phase retrieval (QPR), for which an Imaging through Diffuser Network (IDiffNet) and a Phase Extraction Neural Network (PhENN) are experimentally demonstrated, respectively. The thesis also studies the two main factors that determine the performance of a trained neural network: network architecture (connectivity, network depth, etc.) and training-example quality (spatial frequency content in particular). Motivated by the analysis of the latter factor, two novel approaches, a spectral pre-modulation approach and the Learning Synthesis by DNN (LS-DNN) method, are successively proposed to improve the visual quality of the network outputs. Finally, the LS-DNN-enhanced PhENN is applied to a phase microscope to recover the phase of a red blood cell (RBC) sample.
Furthermore, through simulation of the learned weak object transfer function (WOTF) and an experiment on a star-like phase target, we demonstrate that the network has indeed learned the correct physical model rather than performing trivial pattern matching.
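For a linear forward model, the Tikhonov-regularized inversion described in the abstract has a simple closed form. The following is a hedged toy sketch; the operator `A`, weight `lam`, and all sizes are illustrative, not from the thesis:

```python
import numpy as np

# Toy Tikhonov inversion: recover x from y = A x + noise by minimizing
#   ||A x - y||^2 + lam * ||x||^2,
# whose closed-form minimizer is x = (A^T A + lam I)^(-1) A^T y.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))                 # known forward operator
x_true = rng.standard_normal(20)                  # unknown object
y = A @ x_true + 0.01 * rng.standard_normal(40)   # noisy measurement

lam = 0.1                                         # regularization weight
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(20), A.T @ y)
```

Note how both the forward operator `A` and the prior (here, a simple energy penalty) must be stated explicitly, which is exactly what the DL approach avoids.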
by Shuai Li.
APA, Harvard, Vancouver, ISO, and other styles
2

Alzubaidi, Laith. "Deep learning for medical imaging applications." Thesis, Queensland University of Technology, 2022. https://eprints.qut.edu.au/227812/1/Laith_Alzubaidi_Thesis.pdf.

Full text
Abstract:
This thesis investigated novel deep learning techniques for advanced medical imaging applications. It addressed three major research issues in employing deep learning for medical imaging: network architecture, lack of training data, and generalisation. It proposed three new CNN architecture frameworks and three novel transfer learning methods. The proposed solutions were tested on four different medical imaging applications, demonstrating their effectiveness and generalisation. These solutions have already been adopted by the scientific community, showing excellent performance in medical imaging applications and other domains.
3

Bernal Moyano, Jose. "Deep learning for atrophy quantification in brain magnetic resonance imaging." Doctoral thesis, Universitat de Girona, 2020. http://hdl.handle.net/10803/671699.

Full text
Abstract:
The quantification of cerebral atrophy is fundamental in neuroinformatics since it permits diagnosing brain diseases, assessing their progression, and determining the effectiveness of novel treatments to counteract them. However, this is still an open and challenging problem, since the performance of traditional methods depends on imaging protocols and quality, data harmonisation errors, and brain abnormalities. In this doctoral thesis, we question whether deep learning methods can be used to better estimate cerebral atrophy from magnetic resonance images. Our work shows that deep learning can lead to state-of-the-art performance in cross-sectional assessments and compete with and surpass traditional longitudinal atrophy quantification methods. We believe that the proposed cross-sectional and longitudinal methods can be beneficial for the research and clinical community.
4

Sundman, Tobias. "Noise Reduction in Flash X-ray Imaging Using Deep Learning." Thesis, Uppsala universitet, Avdelningen för beräkningsvetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-355731.

Full text
Abstract:
Recent improvements in deep learning architectures, combined with the strength of modern computing hardware such as graphics processing units, have led to significant results in the field of image analysis. In this thesis work, locally connected architectures are employed to reduce noise in flash X-ray diffraction images. The layers in these architectures use convolutional kernels, but without shared weights. This combines the lower memory footprint of convolutional networks with the higher model capacity of fully connected networks. Since the camera used to capture the diffraction images has pixelwise unique characteristics, and thus lacks equivariance, this compromise can be beneficial. The background images of this thesis work were generated with an active laser but without injected samples. Artificial diffraction patterns were then added to these background images, allowing U-Net architectures to be trained to separate them. Architecture A achieved a performance of 0.187 on the test set, roughly translating to 35 fewer photon errors than a model similar to the state of the art. After smoothing the photon errors this performance increased to 0.285, since the U-Net architectures managed to remove flares where the state of the art could not. This can be taken as a proof of concept that locally connected networks are able to separate diffraction from background in flash X-ray imaging.
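A locally connected layer of the kind described (convolution-style sliding windows, but a distinct kernel at every output position) can be sketched in a few lines of NumPy; the shapes and the all-ones weights below are illustrative only, not from the thesis:

```python
import numpy as np

# Locally connected layer: like a convolution, each output pixel sees a
# k x k input window, but the weights are NOT shared across positions.
def locally_connected(x, w):
    # x: (H, W) image; w: (H-k+1, W-k+1, k, k), one kernel per output pixel
    out_h, out_w, k, _ = w.shape
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(x[i:i + k, j:j + k] * w[i, j])
    return out

x = np.arange(16.0).reshape(4, 4)
w = np.ones((3, 3, 2, 2))        # 2x2 windows, unique weights per position
y = locally_connected(x, w)      # shape (3, 3)
```

With shared weights `w[i, j]` would collapse to a single kernel and this would reduce to an ordinary convolution; keeping them separate is what lets the layer adapt to pixelwise camera characteristics.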
5

Forsgren, Edvin. "Deep Learning to Enhance Fluorescent Signals in Live Cell Imaging." Thesis, Umeå universitet, Institutionen för fysik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-175328.

Full text
6

McCamey, Morgan R. "Deep Learning for Compressive SAR Imaging with Train-Test Discrepancy." Wright State University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=wright1624266549100904.

Full text
7

Wajngot, David. "Improving Image Quality in Cardiac Computed Tomography using Deep Learning." Thesis, Linköpings universitet, Avdelningen för kardiovaskulär medicin, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-154506.

Full text
Abstract:
Cardiovascular diseases are the leading cause of death globally, and early diagnosis is essential for a proper medical response. Cardiac computed tomography can be used to acquire images for their diagnosis, but without dose reduction the radiation delivered to the patient becomes a significant risk factor. Reducing the dose, however, often compromises image quality and makes diagnosis difficult. This project proposes image quality enhancement with deep learning. A cycle-consistent generative adversarial network was fed low- and high-quality images with the purpose of learning to translate between them. By using a cycle-consistency cost it was possible to train the network without paired data. With this method, a low-quality image acquired from a dose-reduced computed tomography scan could be enhanced in post-processing. The results were mixed but showed an increase in ventricular contrast and artifact mitigation. The technique comes with several problems that are yet to be solved, such as structure alterations, but it shows promise for continued development.
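The cycle-consistency cost that removes the need for paired data can be illustrated with a toy example; the linear "generators" `G` and `F` below are stand-ins for the networks, not the thesis model:

```python
import numpy as np

# Cycle-consistency idea: G maps low -> high quality, F maps high -> low.
# The cycle cost penalizes F(G(x)) differing from x, so no paired
# low/high images of the same patient are ever required.
def cycle_loss(x, G, F):
    return np.mean(np.abs(F(G(x)) - x))   # L1 cycle-consistency term

G = lambda x: 2.0 * x + 1.0     # stand-in generator, low -> high
F = lambda x: (x - 1.0) / 2.0   # stand-in generator, high -> low (G's inverse)
x = np.linspace(0.0, 1.0, 5)
loss = cycle_loss(x, G, F)      # exact inverses give zero cycle loss
```

In a real CycleGAN this term is added to the two adversarial losses, pushing the pair of generators toward being mutual inverses.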
8

Nie, Yali. "Automatic Melanoma Diagnosis in Dermoscopic Imaging Base on Deep Learning System." Licentiate thesis, Mittuniversitetet, Institutionen för elektronikkonstruktion, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-41751.

Full text
Abstract:
Melanoma is one of the deadliest forms of cancer. Unfortunately, its incidence rates have been increasing all over the world. One of the techniques used by dermatologists to diagnose melanoma is an imaging modality called dermoscopy, in which the skin lesion is inspected using a magnification device and a light source. This technique makes it possible for the dermatologist to observe subcutaneous structures that would otherwise be invisible. However, the use of dermoscopy is not straightforward, requiring years of practice, and the diagnosis is often subjective and challenging to reproduce. It is therefore necessary to develop automatic methods that will help dermatologists provide more reliable diagnoses.  Since this cancer is visible on the skin, it is potentially detectable at a very early stage, when it is curable. Recent developments have converged to make fully automatic early melanoma detection a real possibility. First, the advent of dermoscopy has enabled a dramatic boost in clinical diagnostic ability, to the point that melanoma can be detected in the clinic at the earliest stages. The global adoption of this technology has allowed the accumulation of extensive collections of dermoscopy images. The development of advanced technologies in image processing and machine learning has given us the ability to distinguish malignant melanoma from the many benign mimics that require no biopsy. These new technologies should allow earlier detection of melanoma and reduce the large number of unnecessary and costly biopsy procedures. Although some of the new systems reported for these technologies have shown promise in preliminary trials, widespread implementation must await further technical progress in accuracy and reproducibility.  This thesis provides an overview of our deep learning (DL) based methods for diagnosing melanoma in dermoscopy images. First, we introduce the background.
Then, the thesis gives a brief overview of the state-of-the-art literature on melanoma interpretation. After that, a review is provided of deep learning models for melanoma image analysis and the main techniques used to improve diagnostic performance. We also summarise our research results. Finally, we discuss the challenges and opportunities in automating the diagnosis of melanocytic skin lesions, and we end with conclusions and directions for the research plan that follows.
9

Hoffmire, Matthew A. "Deep Learning for Anisoplanatic Optical Turbulence Mitigation in Long Range Imaging." University of Dayton / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1607694391536891.

Full text
10

Marini, Michela. "Representation learning and applications in neuronal imaging." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/19776/.

Full text
Abstract:
Confocal fluorescence microscopy is a microscopic technique that provides true three-dimensional (3D) optical resolution and allows the visualization of molecular expression patterns and morphological structures. This technique has therefore become increasingly important in neuroscience, due to its applications in image-based screening and profiling of neurons. Over the last two decades, many approaches have been introduced to segment neurons automatically, and with the more recent advances in neural networks and Deep Learning, multiple methods have been implemented focusing on the segmentation and delineation of neuronal trees and somas. Deep Learning methods such as Convolutional Neural Networks (CNNs) have recently become one of the main trends in Computer Vision. Their ability to find strong spatially local correlations in the data at different abstraction levels allows them to learn, given a labeled training set, a set of filters that are useful to correctly segment the data. The overall aim of this thesis was to develop a new algorithm for automated segmentation of confocal neuronal images based on Deep Learning techniques. To this end, we implemented a U-Net-based CNN and built the dataset necessary to train the network. The results show that satisfactory segmentations are achieved for all test images given as input to our algorithm, with a Dice coefficient, averaged over all images in the test dataset, greater than 0.9.
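The Dice coefficient used to score the segmentations can be computed directly from binary masks; a minimal sketch with toy masks (not the thesis data):

```python
import numpy as np

# Dice coefficient for binary masks: 2|A ∩ B| / (|A| + |B|).
# 1.0 means perfect overlap, 0.0 means no overlap at all.
def dice(pred, truth, eps=1e-8):
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)

a = np.array([[1, 1, 0], [0, 1, 0]])   # toy predicted mask
b = np.array([[1, 0, 0], [0, 1, 1]])   # toy ground-truth mask
score = dice(a, b)                     # 2*2 / (3+3) = 0.666...
```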
11

Nasrin, Mst Shamima. "Pathological Image Analysis with Supervised and Unsupervised Deep Learning Approaches." University of Dayton / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1620052562772676.

Full text
12

Wallis, David. "A study of machine learning and deep learning methods and their application to medical imaging." Thesis, université Paris-Saclay, 2021. http://www.theses.fr/2021UPAST057.

Full text
Abstract:
We first use Convolutional Neural Networks (CNNs) to automate mediastinal lymph node detection using FDG-PET/CT scans. We build a fully automated model to go directly from whole-body FDG-PET/CT scans to node localisation. The results show a performance comparable to that of an experienced physician. In the second half of the thesis we experimentally test the performance, interpretability, and stability of radiomic and CNN models on three datasets (2D brain MRI scans, 3D CT lung scans, 3D FDG-PET/CT mediastinal scans). We compare how the models improve as more data become available and examine whether there are patterns common to the different problems. We question whether current methods for model interpretation are satisfactory. We also investigate how precise segmentation affects the performance of the models.
13

Vekhande, Swapnil Sudhir. "Deep Learning Neural Network-based Sinogram Interpolation for Sparse-View CT Reconstruction." Thesis, Virginia Tech, 2019. http://hdl.handle.net/10919/90182.

Full text
Abstract:
Computed Tomography (CT) finds applications across domains such as medical diagnosis, security screening, and scientific research. In medical imaging, CT allows physicians to diagnose injuries and disease more quickly and accurately than other imaging techniques. However, CT is one of the most significant contributors of radiation dose to the general population, and the radiation dose required for scanning could lead to cancer. On the other hand, too low a radiation dose sacrifices image quality, causing misdiagnosis. To reduce the radiation dose, sparse-view CT, which captures a smaller number of projections, becomes a promising alternative. However, images reconstructed from linearly interpolated views possess severe artifacts. Recently, Deep Learning-based methods are increasingly being used to interpolate the missing data by learning the nature of the image formation process. The current methods are promising but operate mostly in the image domain, presumably due to a lack of projection data. Another limitation is the use of simulated data with low sparsity (up to 75%). This research aims to interpolate the missing sparse-view CT data in the sinogram domain using deep learning. To this end, a residual U-Net architecture has been trained with patch-wise projection data to minimize the Euclidean distance between the ground truth and the interpolated sinogram. The model can recover highly sparse missing projection data. The results show improvements in SSIM and RMSE of 14% and 52% respectively with respect to linear interpolation-based methods. Thus, experimental sparse-view CT data with 90% sparsity has been successfully interpolated while improving CT image quality.
Master of Science
Computed Tomography is a commonly used imaging technique due to its remarkable ability to visualize internal organs, bones, soft tissues, and blood vessels. It involves exposing the subject to X-ray radiation, which could lead to cancer. On the other hand, the radiation dose is critical for image quality and subsequent diagnosis. Thus, image reconstruction using only a small number of projections is an open research problem. Deep learning techniques have already revolutionized various Computer Vision applications. Here, we have used a method that fills in missing, highly sparse CT data. The results show that the deep learning-based method outperforms standard linear interpolation-based methods while improving image quality.
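The linear-interpolation baseline that such sinogram-domain methods are compared against can be sketched on a toy sinogram with every other view missing; the array sizes and RMSE metric below are illustrative, not the thesis data:

```python
import numpy as np

# Baseline sparse-view repair: each missing projection angle (a zeroed row
# of the sinogram) is replaced by the average of its two neighbouring views.
def interpolate_missing_views(sino):
    # sino: (n_angles, n_detectors) with odd-indexed rows missing (zeroed)
    out = sino.copy()
    for i in range(1, sino.shape[0] - 1, 2):
        out[i] = 0.5 * (sino[i - 1] + sino[i + 1])
    return out

rng = np.random.default_rng(1)
full = rng.random((9, 16))          # stand-in "full" sinogram
sparse = full.copy()
sparse[1::2] = 0.0                  # simulate 50% missing views
est = interpolate_missing_views(sparse)
rmse = np.sqrt(np.mean((est - full) ** 2))   # lower is better
```

A learned interpolator replaces the neighbour-averaging step with a network trained on projection patches, which is where the reported SSIM/RMSE gains come from.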
14

Sahasrabudhe, Mihir. "Unsupervised and weakly supervised deep learning methods for computer vision and medical imaging." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASC010.

Full text
Abstract:
The first two contributions of this thesis (Chapters 2 and 3) are models for unsupervised 2D alignment and learning of 3D object surfaces, called Deforming Autoencoders (DAE) and Lifting Autoencoders (LAE). These models are capable of identifying canonical spaces in order to represent different object properties: for example, appearance in a canonical space, the deformation associated with this appearance that maps it to the image space, and, for human faces, a 3D face model, its facial expression, and the camera angle. We further illustrate applications of the models to other domains: alignment of lung MRI images in medical image analysis, and alignment of satellite images in remote sensing. In Chapter 4, we concentrate on a problem in medical image analysis: the diagnosis of lymphocytosis. We propose a convolutional network to encode images of blood smears obtained from a patient, followed by an aggregation operation that gathers information from all images into one feature vector used to determine the diagnosis. Our results show that the performance of the proposed models is on par with that of biologists and can therefore augment their diagnosis.
15

Cabrera Gil, Blanca. "Deep Learning Based Deformable Image Registration of Pelvic Images." Thesis, KTH, Skolan för kemi, bioteknologi och hälsa (CBH), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279155.

Full text
Abstract:
Deformable image registration is usually performed manually by clinicians, which is time-consuming and costly, or using optimization-based algorithms, which are not always optimal for registering images of different modalities. In this work, a deep learning-based method for MR-CT deformable image registration is presented. First, a neural network is optimized to register CT pelvic image pairs. The model is then trained on MR-CT image pairs to register CT images to match their MR counterparts. To address the unavailability of ground-truth data, two approaches were used. For the CT-CT case, perfectly aligned image pairs were the starting point, and random deformations were generated to create a ground-truth deformation field. For the multi-modal case, synthetic CT images were generated from T2-weighted MR using a CycleGAN model, and synthetic deformations were applied to the MR images to generate ground-truth deformation fields. The synthetic deformations were created by combining a coarse and a fine deformation grid, obtaining a field with deformations at different scales. Several models were trained on images of different resolutions, and their performance was benchmarked against an analytic algorithm used in an actual registration workflow. The CT-CT models were tested using image pairs created by applying synthetic deformation fields. The MR-CT models were tested using two types of test images: the first contained synthetic CT images and MR images deformed by synthetically generated deformation fields; the second contained real MR-CT image pairs. Test performance was measured using the Dice coefficient. The CT-CT models obtained Dice scores higher than 0.82 even when trained on lower-resolution images. Although all MR-CT models experienced a drop in performance, the biggest decrease came from the analytic method used as a reference, on both synthetic and real test data. This means that the deep learning models outperformed the state-of-the-art analytic benchmark method. Even though the obtained Dice scores would need further improvement to be used in a clinical setting, the results show great potential for using deep learning-based methods for multi- and mono-modal deformable image registration.
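Generating ground-truth deformation fields synthetically, as described above, amounts to smoothing random displacements and warping an image with them. This toy sketch uses SciPy; the image, sizes, and smoothing parameters are all illustrative, not the thesis pipeline:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

# Smooth random deformation: blur coarse noise into a plausible displacement
# field (dx, dy), then resample the image at the displaced coordinates.
rng = np.random.default_rng(0)
img = rng.random((32, 32))                       # stand-in image

dx = gaussian_filter(rng.standard_normal((32, 32)), sigma=4) * 3.0
dy = gaussian_filter(rng.standard_normal((32, 32)), sigma=4) * 3.0

yy, xx = np.meshgrid(np.arange(32), np.arange(32), indexing="ij")
warped = map_coordinates(img, [yy + dy, xx + dx], order=1, mode="nearest")
```

The pair `(warped, (dx, dy))` then serves as a training example with a known ground-truth field, which is exactly what real registration data lacks.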
16

Chen, Zhiang. "Deep-learning Approaches to Object Recognition from 3D Data." Case Western Reserve University School of Graduate Studies / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=case1496303868914492.

Full text
17

Ren, Jing. "From RF signals to B-mode Images Using Deep Learning." Thesis, KTH, Medicinteknik och hälsosystem, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-235061.

Full text
Abstract:
Ultrasound imaging is a safe and popular imaging technique that relies on received radio frequency (RF) echoes to show internal organs and tissue. B-mode (brightness mode) is the typical mode of ultrasound images generated from RF signals. In practice, the actual processing algorithms from RF signals to B-mode images in ultrasound machines are kept confidential by the manufacturers. This thesis aims to estimate that process and, using deep learning, reproduce the same results as the Ultrasonix One ultrasound machine. Eleven scalar parameters, including global gain, time-gain compensation (TGC1-8), dynamic range, and reject, affect the transformation from RF signals to B-mode images in the machine. A data generation strategy was proposed, and two network architectures adapted from U-Net and Tiramisu Net were investigated and compared. Results show that a deep learning network is able to translate RF signals to B-mode images with respect to the controlling parameters. The best performance is achieved by the adapted U-Net, which reduces the per-pixel error to 1.325%. The trained model can be used to generate images for other experiments.
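For context, the classical RF-to-B-mode steps that such a network learns to approximate (the vendor's exact pipeline is confidential) are envelope detection followed by log compression into a chosen dynamic range. A hedged sketch with a synthetic scanline; function names and parameter values are illustrative:

```python
import numpy as np
from scipy.signal import hilbert

# Classical RF -> B-mode: envelope via the Hilbert transform (analytic
# signal magnitude), then log compression clipped to a dynamic range.
def rf_to_bmode(rf, dynamic_range_db=60.0):
    env = np.abs(hilbert(rf, axis=0))            # per-scanline envelope
    env = env / env.max()                        # normalize to [0, 1]
    db = 20.0 * np.log10(env + 1e-12)            # log compression
    bmode = np.clip(db, -dynamic_range_db, 0.0) + dynamic_range_db
    return bmode / dynamic_range_db              # brightness in [0, 1]

t = np.linspace(0.0, 1.0, 512)
rf = (np.sin(2 * np.pi * 40 * t) * np.exp(-3 * t))[:, None]  # toy scanline
img = rf_to_bmode(rf)
```

Gain, TGC, and reject would enter as further per-sample scalings and thresholds, which is why the machine's output depends on the 11 scalar parameters the thesis conditions on.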
18

Hellström, Terese. "Deep-learning based prediction model for dose distributions in lung cancer patients." Thesis, Stockholms universitet, Fysikum, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-196891.

Full text
Abstract:
Background To combat one of the leading causes of death worldwide, lung cancer treatment techniques and modalities are advancing, and treatment options are becoming increasingly individualized. Modern cancer treatment includes the option of proton therapy, which can in some cases spare healthy tissue from excessive dose better than conventional photon radiotherapy. However, to assess the benefit of proton therapy compared to photon therapy, it is necessary to make both treatment plans to obtain the Tumour Control Probability (TCP) and the Normal Tissue Complication Probability (NTCP). This requires considerable treatment planning time and increases the workload for planners.  Aim This project aims to investigate the possibility of automated prediction of the treatment dose distribution using a deep learning network for lung cancer patients treated with photon radiotherapy. This is an initial step towards decreasing the overall planning time; it would allow for efficient estimation of the NTCP for each treatment plan and lower the workload of treatment planning technicians. The purpose of the current work was also to understand which features of the input data and which training specifics were essential for producing accurate predictions.  Methods Three different deep learning networks were developed to assess the difference in performance based on the complexity of the network input. The deep learning models were applied to predict the dose distribution of lung cancer treatments, using data from 95 patient treatments. The networks were trained with a U-Net architecture, using input data from the planning Computed Tomography (CT) scan and volume contours, to produce a dose distribution of the same image size as output.
The network performance was evaluated based on the error of the predicted mean dose to Organs At Risk (OARs) as well as the shape of the predicted Dose-Volume Histogram (DVH) and individual dose distributions.  Results The optimal input combination was the CT scan together with the lung, mediastinum envelope, and Planning Target Volume (PTV) contours. The model predictions showed a homogeneous dose distribution over the PTV with a steep fall-off seen in the DVH. However, the dose distributions had a blurred appearance, and the predicted doses to the OARs were therefore not as accurate as those to the PTV when compared with the manual treatment plans. The network trained with the Hounsfield Unit input of the CT scan performed similarly to the network trained without it.  Conclusions As one of the first attempts to assess the potential of a deep learning-based prediction model for dose distributions based on minimal input, this study shows promising results. To develop this kind of model further, a larger dataset would be needed, and the training method could be extended to a generative adversarial network or a more developed U-Net.
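The cumulative DVH used above to evaluate predictions can be computed directly from a dose grid and a structure mask: for each dose level, the fraction of the structure's voxels receiving at least that dose. A minimal sketch; the grid, mask, and dose levels are toy values, not patient data:

```python
import numpy as np

# Cumulative Dose-Volume Histogram: for each dose threshold b, the fraction
# of the structure's voxels whose dose is >= b.
def dvh(dose, mask, bins):
    voxels = dose[mask]
    return np.array([(voxels >= b).mean() for b in bins])

dose = np.array([[10.0, 50.0], [60.0, 70.0]])   # toy dose grid (Gy)
ptv = np.array([[False, True], [True, True]])   # toy target mask
curve = dvh(dose, ptv, bins=[0.0, 55.0, 65.0])
# all of the PTV gets >= 0 Gy, 2/3 gets >= 55 Gy, 1/3 gets >= 65 Gy
```

A homogeneous PTV dose with a steep fall-off shows up in this curve as a plateau near 1.0 followed by a sharp drop at the prescription dose.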
APA, Harvard, Vancouver, ISO, and other styles
19

Kostopouls, Theodore P. "A Machine Learning approach to Febrile Classification." Digital WPI, 2018. https://digitalcommons.wpi.edu/etd-theses/1173.

Full text
Abstract:
General health screening is needed to decrease the risk of pandemic in high-volume areas. Thermal characterization via infrared imaging is an effective technique for fever detection; however, strict use requirements in combination with highly controlled environmental conditions compromise the practicality of such a system. Applying advanced processing techniques to thermograms of individuals can remove some of these requirements, allowing for more flexible classification algorithms. The purpose of this research was to identify individuals with febrile status utilizing modern thermal imaging and machine learning techniques in a minimally controlled setting. Two methods were evaluated with data that contained environmental and acclimation noise due to the data gathering technique. The first was a pretrained VGG16 Convolutional Neural Network, found to have an F1 score of 0.77 (accuracy of 76%) on a balanced dataset. The second was a VGG16 feature extractor that feeds a principal component analysis and utilizes a support vector machine for classification. This technique obtained an F1 score of 0.84 (accuracy of 85%) on balanced datasets. These results demonstrate that machine learning is an extremely viable technique for classifying febrile status independent of the associated noise.
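The second pipeline described above (deep features reduced by PCA, then classified by an SVM) can be sketched as follows. This is a hedged toy illustration: the VGG16 feature-extraction step is replaced by random stand-in vectors, so the feature dimension, labels, and component count are assumptions rather than the thesis's actual setup.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Stand-in for deep features (the real pipeline would extract these from a
# pretrained VGG16); a small mean shift makes the toy classes separable.
X = rng.normal(size=(200, 512))
y = rng.integers(0, 2, size=200)
X[y == 1] += 0.5

# Feature extraction -> PCA -> SVM, as in the described method.
clf = make_pipeline(StandardScaler(), PCA(n_components=32), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.score(X, y))
```

In the real system, `X` would be the activations of a late VGG16 layer computed on each thermogram; only the PCA/SVM stages change between experiments.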
APA, Harvard, Vancouver, ISO, and other styles
20

Maestri, Rita. "Metodiche di deep learning e applicazioni all’imaging medico: la radiomica." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/15452/.

Full text
Abstract:
Questa tesi ha lo scopo di presentare il deep learning e una delle sue applicazioni che ha avuto molto successo nell'analisi delle immagini: la rete neurale convoluzionale. In particolare, si espongono i vantaggi, gli svantaggi e i risultati ottenuti nell'applicazione delle reti convoluzionali alla radiomica, una nuova disciplina che prevede l'estrazione di un elevato numero di feature dalle immagini mediche per elaborare modelli di supporto a diagnosi e prognosi. Nel primo capitolo si introducono concetti di machine learning utili per comprendere gli algoritmi di apprendimento usati anche nel deep learning. Poi sono presentate le reti neurali, ovvero le strutture su cui si basano gli algoritmi di deep learning. Infine, viene spiegato il funzionamento e gli utilizzi delle reti neurali convoluzionali. Nel secondo capitolo si espongono le tecniche e gli utilizzi della radiomica e, infine, i vantaggi di usare le reti neurali convoluzionali in quest'ambito, presentando alcuni recenti studi portati a termine in merito.
APA, Harvard, Vancouver, ISO, and other styles
21

Nayak, Aman Kumar. "Segmenting the Left Atrium in Cardiac CT Images using Deep Learning." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-176592.

Full text
Abstract:
Convolutional neural networks have achieved state-of-the-art accuracy for multi-class segmentation in biomedical image science. In this thesis, a 2-Stage binary 2D UNet and a MultiResUNet are used to segment 3D cardiac CT volumes. The 3D volumes were sliced into 2D images. The 2D networks learn to classify the pixels by transforming information about the segmentation into a latent feature space in a contracting path and upsampling it to a semantic segmentation in an expanding path. A network trained on both diastole and systole timestamp volumes is able to handle much more extreme morphological differences between subjects. Evaluation of the results is based on the Dice coefficient as a segmentation metric. The thesis also explores the impact of various loss functions in image segmentation for imbalanced datasets. Results show that the 2-Stage binary UNet has higher performance than the MultiResUNet considering segmentation done in all planes. In this work, convolutional neural network prediction uncertainty is estimated using Monte Carlo dropout estimation, which shows that the 2-Stage binary UNet has lower prediction uncertainty than the MultiResUNet.
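The Dice coefficient used here as the segmentation metric has a standard definition, 2|A∩B|/(|A|+|B|); a minimal NumPy sketch (the function name, epsilon smoothing, and toy masks are ours, not taken from the thesis):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks; eps avoids 0/0."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_coefficient(a, b))  # 2*2 / (3+3) ≈ 0.667
```

A perfect prediction gives 1.0; disjoint masks give (near) 0, which is why Dice is preferred over plain accuracy for imbalanced segmentation targets.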
APA, Harvard, Vancouver, ISO, and other styles
22

Camborata, Caterina. "Capsule networks: a new approach for brain imaging." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/18127/.

Full text
Abstract:
In the field of neural networks for image recognition, one of the most recent and promising innovations is the use of Capsule Networks (CapsNet). The aim of this thesis is to study the CapsNet approach for image analysis, in particular for neuroanatomical images. Modern optical microscopy techniques pose significant data-analysis challenges, owing to the large quantity of available images and their ever finer resolution. With the goal of obtaining structural information on the cerebral cortex, new segmentation proposals can prove very useful. Until now, the most widely used approaches in this field have been based on the Convolutional Neural Network (CNN), the architecture that achieves the best performance and represents the state of the art among deep learning results. With this study, we aim to open the way to a new approach that can overcome the limits of CNNs, such as the number of parameters used and the accuracy of the result. Applying CapsNets, based on the idea of emulating how the human brain sees and processes images, to neuroscience establishes a stimulating research paradigm aimed at surpassing the limits of our knowledge of nature and the limits of nature itself.
APA, Harvard, Vancouver, ISO, and other styles
23

Ran, Peipei. "Imaging and diagnostic of sub-wavelength micro-structures, from closed-form algorithms to deep learning." Electronic Thesis or Diss., université Paris-Saclay, 2020. http://www.theses.fr/2020UPASG061.

Full text
Abstract:
Electromagnetic probing of a gridlike, finite set of infinitely long circular cylindrical dielectric rods affected by missing ones is investigated from time-harmonic single- and multiple-frequency data. Sub-wavelength distances between adjacent rods and sub-wavelength rod diameters are assumed throughout the frequency band of operation, and this leads to a severe challenge due to the need for super-resolution within the micro-structure, well beyond the Rayleigh criterion. A wealth of solution methods is investigated, and comprehensive numerical simulations illustrate their pros and cons, completed by processing laboratory-controlled experimental data acquired on a micro-structure prototype in a microwave anechoic chamber. These methods, which differ in the a priori information accounted for and their consequent versatility, include time reversal, binary-specialized contrast-source and sparsity-constrained inversions, and convolutional neural networks, possibly combined with recurrent ones.
APA, Harvard, Vancouver, ISO, and other styles
24

Wang, Chuangqi. "Machine Learning Pipelines for Deconvolution of Cellular and Subcellular Heterogeneity from Cell Imaging." Digital WPI, 2019. https://digitalcommons.wpi.edu/etd-dissertations/587.

Full text
Abstract:
Cell-to-cell variations and intracellular processes such as cytoskeletal organization and organelle dynamics exhibit massive heterogeneity. Advances in imaging and optics have enabled researchers to access spatiotemporal information in living cells efficiently. Even though current imaging technologies allow us to acquire an unprecedented number of cell images, it is challenging to extract valuable information from such massive and complex datasets to interpret heterogeneous biological processes. Machine learning (ML), referring to a set of computational tools that acquire knowledge from data, provides promising solutions to meet this challenge. In this dissertation, we developed ML pipelines for the deconvolution of subcellular protrusion heterogeneity from live cell imaging and for molecular diagnostics from lens-free digital in-line holography (LDIH) imaging. Cell protrusion is driven by spatiotemporally fluctuating actin assembly processes and is morphodynamically heterogeneous at the subcellular level. Elucidating the underlying molecular dynamics associated with subcellular protrusion heterogeneity is crucial to understanding the biology of cellular movement. Traditional ensemble averaging methods, which do not characterize this heterogeneity, can mask important activities. Therefore, we established an ACF (auto-correlation function) based time series clustering pipeline called HACKS (deconvolution of heterogeneous activities in coordination of cytoskeleton at the subcellular level) to identify distinct subcellular lamellipodial protrusion phenotypes along with their underlying actin regulator dynamics from live cell imaging. Using our method, we discovered "accelerating protrusion", which is driven by the temporally ordered coordination of Arp2/3 and VASP activities.
Furthermore, leveraging the merits of ML, and especially Deep Learning (DL), to learn features automatically, we advanced our pipeline to learn fine-grained temporal features by integrating the prior ML analysis results with bi-LSTM (bidirectional long short-term memory) autoencoders to dissect variable-length time series protrusion heterogeneity. By applying it to subcellular protrusion dynamics in pharmacologically and metabolically perturbed epithelial cells, we discovered fine differential responses of protrusion dynamics specific to each perturbation. This provides an analytical framework for a detailed and quantitative understanding of the molecular mechanisms hidden in their heterogeneity. Lens-free digital in-line holography (LDIH) is a promising microscopic tool that overcomes several drawbacks (e.g., limited field of view) of traditional lens-based microscopy. Numerical reconstruction of hologram images from large-field-of-view LDIH is extremely time-consuming. Until now, there have been no effective manually designed features to directly interpret the lateral and depth information from the complex diffraction patterns in hologram images, which limits LDIH utility for point-of-care applications. Inheriting the advantage of DL to learn generalized features automatically, we proposed a deep transfer learning (DTL)-based approach to process LDIH images without reconstruction in the context of cellular analysis. Specifically, using the raw holograms as input, the features extracted from a well-trained network were able to classify cell categories according to the number of cell-bound microbeads, with performance comparable to that of using object images as input. Combined with the developed DTL approach, LDIH could be realized as a low-cost, portable tool for point-of-care diagnostics. In summary, this dissertation demonstrates that ML applied to cell imaging can successfully dissect subcellular heterogeneity and perform cell-based diagnosis.
We expect that our study will be able to make significant contributions to data-driven cell biological research.
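The core idea behind ACF-based time series clustering — describe each series by its autocorrelation profile, then cluster those profiles — can be illustrated with a hedged toy sketch. The synthetic sine-wave "velocity" series and plain k-means stand in for the thesis's actual protrusion data and clustering choices:

```python
import numpy as np
from sklearn.cluster import KMeans

def acf(x, n_lags=20):
    """Sample autocorrelation of a 1-D series at lags 1..n_lags."""
    x = x - x.mean()
    var = np.dot(x, x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / var
                     for k in range(1, n_lags + 1)])

rng = np.random.default_rng(1)
t = np.arange(200)
# Two synthetic "protrusion" phenotypes: slow vs fast oscillation, plus noise.
series = [np.sin(2 * np.pi * t / 50) + 0.3 * rng.normal(size=200) for _ in range(10)] + \
         [np.sin(2 * np.pi * t / 10) + 0.3 * rng.normal(size=200) for _ in range(10)]

# Cluster the ACF profiles rather than the raw series.
features = np.array([acf(s) for s in series])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(labels)
```

Clustering ACF profiles rather than raw traces makes the grouping insensitive to phase shifts between cells, which is one motivation for this kind of pipeline.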
APA, Harvard, Vancouver, ISO, and other styles
25

Pech, Thomas Joel. "A Deep-Learning Approach to Evaluating the Navigability of Off-Road Terrain from 3-D Imaging." Case Western Reserve University School of Graduate Studies / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=case1496377449249936.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Campanini, Matteo. "Architetture di deep learning per l'imaging medico del tumore alla prostata." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/16369/.

Full text
Abstract:
This thesis aims to provide a concise overview of deep learning architectures applied to medical imaging, with particular attention to the analysis of TRUS and multiparametric MRI images for the automatic diagnosis of prostate cancer. The work is divided into two parts. The first part presents the main theoretical concepts of machine learning and then analyses in detail three of the most widely used deep learning architectures: the deep neural network, the convolutional neural network, and various types of autoencoder. The second part shows how deep learning technologies are used in medical image analysis, and then examines the deep learning architectures used in automatic prostate cancer diagnosis systems based on the analysis of TRUS and multiparametric MRI images.
APA, Harvard, Vancouver, ISO, and other styles
27

Liso, Lorenzo. "Rete Residuale per la Rimozione di Rumore Poissoniano e Gaussiano da Immagini Mediche." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23560/.

Full text
Abstract:
The problem of noise in medical images is discussed, along with the classical techniques adopted to address it. A neural network model for noise removal is then described, and a modification of the loss function is proposed to reduce the blurring of image details.
APA, Harvard, Vancouver, ISO, and other styles
28

Rabenius, Michaela. "Deep Learning-based Lung Triage for Streamlining the Workflow of Radiologists." Thesis, Linköpings universitet, Medie- och Informationsteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-160537.

Full text
Abstract:
The usage of deep learning algorithms such as Convolutional Neural Networks within the field of medical imaging has grown in popularity over the past few years. In particular, these types of algorithms have been used to detect abnormalities in chest x-rays, one of the most commonly performed types of radiographic examination. To try to improve the workflow of radiologists, this thesis investigated the possibility of using convolutional neural networks to create a lung triage that sorts a bulk of chest x-ray images based on degree of disease, where sick lungs should be prioritized before healthy lungs. The results from using a binary relevance approach to train multiple classifiers for different observations commonly found in chest x-rays show that several models fail to learn how to classify x-ray images, most likely due to insufficient and/or imbalanced data. Using a binary relevance approach to create a triage is feasible but inflexible, due to having to handle multiple models simultaneously. In future work it would therefore be interesting to investigate other approaches further, such as a single binary classification model or a multi-label classification model.
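The binary relevance scheme described above (one independent binary classifier per observation, combined into a triage ranking) can be sketched roughly as follows. The finding names, toy features, and logistic-regression stand-ins for the thesis's CNNs are all illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 100, 20
findings = ["cardiomegaly", "effusion", "nodule"]  # hypothetical label set
X = rng.normal(size=(n, d))
Y = (X[:, :3] > 0.5).astype(int)  # one toy binary label column per finding

# Binary relevance: an independent binary classifier per observation.
models = {f: LogisticRegression().fit(X, Y[:, i])
          for i, f in enumerate(findings)}

def triage_score(x):
    """Priority = highest predicted probability of any abnormality."""
    return max(m.predict_proba(x.reshape(1, -1))[0, 1]
               for m in models.values())

scores = [triage_score(x) for x in X]
order = np.argsort([-s for s in scores])  # sickest-looking cases first
```

The inflexibility the thesis notes is visible even here: every new finding means another model to train, store, and keep calibrated, whereas a single multi-label network would share one backbone.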
APA, Harvard, Vancouver, ISO, and other styles
29

Sörman, Paulsson Elsa. "Evaluation of In-Silico Labeling for Live Cell Imaging." Thesis, Umeå universitet, Institutionen för fysik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-180590.

Full text
Abstract:
Today new drugs are tested on cell cultures in wells to minimize time, cost, and animal testing. The cells are studied using microscopy in different ways, and fluorescent probes are used to study finer details than light microscopy can observe. This is an invasive method, so imaging can be used instead of molecular analysis. In this project, phase-contrast microscopy images of cells together with fluorescent microscopy images were used. We use machine learning to predict the fluorescent images from the light microscopy images using a strategy called In-Silico Labeling. A Convolutional Neural Network called U-Net was trained and showed good results on two different datasets. Pixel-wise regression, pixel-wise classification, and image classification with one cell in each image were tested. The image classification was the most difficult part, due to difficulties in assigning good quality labels to single cells. Pixel-wise regression showed the best result.
APA, Harvard, Vancouver, ISO, and other styles
30

Hrabovszki, Dávid. "Classification of brain tumors in weakly annotated histopathology images with deep learning." Thesis, Linköpings universitet, Statistik och maskininlärning, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-177271.

Full text
Abstract:
Brain and nervous system tumors were responsible for around 250,000 deaths worldwide in 2020. Correctly identifying different tumors is very important, because treatment options largely depend on the diagnosis. This is an expert task, but recently machine learning, and especially deep learning models, have shown huge potential in tumor classification problems and can provide fast and reliable support for pathologists in the decision-making process. This thesis investigates the classification of two brain tumors, glioblastoma multiforme and lower grade glioma, in high-resolution H&E-stained histology images using deep learning. The dataset is publicly available from TCGA, and 220 whole slide images were used in this study. Ground truth labels were only available at the whole-slide level, but due to their large size, the slides could not be processed directly by convolutional neural networks. Therefore, patches were extracted from the whole slide images at two sizes and fed into separate networks for training. Preprocessing steps ensured that irrelevant information about the background was excluded and that the images were stain normalized. The patch-level predictions were then combined to the slide level, and the classification performance was measured on a test set. Experiments were conducted on the usefulness of pre-trained CNN models and data augmentation techniques, and the best method was selected after statistical comparisons. Following the patch-level training, five slide aggregation approaches were studied and compared to build a whole slide classifier model. The best performance was achieved when using small patches (336 x 336 pixels), a pre-trained CNN model without frozen layers, and mirroring data augmentation. The majority voting slide aggregation method resulted in the best whole slide classifier, with 91.7% test accuracy and 100% sensitivity. In many comparisons, however, statistical significance could not be shown because of the relatively small size of the test set.
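The majority-voting slide aggregation that performed best can be sketched in a few lines; the class labels and the tie-breaking rule (first-seen class wins) are illustrative assumptions:

```python
from collections import Counter

def slide_label(patch_predictions):
    """Aggregate patch-level class predictions into one whole-slide label
    by majority vote; Counter.most_common breaks ties by first-seen class."""
    return Counter(patch_predictions).most_common(1)[0][0]

# Hypothetical patch predictions for one slide (GBM vs lower grade glioma).
print(slide_label(["GBM", "LGG", "GBM", "GBM", "LGG"]))  # GBM
```

In practice one might vote on predicted probabilities (mean-pooling) instead of hard labels, which is presumably among the other aggregation approaches the thesis compared.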
APA, Harvard, Vancouver, ISO, and other styles
31

Koppers, Simon [Verfasser], Dorit [Akademischer Betreuer] Merhof, and Thomas [Akademischer Betreuer] Schultz. "Signal enhancement and signal reconstruction for diffusion imaging using deep learning / Simon Koppers ; Dorit Merhof, Thomas Schultz." Aachen : Universitätsbibliothek der RWTH Aachen, 2019. http://d-nb.info/1218727691/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Martínez, Mora Andrés. "Automation of Kidney Perfusion Analysis from Dynamic Phase-Contrast MRI using Deep Learning." Thesis, KTH, Skolan för kemi, bioteknologi och hälsa (CBH), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-277752.

Full text
Abstract:
Renal phase-contrast magnetic resonance imaging (PC-MRI) is an MRI modality in which the phase component of the MR signal is made sensitive to the velocity of water molecules in the kidneys. PC-MRI can assess the Renal Blood Flow (RBF), which is an important biomarker in the development of kidney disease. RBF is analyzed through manual or semi-automatic delineation of the renal arteries in PC-MRI by experts. This is a time-consuming and operator-dependent process. We have therefore trained, validated and tested a fully automated deep learning model for faster and more objective renal artery segmentation. The PC-MRI data used in model training, validation and testing come from four studies (N=131 subjects). Images were acquired from three manufacturers with different imaging parameters. The best deep learning model found consists of a deeply supervised 2D attention U-Net with residual skip connections. The output of this model was re-introduced as an extra channel in a second iteration to refine the segmentation result. The flow values in the segmented regions were integrated to quantify the mean arterial flow in the segmented renal arteries. The automated segmentation was evaluated on all images that had manual segmentation ground truths, which came from a single operator. The evaluation was performed in terms of the Dice coefficient, a segmentation accuracy metric. The mean arterial flow values quantified from the automated segmentation were also evaluated against ground-truth flow values from semi-automatic software. The deep learning model was trained and validated on images with segmentation ground truths using 4-fold cross-validation. A Dice segmentation accuracy of 0.71±0.21 was achieved (N=73 subjects). Although segmentation results were accurate for most arteries, the algorithm failed in ten out of 144 arteries.
The flow quantification from the segmentation was highly correlated with the ground-truth flow measurements, without significant bias. This method shows promise for supporting RBF measurements from PC-MRI and can likely be used to save analysis time in future studies. More training data would be needed for further improvement, both in terms of accuracy and generalizability.
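The flow quantification step — integrating through-plane velocity over the segmented artery cross-section — might look roughly like this. The units, pixel size, and function name are illustrative assumptions, not the thesis's implementation:

```python
import numpy as np

def mean_arterial_flow(velocity_map, mask, pixel_area_mm2):
    """Integrate through-plane velocity (cm/s) over the segmented artery
    cross-section to get volumetric flow in ml/s (1 ml = 1 cm^3)."""
    area_cm2 = pixel_area_mm2 * mask.sum() / 100.0          # mm^2 -> cm^2
    mean_velocity = velocity_map[mask.astype(bool)].mean()  # cm/s
    return mean_velocity * area_cm2                         # cm^3/s = ml/s

velocity = np.full((4, 4), 20.0)             # uniform 20 cm/s velocity map
mask = np.zeros((4, 4)); mask[1:3, 1:3] = 1  # 4 pixels segmented as artery
print(mean_arterial_flow(velocity, mask, pixel_area_mm2=1.0))  # 0.8 ml/s
```

Because the flow is a sum over the mask, small boundary errors in the segmentation translate directly into flow bias, which is why the abstract compares the quantified flows against the semi-automatic ground truth and not only the Dice score.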
APA, Harvard, Vancouver, ISO, and other styles
33

Torrents, Barrena Jordina. "Deep learning -based segmentation methods for computer-assisted fetal surgery." Doctoral thesis, Universitat Pompeu Fabra, 2019. http://hdl.handle.net/10803/668188.

Full text
Abstract:
This thesis focuses on the development of deep learning-based image processing techniques for the detection and segmentation of fetal structures in magnetic resonance imaging (MRI) and 3D ultrasound (US) images of singleton and twin pregnancies. Special attention is paid to monochorionic twins affected by the twin-to-twin transfusion syndrome (TTTS). In this context, we propose the first TTTS fetal surgery planning and simulation platform. Different approaches are utilized to automatically segment the mother's soft tissue, uterus, placenta, its peripheral blood vessels, and the umbilical cord from multiple (axial, sagittal and coronal) MRI views or from a super-resolution reconstruction. (Conditional) generative adversarial networks (GANs) are used for the segmentation of fetal structures from (3D) US, and the umbilical cord insertion is localized from color Doppler US. Finally, we present a comparative study of deep learning approaches and Radiomics on the segmentation performance for several fetal and maternal anatomies in both MRI and 3D US.
APA, Harvard, Vancouver, ISO, and other styles
34

Tardy, Mickael. "Deep learning for computer-aided early diagnosis of breast cancer." Thesis, Ecole centrale de Nantes, 2021. http://www.theses.fr/2021ECDN0035.

Full text
Abstract:
Breast cancer has the highest incidence among cancers affecting women. Regular screening allows the mortality rate to be reduced, but creates a heavy workload for clinicians. Computer-aided diagnosis tools are designed to reduce this workload, but a high level of performance is expected of them. Deep learning techniques have the potential to overcome the limitations of traditional image processing algorithms. However, several challenges come with deep learning applied to breast imaging, including heterogeneous and unbalanced data, a limited amount of annotations, and high image resolution. Facing these challenges, we approach the problem from multiple angles and propose several methods integrated into a complete solution. Hence, we propose two methods for the assessment of breast density as one of the cancer development risk factors, a method for abnormality detection, a method for uncertainty estimation of a classifier, and a method for transferring knowledge from mammography to tomosynthesis. Our methods contribute to the state of the art of weakly supervised learning and open new paths for further research.
APA, Harvard, Vancouver, ISO, and other styles
35

Dong, Xu. "Material-Specific Computed Tomography for Molecular X-Imaging in Biomedical Research." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/88869.

Full text
Abstract:
X-ray Computed Tomography (CT) imaging has played a central role in clinical practice since it was invented in 1972. However, the traditional x-ray CT technique fails to distinguish different materials with similar density, especially biological tissues. The lack of a quantitative imaging representation has kept the CT technique from broader applications such as personalized or precision medicine. Therefore, the major aim of this thesis is to develop novel material-specific CT imaging techniques for molecular imaging in biological bodies. To achieve this goal, comprehensive studies were conducted to investigate three different techniques: x-ray fluorescence molecular imaging, material identification (specification) from photon counting CT (PCCT), and a deep-learning-based PCCT data distortion correction approach. X-ray fluorescence molecular imaging (XFMI) has shown great promise as a low-cost molecular imaging modality for clinical and pre-clinical applications with high sensitivity. In this study, the effects of the excitation beam spectrum on the molecular sensitivity of XFMI were experimentally investigated by quantitatively deriving the minimum detectable concentration (MDC) under a fixed surface entrance dose of 200 mR at three different excitation beam spectra. The results show that the MDC can be readily improved by a factor of 5.26 via excitation spectrum optimization. Furthermore, a numerical model was developed and validated against the experimental data (≥0.976). The numerical model can be used to optimize XFMI system configurations to further improve the molecular sensitivity. Findings from this investigation could find applications for in vivo pre-clinical small-animal XFMI in the future. PCCT is an emerging technique that can distinguish photon energies and generate much richer image data containing x-ray spectral information compared to conventional CT.
In this study, a physics model was developed based on x-ray–matter interaction physics to calculate the effective atomic number (Z_eff) and effective electron density (ρ_eff) from PCCT image data for material identification. To validate the physics model, Z_eff and ρ_eff were calculated under various energy conditions for many materials. The relative standard deviations are mostly less than 1% (161 out of 168), showing that the developed model achieves good accuracy and robustness to energy conditions. To study the feasibility of applying the model with PCCT image data for material identification, both numerical simulations of a PCCT system and physical experiments were conducted. The results show that different materials can be clearly identified in the Z_eff–ρ_eff map (with relative error ≤8.8%). The model has the potential to serve as a material identification scheme for PCCT systems in practical use. While PCCT appears to be a significant breakthrough in the CT imaging field, it suffers from a severe data distortion problem, which greatly limits its application in practice. Lately, deep learning (DL) neural networks have demonstrated tremendous success in the medical imaging field. In this study, a deep learning neural network based PCCT data distortion correction method was proposed. When applying the algorithm to the test dataset, the accuracy of the PCCT data was greatly improved (RMSE improved by 73.7%). Compared with traditional data correction approaches such as maximum likelihood, the deep learning approach demonstrates superiority in terms of RMSE, SSIM, PSNR and, most importantly, runtime (4053.21 sec vs. 1.98 sec). The proposed method has the potential to facilitate PCCT studies and applications in practice.
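The thesis develops its own physics model for the effective atomic number; purely as an illustration, the classic Mayneord power-law parameterization (not necessarily the one used in the dissertation) computes an effective atomic number from elemental electron fractions:

```python
def z_eff(electron_fractions, m=2.94):
    """Mayneord power-law effective atomic number:
    Z_eff = (sum_i f_i * Z_i**m) ** (1/m),
    where f_i is the fraction of electrons contributed by element Z_i."""
    return sum(f * z ** m for z, f in electron_fractions.items()) ** (1.0 / m)

# Water: 2 H electrons + 8 O electrons per molecule -> f_H = 0.2, f_O = 0.8
print(z_eff({1: 0.2, 8: 0.8}))  # ≈ 7.42, the textbook value for water
```

Materials with similar density but different composition separate along this axis, which is the premise of the Z_eff–ρ_eff identification map described above.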
Doctor of Philosophy
X-ray Computed Tomography (CT) has played a central role in clinical imaging since it was invented in 1972. Its distinguishing characteristic is the ability to generate three-dimensional images with comprehensive internal structural information at high speed (in less than one second). However, traditional CT imaging lacks material-specific capability due to its mechanism of image formation, which means it cannot be used for molecular imaging. Molecular imaging plays a central role in present and future biomedical research and in clinical diagnosis and treatment. For example, imaging of biological processes and molecular markers can provide unprecedentedly rich information, which holds huge potential for individualized therapies, novel drug design, earlier diagnosis, and personalized medicine. Therefore, there exists a pressing need to endow the traditional CT imaging technique with material-specific capability for molecular imaging purposes. This dissertation comprehensively investigates three different techniques: x-ray fluorescence molecular imaging, material identification (specification) from photon counting CT, and a photon counting CT data distortion correction approach based on deep learning. X-ray fluorescence molecular imaging utilizes the fluorescence signal to achieve molecular imaging in CT; material identification can be achieved based on the rich image data from PCCT; and the deep learning based correction method is an efficient approach for PCCT data distortion correction that can furthermore boost its performance on material identification. With these techniques, the material-specific capability of CT can be greatly enhanced and molecular imaging can be achieved in biological bodies.
APA, Harvard, Vancouver, ISO, and other styles
36

Singh, Praveer. "Processing high-resolution images through deep learning techniques." Thesis, Paris Est, 2018. http://www.theses.fr/2018PESC1172.

Full text
Abstract:
In this thesis, we discuss four different application scenarios that can be broadly grouped under the larger umbrella of analyzing and processing high-resolution images using deep learning techniques. The first three chapters encompass processing remote-sensing (RS) images, which are captured either from airplanes or from satellites hundreds of kilometers from the Earth. We start by addressing a challenging problem related to improving the classification of complex aerial scenes through a deep weakly supervised learning paradigm. We show how, by using only image-level labels, we can effectively localize the most distinctive regions in complex scenes and thus remove ambiguities, leading to enhanced classification performance in highly complex aerial scenes. In the second chapter, we deal with refining segmentation labels of building footprints in aerial images. We perform this by first detecting errors in the initial segmentation masks and correcting only those segmentation pixels where we find a high probability of errors. The next two chapters of the thesis are related to the application of Generative Adversarial Networks. In the first, we build an effective Cloud-GAN model to remove thin films of clouds in Sentinel-2 imagery by adopting a cyclic consistency loss. This utilizes an adversarial loss function to map cloudy images to non-cloudy images in a fully unsupervised fashion, where the cyclic loss helps constrain the network to output a cloud-free image corresponding to the input cloudy image and not any random image in the target domain. Finally, the last chapter addresses a different set of high-resolution images, coming not from the RS domain but from the High Dynamic Range Imaging (HDRI) application. These are 32-bit images which capture the full extent of luminance present in the scene.
Our goal is to quantize them to 8-bit Low Dynamic Range (LDR) images so that they can be projected effectively on our normal display screens while keeping the overall contrast and perception quality similar to that found in HDR images. We adopt a multi-scale GAN model that focuses on both the coarser and the finer-level information necessary for high-resolution images. The final tone-mapped outputs have a high subjective quality without any perceived artifacts.
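The cycle-consistency idea used by Cloud-GAN (a cloudy-to-cloud-free translation must reconstruct the input when mapped back) can be sketched as a simple L1 cycle loss; the generator functions below are toy placeholders, not the thesis's networks:

```python
import numpy as np

def cycle_consistency_loss(x, G, F):
    """L1 cycle loss |F(G(x)) - x|: translating an image to the target
    domain and back should reproduce the original input."""
    return np.abs(F(G(x)) - x).mean()

# Toy stand-ins for the two generators (identity maps here); in the real
# setting G maps cloudy -> cloud-free and F maps cloud-free -> cloudy.
G = lambda img: img
F = lambda img: img

img = np.random.rand(64, 64, 3).astype(np.float32)
loss = cycle_consistency_loss(img, G, F)  # 0.0 for perfect reconstruction
```

During training this term is added to the adversarial loss, penalizing any translation that discards the scene content beneath the clouds.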
APA, Harvard, Vancouver, ISO, and other styles
37

Alom, Md Zahangir. "Improved Deep Convolutional Neural Networks (DCNN) Approaches for Computer Vision and Bio-Medical Imaging." University of Dayton / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1541685818030003.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Zhang, Yi. "NOVEL APPLICATIONS OF MACHINE LEARNING IN BIOINFORMATICS." UKnowledge, 2019. https://uknowledge.uky.edu/cs_etds/83.

Full text
Abstract:
Technological advances in next-generation sequencing and biomedical imaging have led to a rapid increase in biomedical data dimension and acquisition rate, which is challenging the conventional data analysis strategies. Modern machine learning techniques promise to leverage large data sets for finding hidden patterns within them, and for making accurate predictions. This dissertation aims to design novel machine learning-based models to transform biomedical big data into valuable biological insights. The research presented in this dissertation focuses on three bioinformatics domains: splice junction classification, gene regulatory network reconstruction, and lesion detection in mammograms. A critical step in defining gene structures and mRNA transcript variants is to accurately identify splice junctions. In the first work, we built the first deep learning-based splice junction classifier, DeepSplice. It outperforms the state-of-the-art classification tools in terms of both classification accuracy and computational efficiency. To uncover transcription factors governing metabolic reprogramming in non-small-cell lung cancer patients, we developed TFmeta, a machine learning approach to reconstruct relationships between transcription factors and their target genes in the second work. Our approach achieves the best performance on benchmark data sets. In the third work, we designed deep learning-based architectures to perform lesion detection in both 2D and 3D whole mammogram images.
APA, Harvard, Vancouver, ISO, and other styles
39

Braman, Nathaniel. "Novel Radiomics and Deep Learning Approaches Targeting the Tumor Environment to Predict Response to Chemotherapy." Case Western Reserve University School of Graduate Studies / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=case1586546527544791.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Aderghal, Karim. "Classification of multimodal MRI images using Deep Learning : Application to the diagnosis of Alzheimer’s disease." Thesis, Bordeaux, 2021. http://www.theses.fr/2021BORD0045.

Full text
Abstract:
In this thesis, we are interested in the automatic classification of brain MRI images to diagnose Alzheimer's disease (AD). We aim to build intelligent models that provide the clinician with decisions about a patient's disease state based on visual features extracted from MRI images. The goal is to classify patients (subjects) into three main categories: healthy subjects (NC), subjects with mild cognitive impairment (MCI), and subjects with Alzheimer's disease (AD). We use deep learning methods, specifically convolutional neural networks (CNN) based on visual biomarkers from multimodal MRI images (structural MRI and DTI), to detect structural changes in the brain, particularly in the hippocampal region of the limbic cortex. We propose an approach called "2-D+e" applied to our ROI (Region-of-Interest): the hippocampus. This approach extracts 2D slices from three planes (sagittal, coronal, and axial) of our region while preserving the spatial dependencies between adjacent slices along each dimension. We present a complete study of different artificial data augmentation methods and different data balancing approaches to analyze the impact of these conditions on our models during the training phase. We then propose methods for combining information from different sources (projections/modalities), including two fusion strategies (early fusion and late fusion). Finally, we present transfer learning schemes by introducing three frameworks: (i) a cross-modal scheme (using sMRI and DTI), (ii) a cross-domain scheme that involves external data (MNIST), and (iii) a hybrid scheme combining methods (i) and (ii). Our proposed methods are suitable for shallow CNNs applied to multimodal MRI images. They give encouraging results even when the model is trained on small datasets, which is often the case in medical image analysis.
APA, Harvard, Vancouver, ISO, and other styles
41

Rezaei, Mina [Verfasser], Christoph [Akademischer Betreuer] Meinel, Christoph [Gutachter] Meinel, Nassir [Gutachter] Navab, and Heinz [Gutachter] Handels. "Deep representation learning from imbalanced medical imaging / Mina Rezaei ; Gutachter: Christoph Meinel, Nassir Navab, Heinz Handels ; Betreuer: Christoph Meinel." Potsdam : Universität Potsdam, 2019. http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-442759.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Belbaisi, Adham. "Deep Learning-Based Skeleton Segmentation for Analysis of Bone Marrow and Cortical Bone in Water-Fat Magnetic Resonance Imaging." Thesis, KTH, Skolan för kemi, bioteknologi och hälsa (CBH), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-297528.

Full text
Abstract:
A major health concern for subjects with diabetes is weaker bones and increased fracture risk. Current clinical assessment of bone strength is performed by measuring Bone Mineral Density (BMD), where low BMD values are associated with an increased risk of fracture. However, subjects with Type 2 Diabetes (T2D) have been shown to have normal or higher BMD levels compared to healthy controls, which does not reflect the recognized bone fragility among diabetics. Thus, there is a need for more research on diabetes-related bone fragility to find other factors of impaired bone health. One potential biomarker that has recently been studied is Bone Marrow Fat (BMF). The data in this project consisted of whole-body water-fat Magnetic Resonance Imaging (MRI) volumes from the UK Biobank Imaging study (UKBB). Each subject in this data set has a water volume and a fat volume, allowing for a quantitative assessment of water and fat content in the body. To analyze and perform quantitative measurements of the bones specifically, a Deep Learning (DL) model was trained, validated, and tested for fully automated and objective skeleton segmentation, in which six different bones were segmented: spine, femur, pelvis, scapula, clavicle, and humerus. The model was trained and validated on 120 subjects with 6-fold cross-validation and tested on eight subjects. All ground-truth segmentations of the training and test data were generated using two semi-automatic pipelines. The model was evaluated for each bone separately as well as for the overall skeleton segmentation, achieving varying accuracy and performing better on larger bones than on smaller ones. The final trained model was applied to a larger dataset of 9562 subjects (16% type 2 diabetics), and the BMF, as well as bone marrow volume (BMV) and cortical bone volume (CBV), were measured in the segmented bones of each subject. The results of the quantified biomarkers were compared between T2D and healthy subjects.
The comparison revealed possible differences between healthy and diabetic subjects, suggesting a potential for new findings related to diabetes and associated bone fragility.
APA, Harvard, Vancouver, ISO, and other styles
43

Rezaei, Mina [Verfasser], Christoph [Akademischer Betreuer] Meinel, Christoph [Gutachter] Meinel, Nassir [Gutachter] Navab, and Heinz [Gutachter] Handels. "Deep representation learning from imbalanced medical imaging / Mina Rezaei ; Gutachter: Christoph Meinel, Nassir Navab, Heinz Handels ; Betreuer: Christoph Meinel." Potsdam : Universität Potsdam, 2019. http://d-nb.info/1218169796/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Sargent, Garrett Craig. "A Conditional Generative Adversarial Network Demosaicing Strategy for Division of Focal Plane Polarimeters." University of Dayton / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1606050550958383.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Berry, Jeffrey James. "Machine Learning Methods for Articulatory Data." Diss., The University of Arizona, 2012. http://hdl.handle.net/10150/223348.

Full text
Abstract:
Humans make use of more than just the audio signal to perceive speech. Behavioral and neurological research has shown that a person's knowledge of how speech is produced influences what is perceived. With methods for collecting articulatory data becoming more ubiquitous, methods for extracting useful information are needed to make this data useful to speech scientists and for speech technology applications. This dissertation presents feature extraction methods for ultrasound images of the tongue and for data collected with an Electro-Magnetic Articulograph (EMA). The usefulness of these features is tested in several phoneme classification tasks. The feature extraction methods for ultrasound tongue images presented here consist of automatically tracing the tongue surface contour using a modified Deep Belief Network (DBN) (Hinton et al. 2006), and of methods inspired by research in face recognition which use the entire image. The tongue tracing method consists of training a DBN as an autoencoder on concatenated images and traces, and then retraining the first two layers to accept only the image at runtime. This 'translational' DBN (tDBN) method is shown to produce traces comparable to those made by human experts. An iterative bootstrapping procedure is presented for using the tDBN to assist a human expert in labeling a new data set. Tongue contour traces are compared with the Eigentongues method of Hueber et al. (2007) and a Gabor Jet representation in a 6-class phoneme classification task using Support Vector Classifiers (SVC), with Gabor Jets performing best. These SVC methods are compared to a tDBN classifier, which extracts features from raw images and classifies them with accuracy only slightly lower than the Gabor Jet SVC method. For EMA data, supervised binary SVC feature detectors are trained for each feature in three versions of Distinctive Feature Theory (DFT): Preliminaries (Jakobson et al. 1954), The Sound Pattern of English (Chomsky and Halle 1968), and Unified Feature Theory (Clements and Hume 1995). Each of these feature sets, together with a fourth unsupervised feature set learned using Independent Components Analysis (ICA), is compared on its usefulness in a 46-class phoneme recognition task. Phoneme recognition is performed using a linear-chain Conditional Random Field (CRF) (Lafferty et al. 2001), which takes advantage of the temporal nature of speech by looking at observations adjacent in time. Results of the phoneme recognition task show that Unified Feature Theory performs slightly better than the other versions of DFT. Surprisingly, ICA actually performs worse than running the CRF on raw EMA data.
APA, Harvard, Vancouver, ISO, and other styles
46

Elsaadouny, Mostafa [Verfasser], Ilona [Gutachter] Rolfes, and Nils [Gutachter] Pohl. "Deep learning models for SAR imaging results interpretation / Mostafa Elsaadouny ; Gutachter: Ilona Rolfes, Nils Pohl ; Fakultät für Elektrotechnik und Informationstechnik." Bochum : Ruhr-Universität Bochum, 2021. http://d-nb.info/1226428592/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Östling, Andreas. "Automated Kidney Segmentation in Magnetic Resonance Imaging using U-Net." Thesis, Uppsala universitet, Statistiska institutionen, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-391269.

Full text
Abstract:
Manual analysis of medical images such as magnetic resonance imaging (MRI) requires a trained professional, is time-consuming, and results may vary between experts. We propose an automated method for kidney segmentation using a Convolutional Neural Network (CNN) model based on the U-Net architecture. Comparisons of segmentations between trained experts, inexperienced operators, and the neural network model show near human-expert-level performance from the network. Stratified sampling is performed when selecting which subject volumes to manually segment for training data. Experiments are run to test the effectiveness of transfer learning and data augmentation, and we show that one of the most important components of a successful machine learning pipeline is a larger quantity of carefully annotated training data.
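Agreement between automatic and manual kidney masks is typically scored with an overlap metric such as the Dice coefficient; a minimal sketch (illustrative, not the thesis's exact evaluation pipeline):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example: two 16-pixel squares offset by one pixel in each direction,
# overlapping in a 3x3 region -> Dice = 2*9 / (16 + 16) = 0.5625.
a = np.zeros((8, 8), dtype=int); a[2:6, 2:6] = 1
b = np.zeros((8, 8), dtype=int); b[3:7, 3:7] = 1
score = dice_coefficient(a, b)
```

The same metric scores inter-rater agreement, which is how "near human-expert-level" network performance can be made quantitative.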
APA, Harvard, Vancouver, ISO, and other styles
48

Kounalakis, Tsampikos. "Depth-adaptive methodologies for 3D image categorization." Thesis, Brunel University, 2015. http://bura.brunel.ac.uk/handle/2438/11531.

Full text
Abstract:
Image classification is an active topic of computer vision research. It deals with learning patterns in order to allow efficient classification of visual information. However, most research efforts have focused on 2D image classification. In recent years, advances in 3D imaging have enabled the development of new applications and provided new research directions. In this thesis, we present methodologies and techniques for image classification using 3D image data. We conducted our research focusing on the attributes and limitations of depth information regarding its possible uses. This research led us to develop depth feature extraction methodologies that contribute to the representation of images, thus enhancing recognition efficiency. We propose a new classification algorithm that adapts to the needs of image representations by implementing a scale-based decision that exploits discriminant parts of representations. Learning from the design of image representation methods, we introduce our own, which describes each image by its depicted content, providing a more discriminative image representation. We also propose a dictionary learning method that exploits the relation of training features by assessing the similarity of features originating from similar context regions. Finally, we present our research on deep learning algorithms combined with data and techniques used in 3D imaging. Our novel methods provide state-of-the-art results, thus contributing to research on 3D image classification.
APA, Harvard, Vancouver, ISO, and other styles
49

Losch, Max. "Detection and Segmentation of Brain Metastases with Deep Convolutional Networks." Thesis, KTH, Datorseende och robotik, CVAP, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-173519.

Full text
Abstract:
As deep convolutional networks (ConvNets) reach spectacular results on a multitude of computer vision tasks and perform almost as well as a human rater on the task of segmenting gliomas in the brain, I investigated their applicability to detecting and segmenting brain metastases. I trained networks of increasing depth to improve the detection rate and introduced a border-pair scheme to reduce oversegmentation. A constraint on the time for segmenting a complete brain scan required the use of fully convolutional networks, which reduced the time from 90 minutes to 40 seconds. Despite some noise and label errors present in the 490 full-brain MRI scans, the final network achieves a true positive rate of 82.8% and 0.05 misclassifications per slice, where all lesions greater than 3 mm have a perfect detection score. This work indicates that ConvNets are a suitable approach to both detecting and segmenting metastases, especially as further architectural extensions might improve predictive performance even more.
APA, Harvard, Vancouver, ISO, and other styles
50

Rydell, Christopher. "Deep Learning for Whole Slide Image Cytology : A Human-in-the-Loop Approach." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-450356.

Full text
Abstract:
With cancer being one of the leading causes of death globally, and with oral cancers being among the most common types of cancer, it is of interest to conduct large-scale oral cancer screening among the general population. Deep Learning can make this possible despite the medical expertise required for early detection of oral cancers. A bottleneck of Deep Learning is the large amount of data required to train a good model. This project investigates two topics: certainty calibration, which aims to make a machine learning model produce more reliable predictions, and Active Learning, which aims to reduce the amount of data that needs to be labeled for Deep Learning to be effective. In the investigation of certainty calibration, five different methods are compared, and the best method is found to be Dirichlet calibration. The Active Learning investigation studies a single method, Cost-Effective Active Learning, but it is found to produce poor results in the given experimental setting. These two topics inspire the further development of the cytological annotation tool CytoBrowser, which is designed with oral cancer data labeling in mind. The proposed evolution integrates into the existing tool a Deep Learning-assisted annotation workflow that supports multiple users.
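Certainty calibration is usually assessed with a metric such as the Expected Calibration Error (ECE), which compares a model's confidence to its accuracy within confidence bins; the sketch below illustrates that generic metric and is an assumption about the evaluation style, not the thesis's exact code:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bin predictions by confidence, then sum each bin's
    |mean confidence - accuracy| gap weighted by its share of samples."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap  # weight by fraction of samples in bin
    return ece

# A model that says 95% confident but is right only half the time is
# badly calibrated: its ECE is |0.95 - 0.5| = 0.45.
ece = expected_calibration_error([0.95, 0.95, 0.95, 0.95], [1, 1, 0, 0])
```

Calibration methods such as the Dirichlet calibration mentioned above aim to transform predicted probabilities so that this gap shrinks.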
APA, Harvard, Vancouver, ISO, and other styles