Dissertations / Theses on the topic 'Apprentissage automatique – Imagerie spectroscopique'
Abushawish, Mojahed. "New Machine Learning-Based Approaches for AGATA Detectors Characterization and Nuclear Structure Studies of Neutron-Rich Nb Isotopes." Electronic Thesis or Diss., Lyon 1, 2024. http://www.theses.fr/2024LYO10344.
In-beam gamma-ray spectroscopy, particularly with high-velocity recoil nuclei, requires precise Doppler correction. The Advanced GAmma Tracking Array (AGATA) represents a groundbreaking development in gamma-ray spectrometers, boosting the ability to track gamma-rays within the detector. This capability leads to exceptional position resolution, which ensures optimal Doppler corrections. The high-purity germanium crystals used in AGATA are divided into 36 segments. The positions of the interaction points are determined by analyzing the shape of the measured electrical pulses. The algorithm used, Pulse Shape Analysis (PSA), compares the measured signals with simulated reference databases, which presents accuracy limitations. Alternatively, experimental databases can be obtained by scanning crystals with collimated gamma-ray sources using a computationally expensive method called Pulse Shape Coincidence Scan (PSCS). This work proposes a novel machine learning algorithm based on Long Short-Term Memory (LSTM) networks that replaces the PSCS method, reducing processing time and achieving higher consistency and accuracy. This thesis also explores the nuclear structure of neutron-rich Niobium isotopes. These nuclei, with Z and N around 40 and 60, respectively, exhibit one of the most remarkable examples of a sudden shape transition between spherical and highly deformed nuclei. These isotopes were produced at GANIL during two experiments involving transfer-induced fission and fusion. The combination of the VAMOS++ spectrometer, AGATA, and the EXOGAM gamma spectrometer offers a unique opportunity to obtain precise isotopic identification (A, Z) on an event-by-event basis for one of the fission fragments, together with the prompt and delayed gamma-rays emitted in coincidence, with unprecedented resolution. The research presents updated level schemes for the Nb isotopes and introduces new band structures for the Nb nuclei, pushing the boundaries of what is possible in fission experiments. It highlights spherical/deformed shape coexistence in the Nb isotope, reassesses the level scheme of Nb and the placement of its rotational band, and tracks the evolution of nuclear deformation with increasing neutron number, providing valuable experimental data to refine nuclear models. The results are compared with the most recent theoretical calculations for each isotope.
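Sequence models like the LSTM mentioned above map a sampled detector pulse to an interaction position. Below is a minimal sketch in PyTorch, assuming hypothetical tensor shapes and hyper-parameters; it is not the thesis's actual architecture.

```python
# Hedged sketch: an LSTM regressing a 3D interaction position from a
# detector pulse. Shapes and hyper-parameters are illustrative assumptions.
import torch
import torch.nn as nn

class PulseToPosition(nn.Module):
    def __init__(self, n_segments=36, hidden=64):
        super().__init__()
        # One time step of input = the instantaneous amplitude of each segment.
        self.lstm = nn.LSTM(input_size=n_segments, hidden_size=hidden,
                            batch_first=True)
        self.head = nn.Linear(hidden, 3)   # (x, y, z) inside the crystal

    def forward(self, pulses):             # pulses: (batch, time, n_segments)
        _, (h_n, _) = self.lstm(pulses)    # final hidden state summarizes the trace
        return self.head(h_n[-1])          # (batch, 3)

model = PulseToPosition()
fake_batch = torch.randn(8, 120, 36)       # 8 pulses, 120 time samples, 36 segments
positions = model(fake_batch)              # -> tensor of shape (8, 3)
loss = nn.functional.mse_loss(positions, torch.zeros_like(positions))
```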
Armanni, Thibaut. "Étude de nouveaux alliages de titane pour applications aéronautiques hautes températures." Electronic Thesis or Diss., Université de Lorraine, 2023. http://www.theses.fr/2023LORR0342.
Improving the high-temperature resistance of titanium alloys is a major challenge for the aerospace industry. Exceeding the current limit of 550°C in aircraft engines requires finding the best compromise between good oxidation resistance and good mechanical properties. Near-alpha alloys, consisting mainly of a hexagonal close-packed phase, are the best candidates. Unfortunately, they are sensitive to cold creep-fatigue, known as the dwell effect. In this context, our work aims to achieve two main objectives: first, to contribute to the design of new near-alpha alloys based on machine learning, supported by extensive mechanical testing at both ambient and high temperatures; second, to gain a better understanding of the effect of chemical composition, particularly silicon content, on the microstructure and mechanical behaviour. Our approach was based on a multi-scale microstructure study of selected alloys using a combination of different microscopy techniques. We examined the influence of a variation in silicon content using a combination of scanning electron microscopy (SEM) and transmission electron microscopy (TEM). We showed that silicide precipitation occurs above a certain silicon content. We demonstrated the limitations of two-dimensional analysis and used an alternative technique combining focused ion beam (FIB) cutting with SEM observation to reconstruct the 3D microstructure. This approach enabled us to analyze and quantify the shapes, sizes and spatial distributions of the silicides. Finally, we carried out tensile tests at different strain rates as well as creep tests under various conditions to better understand how silicon addition improves the behaviour of near-alpha alloys.
Mensch, Arthur. "Apprentissage de représentations en imagerie fonctionnelle." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS300/document.
Thanks to the advent of functional brain-imaging technologies, cognitive neuroscience is accumulating maps of neural activity responses to specific tasks or stimuli, or of spontaneous activity. In this work, we consider data from functional Magnetic Resonance Imaging (fMRI), which we study in a machine learning setting: we learn a model of brain activity that should generalize to unseen data. After reviewing standard fMRI data analysis techniques, we propose new methods and models to benefit from the recently released large fMRI data repositories. Our goal is to learn richer representations of brain activity. We first focus on unsupervised analysis of terabyte-scale fMRI data acquired on subjects at rest (resting-state fMRI). We perform this analysis using matrix factorization. We present new methods for running sparse matrix factorization/dictionary learning on hundreds of fMRI records in reasonable time. Our leading approach relies on introducing randomness in stochastic optimization loops and provides a speed-up of an order of magnitude on a variety of settings and datasets. We provide an extended empirical validation of our stochastic subsampling approach on datasets from fMRI, hyperspectral imaging and collaborative filtering. We derive convergence properties for our algorithm in a theoretical analysis that reaches beyond the matrix factorization problem. We then turn to fMRI data acquired on subjects undergoing behavioral protocols (task fMRI). We investigate how to aggregate data from many source studies, acquired with many different protocols, in order to learn more accurate and interpretable decoding models that predict stimuli or tasks from brain maps. Our multi-study shared-layer model learns to reduce the dimensionality of input brain images while simultaneously learning to decode these images from their reduced representation. This fosters transfer learning between studies, as we learn the undocumented common cognitive aspects that the many fMRI studies share. As a consequence, our multi-study model performs better than single-study decoding. Our approach identifies a universally relevant representation of brain activity, supported by a few task-optimized networks learned during model fitting. Finally, on a related topic, we show how to use dynamic programming within end-to-end trained deep networks, with applications in natural language processing.
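For the sparse matrix factorization/dictionary learning step described above, scikit-learn's generic online implementation gives a feel for the computation. This is a hedged stand-in on synthetic data, not the thesis's own randomized subsampling solver:

```python
# Sketch of online (stochastic) sparse dictionary learning, the generic
# counterpart of the subsampled matrix factorization described above.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
X = rng.standard_normal((10_000, 200))   # stand-in for voxel time series

dico = MiniBatchDictionaryLearning(n_components=40, alpha=1.0,
                                   batch_size=256, random_state=0)
codes = dico.fit_transform(X)            # sparse loadings, shape (10000, 40)
atoms = dico.components_                 # dictionary, shape (40, 200)
print(codes.shape, atoms.shape, np.mean(codes != 0))
```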
Pitiot, Alain. "Segmentation Automatique des Structures Cérébrales s'appuyant sur des Connaissances Explicites." Phd thesis, École Nationale Supérieure des Mines de Paris, 2003. http://pastel.archives-ouvertes.fr/pastel-00001346.
Bertrand, Hadrien. "Optimisation d'hyper-paramètres en apprentissage profond et apprentissage par transfert : applications en imagerie médicale." Electronic Thesis or Diss., Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLT001.
In the last few years, deep learning has irrevocably changed the field of computer vision. Faster, giving better results, and requiring a lower degree of expertise to use than traditional computer vision methods, deep learning has become ubiquitous in every imaging application, including medical imaging. At the beginning of this thesis, there was still a strong lack of tools and understanding of how to build efficient neural networks for specific tasks. This thesis therefore first focused on hyper-parameter optimization for deep neural networks, i.e. methods for automatically finding efficient neural networks for specific tasks. The thesis includes a comparison of different methods, a performance improvement of one of these methods, Bayesian optimization, and the proposal of a new hyper-parameter optimization method combining two existing ones: Bayesian optimization and Hyperband. From there, we used these methods for medical imaging applications such as the classification of field-of-view in MRI, and the segmentation of the kidney in 3D ultrasound images across two populations of patients. This last task required the development of a new transfer learning method based on modifying the source network by adding new geometric and intensity transformation layers. Finally, this thesis loops back to older computer vision methods, and we propose a new segmentation algorithm combining template deformation and deep learning. We show how to use a neural network to predict global and local transformations without requiring the ground truth of these transformations. The method is validated on the task of kidney segmentation in 3D ultrasound images.
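The Hyperband half of the combined method mentioned above rests on successive halving: sample many configurations, evaluate them cheaply, keep the best fraction, and re-train the survivors with a larger budget. A toy sketch follows, where `train_and_score` is a hypothetical placeholder for training a network for a given number of epochs (the Bayesian-optimization half is not shown):

```python
# Toy successive-halving loop in the spirit of Hyperband.
import random

def train_and_score(config, budget):
    # Placeholder objective; a real version would train a model for
    # `budget` epochs and return a validation score.
    return -((config["lr"] - 0.01) ** 2) * budget

def successive_halving(n_configs=27, min_budget=1, eta=3):
    configs = [{"lr": 10 ** random.uniform(-4, -1)} for _ in range(n_configs)]
    budget = min_budget
    while len(configs) > 1:
        scored = sorted(configs, key=lambda c: train_and_score(c, budget),
                        reverse=True)
        configs = scored[: max(1, len(configs) // eta)]  # keep the top 1/eta
        budget *= eta                                    # give survivors more budget
    return configs[0]

best = successive_halving()
print(best)
```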
Ratiney, Hélène. "Quantification automatique de signaux de spectrométrie et d'imagerie spectroscopique de résonance magnétique fondée sur une base de métabolites : une approche semi-paramétrique." Lyon 1, 2004. http://www.theses.fr/2004LYO10195.
Wei, Wen. "Apprentissage automatique des altérations cérébrales causées par la sclérose en plaques en neuro-imagerie multimodale." Thesis, Université Côte d'Azur, 2020. http://www.theses.fr/2020COAZ4021.
Multiple Sclerosis (MS) is the most common progressive neurological disease of young adults worldwide and thus represents a major public health issue, with about 90,000 patients in France and more than 500,000 people affected in Europe. In order to optimize treatments, it is essential to be able to measure and track brain alterations in MS patients. In fact, MS is a multi-faceted disease involving different types of alterations, such as myelin damage and repair. Under this observation, multimodal neuroimaging is needed to fully characterize the disease. Magnetic resonance imaging (MRI) has emerged as a fundamental imaging biomarker for multiple sclerosis because of its high sensitivity in revealing macroscopic tissue abnormalities in patients with MS. Conventional MR scanning provides a direct way to detect MS lesions and their changes, and plays a dominant role in the diagnostic criteria of MS. Moreover, positron emission tomography (PET) imaging, an alternative imaging modality, can provide functional information and detect target tissue changes at the cellular and molecular level by using various radiotracers. For example, by using the radiotracer [11C]PIB, PET allows a direct pathological measure of myelin alteration. However, in clinical settings, not all modalities are available, for various reasons. In this thesis, we therefore focus on learning and predicting missing-modality-derived brain alterations in MS from multimodal neuroimaging data.
Richard, Hugo. "Unsupervised component analysis for neuroimaging data." Electronic Thesis or Diss., université Paris-Saclay, 2021. http://www.theses.fr/2021UPASG115.
This thesis in computer science and mathematics is applied to the field of neuroscience, and more particularly to the mapping of brain activity based on imaging and electrophysiology. In this field, a rising trend is to experiment with naturalistic stimuli such as movie watching or audio track listening, rather than tightly controlled but outrageously simple stimuli. However, the analysis of these "naturalistic" stimuli and their effects requires a huge number of images that remain hard and costly to acquire. Without mathematical modeling, the identification of neural signal from the measurements is very hard, if not impossible. However, the stimulations that elicit neural activity are challenging to model in this context, and therefore the statistical analysis of the data using regression-based approaches is difficult. This has motivated the use of unsupervised learning methods that do not make assumptions about what triggers brain activations in the presented stimuli. In this thesis, we first consider the case of the shared response model (SRM), where subjects are assumed to share a common response. While this algorithm is useful for dimension reduction, it is particularly costly on functional magnetic resonance imaging (fMRI) data, where the dimension can be very large. We considerably speed up the algorithm and reduce its memory usage. However, SRM relies on assumptions that are not biologically plausible. In contrast, independent component analysis (ICA) is more realistic but not suited to multi-subject datasets. In this thesis, we present a well-principled method called MultiViewICA that extends ICA to datasets containing multiple subjects. MultiViewICA is a maximum likelihood estimator. It comes with a closed-form likelihood that can be efficiently optimized. However, it assumes the same amount of noise for all subjects. We therefore introduce ShICA, a generalization of MultiViewICA that comes with a more general noise model. In contrast to almost all ICA-based models, ShICA can separate Gaussian and non-Gaussian sources and comes with a minimum mean square error estimate of the common sources that weights each subject according to its estimated noise level. In practice, MultiViewICA and ShICA yield, on magnetoencephalography and functional magnetic resonance imaging data, a more reliable estimate of the shared response than competitors. Lastly, we use independent component analysis as a basis to perform data augmentation. More precisely, we introduce CondICA, a data augmentation method that leverages a large amount of unlabeled fMRI data to build a generative model for labeled data using only a few labeled samples. CondICA yields an increase in decoding accuracy on eight large fMRI datasets. Our main contributions consist in the reduction of SRM's training time as well as in the introduction of two more realistic models for the analysis of brain activity of subjects exposed to naturalistic stimuli: MultiViewICA and ShICA. Lastly, our results showing that ICA can be used for data augmentation are promising. In conclusion, we present some directions that could guide future work. From a practical point of view, minor modifications of our methods could allow the analysis of resting-state data assuming a shared spatial organization instead of a shared response. From a theoretical perspective, future work could focus on understanding how dimension reduction and shared response identification can be achieved jointly.
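As a point of reference for the multi-subject setting discussed above, a naive group-ICA baseline reduces to stacking subjects and running a standard ICA. This hedged sketch on synthetic data illustrates the problem setting only; it is not the MultiViewICA or ShICA estimators themselves:

```python
# Naive group-ICA baseline: stack sensors across subjects, run FastICA.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_subjects, n_sensors, n_times, n_sources = 5, 30, 2000, 4
S = rng.laplace(size=(n_sources, n_times))          # shared non-Gaussian sources
subjects = [rng.standard_normal((n_sensors, n_sources)) @ S
            + 0.1 * rng.standard_normal((n_sensors, n_times))
            for _ in range(n_subjects)]

X = np.vstack(subjects)                             # stack sensors across subjects
ica = FastICA(n_components=n_sources, random_state=0)
sources = ica.fit_transform(X.T).T                  # estimated shared sources
print(sources.shape)                                # (4, 2000)
```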
Couteaux, Vincent. "Apprentissage profond pour la segmentation et la détection automatique en imagerie multi-modale : application à l'oncologie hépatique." Electronic Thesis or Diss., Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAT009.
In order to characterize hepatic lesions, radiologists rely on several images using different modalities (different MRI sequences, CT scans, etc.) because they provide complementary information. In addition, automatic segmentation and detection tools are a great help in characterizing lesions, monitoring disease and planning interventions. At a time when deep learning dominates the state of the art in all fields related to medical image processing, this thesis studies how these methods can meet certain challenges related to multi-modal image analysis, revolving around three axes: automatic segmentation of the liver, the interpretability of segmentation networks, and the detection of hepatic lesions. Multi-modal segmentation in a context where the images are paired but not registered with respect to each other is a problem little addressed in the literature. I propose a comparison of learning strategies that have been proposed for related problems, as well as a method to enforce a prediction-similarity constraint during learning. Interpretability in machine learning is a young field of research with particularly important stakes in medical image processing, but it has so far focused on natural image classification networks. I propose a method for interpreting medical image segmentation networks. Finally, I present preliminary work on a method for detecting liver lesions in pairs of images of different modalities.
Razakarivony, Sébastien. "Apprentissage de variétés pour la Détection et Reconnaissance de véhicules faiblement résolus en imagerie aérienne." Caen, 2014. http://www.theses.fr/2014CAEN2055.
This manuscript addresses the problem of detecting and recognizing poorly resolved vehicles in aerial imagery. First, we present these two problems and survey the state-of-the-art techniques that exist to solve them. We then introduce the databases used by the Computer Vision community and the databases created and used during our work, which are better suited to our industrial context. Thirdly, we test some state-of-the-art algorithms and present the corresponding results on these databases. Next, we introduce the use of manifolds as generative models in order to decouple the modeling of vehicles from the modeling of background regions. Then the discriminative autoencoder, a novel algorithm based on metric learning to efficiently detect and recognize vehicles, as well as its extension, the convolutional discriminative autoencoder, are presented with the associated experiments and results. Finally, we present some experiments on learning the characteristics of the background of an image. The document closes with conclusions and a discussion of future work.
Ruppli, Camille. "Methods and frameworks of annotation cost optimization for deep learning algorithms applied to medical imaging." Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAT039.
In recent years, the amount of medical imaging data has kept growing. In 1980, 30 minutes of acquisition were necessary to obtain 40 medical images. Today, 1000 images can be acquired in 4 seconds. This growth in the amount of data has gone hand in hand with the development of deep learning techniques, which need quality labels to be trained. In medical imaging, labels are much more expensive to obtain as they require the expertise of a radiologist whose time is limited. The goal of this thesis is to propose and develop methods to limit the annotation load in medical imaging while maintaining high performance of deep learning algorithms. In the first part of this thesis, we focus on self-supervised learning methods, which introduce pretext tasks of various types: generation-based, context-based and self-distillation approaches. These tasks are used to pretrain a neural network with no additional annotations, to take advantage of the amount of available unannotated data. Most of these tasks use perturbations that are often quite generic, unrelated to the objective task, and sampled at random from a fixed list with fixed parameters. How to best choose and combine these perturbations and their parameters remains unclear. Furthermore, some perturbations can be detrimental to the target supervised task. Some works mitigate this issue by designing pretext tasks for a specific supervised task, especially in medical imaging, but these tasks do not generalize well to other problems. A balance must be found between optimizing perturbations or pretext tasks for a given supervised problem and the generalization ability of the method. Among context-based methods, contrastive learning approaches propose an instance-level discrimination task: the latent space is structured by instance similarity. Defining instance similarity is the main challenge of these approaches and has been widely explored. When defining similarity through perturbed versions of the same image, the same questions of perturbation optimization arise. We introduce a perturbation generator optimized for contrastive pre-training, guided by a small amount of supervision. Class labels and metadata have been used to condition instance similarity, but these data can be subject to annotator variability, especially in the medical domain. Some methods have been proposed to use confidence in fully supervised and self-supervised training, but it is mostly based on loss function values. However, confidence in labels and metadata is often linked to a priori domain knowledge such as data acquisition, annotator experience and agreement. This is even more relevant for medical data. In the second part of this thesis, we design an adapted contrastive loss introducing annotation confidence for the specific problem of prostate cancer lesion detection. Finally, we explore some approaches to apply self-supervised and contrastive learning to prostate cancer lesion segmentation.
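A generic contrastive pre-training objective of the kind discussed above is the NT-Xent (InfoNCE) loss over two perturbed views of the same batch. The sketch below is a hedged illustration of that standard loss, not the thesis's confidence-weighted variant:

```python
# Minimal NT-Xent loss for two perturbed views of the same batch of images.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, d)
    sim = z @ z.t() / temperature                        # cosine similarities
    n = z1.shape[0]
    mask = torch.eye(2 * n, dtype=torch.bool)
    sim.masked_fill_(mask, float("-inf"))                # drop self-similarity
    # Positives: i <-> i+n (the other view of the same image).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

z1, z2 = torch.randn(16, 128), torch.randn(16, 128)      # embeddings of two views
print(nt_xent(z1, z2).item())
```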
Margeta, Ján. "Apprentissage automatique pour simplifier l’utilisation de banques d’images cardiaques." Thesis, Paris, ENMP, 2015. http://www.theses.fr/2015ENMP0055/document.
The recent growth of data in cardiac databases has been phenomenal. Clever use of these databases could help find supporting evidence for better diagnosis and treatment planning. In addition to the challenges inherent to the large quantity of data, the databases are difficult to use in their current state. Data coming from multiple sources are often unstructured, the image content is variable and the metadata are not standardised. The objective of this thesis is therefore to simplify the use of large databases for cardiology specialists with automated image processing, analysis and interpretation tools. The proposed tools are largely based on supervised machine learning techniques, i.e. algorithms which can learn from large quantities of cardiac images with ground-truth annotations and which automatically find the best representations. First, the inconsistent metadata are cleaned, and the interpretation and visualisation of images is improved by automatically recognising commonly used cardiac magnetic resonance imaging views from image content. The method is based on decision forests and convolutional neural networks trained on a large image dataset. Second, the thesis explores ways to use machine learning for the extraction of relevant clinical measures (e.g. volumes and masses) from 3D and 3D+t cardiac images. New spatio-temporal image features are designed, and classification forests are trained to automatically segment the main cardiac structures (left ventricle and left atrium) from voxel-wise label maps. Third, a web interface is designed to collect pairwise image comparisons and to learn how to describe hearts with semantic attributes (e.g. dilation, kineticity). In the last part of the thesis, a forest-based machine learning technique is used to map cardiac images so as to establish distances and neighborhoods between images. One application is the retrieval of the most similar images.
Yousefi, Bardia. "Mineral identification using data-mining in hyperspectral infrared imagery." Doctoral thesis, Université Laval, 2018. http://hdl.handle.net/20.500.11794/30304.
The geological applications of hyperspectral infrared imagery mainly consist of mineral identification and mapping with airborne or portable instruments, and core logging. Finding mineral indicators offers considerable benefits for mineralogy and mineral exploration, which usually involve portable instruments and core logging. Moreover, the development of faster and more mechanized systems increases the precision of identifying mineral indicators and avoids possible misclassification. The objective of this thesis was therefore to create a tool that uses hyperspectral infrared imagery and processes the data through image analysis and machine learning methods to identify small mineral grains used as mineral indicators. This system could be applied in different circumstances to assist geological analysis and mineral exploration. The experiments were conducted in laboratory conditions in the long-wave infrared (7.7μm to 11.8μm - LWIR), with a LWIR macro lens (to improve spatial resolution), an Infragold plate, and a heating source. The process began with a method to calculate the continuum removal: Non-negative Matrix Factorization (NMF) is applied to extract a rank-1 approximation and estimate the down-welling radiance, which is then compared with conventional methods. The results indicate successful suppression of the continuum from the spectra, enabling comparison with spectral libraries. Afterwards, to obtain an automated system, supervised and unsupervised approaches were tested for the identification of pyrope, olivine and quartz grains. The results indicated that the unsupervised approach was more suitable because it does not depend on a training stage. Once these results were obtained, two algorithms were tested to create False Color Composites (FCC) using a clustering approach. The results of this comparison indicate significant computational efficiency (more than 20 times faster) and promising performance for mineral identification. Finally, the reliability of automated LWIR hyperspectral mineral identification was tested, and the difficulty of identifying irregular grain surfaces and mineral aggregates was verified. The results were compared to two different Ground Truths (GT) (i.e. rigid-GT and observed-GT) for quantitative evaluation. Observed-GT increased the accuracy by up to 1.5 times over rigid-GT. The samples were also examined by Micro X-ray Fluorescence (XRF) and Scanning Electron Microscopy (SEM) in order to retrieve information on the mineral aggregates and the grain surfaces (biotite, epidote, goethite, diopside, smithsonite, tourmaline, kyanite, scheelite, pyrope, olivine, and quartz). The XRF imagery results were compared with the automatic mineral identification techniques using ArcGIS, showed promising performance for automatic identification, and were used for GT validation. Overall, the four methods of this thesis (1. continuum removal; 2. classification or clustering for mineral identification; 3. two clustering algorithms for mineral spectra; 4. reliability verification) represent beneficial methodologies for identifying minerals. These methods have the advantage of being non-destructive, relatively accurate and of low computational complexity, and might be used to identify and assess mineral grains in laboratory conditions or in the field.
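The rank-1 NMF idea for continuum estimation can be illustrated in a few lines: factor the non-negative spectra matrix with a single component and divide out the smooth reconstruction. The sketch below simplifies away the radiometric terms (e.g. the down-welling radiance estimation) and uses synthetic spectra:

```python
# Hedged sketch of rank-1 NMF continuum removal on synthetic LWIR spectra.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
wavelengths = np.linspace(7.7, 11.8, 200)                  # LWIR band, micrometres
continuum = np.exp(-0.5 * ((wavelengths - 10) / 3) ** 2)   # smooth baseline
spectra = continuum * (1 + 0.05 * rng.random((50, 200)))   # 50 pixel spectra

nmf = NMF(n_components=1, init="nndsvda", max_iter=500)
W = nmf.fit_transform(spectra)                             # (50, 1)
H = nmf.components_                                        # (1, 200)
continuum_removed = spectra / (W @ H + 1e-12)              # ratio ~ 1 + features
print(continuum_removed.shape)
```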
Aghaei, Mazaheri Jérémy. "Représentations parcimonieuses et apprentissage de dictionnaires pour la compression et la classification d'images satellites." Thesis, Rennes 1, 2015. http://www.theses.fr/2015REN1S028/document.
This thesis explores sparse representation and dictionary learning methods to compress and classify satellite images. Sparse representations consist in approximating a signal by a linear combination of a few columns, known as atoms, from a dictionary, thus representing it by only a few non-zero coefficients contained in a sparse vector. In order to improve the quality of the representations and to increase their sparsity, it is interesting to learn the dictionary. The first part of the thesis presents a state of the art of sparse representations and dictionary learning methods, explores several applications of these methods, and presents some image compression standards. The second part deals with the learning of dictionaries structured in several levels, from a tree structure to an adaptive structure, and their application to the compression of satellite images by integrating them into an adapted coding scheme. Finally, the third part is about the use of learned structured dictionaries for the classification of satellite images. A method to estimate the Modulation Transfer Function (MTF) of the instrument used to capture an image is studied. A supervised classification algorithm, using structured dictionaries made discriminative between classes during learning, is then presented in the context of scene recognition.
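The core operation behind such compression schemes is sparse approximation over a learned dictionary, for example with Orthogonal Matching Pursuit (OMP). A minimal scikit-learn sketch follows; the tree-structured and adaptive dictionaries of the thesis are not reproduced here:

```python
# Sparse coding over a learned dictionary with OMP: each signal is
# represented by a few atoms, the basis of sparse-representation compression.
import numpy as np
from sklearn.decomposition import DictionaryLearning, sparse_encode

rng = np.random.default_rng(0)
patches = rng.standard_normal((500, 64))          # e.g. 8x8 image patches, flattened

dl = DictionaryLearning(n_components=128, transform_algorithm="omp",
                        transform_n_nonzero_coefs=5, random_state=0)
dl.fit(patches)
codes = sparse_encode(patches, dl.components_, algorithm="omp",
                      n_nonzero_coefs=5)          # (500, 128), ~5 non-zeros per row
reconstruction = codes @ dl.components_
print(np.mean((patches - reconstruction) ** 2))   # reconstruction error
```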
Pinte, Caroline. "Machine learning for bi-modal EEG-fMRI neurofeedback : EEG electrodes localization and fMRI NF scores prediction." Electronic Thesis or Diss., Université de Rennes (2023-....), 2024. http://www.theses.fr/2024URENS053.
This thesis explores the impact of machine learning methods in the context of bi-modal EEG-fMRI, with the goal of automatically and accurately localizing EEG electrodes in an MRI volume and predicting fMRI neurofeedback scores from EEG signals. The first part presents the context and tools used, covering the EEG and fMRI modalities and their combination, neurofeedback, artificial neural networks, image segmentation and time series regression. The second part contains three main contributions. The first describes the development of a method for automatically detecting the positions and labels of EEG electrodes in an MRI volume using a specific MRI sequence. The second proposes a method for finding model architecture hyperparameters based on a genetic algorithm. These models are then trained on several subjects to predict fMRI neurofeedback scores from EEG signals. This study compares different architectures from two categories of neural networks: LSTMs and CNNs. Finally, the third contribution investigates an avenue of improvement for these models: it evaluates the impact on model performance of reducing inter-subject variability by applying an alignment in Euclidean space to the EEG data.
Blanchart, Pierre. "Apprentissage rapide adapté aux spécificités de l'utilisateur : application à l'extraction d'informations d'images de télédétection." Phd thesis, Paris, Télécom ParisTech, 2011. https://pastel.hal.science/pastel-00662747.
An important emerging topic in satellite image content extraction and classification is building retrieval systems that automatically learn high-level semantic interpretations from images, possibly under the direct supervision of the user. In this thesis, we successively consider the two broad categories of auto-annotation systems and interactive image search engines, and propose our own solutions to the recurring problem of learning from small, non-exhaustive training datasets and of generalizing over a very high volume of unlabeled data. In our first contribution, we look into the problem of exploiting the huge volume of unlabeled data to discover "unknown" semantic structures, that is, semantic classes which are not represented in the training dataset. We propose a semi-supervised algorithm able to build an auto-annotation model over non-exhaustive training datasets and to point out new interesting semantic structures, in order to guide the user in exploring the database. In our second contribution, we address the problem of speeding up learning in interactive image search engines. We derive a semi-supervised active learning algorithm which exploits the intrinsic data distribution to achieve faster identification of the target category. In our last contribution, we describe a cascaded active learning strategy to retrieve objects in large satellite image scenes. We propose an active learning method which exploits a coarse-to-fine scheme to avoid the computational overload inherent in multiple evaluations of the decision function of complex classifiers, as needed to retrieve complex object classes.
Tomasini, Linda. "Apprentissage d'une représentation statistique et topologique d'un environnement." Toulouse, ENSAE, 1993. http://www.theses.fr/1993ESAE0024.
Yang, Jinlong. "Apprentissage des espaces de forme du modèle 3d humain habillé en mouvement." Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAM008/document.
3D virtual representations of dressed humans appear in movies, video games and, more recently, VR content. To generate these representations, we usually perform 3D acquisitions or synthesize sequences with physics-based simulation or other computer graphics techniques such as rigging and skinning. These traditional methods generally require tedious manual intervention and generate new content slowly or with low quality, due to the complexity of clothing motion. To deal with this problem, we propose in this work a data-driven learning approach which can take both captures and simulated sequences as learning data, and output unseen 3D shapes of dressed humans with different body shapes, body motions, clothing fits and clothing materials. Due to the lack of temporal coherence and semantic information, raw captures can hardly be used directly for analysis and learning. Therefore, we first propose an automatic method to extract the human body under clothing from unstructured 3D sequences. It exploits a statistical human body model, optimizing the model parameters so that the body surface always stays inside the observed clothed surface while remaining as close to it as possible throughout the sequence. We show that our method achieves similar or better results than other state-of-the-art methods and does not need any manual intervention. After extracting the human body under clothing, we propose a method to register the clothing surface with the help of isometric patches. Some anatomical points on the human body model are first projected onto the clothing surface in each frame of the sequence. These projected points give the starting correspondences between clothing surfaces across a sequence. We isometrically grow patches around these points in order to propagate the correspondences over the clothing surface. Subsequently, these dense correspondences are used to guide non-rigid registration so that we can deform the template mesh to obtain temporal coherence of the raw captures. Based on processed captures and simulated data, we finally propose a comprehensive analysis of the statistics of the clothing layer with a simple two-component model. It is based on a PCA subspace reduction of the layer information on one hand, and a generic parameter regression model using neural networks on the other, designed to regress from any semantic parameter whose variation is observed in a training set to the layer parameterization space. We show that our model not only reproduces previous re-targeting work, but also generalizes data-synthesizing capabilities to other semantic parameters such as body motion, clothing fit and physical material parameters, paving the way for many kinds of data-driven creation and augmentation applications.
Taiello, Riccardo. "Apprentissage automatique sécurisé pour l'analyse collaborative des données de santé à grande échelle." Electronic Thesis or Diss., Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ4031.
This PhD thesis explores the integration of privacy preservation, medical imaging, and Federated Learning (FL) using advanced cryptographic methods. Within the context of medical image analysis, we develop a privacy-preserving image registration (PPIR) framework. This framework addresses the challenge of registering images confidentially, without revealing their contents. By extending classical registration paradigms, we incorporate cryptographic tools like secure multi-party computation and homomorphic encryption to perform these operations securely. These tools are vital as they prevent data leakage during processing. Given the challenges associated with the performance and scalability of cryptographic methods on high-dimensional data, we optimize our image registration operations using gradient approximations. Our focus extends to increasingly complex registration methods, such as rigid, affine, and non-linear approaches using cubic splines or diffeomorphisms, parameterized by time-varying velocity fields. We demonstrate how these sophisticated registration methods can effectively integrate privacy-preserving mechanisms across various tasks. Concurrently, the thesis addresses the challenge of stragglers in FL, emphasizing the role of Secure Aggregation (SA) in collaborative model training. We introduce "Eagle", a synchronous SA scheme designed to optimize the participation of late-arriving devices, significantly enhancing computational and communication efficiency. We also present "Owl", tailored for buffered asynchronous FL settings, which consistently outperforms earlier solutions. Furthermore, in the realm of buffered asynchronous SA, we propose two novel approaches: "Buffalo" and "Buffalo+". "Buffalo" advances SA techniques for buffered asynchronous FL, while "Buffalo+" counters sophisticated attacks that traditional methods fail to detect, such as model replacement. This solution leverages the properties of incremental hash functions and explores the sparsity in the quantization of local gradients from client models. Both Buffalo and Buffalo+ are validated theoretically and experimentally, demonstrating their effectiveness in a new cross-device FL task for medical devices. Finally, this thesis has devoted particular attention to translating privacy-preserving tools into real-world applications, notably through the open-source FL framework Fed-BioMed. The contributions include one of the first practical SA implementations specifically designed for cross-silo FL among hospitals, showcasing several practical use cases.
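The intuition behind secure aggregation can be shown with toy pairwise masking: each pair of clients agrees on a random mask that cancels in the sum, so the server recovers the exact aggregate without seeing any individual update. Real schemes such as Eagle and Owl add key agreement, dropout handling and quantization; this is only a hedged illustration:

```python
# Toy pairwise-masking secure aggregation: masks cancel in the server's sum.
import numpy as np

rng = np.random.default_rng(0)
n_clients, dim = 4, 6
updates = [rng.standard_normal(dim) for _ in range(n_clients)]

# Pairwise masks: client i adds m_ij, client j subtracts it (i < j).
masks = {(i, j): rng.standard_normal(dim)
         for i in range(n_clients) for j in range(i + 1, n_clients)}

masked = []
for i in range(n_clients):
    m = updates[i].copy()
    for (a, b), mask in masks.items():
        if a == i:
            m += mask
        elif b == i:
            m -= mask
    masked.append(m)

server_sum = np.sum(masked, axis=0)                 # masks cancel pairwise
assert np.allclose(server_sum, np.sum(updates, axis=0))
print(server_sum)
```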
Leclerc, Sarah Marie-Solveig. "Automatisation de la segmentation sémantique de structures cardiaques en imagerie ultrasonore par apprentissage supervisé." Thesis, Lyon, 2019. http://www.theses.fr/2019LYSEI121.
The analysis of medical images plays a critical role in cardiology. Ultrasound imaging, as a real-time, low-cost, bedside modality, is nowadays the most commonly used imaging modality to monitor patient status and perform clinical cardiac diagnosis. However, the semantic segmentation (i.e. the accurate delineation and identification) of heart structures is a difficult task due to the low quality of ultrasound images, characterized in particular by the lack of clear boundaries. To compensate for missing information, the best performing methods before this thesis relied on the integration of prior information on cardiac shape or motion, which in turn reduced their adaptability. Furthermore, such approaches require the manual identification of key points to be adapted to a given image, which makes the full process difficult to reproduce. In this thesis, we propose several original fully-automatic algorithms for the semantic segmentation of echocardiographic images based on supervised learning approaches, where the resolution of the problem is automatically set up using data previously analyzed by trained cardiologists. Through the design of a dedicated dataset and evaluation platform, we prove the clinical applicability of fully-automatic supervised learning methods, in particular deep learning methods, as well as the possibility to improve robustness by incorporating the prior automatic detection of regions of interest into the full process.
Hage, Chehade Aya. "Détection et classification multi-label de maladies pulmonaires par apprentissage automatique à partir d’images de radiographie thoracique." Electronic Thesis or Diss., Angers, 2024. http://www.theses.fr/2024ANGE0020.
Lung diseases are a major cause of death worldwide, and early diagnosis is crucial to improve the chances of recovery. Artificial Intelligence technologies have opened promising avenues in the biomedical field. Thus, in this thesis, AI models are used to improve the classification performance for lung diseases from chest X-ray images. New preprocessing approaches based on CycleGAN are developed to reduce the noise caused by artifacts such as medical devices in chest X-rays, as well as to generate masks that include pathological areas within the regions of interest. Additionally, a new feature selection approach is developed to identify a priori the statistically most significant features before classification. Beyond image analysis, the associated clinical data are also examined to refine the classification model according to the patient's profile, enhancing diagnostic effectiveness. The proposed advances show promising results in improving the performance of both binary and multi-label classification of lung diseases.
Calandre, Jordan. "Analyse non intrusive du geste sportif dans des vidéos par apprentissage automatique." Electronic Thesis or Diss., La Rochelle, 2022. http://www.theses.fr/2022LAROS040.
In this thesis, we are interested in the characterization and fine-grained analysis of sports gestures in videos, and more particularly in non-intrusive 3D analysis using a single camera. Our case study is table tennis. We propose a method for reconstructing 3D ball positions using a high-speed calibrated camera (240 fps). For this, we propose and train a convolutional network that extracts the apparent diameter of the ball from the images. Knowing the real diameter of the ball allows us to compute the distance between the camera and the ball, and then to position the latter in a 3D coordinate system linked to the table. Then, we use a physical model, taking into account the Magnus effect, to estimate the kinematic parameters of the ball from its successive 3D positions. The proposed method segments the trajectories at the impacts of the ball on the table or the racket. This allows, using a physical model of the rebound, to refine the estimates of the kinematic parameters of the ball. It is then possible to compute the racket's speed and orientation after the stroke and to deduce relevant performance indicators. Two databases have been built: the first is made of real game sequence acquisitions; the second is a synthetic dataset that reproduces the acquisition conditions of the former. This allows us to validate our methods, as the physical parameters used to generate it are known. Finally, we present our participation in the Sport & Vision task of the MediaEval challenge on the classification of human actions, using approaches based on the analysis and representation of movement.
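The geometry behind positioning the ball from its apparent diameter is pinhole-camera similar triangles: Z = f · D_real / d_pixels. The numbers below are illustrative assumptions, not values from the thesis (the 40 mm ball diameter is the standard for table tennis):

```python
# Pinhole-camera arithmetic: camera-to-ball distance from apparent diameter.
# Z = f * D_real / d_pixels  (similar triangles)
f_pixels = 1800.0            # focal length in pixels (hypothetical calibration)
D_real = 0.040               # table-tennis ball diameter: 40 mm
d_apparent = 24.0            # apparent diameter measured in the image, pixels

Z = f_pixels * D_real / d_apparent
print(f"camera-to-ball distance: {Z:.3f} m")   # -> 3.000 m
```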
Rebaud, Louis. "Whole-body / total-body biomarkers in PET imaging." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPAST047.
This thesis, in partnership with Institut Curie and Siemens Healthineers, explores the use of Positron Emission Tomography (PET) for cancer prognosis, focusing on non-Hodgkin lymphomas, especially follicular lymphoma (FL) and diffuse large B cell lymphoma (DLBCL). Assuming that current biomarkers computed from PET images overlook significant information, this work focuses on the search for new biomarkers in whole-body PET imaging. An initial manual approach validated a previously identified feature (tumor fragmentation) and explored the prognostic significance of splenic involvement in DLBCL, finding that the volume of splenic involvement does not further stratify patients with such involvement. To overcome the empirical limitations of the manual search, a semi-automatic feature identification method was developed. It consisted of the automatic extraction of thousands of candidate biomarkers and their subsequent testing by a selection pipeline designed to identify features quantifying new prognostic information. The selected biomarkers were then analysed and re-encoded in simpler and more intuitive ways. Using this approach, 22 new image-based biomarkers were identified, reflecting biological information about the tumours but also the overall health status of the patient. Among them, 10 features were found to be prognostic of both FL and DLBCL patient outcome. The thesis also addresses the challenge of using these features in clinical practice, proposing the Individual Coefficient Approximation for Risk Estimation (ICARE) model. This machine learning model, designed to reduce overfitting and improve generalizability, demonstrated its effectiveness in the HECKTOR 2022 challenge for predicting outcomes from head and neck cancer patients' [18F]-PET/CT scans. The model was also found to overfit less than other machine learning methods in an exhaustive comparison on a benchmark of 71 medical datasets. All these developments were implemented in a software extension of a prototype developed by Siemens Healthineers.
Pierrefeu, Amicie de. "Apprentissage automatique avec parcimonie structurée : application au phénotypage basé sur la neuroimagerie pour la schizophrénie." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS329/document.
Schizophrenia is a disabling chronic mental disorder characterized by various symptoms such as hallucinations and delusions, as well as impairments in high-order cognitive functions. Over the years, Magnetic Resonance Imaging (MRI) has been increasingly used to gain insight into the structural and functional abnormalities inherent to the disorder. Recent progress in machine learning, together with the availability of large datasets, now paves the way to capture complex relationships and make inferences at an individual level, in the perspective of computer-aided diagnosis/prognosis or biomarker discovery. Given the limitations of state-of-the-art sparse algorithms in producing stable and interpretable predictive signatures, we have pushed forward regularization approaches, extending classical algorithms with structural constraints issued from the known biological structure (the spatial structure of the brain) in order to force the solution to adhere to biological priors, producing more plausible, interpretable solutions. Such structured sparsity constraints have been leveraged to identify, first, a neuroanatomical signature of schizophrenia and, second, a functional neuroimaging signature of hallucinations in patients with schizophrenia. Additionally, we extended the popular PCA (Principal Component Analysis) with spatial regularization to identify interpretable patterns of neuroimaging variability in either functional or anatomical meshes of the cortical surface.
Nguyen, Bang Giang. "Classification en espaces fonctionnels utilisant la norme BV avec applications aux images ophtalmologiques et à la complexité du trafic aérien." Toulouse 3, 2014. http://thesesups.ups-tlse.fr/2473/.
In this thesis, we deal with two different problems using the Total Variation concept. The first problem concerns the classification of vasculitis in multiple sclerosis fundus angiography, aiming to help ophthalmologists diagnose such autoimmune diseases. It also aims at determining potential angiography details in intermediate uveitis in order to help diagnose multiple sclerosis. The second problem aims at developing a new airspace congestion metric, an important index used for improving Air Traffic Management (ATM) capacity. In the first part of this thesis, we provide the preliminary knowledge required to solve the above-mentioned problems. First, we present an overview of Total Variation and explain how it is used in our methods. Then, we present a tutorial on Support Vector Machines (SVMs), a learning algorithm used for classification and regression. In the second part of this thesis, we first provide a review of methods for the segmentation and measurement of blood vessels in retinal images, an important step in our method. Then, we present our proposed method for the classification of retinal images. First, we detect the diseased region in the pathological images based on the computation of the BV norm at each point along the centerline of the blood vessels. Then, to classify the images, we introduce a feature extraction strategy to generate a set of feature vectors that represents the input image set for the SVMs. After that, a standard SVM classifier is applied in order to classify the images. Finally, in the third part of this thesis, we address two applications of TV in the ATM domain. In the first application, based on the ideas developed in the second part, we introduce a methodology to extract the main air traffic flows in the airspace. Moreover, we develop a new airspace complexity indicator which can be used to organize air traffic at a macroscopic level. This indicator is then compared to the regular density metric, which is computed simply by counting the number of aircraft in an airspace sector. The second application is based on a dynamical system model of air traffic. We propose a method for developing a new traffic complexity metric by computing the local vectorial total variation norm of the relative deviation vector field. Its aim is to reduce complexity. Three different traffic situations are investigated to evaluate the fitness of the proposed method.
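The discrete total variation that underlies the BV-norm computations above is simply the sum of gradient magnitudes over the image grid; a minimal sketch:

```python
# Discrete (isotropic) total variation of an image.
import numpy as np

def total_variation(img):
    dx = np.diff(img, axis=1)[:-1, :]     # horizontal differences
    dy = np.diff(img, axis=0)[:, :-1]     # vertical differences
    return np.sum(np.sqrt(dx ** 2 + dy ** 2))

img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0                   # a square: TV ~ its perimeter
print(total_variation(img))
```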
Germani, Élodie. "Exploring and mitigating analytical variability in fMRI results using representation learning." Electronic Thesis or Diss., Université de Rennes (2023-....), 2024. http://www.theses.fr/2024URENS031.
In this thesis, we focus on the variations induced by different analysis methods, also known as analytical variability, in brain imaging studies. This phenomenon is now well known in the community, and our aim is to better understand the factors leading to this variability and to find solutions to better account for it. To do so, I analyse data and explore the relationships between the results of different methods. At the same time, I study the constraints related to data reuse and propose solutions based on artificial intelligence to build more robust studies.
Felefly, Tony. "Quantum-classical machine learning for brain tumor imaging analysis." Electronic Thesis or Diss., Strasbourg, 2024. http://www.theses.fr/2024STRAJ064.
Brain tumor characterization using non-invasive techniques is urgently needed. The objective of this thesis is to use advanced machine learning techniques and quantum technology on brain medical images to characterize brain tumors. First, we built a Quantum Neural Network using radiomic features from brain MRI to differentiate between metastases and gliomas. We used a Mutual Information feature selection technique and solved the resulting optimization problem on D-Wave's Quantum Annealer. We trained the model on a quantum simulator. We employed instance-wise Shapley values to explain the model predictions. We benchmarked the results against two state-of-the-art classical models, a Dense Neural Network and Extreme Gradient Boosting, and the model showed comparable performance. Second, we developed a 3D Convolutional Neural Network using non-enhanced brain CT scans to identify patients with brain metastases. For this purpose, we curated two cohorts of patients, one with brain metastases and one without brain abnormalities. The brain was automatically segmented. We trained several versions of the model, and the best model showed impressive performance.
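The classical side of the feature selection step above, scoring radiomic features by mutual information with the label, looks as follows in scikit-learn; the QUBO formulation sent to the annealer is not reproduced, and the data here are a synthetic stand-in:

```python
# Mutual-information feature selection on synthetic stand-in "radiomics".
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif

X, y = make_classification(n_samples=200, n_features=100, n_informative=8,
                           random_state=0)
selector = SelectKBest(mutual_info_classif, k=10).fit(X, y)
X_sel = selector.transform(X)                       # keep 10 most informative
print(X_sel.shape, np.flatnonzero(selector.get_support()))
```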
Zubiolo, Alexis. "Extraction de caractéristiques et apprentissage statistique pour l'imagerie biomédicale cellulaire et tissulaire." Thesis, Nice, 2015. http://www.theses.fr/2015NICE4117/document.
The purpose of this Ph.D. thesis is to study the classification, based on morphological features, of cells and tissues from biomedical images. The goal is to help medical doctors and biologists better understand certain biological phenomena. This work is divided into three main parts, corresponding to the three typical biomedical imaging problems tackled. The first part consists in analyzing endomicroscopic videos of the colon, in which the pathological class of the polyps has to be determined. This task is performed using a supervised multiclass machine learning algorithm combining support vector machines and graph theory tools. The second part concerns the study of the morphology of mouse neurons imaged by fluorescent confocal microscopy. In order to obtain rich information, the neurons are imaged at two different magnifications: the higher magnification, where the soma appears in detail, and the lower one, showing the whole cortex, including the apical dendrites. On these images, morphological features are automatically extracted with the aim of performing classification. The last part is about the multi-scale processing of digital histology images in the context of kidney cancer. The vascular network is extracted and modeled by a graph to establish a link between the architecture of the tumor and its pathological class.
Goya, Outi Jessica. "Développements en radiomique pour une meilleure caractérisation du gliome infiltrant du tronc cérébral à partir d'imagerie par résonance magnétique." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS219/document.
Radiomics is based on the assumption that relevant, non-visually identifiable information can be found by calculating a large number of quantitative indices from medical images. In oncology, this information could characterize the phenotype of the tumor and define the prognosis of the patient. Diffuse intrinsic pontine glioma (DIPG) is a rare pediatric tumor diagnosed by clinical signs and MRI appearance. This work presents the first radiomic studies of patients with DIPG. Since clinical MRI intensities are expressed in arbitrary units, the first step of the study was image standardization. A normalization method based on intensity estimation in the normal-appearing white matter was shown to be effective on more than 1500 image volumes. Methodological studies on the calculation of texture indices then led to the following recommendations: (a) discretize gray levels with a constant bin width for all patients; (b) use a constant volume of interest, or pay attention to the bias introduced by volumes of different sizes and shapes. Based on these recommendations, radiomic indices from four MRI modalities were systematically analyzed to predict the main genetic mutations associated with DIPG and the overall survival of patients at the time of diagnosis. An index selection pipeline was proposed and different cross-validated machine learning methods were implemented for both prediction tasks. The combination of clinical indices with imaging indices is more effective than clinical or imaging indices alone for predicting the two main mutations of histone H3 (H3.1 versus H3.3) associated with DIPG. As some imaging modalities were missing, a methodology adapted to the analysis of multi-modal imaging databases with missing data was proposed to overcome the limitations of imaging data collection. This approach made it possible to integrate new patients. The results of the external prediction test for the two main H3 histone mutations are encouraging. Regarding survival, some radiomic indices seem informative; however, the small number of patients did not make it possible to establish the performance of the proposed predictors. Finally, these first radiomic studies suggest the relevance of radiomic indices for the management of patients with DIPG in the absence of biopsy, but the database needs to be enlarged in order to confirm these results. The proposed methodology can be applied to other studies.
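The generic shape of such a study, select features then classify under cross-validation, can be sketched as below; the actual radiomic indices, selection pipeline and models of the thesis are not reproduced, and the data are synthetic:

```python
# "Select features, then classify, under cross-validation" pipeline sketch.
from sklearn.datasets import make_classification
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=120, n_features=300, n_informative=10,
                           random_state=0)        # stand-in for radiomic indices
pipe = Pipeline([("scale", StandardScaler()),
                 ("select", SelectKBest(f_classif, k=20)),
                 ("clf", LogisticRegression(max_iter=1000))])
scores = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc")
print(scores.mean(), scores.std())
```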
Jas, Mainak. "Contributions pour l'analyse automatique de signaux neuronaux." Electronic Thesis or Diss., Paris, ENST, 2018. http://www.theses.fr/2018ENST0021.
Full textElectrophysiology experiments have long relied upon small cohorts of subjects to uncover statistically significant effects of interest. However, the low sample size translates into low statistical power, which leads to a high false discovery rate and hence a low rate of reproducibility. Addressing this issue means solving two related problems: first, how do we facilitate data sharing and reusability to build large datasets; and second, once big datasets are available, what tools can we build to analyze them? In the first part of the thesis, we introduce a new standard for data sharing known as the Brain Imaging Data Structure (BIDS), and its extension MEG-BIDS. Next, we introduce the reader to a typical electrophysiological pipeline analyzed with the MNE software package. We consider the different choices that users have to deal with at each stage of the pipeline and provide standard recommendations. Next, we focus our attention on tools to automate the analysis of large datasets. We propose an automated tool to remove segments of data corrupted by artifacts. We develop an outlier detection algorithm based on tuning rejection thresholds. More importantly, we use the HCP (Human Connectome Project) data, which is manually annotated, to benchmark our algorithm against existing state-of-the-art methods. Finally, we use convolutional sparse coding to uncover structures in neural time series. We reformulate the existing approach from computer vision as a maximum a posteriori (MAP) inference problem to deal with heavy-tailed distributions and high-amplitude artifacts. Taken together, this thesis represents an attempt to shift from slow and manual methods of analysis to automated, reproducible analysis
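The threshold-tuning idea can be sketched as a cross-validated search for a peak-to-peak rejection threshold; this schematic single-channel version with synthetic data only illustrates the principle, not the thesis's actual implementation.

    import numpy as np

    def cv_reject_threshold(epochs, candidates, n_folds=5, seed=0):
        # epochs: (n_trials, n_times) for one channel. For each candidate
        # threshold, the mean of the kept training trials is compared with
        # the median of the held-out trials (a robust target).
        rng = np.random.default_rng(seed)
        folds = np.array_split(rng.permutation(len(epochs)), n_folds)
        errors = np.zeros(len(candidates))
        for i, thr in enumerate(candidates):
            for k, val_idx in enumerate(folds):
                train = epochs[np.concatenate([f for j, f in enumerate(folds) if j != k])]
                ptp = train.max(axis=1) - train.min(axis=1)
                kept = train[ptp < thr]
                if len(kept) == 0:          # threshold rejects everything
                    errors[i] = np.inf
                    continue
                evoked = kept.mean(axis=0)
                target = np.median(epochs[val_idx], axis=0)
                errors[i] += np.sqrt(np.mean((evoked - target) ** 2))
        return candidates[int(np.argmin(errors))]

    rng = np.random.default_rng(1)
    epochs = rng.normal(size=(80, 200))
    epochs[:5] *= 20                        # a few artifact-laden trials
    best = cv_reject_threshold(epochs, np.linspace(1, 50, 25))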
Mascarilla, Laurent. "Apprentissage de connaissances pour l'interprétation des images satellite." Toulouse 3, 1996. http://www.theses.fr/1996TOU30300.
Full textDesir, Chesner. "Classification automatique d'images, application à l'imagerie du poumon profond." Phd thesis, Rouen, 2013. http://www.theses.fr/2013ROUES053.
Full textThis thesis deals with automated image classification, applied to images acquired with alveoscopy, a new imaging technique of the distal lung. The aim is to propose and develop a computer-aided diagnosis system to help the clinician analyze these previously unseen images. Our contributions lie in the development of effective, robust and generic methods to classify images of healthy and pathological patients. Our first classification system is based on a rich and local characterization of the images, an ensemble of random trees for classification, and a rejection mechanism providing the medical expert with tools to enhance the reliability of the system. Due to the complexity of alveoscopy images and to the lack of expertise on the pathological cases (unlike the healthy ones), we adopt the one-class learning paradigm, which allows a classifier to be learned from healthy data only. We propose a one-class approach that takes advantage of the combination and randomization mechanisms of ensemble methods to address common issues such as the curse of dimensionality. Our method is shown to be effective, robust to high dimensionality, and competitive with, or even better than, state-of-the-art methods on various public datasets. It has proved to be particularly relevant to our medical problem
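As an illustration of the one-class paradigm, the sketch below uses scikit-learn's IsolationForest, an ensemble of random trees trained on normal data only; it stands in for the thesis's own one-class ensemble, and the feature matrices are synthetic.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    X_healthy = rng.normal(size=(200, 64))   # training set: healthy images only
    X_new = rng.normal(size=(10, 64))        # unseen images to screen

    clf = IsolationForest(n_estimators=200, random_state=0).fit(X_healthy)
    scores = clf.decision_function(X_new)    # low score = atypical, possibly pathological
    flagged = X_new[scores < 0]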
Desir, Chesner. "Classification Automatique d'Images, Application à l'Imagerie du Poumon Profond." Phd thesis, Université de Rouen, 2013. http://tel.archives-ouvertes.fr/tel-00879356.
Full textZhang, Jing. "Prise en compte de l’esthétique dans la gestion des gammes de luminance des images." Electronic Thesis or Diss., Littoral, 2024. http://www.theses.fr/2024DUNK0704.
Full textAesthetic analysis of digital images aims to enhance the aesthetic quality of visual content. By analyzing, from image data, the aesthetic features that influence visual perception, computers can perform tasks like assisted image editing, aesthetic quality enhancement, and filtering for the best image. This thesis merges aesthetic image analysis with high dynamic range (HDR) imaging. We consider both the properties of HDR and the aesthetic characteristics of images during HDR image processing. The aim is to preserve as much as possible the original aesthetic features of images when adjusting HDR image display results, thereby achieving a pleasant visual experience. In this thesis, we propose a method for reconstructing composition leading lines and two HDR image auto-adjustment methods. Regarding the automatic adjustment of HDR images, we develop a model based on a neural network to predict the adjustment curve of HDR images, and a model using a convolutional neural network to estimate the exposure adjustment value by analyzing underlying features of HDR images. Both methods automatically enhance the perceived aesthetic quality of HDR images on HDR display devices by training neural networks to learn expert editing parameters from an HDR database. In order to analyze the aesthetics of image composition, we propose to reconstruct the leading lines of the image. Just like color, lighting, or the grain of the image, the leading lines are among the aesthetic features that need to be analyzed. The proposed method identifies implicit leading lines in the image through a line regrouping algorithm. We first carried out an inter-expert consistency analysis to demonstrate the feasibility of our method. In addition, we propose a metric for comparing two sets of leading lines
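One plausible starting point for line regrouping, sketched below, is to detect edge segments and group them by quantized orientation; this is only a hypothetical illustration of the idea, not the method proposed in the thesis.

    import numpy as np
    import cv2

    # Synthetic image with two line structures (stand-in for a photograph).
    img = np.zeros((200, 200), dtype=np.uint8)
    cv2.line(img, (10, 180), (190, 20), 255, 2)
    cv2.line(img, (10, 10), (190, 190), 255, 2)

    edges = cv2.Canny(img, 50, 150)
    segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=50,
                               minLineLength=30, maxLineGap=10)

    # Segments sharing a similar angle are candidates for the same
    # implicit leading line.
    groups = {}
    for x1, y1, x2, y2 in (segments[:, 0] if segments is not None else []):
        angle = np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 180
        groups.setdefault(int(angle // 10) * 10, []).append((x1, y1, x2, y2))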
Desbordes, Paul. "Méthode de sélection de caractéristiques pronostiques et prédictives basée sur les forêts aléatoires pour le suivi thérapeutique des lésions tumorales par imagerie fonctionnelle TEP." Thesis, Normandie, 2017. http://www.theses.fr/2017NORMR030/document.
Full textRadiomics proposes to combine image features with those extracted from other modalities (clinical, genomic, proteomic) to set up personalized medicine in the management of cancer. From an initial exam, the objective is to anticipate the survival rate of the patient or the probability of treatment response. In medicine, classical statistical methods are generally used, such as the Mann-Whitney analysis for predictive studies and the analysis of Kaplan-Meier survival curves for prognostic studies. However, the increasing number of studied features limits the use of these statistics. We have therefore focused our work on machine learning algorithms and feature selection methods. These methods are robust to large dimensions as well as to non-linear relations between features. We proposed two feature selection strategies based on random forests. Our methods allowed the selection of subsets of predictive and prognostic features on two databases (oesophagus and lung cancers). Our algorithms showed the best classification performance compared with classical statistical methods and the other feature selection strategies studied
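A minimal sketch of random-forest-based feature selection follows (synthetic data); thresholding importances with SelectFromModel is one common approach and is not necessarily the strategy proposed in the thesis.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import SelectFromModel

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 40))            # hypothetical radiomic features
    y = (X[:, 0] + X[:, 3] > 0).astype(int)   # synthetic response label

    rf = RandomForestClassifier(n_estimators=500, random_state=0)
    selector = SelectFromModel(rf, threshold="median").fit(X, y)
    X_selected = selector.transform(X)        # keeps features above median importance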
Ogier du Terrail, Jean. "Réseaux de neurones convolutionnels profonds pour la détection de petits véhicules en imagerie aérienne." Thesis, Normandie, 2018. http://www.theses.fr/2018NORMC276/document.
Full textThe following manuscript is an attempt to tackle the problem of small vehicle detection in vertical aerial imagery through the use of deep learning algorithms. The specificities of the task allow the use of innovative techniques leveraging the invariance and self-similarities of vehicles (automobiles, planes) seen from the sky. We start with a thorough study of single-shot detectors. Building on that, we examine the effect of adding multiple stages to the detection decision process. Finally, we come to grips with the domain adaptation problem in detection through the generation of better-looking synthetic data and its use in the training process of these detectors
Khiali, Lynda. "Fouille de données à partir de séries temporelles d’images satellites." Thesis, Montpellier, 2018. http://www.theses.fr/2018MONTS046/document.
Full textNowadays, remotely sensed images constitute a rich source of information that can be leveraged to support several applications, including risk prevention, land use planning, land cover classification and many other tasks. In this thesis, Satellite Image Time Series (SITS) are analysed to depict the dynamics of natural and semi-natural habitats. The objective is to identify, organize and highlight the evolution patterns of these areas. We introduce an object-oriented method to analyse SITS that considers segmented satellite images. Firstly, we identify the evolution profiles of the objects in the time series. Then, we analyse these profiles using machine learning methods. To identify the evolution profiles, we explore all the objects to select a subset of objects (spatio-temporal entities/reference objects) to be tracked. The evolution of the selected spatio-temporal entities is described using evolution graphs. To analyse these evolution graphs, we introduce three contributions. The first contribution explores annual SITS. It analyses the evolution graphs using clustering algorithms to identify similar evolutions among the spatio-temporal entities. In the second contribution, we perform a multi-annual cross-site analysis. We consider several study areas described by multi-annual SITS. We use clustering algorithms to identify intra- and inter-site similarities. In the third contribution, we introduce a semi-supervised method based on constrained clustering, with a procedure to select the constraints that guide the clustering and adapt the results to the user's needs (see the sketch below). Our contributions were evaluated on several study areas. The experimental results allow us to pinpoint relevant landscape evolutions in each study site. We also identify the evolutions common to the different sites. In addition, the constraint selection method proposed for the constrained clustering allows relevant entities to be identified. Thus, the results obtained using unsupervised learning were improved and adapted to meet the user's needs
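Constrained clustering can be illustrated with a COP-KMeans-style sketch, in which must-link and cannot-link pairs guide the assignments; this is the generic algorithm, not the thesis's constraint selection method, and the data are synthetic.

    import numpy as np

    def violates(i, c, labels, must_link, cannot_link):
        # True if assigning point i to cluster c breaks a constraint,
        # given the assignments already made in this pass (-1 = unassigned).
        for a, b in must_link:
            j = b if i == a else a if i == b else None
            if j is not None and labels[j] not in (-1, c):
                return True
        for a, b in cannot_link:
            j = b if i == a else a if i == b else None
            if j is not None and labels[j] == c:
                return True
        return False

    def cop_kmeans(X, k, must_link=(), cannot_link=(), n_iter=20, seed=0):
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), size=k, replace=False)].copy()
        labels = np.full(len(X), -1)
        for _ in range(n_iter):
            labels[:] = -1
            for i in range(len(X)):
                # try clusters from nearest to farthest, skipping violations
                for c in np.argsort(np.linalg.norm(centers - X[i], axis=1)):
                    if not violates(i, c, labels, must_link, cannot_link):
                        labels[i] = c
                        break
                if labels[i] == -1:
                    raise ValueError("constraints cannot be satisfied")
            for c in range(k):
                if np.any(labels == c):
                    centers[c] = X[labels == c].mean(axis=0)
        return labels, centers

    X = np.random.default_rng(1).normal(size=(60, 2))
    labels, _ = cop_kmeans(X, 3, must_link=[(0, 1)], cannot_link=[(0, 2)])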
Cuingnet, Rémi. "Contributions à l'apprentissage automatique pour l'analyse d'images cérébrales anatomiques." Phd thesis, Université Paris Sud - Paris XI, 2011. http://tel.archives-ouvertes.fr/tel-00602032.
Full textRamadier, Lionel. "Indexation et apprentissage de termes et de relations à partir de comptes rendus de radiologie." Thesis, Montpellier, 2016. http://www.theses.fr/2016MONTT298/document.
Full textIn the medical field, the computerization of the health professions and the development of the personal medical record (DMP) result in a fast increase in the volume of digital medical information. The need to convert and manipulate all this information in a structured form is a major challenge, and it is the starting point for the development of appropriate tools, for which methods from natural language processing (NLP) seem well suited. The work of this thesis lies within the field of medical document analysis and addresses the issue of representing biomedical information (especially in the radiology area) and accessing it. We propose to build a knowledge base dedicated to radiology within a general knowledge base (the lexical-semantic network JeuxDeMots). Through document analysis, we show the interest of the hypothesis of no separation between the different types of knowledge; this hypothesis is that the use of general knowledge, in addition to specialized knowledge, significantly improves the analysis of medical documents. At the level of the lexical-semantic network, the manual and automated addition of meta-information on annotations (frequency, relevance, etc.) is particularly useful. This network combines weights and annotations on typed relationships between terms and concepts, as well as an inference mechanism which aims to improve the quality and coverage of the network. We describe how, from the semantic information in the network, it is possible to enrich the raw index built for each report in order to improve information retrieval. We then present a method for extracting semantic relationships between terms or concepts. This extraction is performed using lexical patterns to which we added semantic constraints. The results show that the hypothesis of no separation between the different types of knowledge improves the relevance of indexing. The index enrichment results in improved recall, while the semantic constraints improve the precision of the relation extraction
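A toy sketch of pattern-based relation extraction with a semantic constraint is given below; the pattern, lexicon and semantic types are invented for illustration and are far simpler than the JeuxDeMots-based approach.

    import re

    # Hypothetical semantic lexicon: each term has a semantic type.
    SEMANTIC_TYPES = {"liver": "anatomy", "nodule": "finding", "lesion": "finding"}
    PATTERN = re.compile(r"(?P<a>\w+) located in the (?P<b>\w+)")

    def extract_relations(text):
        relations = []
        for m in PATTERN.finditer(text.lower()):
            a, b = m.group("a"), m.group("b")
            # Semantic constraint: keep only finding -> anatomy pairs.
            if SEMANTIC_TYPES.get(a) == "finding" and SEMANTIC_TYPES.get(b) == "anatomy":
                relations.append((a, "located_in", b))
        return relations

    print(extract_relations("A nodule located in the liver."))
    # [('nodule', 'located_in', 'liver')]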
Vétil, Rebeca. "Artificial Intelligence Methods to Assist the Diagnosis of Pancreatic Diseases in Radiology." Electronic Thesis or Diss., Institut polytechnique de Paris, 2024. http://www.theses.fr/2024IPPAT014.
Full textWith its increasing incidence and its five-year survival rate (9%), pancreatic cancer could become the third leading cause of cancer-related deaths by 2025. These figures are primarily attributed to late diagnoses, which limit therapeutic options. This thesis aims to assist radiologists in diagnosing pancreatic cancer through artificial intelligence (AI) tools that would facilitate early diagnosis. Several methods have been developed. First, a method for the automatic segmentation of the pancreas on portal CT scans was developed. To deal with the specific anatomy of the pancreas, which is characterized by an elongated shape and subtle extremities easily missed, the proposed method relied on local sensitivity adjustments using geometrical priors. Then, the thesis tackled the detection of pancreatic lesions and main pancreatic duct (MPD) dilatation, both crucial indicators of pancreatic cancer. The proposed method started with the segmentation of the pancreas, the lesion and the MPD. Then, quantitative features were extracted from the segmentations and leveraged to predict the presence of a lesion and the dilatation of the MPD. The method was evaluated on an external test cohort comprising hundreds of patients. Continuing towards early diagnosis, two strategies were explored to detect secondary signs of pancreatic cancer. The first approach leveraged large databases of healthy pancreases to learn a normative model of healthy pancreatic shapes, facilitating the identification of anomalies. To this end, volumetric segmentation masks were embedded into a common probabilistic shape space, enabling zero-shot and few-shot abnormal shape detection. The second approach leveraged two types of radiomics: deep learning radiomics (DLR), extracted by deep neural networks, and hand-crafted radiomics (HCR), derived from predefined formulas. The proposed method sought to extract non-redundant DLR that would complement the information contained in the HCR. Results showed that this method effectively detected four secondary signs of pancreatic cancer: abnormal shape, atrophy, senility, and fat replacement. To develop these methods, a database of 2800 examinations has been created, making it one of the largest for AI research on pancreatic cancer
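One simple way to realize a normative shape model is principal component analysis on flattened segmentation masks, scoring new shapes by their reconstruction error; the sketch below (synthetic data) illustrates the idea only, not the probabilistic shape space developed in the thesis.

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    healthy = (rng.random((100, 32 * 32 * 32)) > 0.5).astype(float)  # flattened masks
    pca = PCA(n_components=10).fit(healthy)   # low-dimensional healthy shape space

    def shape_anomaly_score(mask):
        # High reconstruction error = shape far from the healthy manifold.
        recon = pca.inverse_transform(pca.transform(mask[None]))
        return float(np.linalg.norm(mask - recon))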
Deshpande, Hrishikesh. "Dictionary learning for pattern classification in medical imaging." Thesis, Rennes 1, 2016. http://www.theses.fr/2016REN1S032/document.
Full textMost natural signals can be approximated by a linear combination of a few atoms of a dictionary. Such sparse representations of signals and dictionary learning (DL) methods have received special attention over the past few years. While standard DL approaches are effective in applications such as image denoising or compression, several discriminative DL methods have been proposed to achieve better image classification. In this thesis, we have shown that the dictionary size for each class is an important factor in pattern recognition applications where there exist variability differences between classes, in the case of both standard and discriminative DL methods. We validated the proposition of using a different dictionary size based on the complexity of the class data in a computer vision application, namely lip detection in face images, followed by a more complex medical imaging application, namely the classification of multiple sclerosis (MS) lesions using MR images. Class-specific dictionaries are learned for the lesions and the individual healthy brain tissues, and the size of the dictionary for each class is adapted according to the complexity of the underlying data. The algorithm is validated using 52 multi-sequence MR images acquired from 13 MS patients
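A minimal sketch of class-specific dictionaries with different sizes, classifying a sample by reconstruction error, is shown below; the atom counts and patches are synthetic assumptions, not the thesis's configuration.

    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning

    rng = np.random.default_rng(0)
    patches = {"lesion": rng.normal(size=(500, 64)),   # synthetic training patches
               "tissue": rng.normal(size=(500, 64))}
    n_atoms = {"lesion": 40, "tissue": 15}             # size follows class complexity

    dicts = {c: MiniBatchDictionaryLearning(n_components=n_atoms[c],
                                            transform_algorithm="omp",
                                            transform_n_nonzero_coefs=5,
                                            random_state=0).fit(X)
             for c, X in patches.items()}

    def classify(x):
        # Assign the class whose dictionary reconstructs x best.
        errs = {c: np.linalg.norm(x - d.transform(x[None]) @ d.components_)
                for c, d in dicts.items()}
        return min(errs, key=errs.get)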
Rodriguez, Colmeiro Ramiro German. "Towards Reduced Dose Positron Emission Tomography Imaging Using Sparse Sampling and Machine Learning." Thesis, Troyes, 2021. http://www.theses.fr/2021TROY0015.
Full textThis thesis explores the reduction of the patient radiation dose in screening Positron Emission Tomography (PET) studies. It analyses three aspects of PET imaging that can reduce the patient dose: the data acquisition, the image reconstruction and the attenuation map generation. The first part of the thesis is dedicated to PET scanner technology. Two optimization techniques are developed for a novel low-cost and low-dose scanner, the AR-PET scanner. First, a photomultiplier selection and placement strategy is created, improving the energy resolution. The second work focuses on the localization of gamma events on solid scintillation crystals. The method is based on neural networks and a single flood acquisition, resulting in an increased detector sensitivity. In the second part, PET image reconstruction on a mesh support is studied. A mesh-based reconstruction algorithm is proposed which uses a series of 2D meshes to describe the 3D radiotracer distribution. It is shown that with this reconstruction strategy the number of sample points can be reduced without losing accuracy, while enabling parallel mesh optimization. Finally, the generation of attenuation maps using deep neural networks is explored. A neural network is trained to learn the mapping from non-attenuation-corrected FDG PET images to a synthetic computerized tomography. With these approaches, this thesis lays the basis for a low-cost and low-dose PET screening system, dispensing with the need for a computed tomography image in exchange for an artificial attenuation map
Martinez, Herrera Sergio Ernesto. "Imagerie multispectrale pour améliorer la détection des lésions précancéreuses en endoscopie digestive." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLV055/document.
Full textThe evolution of gastritis into precancerous lesions follows a cascade of multiple stages. From a macroscopic point of view, the modifications of the pathological tissues display only small variations with respect to normal mucosa. Even though some features can be identified, they remain strongly subjective. The current gold standard for the diagnosis of gastric diseases is divided into two procedures. The first one is gastroendoscopy, where the stomach is visually explored under white light. The second one is biopsy collection for histological analysis. This procedure has a high probability of establishing the correct diagnosis, but it strongly depends on the accurate collection of samples from damaged tissues. This doctoral work focuses on the study of the gastric mucosa by multispectral imaging. The main contribution is the clinical study of multispectral imaging to differentiate pathologies that are poorly diagnosed or that can only be diagnosed by histological analysis. For this purpose, (1) we performed ex vivo studies in a mouse model of Helicobacter pylori infection in order to identify the wavelengths which could be used for diagnosis. (2) We propose two prototypes compatible with current gastroendoscopes to acquire multispectral images of gastric tissue: the first one is based on a filter wheel and the second one on a multispectral camera with seven channels. Additionally, (3) we present a methodology to identify pathological tissues, based on statistical features extracted from the acquired spectra and ranked according to their discriminative power, followed by supervised classification. We compare the performance of three classification algorithms (nearest neighbor, neural networks and support vector machines), with a rigorous evaluation using leave-one-patient-out cross-validation. The results demonstrate the relevance of multispectral imaging as an additional tool for an objective diagnosis
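Leave-one-patient-out cross-validation maps directly onto scikit-learn's LeaveOneGroupOut, as in the sketch below with synthetic spectra; the feature count and patient grouping are assumptions for illustration.

    import numpy as np
    from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 7))             # e.g., one feature per spectral band
    y = rng.integers(0, 2, size=120)          # healthy vs pathological
    patients = rng.integers(0, 15, size=120)  # patient id for each spectrum

    # Each fold holds out every spectrum of one patient.
    scores = cross_val_score(SVC(kernel="rbf"), X, y,
                             groups=patients, cv=LeaveOneGroupOut())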
Drago, Laetitia. "Analyse globale de la pompe à carbone biologique à partir de données en imagerie quantitative." Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS562.
Full textThe biological carbon pump (BCP) plays a central role in the global ocean carbon cycle, transporting carbon from the surface to the deep ocean and sequestering it for long periods. This work aims to analyse two key players of the BCP: zooplankton and particles. To this end, we use in situ imaging data from the Underwater Vision Profiler (UVP5) to investigate two primary axes: 1) the global distribution of zooplankton biomass and 2) carbon export in the context of a North Atlantic spring bloom. Our objectives include quantifying global zooplankton biomass, enhancing our comprehension of the BCP via morphological analysis of particles, and assessing and comparing the gravitational flux of detrital particles during the North Atlantic spring bloom using high-resolution UVP5 data. With the help of UVP5 imagery and machine learning, through habitat models using boosted regression trees, we investigate the global distribution of zooplankton biomass and its ecological implications. The results show maximum zooplankton biomass values around 60°N and 55°S and minimum values within the oceanic gyres, with a global biomass dominated by crustaceans and rhizarians. By employing machine learning techniques on globally homogeneous data, this study provides taxonomic insights into the distribution of 19 large zooplankton groups (1-50 mm equivalent spherical diameter). This first protocol estimates global, spatially resolved zooplankton biomass and community composition from in situ imaging observations of individual organisms. In addition, within the unique context of the EXPORTS 2021 campaign, we analyse UVP5 data obtained by deploying three instruments in a highly retentive eddy. After clustering the 1,720,914 images using Morphocluster, a semi-autonomous classification software, we delve into the characteristics of the marine particles, studying their morphology through an oblique framework that follows a plume of detrital particles between the surface and 800 m depth. The results of the plume-following approach show that, contrary to expectations, aggregates become larger, denser, more circular and more complex with depth. In contrast, the evolution of fecal pellets is more heterogeneous and shaped by zooplankton activity. Such results challenge previous expectations and may require a reassessment of our view of sinking aggregates and fecal pellets. We also studied concentration and carbon flux dynamics using a more traditional 1D framework, in which we explore the three key elements of flux estimation from in situ imaging data by comparing UVP5 and sediment trap flux estimates: the size range covered, the sinking rate and the carbon content. According to the current literature, neutrally buoyant sediment traps (NBST) and surface-tethered traps (STT) usually cover a size range from 10 µm to approximately 2 mm. In our study, we found that by expanding the UVP size range to 10 µm and limiting it to 2 mm, a more consistent comparison can be made between the UVP5-generated flux and the sediment trap fluxes (obtained by colleagues). However, it is worth noting that a large flux contribution remains above this size threshold, necessitating further investigation of its implications through complementary approaches such as the use of sediment traps with larger openings. This manuscript not only advances our knowledge, but also addresses critical challenges in estimating zooplankton biomass and particle dynamics during export events.
The findings of this study open up new avenues for future research on the biological carbon pump and deepen our understanding of marine ecosystems
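A habitat model of the boosted-regression-tree family can be sketched with scikit-learn's gradient boosting on synthetic environmental predictors; the predictors, log-transform and hyperparameters are assumptions for illustration, not the study's configuration.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 4))   # e.g., temperature, oxygen, chlorophyll, depth
    y = np.exp(0.5 * X[:, 0] + rng.normal(scale=0.1, size=500))  # synthetic biomass

    model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=3)
    model.fit(X, np.log1p(y))               # log-transform, common for biomass data
    predicted_biomass = np.expm1(model.predict(X))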
Alashkar, Taleb. "3D dynamic facial sequences analysis for face recognition and emotion detection." Thesis, Lille 1, 2015. http://www.theses.fr/2015LIL10109/document.
Full textIn this thesis, we have investigated the problems of identity recognition and emotion detection from animated 3D facial shapes (called 4D faces). In particular, we have studied the role of facial shape dynamics in revealing human identity and spontaneously exhibited emotion. To this end, we have adopted a comprehensive geometric framework for analyzing 3D faces and their dynamics across time. That is, a sequence of 3D faces is first split into an indexed collection of short-term sub-sequences, each represented as a matrix (subspace); these subspaces are points on a special matrix manifold called the Grassmann manifold (the set of k-dimensional linear subspaces). The geometry of the underlying space is used to effectively compare the 3D sub-sequences, compute statistical summaries (e.g. the sample mean) and densely quantify the divergence between subspaces. Two different representations have been proposed to address the problems of face recognition and emotion detection: (1) a dictionary (of subspaces) representation associated with dictionary learning and sparse coding techniques, and (2) a time-parameterized curve (trajectory) representation on the underlying space, associated with a structured-output SVM classifier for early emotion detection. Experimental evaluations conducted on the publicly available BU-4DFE, BU4D-Spontaneous and Cam3D Kinect datasets illustrate the effectiveness of these representations and of the algorithmic solutions for identity recognition and emotion detection proposed in this thesis
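The subspace representation can be sketched as follows: each short sub-sequence spans a linear subspace, and two such points on the Grassmann manifold are compared through their principal angles; the dimensions below are arbitrary assumptions.

    import numpy as np
    from scipy.linalg import orth, subspace_angles

    rng = np.random.default_rng(0)
    seq_a = rng.normal(size=(300, 10))   # hypothetical (n_features, n_frames) sub-sequence
    seq_b = rng.normal(size=(300, 10))

    # Orthonormal basis of the subspace spanned by each sub-sequence:
    # a point on the Grassmann manifold.
    U_a, U_b = orth(seq_a), orth(seq_b)

    # A geodesic-type distance from the principal angles between subspaces.
    theta = subspace_angles(U_a, U_b)
    distance = float(np.linalg.norm(theta))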
Slama, Rim. "Geometric approaches for 3D human motion analysis : application to action recognition and retrieval." Thesis, Lille 1, 2014. http://www.theses.fr/2014LIL10078/document.
Full textIn this thesis, we focus on the development of adequate geometric frameworks to model and accurately compare human motion acquired from 3D sensors. In the first framework, we address the problem of pose/motion retrieval in fully 3D reconstructed sequences. The human shape representation is formulated using the Extremal Human Curve (EHC) descriptor, extracted from the body surface. It allows efficient shape-to-shape comparison, taking benefit from Riemannian geometry in the open-curve shape space. As each human pose represented by this descriptor is viewed as a point in the shape space, we propose to model a motion sequence by a trajectory in this space. Dynamic Time Warping in the feature vector space is then used to compare different motions. In the second framework, we propose a solution for action and gesture recognition from both skeleton and depth data acquired by low-cost cameras such as the Microsoft Kinect. The action sequence is represented by a dynamical system whose observability matrix is characterized as an element of a Grassmann manifold. Thus, the recognition problem is reformulated as point classification on this manifold. Here, a new learning algorithm based on the notion of tangent spaces is proposed to improve the recognition task. The performance of our approach on several benchmarks shows high recognition accuracy with low latency
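The Dynamic Time Warping step admits a compact textbook implementation, sketched below for sequences of feature vectors; the random vectors stand in for the thesis's far richer descriptors.

    import numpy as np

    def dtw_distance(a, b):
        # a, b: sequences of feature vectors, shapes (n_frames, n_features).
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = np.linalg.norm(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    rng = np.random.default_rng(0)
    motion_a = rng.normal(size=(40, 3))
    motion_b = rng.normal(size=(55, 3))
    print(dtw_distance(motion_a, motion_b))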
Zhu, Fei. "Kernel nonnegative matrix factorization : application to hyperspectral imagery." Thesis, Troyes, 2016. http://www.theses.fr/2016TROY0024/document.
Full textThis thesis aims to propose new nonlinear unmixing models within the framework of kernel methods, and to develop the associated algorithms, in order to address the hyperspectral unmixing problem. First, we investigate a novel kernel-based nonnegative matrix factorization (NMF) model that circumvents the pre-image problem inherited from kernel machines. Within the proposed framework, several extensions are developed to incorporate common constraints raised in hyperspectral image analysis. In order to tackle large-scale and streaming data, we next extend the kernel-based NMF to an online setting, keeping a fixed and tractable complexity. Moreover, we propose a bi-objective NMF model as an attempt to combine the linear and nonlinear unmixing models; the decompositions of both the conventional NMF and the kernel-based NMF are performed simultaneously. The last part of this thesis studies a supervised unmixing model based on the correntropy maximization principle. This model is shown to be robust to outlier bands. Two correntropy-based unmixing problems are addressed, considering different constraints of the hyperspectral unmixing problem. The alternating direction method of multipliers (ADMM) is investigated to solve the related optimization problems
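For reference, the conventional linear NMF that the kernel variant extends can be sketched with Lee-Seung multiplicative updates (Frobenius loss); this baseline is not the kernel-based model proposed in the thesis.

    import numpy as np

    def nmf_multiplicative(V, k, n_iter=200, eps=1e-9, seed=0):
        # V (bands x pixels) ~ W (bands x k endmembers) @ H (k x pixels abundances)
        rng = np.random.default_rng(seed)
        n, m = V.shape
        W, H = rng.random((n, k)), rng.random((k, m))
        for _ in range(n_iter):
            H *= (W.T @ V) / (W.T @ W @ H + eps)
            W *= (V @ H.T) / (W @ H @ H.T + eps)
        return W, H

    V = np.abs(np.random.default_rng(0).normal(size=(50, 400)))  # synthetic image
    W, H = nmf_multiplicative(V, k=4)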
Galibourg, Antoine. "Estimation de l'âge dentaire chez le sujet vivant : application des méthodes d'apprentissage machine chez les enfants et les jeunes adultes." Electronic Thesis or Diss., Toulouse 3, 2022. http://thesesups.ups-tlse.fr/5355/.
Full textStatement of the problem: In the living individual, the estimation of dental age is used in dentofacial orthopedics and orthodontics, or in pediatrics, to locate the individual on his or her growth curve. In forensic medicine, the estimation of dental age allows the chronological age to be inferred, as a regression or a classification task. There are physical and radiological methods. While the latter are more accurate, there is no universal method. Demirjian created the most widely used radiological method almost 50 years ago, but it is criticized for its accuracy and for using reference tables based on a French-Canadian population sample. Objective: Artificial intelligence, and more particularly machine learning, has allowed the development of various tools capable of learning from an annotated database. The objective of this thesis was to compare the performance of different machine learning algorithms, first against two classical methods of dental age estimation, and then between themselves by adding additional predictors. Material and method: In a first part, the different methods of dental age estimation in living children and young adults are presented. The limitations of these methods are exposed, and possibilities to address them with machine learning are proposed. Using a database of 3605 panoramic radiographs of individuals aged 2 to 24 years (1734 girls and 1871 boys), different machine learning methods were tested to estimate dental age. The accuracies of these methods were compared with each other and with the classical methods of Demirjian and Willems. This work resulted in an article published in the International Journal of Legal Medicine. In a second part, the different machine learning methods are described and discussed. Then, the results obtained in the article are put in perspective with the publications on the subject in 2021. Finally, the results of the machine learning methods are discussed with respect to their use in dental age estimation. Results: The results show that all machine learning methods have better accuracy than the conventional methods tested for dental age estimation, under the conditions of their use. They also show that using the maturation stage of the third molars over a range extended to 24 years does not allow dental age to be estimated reliably enough for legal purposes. Conclusion: Machine learning methods fit into the overall process of automating dental age determination. Deep learning in particular seems worth investigating for dental age classification tasks
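The comparison of learning algorithms for age regression can be sketched as below, using mean absolute error under cross-validation; the maturation-score features and the chosen models are illustrative assumptions, not the thesis's protocol.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.linear_model import LinearRegression
    from sklearn.svm import SVR
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 8, size=(300, 7))     # e.g., one maturation stage per tooth
    age = 2.0 * X.mean(axis=1) + rng.normal(scale=0.5, size=300)  # synthetic ages

    for name, model in [("linear", LinearRegression()),
                        ("svr", SVR()),
                        ("random forest", RandomForestRegressor(n_estimators=200,
                                                                random_state=0))]:
        mae = -cross_val_score(model, X, age, cv=5,
                               scoring="neg_mean_absolute_error").mean()
        print(name, round(float(mae), 2))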