Dissertations / Theses on the topic 'Méthodes de déconvolution'
Consult the top 50 dissertations / theses for your research on the topic 'Méthodes de déconvolution.'
You can also download the full text of each publication as a PDF and read its abstract online whenever these are available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Guerchaoui, Atman. "Etude comparative des principales méthodes de déconvolution en prospection sismique." Grenoble INPG, 1988. http://www.theses.fr/1988INPG0041.
Nguyễn, Hoài Nam. "Méthodes et algorithmes de segmentation et déconvolution d'images pour l'analyse quantitative de Tissue Microarrays." Thesis, Rennes 1, 2017. http://www.theses.fr/2017REN1S104/document.
This thesis aims at developing dedicated methods for the quantitative analysis of Tissue Microarray (TMA) images acquired by fluorescence scanners. We addressed three issues in biomedical image processing: segmentation of the objects of interest (i.e. tissue samples), correction of acquisition artifacts arising during the scanning process, and improvement of the acquired image resolution, while taking into account the imaging modality and the scanner design. The developed algorithms pave the way for a novel automated TMA analysis platform, which is in high demand in cancer research today. On a TMA slide, multiple tissue samples collected from different donors are assembled according to a grid structure to facilitate their identification. In order to establish the link between each sample and its corresponding clinical data, we are interested not only in the localization of these samples but also in the computation of their array (row and column) coordinates according to the design grid, because the latter is often strongly deformed during the manufacturing of TMA slides. However, instead of computing array coordinates directly as existing approaches do, we proposed to reformulate this problem as the approximation of the deformation of the theoretical TMA grid using "thin plate splines", given the result of tissue sample localization. We combined a wavelet-based detection and an ellipse-based segmentation to eliminate false alarms and thus improve the localization of tissue samples. Owing to the scanner design, images are acquired pixel by pixel along each line, with a change of scan direction between two subsequent lines. Such a scanning system often suffers from pixel mis-positioning (jitter) due to the imperfect synchronization of mechanical and electronic components. To correct these scanning artifacts, we proposed a variational method based on the estimation of pixel displacements on subsequent lines.
This method, inspired by optical flow methods, consists in estimating a dense displacement field by minimizing an energy function composed of a nonconvex data fidelity term and a convex regularization term. We used a half-quadratic splitting technique to decouple the original problem into two smaller sub-problems: one is convex and can be solved by a standard optimization algorithm; the other is non-convex but can be solved by a complete search. To improve the resolution of the acquired fluorescence images, we introduced an image deconvolution method based on a family of convex regularizers. The considered regularizers generalize the concept of Sparse Variation, which combines the L1 norm and Total Variation (TV) to favor the co-localization of high-intensity pixels and high-magnitude gradients. Experiments showed that the proposed regularization approach produces competitive deconvolution results on fluorescence images, compared to those obtained with other approaches such as TV or the Schatten norm of the Hessian matrix.
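The half-quadratic splitting described above can be sketched on a 1-D toy version of the jitter problem. This is a hedged illustration, not the thesis implementation: the truncated-quadratic data term, the candidate shift set, and all parameter values are assumptions.

```python
import numpy as np

def estimate_jitter(ref, jittered, shifts, lam=1.0, beta=0.1, n_iter=20):
    """Toy half-quadratic splitting: alternate an exhaustive search on the
    non-convex data term with a closed-form solve of the convex part."""
    n = len(jittered)
    rho = lambda r: np.minimum(r**2, 1.0)                 # truncated quadratic (non-convex)
    L = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)    # 1-D Laplacian (smoothness)
    d = np.zeros(n)
    for _ in range(n_iter):
        # u-step (non-convex): complete search over the candidate shifts
        u = np.empty(n)
        for i in range(n):
            costs = [rho(ref[(i + s) % n] - jittered[i]) + 0.5*beta*(s - d[i])**2
                     for s in shifts]
            u[i] = shifts[int(np.argmin(costs))]
        # d-step (convex quadratic): minimize lam*d'Ld + (beta/2)*||d - u||^2
        d = np.linalg.solve(beta*np.eye(n) + 2*lam*L, beta*u)
    return np.round(d).astype(int)

# Synthetic line pair displaced by a constant 2-pixel jitter
rng = np.random.default_rng(0)
ref = 10.0*rng.standard_normal(64)
jittered = np.array([ref[(i + 2) % 64] for i in range(64)])
est = estimate_jitter(ref, jittered, shifts=range(-3, 4))
```

The split mirrors the abstract: the non-convex sub-problem is handled by brute force over a small discrete set, while the coupled smoothing term stays a linear solve.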
Sintes, Christophe. "Déconvolution bathymétrique d'images sonar latéral par des méthodes interférométriques et de traitement de l'image." Rennes 1, 2002. http://www.theses.fr/2002REN10066.
Vivet, Laurent. "Amélioration de la résolution des méthodes d'échographie ultrasonore en contrôle non destructif par déconvolution adaptative." Paris 11, 1989. http://www.theses.fr/1989PA112256.
Thibon, Louis. "Méthodes d'augmentation de résolution en microscopie optique exploitant le modelage de faisceau laser et la déconvolution." Doctoral thesis, Université Laval, 2019. http://hdl.handle.net/20.500.11794/34695.
Laser scanning microscopy is limited in lateral resolution by the diffraction of light. Superresolution methods have been developed since the 1990s to overcome this limitation. However, superresolution generally comes at the cost of greater complexity (high-power lasers, very long acquisition times, specific fluorophores) and of limitations on the observable samples. In some cases, such as Structured Illumination Microscopy (SIM) and Switching Laser Modes (SLAM), a more modest improvement in resolution is obtained with reduced complexity and fewer limitations. We propose here methods that improve the resolution while minimizing the experimental constraints and keeping most of the advantages of classical microscopy. First, we show that the resolution of confocal microscopy can be improved by twenty percent over conventional Gaussian-beam confocal microscopy by using Bessel-Gauss beams with the right pinhole size (1 Airy unit). The advantages of this strategy include simplicity of installation and use, compatibility with linear polarization, and the possibility of combining it with other resolution enhancement and superresolution strategies. We demonstrate the resolution enhancement capabilities of Bessel-Gauss beams both theoretically and experimentally on nano-spheres and biological tissue samples, with a resolution of 0.39 λ. We achieved these resolutions without any residual artifacts coming from the Bessel-Gauss beam side lobes. We also show that the resolution enhancement of Bessel-Gauss beams leads to a better statistical colocalization analysis, with fewer false positives than when using Gaussian beams. We have also used Bessel-Gauss beams of different orders to further improve the resolution by combining them in SLAM microscopy, achieving a resolution of 0.17 λ (90 nm at a wavelength of 532 nm). In a second step, we propose a method to improve the resolution of confocal microscopy by combining different laser modes and deconvolution.
Two images of the same field are acquired with the confocal microscope using different laser modes and are used as inputs to a deconvolution algorithm. The two laser modes have different point spread functions and thus provide complementary information, leading to an image with enhanced resolution compared to using a single confocal image as input to the same deconvolution algorithm. By changing the laser modes to Bessel-Gauss beams, we were able to improve the efficiency of the deconvolution algorithm and obtain images with a residual point spread function narrower than 100 nm. The proposed method requires only a few add-ons to classic confocal or two-photon microscopes. Finally, we propose a three-dimensional tomographic reconstruction method using Bessel-Gauss beams as projection tools in two-photon microscopy. By focusing Bessel-Gauss beams at an angle in two-photon microscopy, we can obtain a series of projections that can be used for tomographic reconstruction. The aim is to test the practicality of this method, which reconstructs a volume using fewer images than the plane-by-plane acquisition of classic two-photon microscopy.
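The two-input deconvolution idea can be sketched with a multi-frame Richardson-Lucy iteration. This is an illustrative stand-in: the thesis does not specify Richardson-Lucy, and the two Gaussian PSFs of different widths used here merely mimic two laser modes with complementary responses.

```python
import numpy as np

def psf(width, size=21):
    u = np.arange(size) - size//2
    h = np.exp(-u**2/(2.0*width**2))
    return h/h.sum()

def multiframe_rl(images, psfs, n_iter=200):
    """Richardson-Lucy deconvolution fed with several images of the same
    scene, each blurred by a different PSF; their update ratios are averaged."""
    x = np.full(len(images[0]), images[0].mean())
    for _ in range(n_iter):
        ratio = np.zeros_like(x)
        for y, h in zip(images, psfs):
            blurred = np.convolve(x, h, mode='same') + 1e-12
            ratio += np.convolve(y/blurred, h[::-1], mode='same')  # adjoint = flipped kernel
        x *= ratio/len(images)
    return x

# Two point sources observed through two different PSFs
truth = np.zeros(64); truth[30] = 1.0; truth[35] = 0.7
h1, h2 = psf(2.0), psf(3.0)
y1 = np.convolve(truth, h1, mode='same')
y2 = np.convolve(truth, h2, mode='same')
restored = multiframe_rl([y1, y2], [h1, h2])
```

Each observation contributes its own multiplicative correction, so the pair of PSFs constrains the estimate more than either image alone.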
Cassereau, Didier. "Nouvelles méthodes et applications de la propagation transitoire dans les milieux fluides et solides." Paris 7, 1988. http://www.theses.fr/1988PA077224.
Luczak, Anaelle. "Méthodes sismiques pour la surveillance des grandes structures du génie civil." Thesis, Nantes, 2018. http://www.theses.fr/2018NANT4096/document.
While conventional seismic studies use energetic impulse sources, recent years have seen the emergence of analyses based on ambient seismic noise. These analyses rely on cross-correlation or deconvolution operations in order to reconstruct signals that can, under certain conditions, be considered the Green's functions of the medium between two recording stations. This thesis develops methods for the monitoring of sea dikes and geological storage massifs, either by listening to ambient noise when environmental conditions are favourable, or by recording signals emitted by a controlled vibratory source. Each of these two types of structure presents specific constraints for the implementation of field measurements and for the analysis of the acquired data. This work aims to show the adequacy of the computation methods according to the acquisition system put in place, the type of waves studied (body or surface waves, depending on the application) and the observable studied (either the propagation velocity or the attenuation). The variation of these parameters is presented on different time scales, in order to detect either fast, localized phenomena induced by an external forcing, or long-term phenomena such as the aging of the structure.
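The noise cross-correlation principle can be sketched in a few lines: two stations record the same white-noise wavefield with a propagation delay, and the peak of their cross-correlation recovers the inter-station travel time. This is a minimal sketch; the sample counts and the delay are arbitrary choices, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
n, delay = 20000, 25                      # record length; travel time in samples
noise = rng.standard_normal(n)            # ambient noise wavefield
sta_a = noise                             # recording at the first station
sta_b = np.roll(noise, delay)             # same field, arriving `delay` samples later
# Circular cross-correlation via FFT; its peak lag estimates the travel time,
# i.e. the main arrival of the empirical Green's function between the stations.
xcorr = np.fft.ifft(np.fft.fft(sta_b)*np.conj(np.fft.fft(sta_a))).real
lag = int(np.argmax(xcorr))
```

With long noise records the correlation of everything except the common delayed wavefield averages out, which is why the technique tolerates uncontrolled sources.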
Combe, Pascal. "Application des méthodes de déconvolution à la caractérisation ultrasonore de l'isolant de câbles électriques, et analyse temps-fréquence des réponses acoustiques." Grenoble INPG, 1988. http://www.theses.fr/1988INPG0075.
Mena, Bernard. "Détection et analyse de sursauts gamma à l'aide du télescope SIGMA : caractérisation et calibrations de l'expérience, différentes méthodes de déconvolution des données." Toulouse 3, 1990. http://www.theses.fr/1990TOU30049.
Hassouna, Mohammad. "Développement et validation des méthodes spectroscopiques d'absorbance UV et de fluorescence appliquées à la caractérisation spatiotemporelle de la matière organique du sol extractible à l'eau (MOEE)." Aix-Marseille 1, 2006. http://theses.univ-amu.fr.lama.univ-amu.fr/2006AIX11042.pdf.
Soulez, Ferréol. "Une approche problèmes inverses pour la reconstruction de données multi-dimensionnelles par méthodes d'optimisation." Phd thesis, Université Jean Monnet - Saint-Etienne, 2008. http://tel.archives-ouvertes.fr/tel-00379735.
The "inverse problems" approach consists in seeking causes from their effects, that is, estimating the parameters describing a system from observations of it. To do so, one uses a physical model describing the cause-and-effect links between the parameters and the observations; the term "inverse" refers to the inversion of this direct model. However, while as a general rule the same causes produce the same effects, a given effect may stem from different causes, and it is often necessary to introduce priors to reduce the ambiguities of the inversion. In this work, this problem is solved by using optimization methods to estimate the parameters minimizing a cost function that combines a term derived from the data formation model and a prior term.
We use this approach to address the blind deconvolution of heterogeneous multidimensional data, that is, data whose different dimensions have different meanings and units. To this end, we established a general framework with a separable prior term, which we successfully adapted to several applications: the deconvolution of multi-spectral data in astronomy, of color images in Bayer imaging, and the blind deconvolution of biomedical video sequences (coronarography, classical and confocal microscopy).
The same approach was applied to digital holography for particle image velocimetry (DH-PIV). A hologram of spherical micro-particles is composed of diffraction patterns containing information on the 3D position and radius of these particles. Using a physical model of hologram formation, the "inverse problems" approach freed us from the problems associated with hologram restitution (border effects, twin images, etc.) and allowed us to estimate the 3D positions and radii of the particles with an accuracy improved by at least a factor of 5 over classical restitution-based methods. Moreover, this method allowed us to detect particles outside the sensor field, enlarging the volume of interest by a factor of 16.
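The cost-minimization formulation above can be illustrated on its simplest instance, penalized-least-squares deconvolution, where the data-model term and the prior term combine into a cost with a closed-form Fourier-domain minimizer. This is a generic sketch, not one of the algorithms of the thesis.

```python
import numpy as np

def deconvolve_l2(y, h, lam=1e-6):
    """Minimize ||h * x - y||^2 + lam*||x||^2 (circular convolution):
    the data-fidelity term encodes the direct model, and the quadratic
    penalty is the simplest possible prior restraining the inversion."""
    H = np.fft.fft(h, len(y))
    X = np.conj(H)*np.fft.fft(y)/(np.abs(H)**2 + lam)
    return np.fft.ifft(X).real

rng = np.random.default_rng(0)
x_true = rng.standard_normal(128)
h = np.array([0.5, 0.3, 0.2])                                  # short blur kernel
y = np.fft.ifft(np.fft.fft(x_true)*np.fft.fft(h, 128)).real    # circular blur
x_hat = deconvolve_l2(y, h)
```

Even this toy version shows the role of the prior: with lam = 0 the division blows up wherever the kernel spectrum is small, which is exactly the ambiguity the prior term is there to tame.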
Lelong, Adrien. "Méthodes de diagnostic filaire embarqué pour des réseaux complexes." Thesis, Lille 1, 2010. http://www.theses.fr/2010LIL10121/document.
The research presented in this thesis concerns the on-line diagnosis of wire networks: detecting and locating intermittent or permanent electrical faults on a system's network while this system is running. Such a diagnosis is based on the principle of reflectometry, which had until then been used for off-line diagnosis. The aim is the analysis and improvement of reflectometry methods and the implementation of the related processing, in order to automate it and embed it in the target system for real-time execution. The first contribution concerns the use of multicarrier signals to minimize interference between the running target system and the reflectometry module. Pulse deconvolution algorithms are required for this purpose; they are also used for the high-resolution processing described subsequently. Among others, a low-computational-cost semi-blind deconvolution method is proposed. Distributed reflectometry, consisting in the simultaneous injection of signals at several points of the network, is then studied. An innovative filtering method called the "selective average" is proposed as a solution to the problem of interference due to the simultaneous injection by the modules. Finally, several considerations on implementation and automation are studied, and an innovative intermittent fault detection algorithm for noisy environments is proposed.
Moreau, Frédérique. "Méthodes de traitement de données géophysiques par transformée en ondelettes." Phd thesis, Université Rennes 1, 1995. http://tel.archives-ouvertes.fr/tel-00656040.
Baudour, Alexis. "Détection de filaments dans des images 2D et 3D : modélisation, étude mathématique et algorithmes." Phd thesis, Université de Nice Sophia-Antipolis, 2009. http://tel.archives-ouvertes.fr/tel-00507520.
Mazet, Vincent. "Développement de méthodes de traitement de signaux spectroscopiques : estimation de la ligne de base et du spectre de raies." Phd thesis, Université Henri Poincaré - Nancy I, 2005. http://tel.archives-ouvertes.fr/tel-00011477.
First, a deterministic method is proposed to estimate the baseline of the spectra as the polynomial minimizing a non-quadratic cost function (Huber function or truncated parabola). In particular, the asymmetric versions are especially well suited to spectra whose peaks are positive. The minimization is carried out with the half-quadratic minimization algorithm LEGEND.
Second, we wish to estimate the line spectrum: the Bayesian approach coupled with MCMC techniques provides a very efficient framework. A first approach formalizes the problem as unsupervised myopic impulse deconvolution. In particular, the impulse signal is modeled as a Bernoulli-Gaussian process with positive support; a mixed accept-reject algorithm enables the simulation of truncated normal distributions. An interesting alternative is to treat the problem as a decomposition into elementary patterns. An original model is then introduced; its advantage is that it keeps the order of the system fixed. The index permutation (label-switching) problem is also studied and a re-indexing algorithm is proposed.
The algorithms are validated on simulated spectra and then on real infrared and Raman spectra.
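The asymmetric baseline idea can be sketched with an iteratively reweighted polynomial fit in which points lying above the current baseline (candidate peaks) are strongly down-weighted. This is a simplified stand-in for the asymmetric Huber/truncated-parabola costs and the LEGEND algorithm; the weighting rule, degree, and thresholds below are assumptions.

```python
import numpy as np

def estimate_baseline(x, y, degree=3, n_iter=30, s=0.01):
    """Polynomial baseline via asymmetric reweighting: residuals above the
    baseline get a weight that shrinks with their size, residuals below keep
    full weight, so positive peaks barely influence the fit."""
    w = np.ones_like(y)
    for _ in range(n_iter):
        coefs = np.polyfit(x, y, degree, w=w)
        base = np.polyval(coefs, x)
        r = y - base
        w = np.where(r > s, s/(np.abs(r) + 1e-12), 1.0)
    return base

# Synthetic spectrum: smooth baseline plus two narrow positive peaks
x = np.linspace(0.0, 1.0, 200)
true_base = 1.0 + 0.5*x - 0.3*x**2
peaks = np.exp(-0.5*((x - 0.3)/0.02)**2) + 0.8*np.exp(-0.5*((x - 0.7)/0.02)**2)
spectrum = true_base + peaks
base = estimate_baseline(x, spectrum)
```

The asymmetry is the essential ingredient: a symmetric robust cost would still let the positive peaks drag the polynomial upward.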
Vallée, Martin. "Etude cinématique de la rupture sismique en champ lointain : méthodes et résolution." Phd thesis, Grenoble 1, 2003. http://tel.archives-ouvertes.fr/tel-00745005.
Gillet, Guillaume. "Prédiction de la conformité des matériaux d'emballage par intégration de méthodes de déformulation et de modélisation du coefficient de partage." Thesis, Vandoeuvre-les-Nancy, INPL, 2008. http://www.theses.fr/2008INPL081N/document.
Plastic packagings are formulated with additives, which can migrate from the materials into foodstuffs. According to European directive 2002/72/EC, the suitability of plastic materials for contact with food can be demonstrated using modelling tools. Their use is however limited by the availability of some data, such as the formulation of the materials and the partition coefficients of substances between plastics and food. This work aims on the one hand to develop the ability of laboratories to identify and quantify the main substances in plastic materials, and on the other hand to develop a new method to predict partition coefficients between polymers and food simulants. Four formulations each of HDPE and PS were chosen and used throughout the work. Standard extraction methods and quantification methods using HPLC-UV-ELSD and GC-FID were compared. A new deconvolution process applied to infrared spectra of extracts was developed to identify and quantify the additives contained in HDPE. Activity coefficients in both phases were approximated through a generalized off-lattice Flory-Huggins formulation applied to plastic materials and to liquids simulating food products. Potential contact energies were calculated with an atomistic semi-empirical forcefield. The simulations demonstrated that plastic additives have a significant chemical affinity for liquids consisting of small molecules, related to the large contribution of positional entropy. Finally, decision trees combining experimental and modelling approaches to demonstrate the compliance of plastic materials are discussed.
Cherni, Afef. "Méthodes modernes d'analyse de données en biophysique analytique : résolution des problèmes inverses en RMN DOSY et SM." Thesis, Strasbourg, 2018. http://www.theses.fr/2018STRAJ055/document.
This thesis proposes new approaches to solving inverse problems in biophysics. First, we study the DOSY NMR experiment: a new hybrid regularization approach is proposed with a novel algorithm, PALMA (http://palma.labo.igbmc.fr/). This algorithm ensures the efficient analysis of real DOSY data with high precision for all signal types. Second, we study the mass spectrometry application. We propose a new dictionary-based approach dedicated to proteomic analysis, using the averagine model and a constrained minimization approach associated with a sparsity-inducing penalty. In order to improve the accuracy of the information, we propose the new SPOQ method, based on a new penalization solved with a new Forward-Backward algorithm with a locally adjusted variable metric. All our algorithms benefit from sound convergence guarantees, and have been validated experimentally on synthetic and real data.
Simonetti, Claude-Alexandre. "Développement d'un spectromètre/débitmètre neutrons transportable." Electronic Thesis or Diss., Normandie, 2023. http://www.theses.fr/2023NORMC262.
The DONEUT (DOsimètre NEUtrons) project, in which this thesis is included, addresses the following need of the civilian and military nuclear industries: a more precise determination of the neutron impact on the human body, which is fundamental in radiation protection. To fulfil these requirements, which consist of measuring an ambient or personal dose equivalent rate (H*(10) or Hp(10)) from 1 µSv/h to ~10 mSv/h in less than 10 minutes, a transportable (mass ≤ 15 kg) multi-detector cylindrical prototype has been developed, with 32 thermal neutron detectors placed at different depths. This device can reconstruct a full neutron spectrum from 0 to 20 MeV through unfolding codes such as MAXED and GRAVEL, with good gamma/neutron discrimination; the results are in good agreement with those obtained with our reference, the NNS (Nested Neutron Spectrometer).
Ben, Kahla Haithem. "Sur des méthodes préservant les structures d'une classe de matrices structurées." Thesis, Littoral, 2017. http://www.theses.fr/2017DUNK0463/document.
The classical linear algebra methods for calculating eigenvalues and eigenvectors of a matrix, or lower-rank approximations of a solution, etc., do not take the structure of matrices into account. Such structures are usually destroyed in the numerical process. Alternative structure-preserving methods are the subject of considerable interest in the community, and this thesis contributes to this field. The SR decomposition is usually implemented via the symplectic Gram-Schmidt algorithm. As in the classical case, a loss of orthogonality can occur. To remedy this, we proposed two algorithms, RSGSi and RMSGSi, in which the reorthogonalization of the current set of vectors against the previously computed set is performed twice. The loss of J-orthogonality is significantly reduced. A direct rounding error analysis of the symplectic Gram-Schmidt algorithm is very hard to accomplish; we managed to get around this difficulty and give error bounds on the loss of J-orthogonality and on the factorization. Another way to implement the SR decomposition is based on symplectic Householder transformations. An optimal choice of the free parameters provides an optimal version of the algorithm, SROSH. However, the latter may be subject to numerical instability. We proposed a new modified version, SRMSH, which has the advantage of being numerically more stable; a detailed study leads to two new, numerically more stable variants, SRMSH and SRMSH2. In order to build an SR algorithm of complexity O(n³), where 2n is the size of the matrix, a reduction to a condensed matrix form (upper J-Hessenberg form) via adequate similarities is crucial. This reduction may be handled via the JHESS algorithm. We have shown that it is possible to reduce a general matrix to upper J-Hessenberg form using only symplectic Householder transformations.
The new algorithm, called JHSH, is based on an adaptation of the SRSH algorithm. We are led to two new variant algorithms, JHMSH and JHMSH2, which are significantly more stable numerically; these algorithms behave quite similarly to the JHESS algorithm. The main drawback of all these algorithms (JHESS, JHMSH, JHMSH2) is that they may encounter fatal breakdowns, or suffer from a severe form of near-breakdown, causing an abrupt stop of the computations or serious numerical instability. This phenomenon has no equivalent in the Euclidean case. We sketch out a very efficient strategy for curing fatal breakdowns and treating near-breakdowns. The new algorithms incorporating this modification are referred to as MJHESS, MJHSH, JHM²SH and JHM²SH2. These strategies were then incorporated into the implicit version of the SR algorithm to overcome the difficulties raised by fatal breakdowns or near-breakdowns; we recall that without these strategies, the SR algorithm breaks down. Finally, in another framework of structured matrices, we presented a robust algorithm, via FFT and a Hankel matrix, based on computing approximate greatest common divisors (GCD) of polynomials, for solving the problem of blind image deconvolution. Specifically, we designed a specialized algorithm for computing the GCD of bivariate polynomials. The new algorithm is based on the fast GCD algorithm for univariate polynomials, of quadratic complexity O(n²) flops; the complexity of our algorithm is O(n² log(n)), where the size of the blurred images is n × n. Experimental results with synthetically blurred images are included to illustrate the effectiveness of our approach.
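The "reorthogonalize twice" idea behind RSGSi/RMSGSi has a well-known Euclidean analogue, classical Gram-Schmidt with reorthogonalization (CGS2). The sketch below illustrates that analogue only, not the symplectic algorithms themselves.

```python
import numpy as np

def cgs2(A):
    """Classical Gram-Schmidt with one reorthogonalization pass:
    each new column is orthogonalized against Q twice ("twice is enough"),
    which keeps Q'Q close to the identity even for ill-conditioned A."""
    m, n = A.shape
    Q = np.zeros((m, n)); R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j].copy()
        for _ in range(2):             # the second pass cures the loss of orthogonality
            c = Q[:, :j].T @ v
            v = v - Q[:, :j] @ c
            R[:j, j] += c
        R[j, j] = np.linalg.norm(v)
        Q[:, j] = v/R[j, j]
    return Q, R

# Ill-conditioned test matrix (Hilbert), where plain Gram-Schmidt degrades badly
n = 10
A = 1.0/(np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
Q, R = cgs2(A)
```

In the symplectic setting the inner product is replaced by the indefinite J-inner product, which is what makes the rounding-error analysis mentioned above so much harder.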
Sallard, Julien. "Étude d'une méthode de déconvolution adaptée aux images ultrasonores." Grenoble INPG, 1999. http://www.theses.fr/1999INPG0034.
Colicchio, Bruno. "Déconvolution adaptative en microscopie tridimensionnelle de fluorescence." Mulhouse, 2004. http://www.theses.fr/2004MULH0779.
The 3D fluorescence microscope has become the method of choice in the biological sciences for the study of living cells. However, because of distortions, the data acquired with a conventional 3D fluorescence microscope are not quantitatively meaningful for the spatial distribution or volume evaluation of fluorescent areas. Deconvolution is a solution. Direct methods have been proposed, but because of the regularization required by the ill-posed nature of the problem, they demand user expertise for the tuning of critical parameters. The first part of this work presents the automated tuning of the regularization parameter of direct methods. The aim is to allow the use of deconvolution by non-specialists, giving consistent results. The automated direct methods were applied to 3D cytology and cytogenetics data. The presented methods require a sharp characterization of the instrument, so an analysis of the influence of variations of the optical setup on the deconvolution result is also presented. The last part of the work concerns the second trend in deconvolution algorithms: taking the space-variant response into account. The solution is computed by a Monte-Carlo process, with random guesses drawn from a probability distribution function linked to the bias error between the estimate of the observed image and the raw observed image. The solution is obtained by minimizing an error criterion in image space with neighborhood constraints in object space.
Gu, Yi. "Estimation sous contrainte et déconvolution autodidacte." Paris 11, 1989. http://www.theses.fr/1989PA112063.
Reese, Daniel. "La modélisation des oscillations d'étoiles en rotation rapide." Phd thesis, Université Paul Sabatier - Toulouse III, 2006. http://tel.archives-ouvertes.fr/tel-00123615.
Rosec, Olivier. "Déconvolution aveugle multicapteur en sismique réflexion marine très haute résolution." Brest, 2000. http://www.theses.fr/2000BRES2014.
Nsiri, Benayad. "Identification et Déconvolution aveugle : Application aux signaux de sismique-réflexion sous-marine." Brest, 2004. http://www.theses.fr/2004BRES2022.
In seismic deconvolution, a blind approach must be considered when the reflectivity sequence, the source wavelet and the noise power level are unknown. Blind deconvolution aims to determine both the source wavelet and the reflectivity sequence. In this thesis, we mainly focus on blind deconvolution in the maximum likelihood and maximum a posteriori frameworks, via the SEM algorithm and a Bayesian approach using MCMC methods. A well-known difficulty is the sensitivity of these algorithms to the wavelet initialization. A new method is proposed to solve this problem: it consists in detecting the local optima of the wavelet, then using a maximum kurtosis criterion to choose the global optimum. In some cases of practical interest where the wavelet is quite long, wavelet estimators generally have high variance. We propose a new two-step approach that overcomes this problem within the framework of classical blind deconvolution techniques.
Margueritte, Laure. "Développement d’une méthode de déconvolution pharmacophorique pour la découverte accélérée d’antipaludiques chez les Rhodophytes." Thesis, Strasbourg, 2018. http://www.theses.fr/2018STRAF048/document.
Malaria was responsible for 445,000 deaths in 2016. The emergence and spread of P. falciparum resistant to artemisinin-based combinations is a major health problem in South-East Asia, and research must continue to find compounds with novel mechanisms of action. The apicoplast is an interesting target: it is a Plasmodium organelle derived from a secondary endosymbiosis with a red alga. Red algae could therefore be a special source of new antiplasmodial compounds targeting the isoprenoid biosynthesis pathway in the apicoplast. This thesis focuses on the identification of bioactive secondary metabolites via a new analytical strategy. A computer program called Plasmodesma was created to perform an automatic differential analysis of 1H-1H and 1H-13C 2D NMR spectra. Bioactive compounds are isolated and identified by hyphenated HPLC-SPE-NMR. The presence of different classes of compounds, including sterols, could explain the antiplasmodial activity of the studied red algae species.
Traullé, Benjamin. "Techniques d’échantillonnage pour la déconvolution aveugle bayésienne." Electronic Thesis or Diss., Toulouse, ISAE, 2024. http://www.theses.fr/2024ESAE0004.
This thesis addresses two main challenges in Bayesian blind deconvolution using Markov chain Monte Carlo (MCMC) methods. Firstly, it is common in Bayesian blind deconvolution to use Gaussian-type priors. However, these priors do not solve the scale ambiguity problem, which hampers the convergence of classical MCMC algorithms (slow scale sampling) and complicates the design of scale-free estimators. To overcome this limitation, a von Mises-Fisher prior is proposed, which alleviates the scale ambiguity; this approach has already demonstrated its regularization effect in other inverse problems, including optimization-based blind deconvolution. The advantages of this prior within MCMC algorithms over conventional Gaussian priors are discussed both theoretically and experimentally, especially in low dimensions. However, the multimodal nature of the posterior distribution still poses challenges and degrades the exploration of the state space, particularly with algorithms such as the Gibbs sampler. These poor mixing properties lead to suboptimal inter-mode and intra-mode exploration and can limit the usefulness of Bayesian estimators at this stage. To address this issue, we propose an original approach based on a reversible jump MCMC (RJMCMC) algorithm, which significantly improves the exploration of the state space by generating new states in high-probability regions identified in a preliminary stage. The effectiveness of the RJMCMC algorithm is demonstrated empirically on highly multimodal posteriors, particularly in low dimensions, for both Gaussian and von Mises-Fisher priors. Furthermore, the observed behavior of RJMCMC in increasing dimensions supports the applicability of this approach for sampling multimodal distributions in Bayesian blind deconvolution.
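The scale ambiguity mentioned above is easy to exhibit: rescaling the kernel by α and the signal by 1/α leaves the observation unchanged, so the likelihood alone cannot identify the scale. Constraining the kernel to the unit sphere, which is exactly the support of a von Mises-Fisher distribution, removes that degree of freedom (up to sign). A minimal sketch, with arbitrary sizes:

```python
import numpy as np

rng = np.random.default_rng(1)
kernel = rng.standard_normal(5)           # unknown blur kernel h
signal = rng.standard_normal(50)          # unknown source signal x
alpha = 3.7                               # arbitrary scale factor

y = np.convolve(kernel, signal)
y_rescaled = np.convolve(alpha*kernel, signal/alpha)
# (alpha*h, x/alpha) explains the data exactly as well as (h, x):
same_observation = np.allclose(y, y_rescaled)

# Restricting h to the unit sphere (the support of a von Mises-Fisher prior)
# pins the scale down:
h_unit = kernel/np.linalg.norm(kernel)
```

A Gaussian prior merely penalizes large scales without removing the ambiguity, which is why samplers wander slowly along this flat direction of the posterior.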
Deslandes, Antoine. "Développement d'une méthode de calcul de profils d'absorption par déconvolution : application à l'étude de l'absorption d'antibiotiques." Paris 11, 1995. http://www.theses.fr/1995PA114843.
Gauthier, Marianne. "Etude de l’influence de l’entrée artérielle tumorale par modélisation numérique et in vitro en imagerie de contraste ultrasonore. : application clinique pour l’évaluation des thérapies ciblées en cancérologie." Thesis, Paris 11, 2011. http://www.theses.fr/2011PA11T088.
Dynamic contrast-enhanced ultrasonography (DCE-US) is currently used as a functional imaging technique for evaluating anti-angiogenic therapies. A mathematical model has been developed by the UPRES EA 4040, Paris-Sud university and the Gustave Roussy Institute to evaluate semi-quantitative microvascularization parameters directly from time-intensity curves. But DCE-US evaluation of such parameters does not yet take into account physiological variations of the patient or even the way the contrast agent is injected as opposed to other functional modalities (dynamic magnetic resonance imaging or perfusion scintigraphy). The aim of my PhD was to develop a deconvolution process dedicated to the DCE-US imaging, which is currently used as a routine method in other imaging modalities. Such a process would allow access to quantitatively-defined microvascularization parameters since it would provide absolute evaluation of the tumor blood flow, the tumor blood volume and the mean transit time. This PhD has been led according to three main goals. First, we developed a deconvolution method involving the creation of a quantification tool and validation through studies of the microvascularization parameter variability. Evaluation and comparison of intra-operator variabilities demonstrated a decrease in the coefficients of variation from 30% to 13% when microvascularization parameters were extracted using the deconvolution process. Secondly, we evaluated sources of variation that influence microvascularization parameters concerning both the experimental conditions and the physiological conditions of the tumor. Finally, we performed a retrospective study involving 12 patients for whom we evaluated the benefit of the deconvolution process: we compared the evolution of the quantitative and semi-quantitative microvascularization parameters based on tumor responses evaluated by the RECIST criteria obtained through a scan performed after 2 months. 
Deconvolution is a promising process that may allow an earlier, more robust evaluation of anti-angiogenic treatments than the DCE-US method in current clinical use.
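As a rough illustration of the kind of quantitative deconvolution described above, the sketch below recovers the flow-scaled residue function from a time-intensity curve given an arterial input function, using plain Tikhonov-regularized least squares. This is an assumed stand-in, not the thesis's actual DCE-US pipeline; the function name, the gamma-variate input in the usage below, and the regularization weight `lam` are all illustrative.

```python
import numpy as np

def deconvolve_tic(aif, tic, dt, lam=1e-3):
    """Estimate flow * R(t) (residue function scaled by blood flow) from a
    time-intensity curve, assuming tic = dt * (aif convolved with flow*R).
    Tikhonov-regularized least squares on the discrete convolution matrix."""
    n = len(aif)
    # Lower-triangular (causal) convolution matrix: A[i, j] = dt * aif[i-j]
    A = np.array([[dt * aif[i - j] if i >= j else 0.0
                   for j in range(n)] for i in range(n)])
    # Regularized normal equations: (A^T A + lam*I) r = A^T tic
    r = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ tic)
    flow = r.max()                                   # blood flow ~ peak of flow*R
    volume = dt * (r.sum() - 0.5 * (r[0] + r[-1]))   # trapezoidal area under flow*R
    mtt = volume / flow                              # mean transit time = volume/flow
    return r, flow, volume, mtt
```

The three returned scalars correspond to the absolute parameters the abstract mentions: blood flow, blood volume, and mean transit time.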
Letierce, François. "Approche calculatoire pour la déconvolution en aveugle : application à l'imagerie SIMS." Thesis, Evry-Val d'Essonne, 2007. http://www.theses.fr/2007EVRY0038.
Secondary Ion Mass Spectrometry (SIMS) creates images of atomic distributions on a sample's surface. The point spread function (PSF) is unknown, and blind deconvolution is used to remove the associated blur. This ill-conditioned problem is solved by constraining its solution (regularization). The optimal degree of regularization depends on a parameter to be determined. This parameter is found, along with those of the PSF, by the generalized cross-validation method. A calibration phase reduces the search space for the PSF parameters. The Gaussian model used for the PSF is exploited to accelerate the computations. The image is deconvolved by solving a large linear system with the conjugate gradient method. A preconditioner making use of the PSF separability (isotropic or anisotropic) speeds up convergence.
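The Gaussian-PSF-plus-conjugate-gradient machinery in this abstract can be sketched in a few lines. Periodic boundaries and an identity-matrix Tikhonov term are simplifying assumptions here; the thesis's preconditioner and the generalized cross-validation parameter search are not reproduced.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def deconv_cg(y, sigma, lam=1e-2, maxiter=500):
    """Tikhonov-regularized deconvolution of a 2D image blurred by an
    isotropic Gaussian PSF, solved with conjugate gradient.  The normal
    equations (H^T H + lam*I) x = H^T y are symmetric positive definite,
    so CG applies; with periodic boundaries H is diagonalized by the FFT,
    and the Gaussian transfer function is separable in fx and fy."""
    shape = y.shape
    fx = np.fft.fftfreq(shape[0])[:, None]
    fy = np.fft.fftfreq(shape[1])[None, :]
    Hf = np.exp(-2.0 * (np.pi * sigma) ** 2 * (fx ** 2 + fy ** 2))

    def matvec(v):
        V = np.fft.fft2(v.reshape(shape))
        HtHv = np.fft.ifft2((Hf ** 2) * V).real      # H^T H v (Hf is real here)
        return (HtHv + lam * v.reshape(shape)).ravel()

    A = LinearOperator((y.size, y.size), matvec=matvec, dtype=float)
    b = np.fft.ifft2(Hf * np.fft.fft2(y)).real.ravel()   # H^T y
    x, info = cg(A, b, maxiter=maxiter)
    return x.reshape(shape), info
```

Because the Gaussian transfer function factors as a product of a term in `fx` and a term in `fy`, a separable PSF model of this kind is exactly what makes the fast evaluations (and the preconditioning mentioned in the abstract) possible.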
Larue, Anthony. "Blancheur et non-gaussianité pour la déconvolution aveugle de données bruitées : application aux signaux sismiques." Grenoble INPG, 2006. https://tel.archives-ouvertes.fr/tel-00097161.
This thesis deals with the blind deconvolution of noisy data, in particular seismic data. Inverting the model requires selecting higher-order statistics according to the distribution of the signals. To do so, we use assumptions of whiteness or of non-Gaussianity, and we propose blind deconvolution algorithms in both the time domain and the frequency domain. Whiteness is measured by the mutual information rate, and non-Gaussianity by the negentropy. We then study the sensitivity of the different algorithms to additive white Gaussian noise on the data. Both theoretically and in practice, on real and synthetic data, non-Gaussianity appears to be the criterion providing the best trade-off between deconvolution quality and noise amplification.
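A toy version of the negentropy criterion mentioned here can be written with Hyvärinen's log-cosh approximation; this may differ from the thesis's exact estimator and is only a sketch of the idea.

```python
import numpy as np

def negentropy_proxy(y, n_ref=100_000, seed=0):
    """Approximate negentropy J(y) ~ (E[G(y)] - E[G(nu)])^2 with
    G(u) = log cosh(u) and nu a standard Gaussian reference sample.
    The input is standardized first; larger values mean 'more
    non-Gaussian', which a blind deconvolution criterion can maximize."""
    rng = np.random.default_rng(seed)
    y = (np.asarray(y) - np.mean(y)) / np.std(y)
    nu = rng.standard_normal(n_ref)
    G = lambda u: np.log(np.cosh(u))
    return float((G(y).mean() - G(nu).mean()) ** 2)
```

For a super-Gaussian source such as a sparse reflectivity series, this proxy is markedly larger than for Gaussian noise, which is what makes such a measure usable as a deconvolution contrast.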
Merhi, Bleik Josephine. "Modeling, estimation and simulation into two statistical models : quantile regression and blind deconvolution." Thesis, Compiègne, 2019. http://www.theses.fr/2019COMP2506.
This thesis is dedicated to the estimation of two statistical models: the simultaneous quantile regression model and the blind deconvolution model. It therefore consists of two parts. In the first part, we are interested in estimating several quantiles simultaneously in a regression context via the Bayesian approach. Assuming that the error term has an asymmetric Laplace distribution, and using the relation between two distinct quantiles of this distribution, we propose a simple, fully Bayesian method that satisfies the non-crossing property of quantiles. For implementation, we use a Metropolis-Hastings-within-Gibbs algorithm to sample the unknown parameters from their full conditional distributions. The performance and competitiveness of the method against other alternatives are shown on simulated examples. In the second part, we focus on recovering both the inverse filter and the noise level of a noisy blind deconvolution model in a parametric setting. After characterizing both the true noise level and the inverse filter, we provide a new estimation procedure that is simpler to implement than existing methods. We also consider the estimation of the unknown discrete distribution of the input signal. We derive strong consistency and asymptotic normality for all our estimates. A simulation study, including a comparison with another method, empirically demonstrates the computational performance of our estimation procedures.
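As a hedged illustration of the Bayesian machinery in the first part, here is a minimal random-walk Metropolis sampler for a single regression quantile under the asymmetric Laplace working likelihood. The thesis samples several non-crossing quantiles jointly with Metropolis-within-Gibbs; this sketch handles only one quantile, with a flat prior, and all tuning constants are illustrative.

```python
import numpy as np

def ald_loglik(beta, X, y, tau):
    """Log-likelihood kernel of the asymmetric Laplace working model:
    residuals u = y - X @ beta scored with the check loss
    rho_tau(u) = u * (tau - 1{u < 0})."""
    u = y - X @ beta
    return -np.sum(u * (tau - (u < 0)))

def rw_metropolis(X, y, tau, n_iter=5000, step=0.1, seed=0):
    """Random-walk Metropolis for the regression coefficients of one
    quantile (a simplified stand-in for sampling several quantiles
    jointly).  With a flat prior the acceptance ratio reduces to a
    likelihood ratio."""
    rng = np.random.default_rng(seed)
    beta = np.zeros(X.shape[1])
    ll = ald_loglik(beta, X, y, tau)
    draws = []
    for _ in range(n_iter):
        prop = beta + step * rng.standard_normal(beta.size)
        ll_prop = ald_loglik(prop, X, y, tau)
        if np.log(rng.uniform()) < ll_prop - ll:
            beta, ll = prop, ll_prop
        draws.append(beta.copy())
    return np.array(draws)
```

With `tau = 0.5` this targets the median regression; other values of `tau` move the fitted line to the corresponding conditional quantile.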
Machkour, Deshayes Nadia. "Méthode de déconvolution appliquée à l'étude de la densité surfacique d'un arc électrique de coupure basse tension à partir de mesures magnétiques." Clermont-Ferrand 2, 2003. http://www.theses.fr/2003CLF22456.
Akhdar, Oussama. "Conception d'une méthode de déconvolution pour l'estimation des angles d'arrivée sur une antenne : Application au sondage spatio-temporel du canal de propagation." Limoges, 2009. https://aurore.unilim.fr/theses/nxfile/default/596a9344-e103-4ea0-b609-44fa032ba0d3/blobholder:0/2009LIMO4028.pdf.
This thesis focused on the study and implementation of a spatio-temporal channel sounder and dealt with the characterization of the radio channel. First, the propagation phenomena are recalled and a state of the art of the different spatio-temporal measurement techniques and their corresponding processing methods is presented. An original method for spatial characterization is then designed, implementing a new algorithm based on antenna rotation and a corresponding deconvolution procedure. Applying this method allowed us to achieve spatial sounding of a propagation channel using a patch antenna with a 70° radiation beamwidth. The developed spatial measurement technique is then associated with a temporal measurement technique (sliding correlation), leading to spatio-temporal channel sounding. The achieved measurements allowed us to integrate the channel response into a numerical transmission simulator in order to perform realistic simulations of the propagation channel. The set of results obtained during the sounding establishes a starting point for further, more intensive measurement campaigns, leading to realistic models of spatio-temporal channels.
Arnaout, Mohamad Abed Al Rahman. "Caractérisation d'une cellule de mesure électro-acoustique-pulsée pour la qualification électrostatique des diélectriques spatiaux : modélisation électro-acoustique et traitement du signal." Toulouse 3, 2011. http://thesesups.ups-tlse.fr/1398/.
Dielectric materials are frequently used in satellite structures, for example as thermal blankets. Subjected to electron irradiation in the space environment, they can cause in-orbit satellite anomalies. One of these aspects is the charge accumulation due to the flux of charged space particles, particularly electrons. This accumulation increases the local electric field in the material bulk and can lead to an electrostatic surface discharge (ESD). This phenomenon can cause serious damage to the satellite's structure or performance. In order to better control the discharge, it is necessary to clarify the nature, position and quantity of stored charges over time, and to understand the dynamics of charge transport in solid dielectrics. The pulsed electro-acoustic (PEA) method gives access to these features, such as the spatial distribution of space charges. One of the weaknesses of the current technique is its spatial resolution, about 10 µm, while the dielectric materials used in satellite structures have thicknesses of 50 and 75 µm. This work aims at improving the spatial resolution of the PEA method. Whatever the measurement principle considered, the best spatial resolution currently achievable is 10 µm. This is a drawback when considering rather thin insulating layers (on the order of tens of microns), as found in some capacitors or in films on the outer parts of satellites; a better resolution (1 µm) is expected to provide a better description of charge generation at metal-dielectric interfaces or under low-energy electron beams.
Masucci, Antonia Maria. "Moments method for random matrices with applications to wireless communication." Thesis, Supélec, 2011. http://www.theses.fr/2011SUPL0011/document.
In this thesis, we focus on the analysis of the moments method, showing its importance for the application of random matrices to wireless communication. This study is conducted in the free probability framework, where the concept of free convolution/deconvolution can be used to predict the spectrum of sums or products of random matrices that are asymptotically free. In this framework, we show that the moments method is very appealing and powerful for deriving the moments/asymptotic moments in cases where the property of asymptotic freeness does not hold. In particular, we focus on Gaussian random matrices of finite dimensions and on structured matrices such as Vandermonde matrices. We derive the explicit series expansion of the eigenvalue distribution of various models, such as noncentral Wishart distributions as well as correlated zero-mean Wishart distributions. We describe an inference framework flexible enough to be applied to repeated combinations of random matrices. The results we present are implemented by generating subsets, permutations, and equivalence relations, and we developed a Matlab routine to perform convolution or deconvolution numerically from a set of input moments. We apply this inference framework to the study of cognitive networks, as well as to the study of wireless networks with high mobility. We analyze the asymptotic moments of random Vandermonde matrices with entries on the unit circle, and use them together with polynomial expansion detectors to design a low-complexity linear MMSE decoder that recovers the signal transmitted by mobile users to one or two base stations, represented by uniform linear arrays.
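For intuition about asymptotic moments of Wishart-type matrices, the sketch below checks the classical Marchenko-Pastur moment formula (a Narayana-number expansion) against a finite random matrix. This is a textbook special case, not the thesis's general finite-dimension or Vandermonde machinery.

```python
import numpy as np
from math import comb

def mp_moment(k, c):
    """k-th asymptotic moment of the eigenvalue distribution of
    W = X X^T / n, where X is p x n with i.i.d. unit-variance entries
    and c = p/n: m_k = sum_r c^r * C(k, r) * C(k-1, r) / (r + 1)
    (the Marchenko-Pastur moments, via Narayana numbers)."""
    return sum(c ** r * comb(k, r) * comb(k - 1, r) / (r + 1)
               for r in range(k))

def empirical_moments(p, n, kmax, seed=0):
    """Empirical moments (1/p) tr(W^k) of one Wishart realization."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((p, n))
    W = X @ X.T / n
    return [np.trace(np.linalg.matrix_power(W, k)) / p
            for k in range(1, kmax + 1)]
```

Already for p = 200 the empirical moments sit within a few percent of the asymptotic formula, which is the kind of agreement that makes moment-based inference on finite matrices practical.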
Perrone, Mario. "Contribution à la déconvolution de signaux monodimensionnels : méthode de restauration fondée sur l'analyse de forme : application à la sommation synchrone de signaux biomédicaux." Nice, 1994. http://www.theses.fr/1994NICE4709.
The aim of this study is the deconvolution of jitter from an averaged repetitive signal, for biomedical applications. The analysis and modelling of deconvolution problems for monodimensional signals allow us to state the problem and to point out all the theoretical and practical difficulties that arise in deconvolution. A review of different methods is proposed and discussed along with their fields of application. A new, original deconvolution method based on shape analysis is then presented. This method belongs to the class of positive-signal restoration techniques, its novelty being the use of shape variation. A comparison with a regularization method is studied, and some simulation results are promising. The repetitive nature of certain biomedical signals, such as the cardiac signal, requires synchronous averaging to increase the signal-to-noise ratio and extract more information. Furthermore, in the presence of jitter, we show that signal averaging can be written as a convolution product. In this case, a time-delay estimation between two signal realizations is applied to enhance the averaged signals. The P wave of the EKG, which is particularly difficult to observe and to synchronize, is restored on the one hand with a resynchronization method, and on the other hand with two methods for deconvolving the averaged signal: a regularization method, and our method based on shape variation. The results obtained in simulation and on real data show the interest of deconvolution for improving the averaged signal.
Mugnier, Laurent. "Problèmes inverses en Haute Résolution Angulaire." Habilitation à diriger des recherches, Université Paris-Diderot - Paris VII, 2011. http://tel.archives-ouvertes.fr/tel-00654835.
Nguyen, Xuan Truong. "Étude des matériaux irradiés sous faisceau d'électrons par méthode électro-acoustique pulsée (PEA)." Toulouse 3, 2014. http://thesesups.ups-tlse.fr/2361/.
Dielectric materials are frequently used as electrical insulators in space applications. Due to their dielectric nature, these materials are likely to accumulate electric charges during service. Under certain critical conditions, these internal or surface space charges can lead to an electrostatic surface discharge. To understand these phenomena, an experimental device has been developed in the laboratory, which allows us to reproduce the electron irradiation conditions encountered in space. The aim of our study is to characterize the electrical behavior of insulating materials irradiated by an electron beam, to investigate charge storage and transport phenomena, and to anticipate electrostatic discharges. In this work, a device based on the pulsed electro-acoustic (PEA) technique was chosen and installed in the irradiation chamber. It allows us to obtain the spatial distribution of the charges injected between two periods of irradiation and during relaxation. However, the PEA method offers a limited resolution and does not allow the detection of injected charges when they are too close to the surface. First, we analyzed two signal-processing parameters, which we call the spreading factor and the resolution factor. A preliminary post-irradiation study in air of experimental measurements showed that the choice of the resolution factor is important for the analysis and interpretation of the signal when the space charge is localized near the surface. A comparison with the spreading parameter used in some deconvolution techniques was then established. Secondly, space charge distribution measurements in vacuum were carried out on polytetrafluoroethylene (PTFE) films irradiated by an electron beam in the range [10-100] keV, and the results obtained were compared to theoretical predictions. This work allows us to define the improvements necessary for in-situ space charge determination.
Nguyen, Xuan Truong. "Etude de matériaux diélectriques irradiés sous faisceau d'électrons par méthode Electro-Acoustique Pulsée (PEA)." Phd thesis, Institut National Polytechnique de Toulouse - INPT, 2014. http://tel.archives-ouvertes.fr/tel-00926826.
Ygouf, Marie. "Nouvelle méthode de traitement d'images multispectrales fondée sur un modèle d'instrument pour la haut contraste : application à la détection d'exoplanètes." Phd thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00843202.
Masucci, Antonia Maria. "La méthode des moments pour les matrices aléatoires avec application à la communication sans fil." Phd thesis, Supélec, 2011. http://tel.archives-ouvertes.fr/tel-00805578.
Irnaka, Theodosius Marwan. "3D elastic full waveform inversion for subsurface characterization. Study of a shallow seismic multicomponent field data." Thesis, Université Grenoble Alpes, 2021. http://www.theses.fr/2021GRALU004.
Full Waveform Inversion (FWI) is an iterative data-fitting procedure between observed data and synthetic data, the latter calculated by solving the wave equation. FWI aims at reconstructing detailed information on the subsurface physical properties. FWI has developed rapidly in the past decades, thanks to the increase of computational capability and the development of acquisition technology. FWI has been applied at a broad range of scales, including the global, lithospheric, crustal, and near-surface scales. In this manuscript, we investigate the inversion of a multicomponent source-and-receiver near-surface field dataset using a viscoelastic full waveform inversion algorithm for a shallow seismic target. The target is a trench line buried at approximately 1 m depth. We present the pre-processing of the data, including a matching-filter correction to compensate for different source and receiver coupling conditions during the acquisition, as well as a dedicated multi-step workflow for the reconstruction of both P-wave and S-wave velocities. Our implementation is based on viscoelastic modeling using a spectral-element discretization to accurately account for the complexity of wave propagation in this shallow region. We illustrate the stability of the inversion by starting from different initial models, based either on dispersion-curve analysis or on homogeneous models consistent with first arrivals; we recover similar results in both cases. We also illustrate the importance of taking attenuation into account by comparing elastic and viscoelastic results. The 3D results make it possible to recover and precisely locate the trench line. They also exhibit another trench-line structure, in a direction forming a 45-degree angle with the direction of the targeted trench line; this new structure had previously been interpreted as an artifact in former 2D inversion results.
The archaeological interpretation of this new structure is still a matter of discussion. We also perform three different experiments to study the effect of multicomponent data on this FWI application. The first experiment is a sensitivity-kernel analysis of several wave packets (P-wave, S-wave, and surface wave) on a simple 3D model, with sources and receivers oriented along Cartesian directions. The second experiment is a 3D elastic inversion based on synthetic data (using Cartesian-oriented sources) and field data (using a Galperin source) with various component combinations; sixteen combinations are analyzed for each case. In the third experiment, we decimate the acquisition of the second experiment. Through these experiments we demonstrate a significant benefit of multicomponent-data FWI in terms of model and data misfit. At the shallow seismic scale, inversions including the horizontal components give a better depth reconstruction. Under decimation of the acquisition, inversion using heavily decimated 9C seismic data still produces results similar to inversion using 1C seismic data from a dense acquisition.
Deloule, Sybelle. "Développement d'une méthode de caractérisation spectrale des faisceaux de photons d'énergies inférieures à 150 keV utilisés en dosimétrie." Thesis, Paris 11, 2014. http://www.theses.fr/2014PA112273/document.
In the field of dosimetry, knowledge of the whole photon fluence spectrum is essential. In the low-to-medium energy range (i.e. E < 150 keV), the LNHB possesses five X-ray tubes and iodine-125 brachytherapy seeds, both emitting high fluence rates. The performance of calculation (either Monte Carlo codes or deterministic software) is limited by increasing uncertainties on fundamental parameters at low energies, and by modelling issues. Therefore, direct measurement using a high-purity germanium detector is preferred, even though it requires a time-consuming set-up and mathematical methods to infer the impinging spectrum from the measured one (such as stripping, model fitting or Bayesian inference). Concerning brachytherapy, the knowledge of the seeds' parameters has been improved. Moreover, various calculated X-ray tube fluence spectra have been compared to measured ones after unfolding. The results of all these methods have then been assessed, as well as their impact on dosimetric parameters.
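Among the unfolding methods cited (stripping, model fitting, Bayesian inference), stripping is the simplest to sketch. Assuming a known response matrix in which a photon of energy bin j deposits counts only in channels i ≤ j, it peels the spectrum off from the highest energy down; this is an idealized toy, not the LNHB's actual procedure.

```python
import numpy as np

def strip_spectrum(measured, R):
    """Naive stripping unfolding.  R[i, j] is the detector response in
    channel i to a photon of energy bin j; since deposition cannot
    exceed the photon energy, R is upper triangular and R[j, j] holds
    the full-energy-peak efficiency.  Working from the highest energy
    down, each bin's partial-deposition counts are subtracted from the
    lower channels before those channels are unfolded in turn."""
    n = len(measured)
    incident = np.zeros(n)
    residual = np.asarray(measured, dtype=float).copy()
    for j in range(n - 1, -1, -1):
        incident[j] = residual[j] / R[j, j]
        residual -= incident[j] * R[:, j]
    return incident
```

Mathematically this is just back-substitution on an upper-triangular system, which is why stripping is exact for a noiseless, perfectly known response but propagates counting noise from high channels downward.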
Beaudoin, Normand. "Méthode mathématique et numérique de haute précision pour le calcul des transformées de Fourier, intégrales, dérivées et polynômes splines de tout ordre ; Déconvolution par transformée de Fourier et spectroscopie photoacoustique à résolution temporelle." Thèse, Université du Québec à Trois-Rivières, 1999. http://depot-e.uqtr.ca/6708/1/000659516.pdf.
Touhami, Younès. "Identification spatio-temporelle d'une source de chaleur dans un milieu diffusif par résolution d'un problème inverse." Aix-Marseille 1, 1996. http://www.theses.fr/1996AIX11059.
Pham, Mai-Quyen. "Seismic wave field restoration using sparse representations and quantitative analysis." Thesis, Paris Est, 2015. http://www.theses.fr/2015PESC1028/document.
This thesis deals with two different problems within the framework of convex and nonconvex optimization. The first is an application to multiple removal in seismic data with adaptive filters, and the second is an application to a blind deconvolution problem that produces characteristics closest to the Earth's layers. More precisely: unveiling meaningful geophysical information from seismic data requires dealing with both random and structured "noises". As their amplitude may be greater than that of the signals of interest (primaries), additional prior information is especially important for efficient signal separation. We address here the problem of multiple reflections, caused by the wave field bouncing between layers. Since only approximate models of these phenomena are available, we propose a flexible framework for time-varying adaptive filtering of seismic signals, using sparse representations based on inaccurate templates. We recast the joint estimation of adaptive filters and primaries in a new convex variational formulation. This approach allows us to incorporate plausible knowledge about noise statistics, data sparsity and slow filter variation in parsimony-promoting wavelet transforms. The designed primal-dual algorithm solves a constrained minimization problem that alleviates standard regularization issues in finding hyperparameters. The approach demonstrates significantly good performance in low signal-to-noise-ratio conditions, both for simulated and real field seismic data. In seismic exploration, a seismic signal (e.g. the primary signal) is often represented as the result of a convolution between the "seismic wavelet" and the reflectivity series. The second goal of this thesis, presented in Chapter 6, is to deconvolve both from the seismic signal. The main idea of this work is to use the additional premise that the reflections occur sparsely, for which a study of the "sparsity measure" is carried out.
Some well-known methods that fall into this category have been proposed, such as [Sacchi et al., 1994; Sacchi, 1997]. We propose a new penalty based on a smooth approximation of the l1/l2 function, which leads to a difficult nonconvex minimization problem. We develop a proximal-based algorithm to solve variational problems involving this function, and we derive theoretical convergence results. We demonstrate the effectiveness of our method through a comparison with a recent alternating optimization strategy dealing with the exact l1/l2 term.
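One possible smoothed l1/l2 sparsity measure of the kind discussed here is shown below; the exact smoothing used in the thesis may differ, and the constants `alpha` and `eta` are illustrative.

```python
import numpy as np

def smoothed_l1l2(x, alpha=1e-3, eta=1e-3):
    """Smooth surrogate of the nonconvex l1/l2 sparsity ratio: the l1
    term is smoothed to sum(sqrt(x_i^2 + alpha^2) - alpha) and the l2
    term to sqrt(||x||^2 + eta^2), so the ratio is differentiable
    everywhere, including at 0.  Smaller values indicate sparser
    vectors (for a fixed l2 norm)."""
    x = np.asarray(x, dtype=float)
    l1_s = np.sum(np.sqrt(x ** 2 + alpha ** 2) - alpha)
    l2_s = np.sqrt(np.sum(x ** 2) + eta ** 2)
    return l1_s / l2_s
```

Minimizing such a ratio in a penalized deconvolution promotes spiky reflectivity estimates while remaining scale-invariant, which is the usual motivation for preferring l1/l2 over a plain l1 penalty.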
Meresescu, Alina-Georgiana. "Inverse Problems of Deconvolution Applied in the Fields of Geosciences and Planetology." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS316/document.
The inverse problem field is a domain at the border between applied mathematics and physics that encompasses solutions to mathematical optimization problems. In the case of 1D deconvolution, the discipline provides a formalism for designing solutions within its two main approaches: regularization-based and Bayesian-based inverse problems. Under the data deluge, geosciences and planetary sciences require more and more complex algorithms to obtain pertinent information. In this thesis, we solve three constrained 1D deconvolution problems with the regularization-based inverse-problem methodology: in hydrology, in seismology and in spectroscopy. For each of the three problems, we pose the direct problem and the inverse problem, and we propose a specific algorithm to reach the solution. We define the algorithms as well as the different strategies for determining the hyper-parameters. Furthermore, tests on synthetic and real data are presented and commented on, both from the point of view of the inverse problem formulation and from that of the application field. Finally, the proposed algorithms aim at making the inverse problem methodology approachable for the geoscience community.
Vissouvanadin, Soubaretty Bertrand. "Matériaux de câble à isolation synthétique pour des applications au transport d'énergie HVDC." Toulouse 3, 2011. http://thesesups.ups-tlse.fr/1271/.
This work deals with the development of High Voltage Direct Current (HVDC) cables based on cross-linked polyethylene (XLPE). The development of such a cable is confronted with the problem of space charge build-up, which is linked to the nature of the semiconducting electrodes and of the insulating material. In the framework of this thesis, the pulsed electro-acoustic (PEA) method has been used to study space charge build-up in both plaques and model cables. The impact of cross-linking by-products on heterocharge accumulation has been highlighted by investigations of XLPE conditioning, and a model based on heterogeneous polarization has been established to describe heterocharge build-up. Results obtained on different insulating formulations have shown a clear impact of additives on space charge build-up. Furthermore, the study of semiconducting layers has enabled us to identify the effect of the different compounds, such as carbon black and the polymer matrix, on charge build-up. For model cables, a deconvolution method for the raw PEA signal, which takes into account the cylindrical geometry as well as the attenuation and dispersion of the acoustic waves, has been developed. Measurements on non-treated model cables have shown a displacement of the point where the field is maximal, from the inner to the outer semiconducting electrode, due to massive heterocharge build-up adjacent to the external electrode. The impact of a thermal gradient on space charge build-up has also been addressed in this work.