Dissertations on the topic "Résolution directe et inverse des problèmes"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the top 35 dissertations for research on the topic "Résolution directe et inverse des problèmes".
Sassi, Mohamed. „Résolution des problèmes direct et inverse pour la détermination et le contrôle des frontières mobiles“. Lyon, INSA, 1997. http://www.theses.fr/1997ISAL0080.
The objective of this thesis is to solve two problems dealing with moving boundary problems. Direct problem: we establish a relation called the "Moknine equation", which makes it possible to produce the exact solution of moving boundary problems. The procedure consists of splitting the formulation in two: first, the solution that does not account for the discontinuity at the moving boundary is easily calculated; then the final solution, which corresponds to the discontinuity, is deduced. To illustrate the methodology, the case of a solid-liquid phase-change problem is considered. Inverse problem: in order to control a solidification process, the thermal parameters at the moving boundary are prescribed (velocity, thermal flux and interface shape), and the heat flux at the fixed boundaries that produces the desired behaviour is estimated. The technique used to solve this inverse boundary problem is the space marching method. Because of the difficulties encountered in applying the Weber and Raynaud schemes due to the moving boundary, a new scheme is proposed and applied in 1D and 2D cases. Its reliability is demonstrated through numerical simulations.
Manoochehrnia, Pooyan. „Characterisation of viscoelastic films on substrate by acoustic microscopy. Direct and inverse problems“. Electronic Thesis or Diss., Normandie, 2024. http://www.theses.fr/2024NORMLH38.
In the framework of this PhD thesis, thick and thin films deposited on a substrate are characterised using acoustic microscopy via direct and inverse problem-solving algorithms. Strohm's method is used for the direct problem, while a variety of mathematical models, including the Debye series model (DSM), the transmission line model (TLM) and a spectral method using the ratio between multiple reflections (MRM), are used to solve the inverse problem. A specific acoustic microscopy set-up is used, mounting plane-wave high-frequency (50 MHz and 200 MHz) transducers instead of the traditional focused transducers used for acoustic imaging, and acquiring full-wave A-scans, which can be extended to the bulk analysis of consecutive scans. The models are validated experimentally on a thick epoxy-resin film about 100 μm thick and a thin polish film of about 8 μm. The characterised parameters include mechanical parameters (e.g. density and thickness) as well as viscoelastic parameters (e.g. longitudinal acoustic velocity and acoustic attenuation) and, occasionally, the transducer phase shift.
Sabouroux, Pierre. „Résolution de problèmes directs et inverses en électromagnétisme. Approche expérimentale“. Habilitation à diriger des recherches, Université de Provence - Aix-Marseille I, 2006. http://tel.archives-ouvertes.fr/tel-00358355.
- scattered or radiated electromagnetic fields
- electromagnetic characteristics of materials
in the microwave frequency domain
Gassa, Narimane. „Méthodes numériques pour la résolution de problèmes cliniques en électrophysiologie cardiaque“. Electronic Thesis or Diss., Bordeaux, 2024. http://www.theses.fr/2024BORD0045.
Cardiovascular diseases are the world's leading cause of death. They represent a group of conditions that affect the heart and blood vessels, including coronary heart disease, heart failure, arrhythmias and valvular disease, among others. Ectopic arrhythmias, such as premature ventricular contractions (PVCs), involve abnormal electrical impulses that disrupt the regular rhythm of the heart, leading to premature contractions. While occasional PVCs may be benign, frequent or complex occurrences can indicate underlying heart issues. Understanding and addressing cardiac arrhythmias is crucial for managing cardiovascular health and preventing more severe complications. There is therefore a need for accurate diagnostic tools and targeted interventions, such as cardiac ablations, to address these conditions effectively. Our research is focused on the field of heart electrophysiology, where we employ multiscale mathematical models from ion channels through cells to tissues and organs. The prime objective is to leverage numerical methods in order to improve patient care in cardiac medicine, specifically for the non-invasive characterization of ectopic arrhythmias. For this purpose, we delved into the study of electrocardiographic imaging (ECGI), a well-established technique that has evolved over the years and shows significant potential to advance safe cardiac mapping. Despite its limitations, some of which we have also investigated, ECGI remains a valuable tool in our exploration of improved mapping methods. In our pursuit of more innovative and accurate solutions, we introduced a novel approach that shifts from the conventional ECGI methodology, offering a more tailored and patient-specific workflow. This new method revolves around the use of personalized propagation models with a trade-off between good accuracy and computational efficiency, making it feasible for the workflow to be integrated into a clinical time frame.
Aires, Filipe. „Problèmes inverses et réseaux de neurones : application à l'interféromètre haute résolution IASI et à l'analyse de séries temporelles“. Paris 9, 1999. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=1999PA090006.
Cointepas, Yann. „Modélisation homotopique et segmentation 3D du cortex cérébral à partir d'IRM pour la résolution des problèmes directs et inverses en EEG et en MEG“. Phd thesis, Télécom ParisTech, 1999. http://tel.archives-ouvertes.fr/tel-00005652.
Cointepas, Yann. „Modélisation homotopique et segmentation tridimensionnelle du cortex cérébral à partir d'IRM pour la résolution des problèmes directs et inverses en EEG et en MEG“. Paris, ENST, 1999. http://www.theses.fr/1999ENST0025.
Cornaggia, Rémi. „Développement et utilisation de méthodes asymptotiques d'ordre élevé pour la résolution de problèmes de diffraction inverse“. Electronic Thesis or Diss., Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLY012.
The purpose of this work was to develop new methods to address inverse problems in elasticity, taking advantage of the presence of a small parameter in the considered problems by means of higher-order asymptotic expansions. The first part is dedicated to the localization and size identification of a buried inhomogeneity Bᵗʳᵘᵉ in a 3D elastic domain. To this end, we focused on the study of functionals J(Bₐ) quantifying the misfit between Bᵗʳᵘᵉ and a trial inhomogeneity Bₐ. Such functionals are to be minimized w.r.t. some or all of the characteristics of the trial inclusion Bₐ (location, size, mechanical properties ...) to find the best agreement with Bᵗʳᵘᵉ. To this end, we produced an expansion of J with respect to the size a of Bₐ, providing a polynomial approximation that is easier to minimize. This expansion, established up to O(a⁶) in a volume integral equation framework, is justified by an estimate of the residual. A suitable identification procedure is then given and supported by numerical illustrations for simple obstacles in the full space ℝ³. The main purpose of the second part is to characterize a microstructured two-phase layered 1D inclusion of length L, supposing its low-frequency transmission eigenvalues (TEs) are already known. These are computed as the eigenvalues of the so-called interior transmission problem (ITP). To provide a convenient invertible model while accounting for microstructure effects, we then rely on homogenized approximations of the exact ITP for the periodic inclusion. Focusing on the leading-order homogenized ITP, we first provide a straightforward method to recover the macroscopic parameters (L and material contrast) of such an inclusion. To access the period of the microstructure, higher-order homogenization is finally addressed, with emphasis on the need for suitable boundary conditions.
Mugnier, Laurent. „Problèmes inverses en Haute Résolution Angulaire“. Habilitation à diriger des recherches, Université Paris-Diderot - Paris VII, 2011. http://tel.archives-ouvertes.fr/tel-00654835.
Ben, Salem Nabil. „Modélisation directe et inverse de la dispersion atmosphérique en milieux complexes“. Thesis, Ecully, Ecole centrale de Lyon, 2014. http://www.theses.fr/2014ECDL0023.
The aim of this study is to develop an inverse atmospheric dispersion model for crisis management in urban areas and industrial sites. The inverse mode allows for the reconstruction of the characteristics of a pollutant source (emission rate, position) from concentration measurements, by combining a direct dispersion model and an inversion algorithm, and assuming that both the site topography and the meteorological conditions are known. The direct models used in this study, named SIRANE and SIRANERISK, are both operational "street network" models. These are based on the decomposition of the urban atmosphere into two sub-domains: the urban boundary layer and the urban canopy, represented as a series of interconnected boxes. Parametric laws govern the mass exchanges between the boxes, under the assumption that pollutant dispersion within the canopy can be fully simulated by modelling three main bulk transfer phenomena: channelling along street axes, transfers at street intersections, and vertical exchange between a street canyon and the overlying atmosphere. The first part of this study is devoted to a detailed validation of these direct models in order to test the parameterisations implemented in them. This is achieved by comparing their outputs with wind tunnel experiments on the dispersion of steady and unsteady pollutant releases in idealised urban geometries. In the second part we use these models and experiments to test the performance of an inversion algorithm, named REWind. The specificity of this work is twofold: the inversion algorithm is applied in urban-like geometries, using an operational urban dispersion model as the direct model, and it uses as input data instantaneous concentration signals registered at fixed receptors, not only time-averaged or ensemble-averaged concentrations.
The application of the inverse approach using instantaneous concentration signals rather than averaged concentrations showed that the REWind model generally provides reliable estimates of the total pollutant mass discharged at the source. However, the algorithm has some difficulties in estimating both the emission rate and the position of the source. We also show that the performance of the inversion algorithm is significantly influenced by the cost function used for the optimization, the number of receptors, and the parameterizations adopted in the direct atmospheric dispersion model.
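Since dispersion is linear in the source strength, the core of such a quadratic-cost inversion can be sketched in a few lines. The example below is purely illustrative and is not the REWind algorithm: the transfer coefficients, receptor values and variable names are all invented for the sketch.

```python
# Toy source-rate inversion: concentrations at receptors satisfy c_i = a_i * q,
# where the coefficients a_i come from a direct dispersion model and q is the
# unknown emission rate. Minimizing the quadratic cost sum_i (a_i*q - c_i)^2
# gives the closed-form least-squares estimate below. All numbers are made up.

def estimate_rate(transfer, measured):
    # argmin_q sum (a*q - c)^2  =>  q = (a . c) / (a . a)
    num = sum(a * c for a, c in zip(transfer, measured))
    den = sum(a * a for a in transfer)
    return num / den

transfer = [0.5, 0.2, 0.1]                    # hypothetical a_i from the direct model
true_q = 4.0
noise = [0.01, -0.02, 0.005]                  # small measurement errors
measured = [a * true_q + e for a, e in zip(transfer, noise)]

q_hat = estimate_rate(transfer, measured)
print(round(q_hat, 2))
```

Estimating the source position as well makes the cost nonlinear in the unknowns, which is consistent with the difficulties reported in the abstract above.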
Gaultier, Clément. „Conception et évaluation de modèles parcimonieux et d'algorithmes pour la résolution de problèmes inverses en audio“. Thesis, Rennes 1, 2019. http://www.theses.fr/2019REN1S009/document.
Today's challenges in audio and acoustic signal processing inverse problems are multiform. Addressing these problems often requires additional appropriate signal models due to their inherent ill-posedness. This work focuses on designing and evaluating audio reconstruction algorithms. It shows how various sparse models (analysis, synthesis, plain, structured or "social") are particularly suited for single- or multichannel audio signal reconstruction. The core of this work notably identifies the limits of state-of-the-art evaluation of audio declipping methods and proposes a rigorous large-scale evaluation protocol to determine the most appropriate methods depending on the context (music or speech, moderately or highly degraded signals). Experimental results demonstrate substantial quality improvements for some newly considered testing configurations. We also show the computational efficiency of the different methods and considerable speed improvements. Additionally, part of this work is dedicated to the sound source localization problem, which we address with a "virtually supervised" machine learning technique. Experiments with this method show promising results on distance and direction-of-arrival estimation.
Aiboud, Fazia. „Méthodes approchées pour la résolution de problèmes inverses : identification paramétrique et génération de formes“. Thesis, Clermont-Ferrand 2, 2013. http://theses.bu.uca.fr/nondiff/2013CLF22399_AIBOUD.pdf.
This work was realized in collaboration between the LIMOS laboratory and the IBC company. This company develops a generic simulation and modeling framework that generates multi-scale models of biological systems, in particular synthetic human organs. The framework combines different models at two description scales: macroscopic, with differential equations modeling the organ's operation, and microscopic, with cellular automata modeling cellular interactions. The aim of this work is to propose methods to solve two problems: parameter identification for the differential equations, and determination of the rules of the cellular automata that model the organ structure. The methods proposed for the macroscopic model are based on metaheuristics (simulated annealing and a genetic algorithm); they determine the parameters of the differential equations from a set of temporal observations. Heuristics (a greedy algorithm and deterministic local search) and metaheuristics (genetic algorithm, iterated local search and simulated annealing) are proposed for the microscopic model; they determine the transition function and the number of generations of the cellular automaton generating a shape as close as possible to a target shape. Different approaches, encodings, criteria, neighborhood systems and interaction systems are proposed. Experiments were carried out to generate several types of shapes (filled, symmetric, and arbitrary shapes).
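As a rough illustration of the metaheuristic side of such parameter identification, the following self-contained sketch uses simulated annealing to recover the decay rate of a toy differential equation dx/dt = -k x from temporal observations. This is a generic sketch on assumed synthetic data, not the thesis code; the cooling schedule, neighbourhood width and model are all illustrative choices.

```python
import math
import random

# Identify the decay rate k of dx/dt = -k*x from temporal observations
# by simulated annealing on the sum-of-squares misfit. Toy example only.

def simulate(k, times, x0=1.0):
    # Closed-form solution of dx/dt = -k*x.
    return [x0 * math.exp(-k * t) for t in times]

def cost(k, times, observed):
    return sum((m - o) ** 2 for m, o in zip(simulate(k, times), observed))

def anneal(times, observed, k0=0.1, temp=1.0, cooling=0.995, steps=4000, seed=0):
    rng = random.Random(seed)
    k, c = k0, cost(k0, times, observed)
    best_k, best_c = k, c
    for _ in range(steps):
        cand = abs(k + rng.gauss(0.0, 0.05))   # random neighbour, kept positive
        cc = cost(cand, times, observed)
        # Accept improvements always; accept worse moves with a probability
        # that shrinks as the temperature decreases.
        if cc < c or rng.random() < math.exp(-(cc - c) / max(temp, 1e-12)):
            k, c = cand, cc
            if c < best_c:
                best_k, best_c = k, c
        temp *= cooling                         # geometric cooling schedule
    return best_k

times = [i * 0.2 for i in range(25)]
observed = simulate(0.7, times)                 # synthetic observations, true k = 0.7
k_hat = anneal(times, observed)
print(round(k_hat, 2))
```

A genetic algorithm, the other metaheuristic named in the abstract, would replace the single-candidate random walk with a population of candidate parameter vectors evolved by selection, crossover and mutation.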
Fritsch, Jean-François. „Propagation des ondes dans les guides partiellement enfouis : résolution du problème direct et imagerie par méthode de type échantillonnage“. Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAE001.
This work concerns the non-destructive testing of partially buried or immersed slender structures, such as a steel cable partially buried in concrete or a steel plate partially immersed in liquid sodium. Such structures can be seen as the junction of two closed waveguides. For computation, the open part of the structure is truncated in the transverse direction with PMLs. As a result, a partially buried waveguide can be treated as the junction of two closed waveguides, in one of which wave propagation is governed by an equation involving complex coefficients due to the presence of the PMLs. This observation led us to tackle first the simpler case of the junction of two closed acoustic waveguides. For this simple case, we proposed a strategy to solve the inverse problem based, on the one hand, on the introduction of so-called reference fields, which are the total-field responses of the defect-free structure to incident fields coming from both half-guides, and, on the other hand, on the reciprocity of the Green function of the defect-free structure. Following this strategy, we obtained an efficient modal formulation of the LSM which enabled us to retrieve defects. In this simple case, we took advantage of the completeness of the modes to analyze the forward and inverse problems. The loss of completeness of the modes in the half-guide truncated in the transverse direction with PMLs led us to study the forward problem with Kondratiev theory. The tools introduced for the junction of two closed waveguides were adapted to solve the inverse problem. Finally, we tackled the more complex, but more realistic, case of an elastic waveguide partially immersed in a fluid. For this difficult case, we developed adapted computational tools and extended the previously introduced tools to solve the inverse problem.
Cherni, Afef. „Méthodes modernes d'analyse de données en biophysique analytique : résolution des problèmes inverses en RMN DOSY et SM“. Thesis, Strasbourg, 2018. http://www.theses.fr/2018STRAJ055/document.
This thesis aims at proposing new approaches to solve inverse problems in biophysics. First, we study the DOSY NMR experiment: a new hybrid regularization approach is proposed, with a novel PALMA algorithm (http://palma.labo.igbmc.fr/). This algorithm ensures the efficient analysis of real DOSY data with high precision for all data types. Second, we study the mass spectrometry application. We propose a new dictionary-based approach dedicated to proteomic analysis, using the averagine model and a constrained minimization approach associated with a sparsity-inducing penalty. In order to improve the accuracy of the information, we propose a new SPOQ method based on a new penalization, solved with a new Forward-Backward algorithm with a locally adjusted variable metric. All our algorithms benefit from sound convergence guarantees and have been validated experimentally on synthetic and real data.
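The forward-backward (proximal gradient) scheme underlying such sparsity-penalized reconstructions can be sketched generically. The example below is plain ISTA on a tiny least-squares problem; it is not the PALMA or SPOQ algorithm itself (those add hybrid penalties and a variable metric), and the matrix, data and penalty weight are arbitrary illustrative choices.

```python
# Minimal forward-backward (ISTA) sketch for the sparsity-penalized problem
#     min_x 0.5*||A x - b||^2 + lam*||x||_1
# A gradient step on the smooth term is followed by the proximity operator
# of the l1 penalty (soft-thresholding), which drives small entries to zero.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def grad(A, b, x):
    # Gradient of the smooth term: A^T (A x - b).
    r = [ri - bi for ri, bi in zip(matvec(A, x), b)]
    return [sum(A[i][j] * r[i] for i in range(len(A))) for j in range(len(x))]

def soft_threshold(v, t):
    # Proximity operator of t*||.||_1.
    return [max(abs(vi) - t, 0.0) * (1.0 if vi >= 0 else -1.0) for vi in v]

def ista(A, b, lam=0.1, step=0.1, iters=500):
    x = [0.0] * len(A[0])
    for _ in range(iters):
        g = grad(A, b, x)
        x = soft_threshold([xi - step * gi for xi, gi in zip(x, g)], step * lam)
    return x

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, 0.0, 1.0]        # consistent with x = (1, 0): a sparse solution is expected
x_hat = ista(A, b)
print([round(v, 2) for v in x_hat])
```

The soft-thresholding step is what produces exact zeros in the solution, which is the behaviour sparsity-inducing penalties are chosen for.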
Ferrier, Renaud. „Stratégies de résolution numérique pour des problèmes d'identification de fissures et de conditions aux limites“. Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLN034/document.
The goal of this thesis is to study and develop methods to solve two types of identification problems in the framework of elliptic equations. As those problems are known to be particularly unstable, the proposed methods are accompanied by regularization procedures that ensure that the obtained solutions keep a physical meaning. Firstly, we study the resolution of the Cauchy problem (boundary condition identification) by the Steklov-Poincaré method. We start by proposing improvements based on the Krylov solver used, in particular by introducing a regularization method that consists in truncating the Ritz value decomposition of the operator in question. We then study the estimation of uncertainties by means of techniques stemming from Bayesian inversion. Finally, we aim at solving more demanding problems, namely a time-transient problem and a non-linear case, and we give some elements to carry out resolutions on geometries with a very high number of degrees of freedom, with the help of domain decomposition. As for the problem of crack identification by the reciprocity gap method, we first propose and numerically test some ways to stabilize the resolution (use of different test functions, a posteriori minimization of the gradients, or Tikhonov regularization). We then present another variant of the reciprocity gap method that is applicable in cases where the measurements are incomplete. This method, based on a Petrov-Galerkin approach, is confronted, among others, with an experimental case. Finally, we investigate some ideas that allow extending the reciprocity gap method to the identification of non-planar cracks.
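The idea behind truncating a spectral (here, Ritz-like) decomposition to regularize an unstable inversion can be seen on a toy diagonal operator. This is only a schematic illustration, not the thesis method; the eigenvalues, data and threshold below are invented for the sketch.

```python
# Spectral-truncation regularization on a toy diagonal operator: data
# components associated with small eigenvalues amplify measurement noise
# when inverted, so they are discarded below a chosen threshold.

def solve_truncated(eigvals, rhs, threshold):
    # Apply 1/s only on the stable spectral components (s > threshold).
    return [ri / s if s > threshold else 0.0 for s, ri in zip(eigvals, rhs)]

eigvals = [1.0, 0.5, 1e-8]            # an ill-conditioned (diagonal) operator
x_true = [1.0, 2.0, 3.0]
rhs = [s * x for s, x in zip(eigvals, x_true)]
noisy = [r + 1e-6 for r in rhs]       # a tiny measurement perturbation

naive = solve_truncated(eigvals, noisy, threshold=0.0)
regularized = solve_truncated(eigvals, noisy, threshold=1e-4)
print(naive[2], regularized[2])       # the naive inverse amplifies the noise wildly
```

The truncated solution loses the unstable component entirely but keeps the stable ones close to the truth, which is the usual trade-off of spectral cut-off regularization.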
Giraud, François. „Analyse des modèles particulaires de Feynman-Kac et application à la résolution de problèmes inverses en électromagnétisme“. Phd thesis, Université Sciences et Technologies - Bordeaux I, 2013. http://tel.archives-ouvertes.fr/tel-00834920.
Der volle Inhalt der QuelleCantalloube, Faustine. „Détection et caractérisation d'exoplanètes dans des images à grand contraste par la résolution de problème inverse“. Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAY017/document.
Direct imaging of exoplanets provides valuable information about the light they emit, their interactions with their host star's environment, and their nature. In order to image such objects, advanced data processing tools adapted to the instrument are needed. In particular, the presence of quasi-static speckles in the images, due to optical aberrations distorting the light from the observed star, prevents planetary signals from being distinguished. In this thesis, I present two innovative image processing methods, both based on an inverse problem approach, enabling the disentanglement of the quasi-static speckles from the planetary signals. My work consisted of improving these two algorithms in order to be able to process on-sky images. The first one, called ANDROMEDA, is an algorithm dedicated to point source detection and characterization via a maximum likelihood approach. ANDROMEDA makes use of the temporal diversity provided by the image field rotation during the observation to recognize the deterministic signature of a rotating companion over the stellar halo. From applying the original version to real data, I proposed and qualified improvements in order to deal with the non-stable large-scale structures due to the adaptive optics residuals and with the remaining level of correlated noise in the data. Once ANDROMEDA became operational on real data, I analyzed its performance and its sensitivity to the user parameters, proving the robustness of the algorithm. I also conducted a detailed comparison with the other algorithms widely used by the exoplanet imaging community today, showing that ANDROMEDA is a competitive method with practical advantages. In particular, it is the only method that allows a fully unsupervised detection. Through the numerous tests performed on different data sets, ANDROMEDA proved its reliability and efficiency in extracting companions in a rapid and systematic way (with only one user parameter to be tuned).
From these applications, I identified several perspectives whose implementation could significantly improve the performance of the pipeline. The second algorithm, called MEDUSAE, consists in jointly estimating the aberrations (responsible for the speckle field) and the circumstellar objects by relying on a coronagraphic image formation model. MEDUSAE exploits the spectral diversity provided by multispectral data. In order to refine the inversion strategy and probe the most critical parameters, I applied MEDUSAE to a data set generated with the model used in the inversion. To investigate further the impact of the discrepancy between the image model and real images, I applied the method to realistic simulated images. At last, I applied MEDUSAE to real data and, from the preliminary results obtained, I identified the important inputs required by the method and proposed leads that could be followed to make this algorithm operational for on-sky data.
Napal, Kevish. „Sur l'utilisation de méthodes d'échantillonnages et des signatures spectrales pour la résolution de problèmes inverses en diffraction“. Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLX102.
This thesis is a contribution to inverse scattering theory. We are more specifically interested in the non-destructive testing of heterogeneous materials, such as composite materials, using acoustic waves. Monitoring this type of material in an industrial environment is of major importance, but their complex structure makes this task difficult. The so-called sampling methods seem very promising for addressing this issue. We develop these techniques to detect the appearance of defects from far field data. The defects considered are impenetrable Neumann obstacles, of which we distinguish two categories, each requiring a specific treatment: cracks, and obstacles with non-empty interior. Thanks to the two complementary factorizations of the far field operator that we establish, we show that it is possible to approach the solution of the interior transmission problem (ITP) from the data. The ITP is a system of partial differential equations that takes into account the physical parameters of the material being surveyed. We show that it is then possible to detect an anomaly by comparing the solutions of two different ITPs, one associated with measurements made before the defect appeared and the other with measurements made after. The validity of the described method requires avoiding particular frequencies, namely the elements of the ITP spectrum for which this problem is not well posed. We show that this spectrum is an infinite, countable set without finite accumulation points. In the last chapter, we use the recent notion of artificial backgrounds to image crack networks embedded in a homogeneous background. This approach allows us to design a transmission problem through the choice of the artificial background, for instance one made of an obstacle. The associated spectrum is then sensitive to the presence of cracks inside the artificial obstacle, which makes it possible to quantify the crack density locally.
However, the computation of the spectrum requires data at several frequencies and is computationally expensive. We propose an alternative method using only data at a fixed frequency, which consists in working with the solutions of the ITP instead of its spectrum.
Fontchastagner, Julien. „Résolution du problème inverse de conception d'actionneurs électromagnétiques par association de méthodes déterministes d'optimisation globale avec des modèles analytiques et numériques“. Phd thesis, Toulouse, INPT, 2007. https://hal.science/tel-02945546v1.
The work presented in this thesis brings a new methodology to solve the inverse problem of electromagnetic actuator design. After treating the general aspects of the problem, we choose to solve it with deterministic global optimization methods, which do not require any starting point, handle every kind of variable, and guarantee that the global optimum is obtained. Hitherto used only with simple models, we apply them to analytical models based on an analytical resolution of the magnetic field, using less restrictive hypotheses. These models are thus more complex and require us to extend our optimization algorithm. A full finite element software package was then created, equipped with a procedure that permits the evaluation of the average torque in the particular case of a magnet machine. The initial problem was reformulated and solved by integrating the numerical torque constraint, the analytical model being used as a guide for the new algorithm.
Fontchastagner, Julien. „Résolution du problème inverse de conception d'actionneurs électromagnétiques par association de méthodes déterministes d'optimisation globale avec des modèles analytiques et numériques“. Phd thesis, Toulouse, INPT, 2007. http://oatao.univ-toulouse.fr/7621/1/fontchastagner.pdf.
Alonzo, Flavien. „Méthodes numériques pour le Glioblastome Multiforme et pour la résolution de problèmes inverses autour des systèmes de réaction-diffusion“. Electronic Thesis or Diss., Ecole centrale de Nantes, 2022. http://www.theses.fr/2022ECDN0059.
Glioblastoma Multiforme is the most frequent and deadliest brain tumour. Mathematics stands as an innovative tool to enhance patient care in the context of personalized medicine. This PhD showcases two major contributions to this theme. The first contribution concerns modelling and simulating a realistic spreading of the tumour cells in Glioblastoma Multiforme from a patient's diagnosis. This work models tumour-induced angiogenesis; a numerical scheme and algorithm are used to ensure the positivity of the solutions, and the simulations are compared with empirical knowledge from medicine. The second contribution concerns parameter estimation for reaction-diffusion models. The developed method solves inverse problems by solving two partial differential equation systems with a functional constraint, without using statistical tools. The numerical resolution of such problems is given and showcased on two example models with synthetic data. This method enables the parameters of a model to be calibrated from data that are sparse in time.
Abboud, Feriel. „Restoration super-resolution of image sequences : application to TV archive documents“. Thesis, Paris Est, 2017. http://www.theses.fr/2017PESC1038/document.
The last century has witnessed an explosion in the amount of video data stored by holders such as the National Audiovisual Institute, whose mission is to preserve and promote the content of French broadcast programs. The cultural value of these records is increased by commercial re-exploitation through recent visual media. However, the perceived quality of the old data fails to satisfy current public demand. The purpose of this thesis is to propose new methods for restoring video sequences supplied from television archive documents, using modern optimization techniques with proven convergence properties. In a large number of restoration issues, the underlying optimization problem is made up of several functions which might be convex and not necessarily smooth. In such instances, the proximity operator, a fundamental concept in convex analysis, appears as the most appropriate tool. These functions may also involve arbitrary linear operators that need to be inverted in a number of optimization algorithms. In this spirit, we developed a new primal-dual algorithm for computing non-explicit proximity operators based on forward-backward iterations. The proposed algorithm is accelerated thanks to the introduction of a preconditioning strategy and a block-coordinate approach in which, at each iteration, only a "block" of data is selected and processed according to a quasi-cyclic rule. This approach is well suited to large-scale problems since it reduces the memory requirements and accelerates the convergence speed, as illustrated by experiments in deconvolution and deinterlacing of video sequences. Afterwards, close attention is paid to the study of distributed algorithms from both theoretical and practical viewpoints. We proposed an asynchronous extension of the dual forward-backward algorithm that can be efficiently implemented on a multi-core architecture.
In our distributed scheme, the primal and dual variables are considered private and are spread over multiple computing units that operate independently of one another. Nevertheless, communication between these units following a predefined strategy is required in order to ensure convergence toward a consensus solution. We also address in this thesis the problem of blind video deconvolution, which consists in inferring, from an input degraded video sequence, both the blur filter and a sharp video sequence. A solution can be reached by resorting to nonconvex optimization methods that alternately estimate the unknown video and the unknown kernel. In this context, we proposed a new blind deconvolution method that allows us to implement numerous convex and nonconvex regularization strategies, which are widely employed in signal and image processing.
Monnier, Jean-Baptiste. „Quelques contributions en classification, régression et étude d'un problème inverse en finance“. Phd thesis, Université Paris-Diderot - Paris VII, 2011. http://tel.archives-ouvertes.fr/tel-00650930.
Abou, Khachfe Refahi. „Résolution numérique de problèmes inverses 2D non linéaires de conduction de la chaleur par la méthode des éléments finis et l'algorithme du gradient conjugué : validation expérimentale“. Nantes, 2000. http://www.theses.fr/2000NANT2045.
Der volle Inhalt der QuelleOmar, Oumayma. „Sur la résolution des problèmes inverses pour les systèmes dynamiques non linéaires. Application à l’électrolocation, à l’estimation d’état et au diagnostic des éoliennes“. Thesis, Grenoble, 2012. http://www.theses.fr/2012GRENT083/document.
This thesis mainly concerns the resolution of dynamic inverse problems involving nonlinear dynamical systems. A set of techniques based on trains of past measurements saved on a sliding window was developed. First, the measurements are used to generate a family of graphical signatures, used as a classification tool to discriminate between different values of the variables to be estimated for a given nonlinear system. This technique was applied to two problems: the electrolocation problem of a robot with an electrical sense, and the problem of state estimation in nonlinear dynamical systems. Besides these two applications, receding-horizon inversion techniques were developed for the fault-diagnosis problem of a wind turbine proposed as an international benchmark. These techniques are based on the minimization of quadratic criteria built on knowledge-based models.
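The idea of inverting a train of past measurements over a sliding window can be illustrated on a toy linear system, where the quadratic criterion reduces to an ordinary least-squares fit of the state at the start of the window (the matrices A, C and window length N below are illustrative only, unrelated to the thesis's nonlinear models or the wind-turbine benchmark):

```python
import numpy as np

# Toy observable linear system x_{k+1} = A x_k, y_k = C x_k
A = np.array([[0.9, 0.2], [0.0, 0.8]])
C = np.array([[1.0, 0.0]])
N = 8  # sliding-window length

def window_inverse(y_window):
    """Recover the state at the start of the window by minimizing
    the quadratic criterion sum_k ||y_k - C A^k x0||^2 over x0."""
    blocks, Ak = [], np.eye(2)
    for _ in range(N):
        blocks.append(C @ Ak)
        Ak = Ak @ A
    O = np.vstack(blocks)                       # observability-style matrix
    x0, *_ = np.linalg.lstsq(O, y_window, rcond=None)
    return x0

# Simulate N noiseless measurements, then invert the window.
x_true = np.array([1.0, -0.5])
ys, x = [], x_true.copy()
for _ in range(N):
    ys.append(C @ x)
    x = A @ x
x0_hat = window_inverse(np.concatenate(ys))
```

For noiseless data on an observable pair (A, C), the window's initial state is recovered exactly; the thesis's nonlinear setting replaces this linear least-squares step with numerical minimization of the same kind of criterion.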
Biret, Maëva. „Contribution à la résolution de problèmes inverses sous contraintes et application de méthodes de conception robuste pour le dimensionnement de pièces mécaniques de turboréacteurs en phase avant-projets“. Thesis, Paris 6, 2016. http://www.theses.fr/2016PA066294/document.
The aim of this PhD dissertation is to propose a new approach to improve and accelerate preliminary design studies for turbofan engine components. This approach consists in a comprehensive methodology for robust design under constraints, following three stages: dimension reduction and metamodeling, robust design under constraints, and finally inverse problem solving under constraints. These are the three main subjects of this PhD dissertation. Dimension reduction is an essential pre-processing step for any study. Its aim is to keep only the inputs with large effects on a selected output. This selection reduces the size of the domain on which the study is performed, which lowers its computational cost and eases the (qualitative) understanding of the system of interest. Metamodeling also contributes to these two objectives by replacing the time-consuming computer code with a faster metamodel that adequately approximates the relationship between the system inputs and the studied output. Robust design under constraints is a bi-objective optimization in which different uncertainty sources are included. First, uncertainties must be collected and modeled. Then a method for propagating the uncertainties through the computation code must be chosen in order to estimate the moments (mean and standard deviation) of the output distribution. Optimizing these two moments constitutes the two robust design objectives. Finally, a multi-objective optimization method has to be chosen to find a robust optimum under constraints. The development of methods to solve ill-posed inverse problems is the innovative part of this PhD dissertation. These problems can have infinitely many solutions constituting non-convex or even disjoint sets. Inversion is considered here as a complement to robust design in the case where the obtained optimum does not satisfy one of the constraints.
Inverse methods then make it possible to solve this problem by finding several input datasets which satisfy all the constraints and a condition of proximity to the optimum. The aim is to reach a target value of the unsatisfied constraint while respecting the other system constraints and the optimum proximity condition. Applied to the preliminary design of a high-pressure compressor, this methodology contributes to the improvement and acceleration of studies currently characterized by numerous design loops, which are expensive in terms of CPU time and human resources.
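The uncertainty-propagation stage described above can be sketched with a plain Monte Carlo scheme: sample the uncertain inputs, push them through a fast surrogate of the computer code, and estimate the mean and standard deviation that serve as the two robust-design objectives (the metamodel and input distributions below are hypothetical, chosen only for illustration):

```python
import numpy as np

def performance(x1, x2):
    """Hypothetical fast metamodel of a component output (illustrative only)."""
    return x1 ** 2 + 3.0 * x2

def propagate(n_samples=100_000, seed=0):
    """Monte Carlo propagation of input uncertainties through the metamodel,
    estimating the two robust-design objectives: output mean and std."""
    rng = np.random.default_rng(seed)
    x1 = rng.normal(1.0, 0.1, n_samples)   # assumed input distributions
    x2 = rng.normal(2.0, 0.2, n_samples)
    y = performance(x1, x2)
    return y.mean(), y.std()

mean, std = propagate()
```

A robust-design loop would then minimize (or trade off) these two statistics under the system constraints; faster propagation methods (quadrature, polynomial chaos) can replace the sampling when the code is expensive.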
Ramos, Fernando Manuel. „Résolution d'un problème inverse multidimensionnel de diffusion par la méthode des éléments analytiques et par le principe de l'entropie maximale : contribution à la caractérisation de défauts internes“. Toulouse, ENSAE, 1992. http://www.theses.fr/1992ESAE0015.
Renzi, Cédric. „Identification Expérimentale de Sources vibratoires par Résolution du problème Inverse modélisé par un opérateur Eléments Finis local“. Phd thesis, INSA de Lyon, 2011. http://tel.archives-ouvertes.fr/tel-00715820.
Der volle Inhalt der QuelleLaurent, Philippe. „Méthodes d'accéleration pour la résolution numérique en électrolocation et en chimie quantique“. Thesis, Nantes, Ecole des Mines, 2015. http://www.theses.fr/2015EMNA0122/document.
This thesis tackles two different topics. We first design and analyze algorithms related to the electrical sense for applications in robotics. We consider in particular the method of reflections, which, like the Schwarz method, allows one to solve linear problems using simpler sub-problems obtained by decomposing the boundaries of the original problem. We give proofs of convergence and applications. In order to implement an electrolocation simulator of the direct problem in an autonomous robot, we build a reduced basis method devoted to electrolocation problems. In this way, we obtain algorithms which satisfy the constraints of limited memory and time resources. The second topic is an inverse problem in quantum chemistry, in which we want to determine some features of a quantum system. To this aim, the system is illuminated by a known, fixed laser field. In this framework, the data of the inverse problem are the states before and after the laser illumination. A local existence result is given, together with numerical methods for the solution.
Cloquet, Christophe. „Optimiser l'utilisation des données en reconstruction TEP: modélisation de résolution dans l'espace image et contribution à l'évaluation de la correction de mouvement“. Doctoral thesis, Universite Libre de Bruxelles, 2011. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209887.
When the clinical picture presented by a patient is unclear, many medical imaging techniques can refine the diagnosis, specify the prognosis and follow the evolution of diseases over time. The same techniques are also used in fundamental research to advance our knowledge of the normal and pathological functioning of the human body. Examples include ultrasound, magnetic resonance imaging, X-ray computed tomography and positron emission tomography (PET).
Some of these techniques reveal the metabolism of molecules, such as glucose and certain amino acids. This is the case of positron emission tomography, in which a small quantity of molecules labeled with a radioactive element is injected into the patient. These molecules concentrate preferentially in the regions of the body where they are used. Being unstable, the radioactive nuclei decay by emitting an anti-electron, also called a positron. Each positron then annihilates with an electron of the patient's body close to its emission point, causing the simultaneous emission of two high-energy photons in opposite directions. After traveling through the tissues, these photons are captured by a ring of detectors surrounding the patient. From the set of collected events, a reconstruction algorithm finally produces an image of the distribution of the radioactive tracer.
Positron emission tomography makes it possible, in particular, to evaluate the efficacy of a tumor treatment before the size of the tumor has changed, which helps decide whether or not to continue the ongoing treatment. In cardiology, this technique quantifies the viability of the heart muscle after an infarction and thus helps assess the relevance of a surgical intervention.
Several factors limit the accuracy of PET images. Among them are the partial volume effect and the motion of the heart.
The partial volume effect leads to blurred images, in the same way that an incorrectly focused camera lens produces blurred photographs. Photographers have two options to avoid this: either improve the focus of their lens, or retouch the images after taking them; improving the focus of the lens can be done in data space (adding a corrective lens in front of the objective) or in image space (adding a corrective lens behind the objective).
Cardiac motion also causes a loss of sharpness in the images, analogous to the blur in a photograph of a racing car taken with a long exposure time. Classically, the sharpness of an image can be increased by reducing the exposure time. However, fewer photons then pass through the lens and the resulting image is noisier.
One could then imagine obtaining better images by following the car with the camera.
In this way, the car would be both sharp and little corrupted by noise, since many photons could be detected.
In PET imaging, the partial volume effect is due to many factors, among them the fact that the positron does not annihilate exactly at its emission point and that the detector struck by a photon is not always correctly identified. The solution lies in a better modeling of the physics of the acquisition during reconstruction, which in practice is complex and requires approximations.
The loss of sharpness due to cardiac motion is classically handled by freezing the motion in several successive images over the course of a heartbeat. However, such a solution results in a reduction of the number of photons per image, and hence an increase in noise. Taking the motion of the object into account during PET reconstruction would increase sharpness while keeping the noise acceptable. One may also think of superimposing different images registered by means of the motion.
In this work, we studied methods that make the best possible use of the information provided by the detected events. To do so, we chose to base our reconstructions on a list of events containing the exact position of the detectors and the exact arrival time of the photons, instead of the classically used histogram.
Improving the resolution requires knowledge of the image of a radioactive point source produced by the camera.
Following earlier work, we measured this image and modeled it, for the first time, by a spatially variant, non-Gaussian and asymmetric function. We then integrated this function into a reconstruction algorithm, in image space, which is the only practical possibility for list-mode acquisitions. We then compared the results with a post-reconstruction image processing approach.
For cardiac motion correction, we opted for the simultaneous reconstruction of the image and the displacement, with no external information other than the PET data and an electrocardiogram signal. We then chose to study the quality of these joint intensity-displacement estimators by means of their variance. We studied the minimum variance that a joint intensity-motion estimator can reach, based on the PET data only, using a tool called the Cramér-Rao bound. In this framework, we reviewed several existing ways of estimating the Cramér-Rao bound and proposed a new estimation method suited to large images. We finally showed that the variance of the classical OSEM algorithm exceeds that predicted by the Cramér-Rao bound. Concerning the joint intensity-displacement estimators, we observed a decrease in the minimum achievable variance of the intensities when the displacement was parameterized on smooth spatial functions.
This work is organized as follows. The theoretical chapter begins by briefly sketching the historical context of positron emission tomography; we wished to stress that the evolution of ideas is romantic and linear only at a large scale. We then describe the physics of PET acquisition. In a second chapter, we recall some elements of estimation and approximation theory, and discuss inverse problems in general and PET reconstruction in particular.
The second part addresses the problem of image blur and the solution we chose for it: an image-space model of the camera's impulse response, accounting for its non-Gaussian, asymmetric and spatially variant characteristics. We also present the results of the comparison with a post-reconstruction deconvolution. The results presented in this chapter were published in the journal Physics in Medicine and Biology.
In a third part, we address motion correction. A first chapter sketches the context of motion correction in PET and puts the various existing methods in perspective within a unifying Bayesian framework.
A second chapter then addresses the estimation of PET image quality, with a particular focus on the Cramér-Rao bound.
The results obtained are finally summarized and put back into context in a general conclusion.
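The Cramér-Rao bound discussed above lower-bounds the variance of any unbiased estimator by the inverse of the Fisher information. A scalar toy illustration with Poisson-distributed counts, the natural noise model for PET (this is a stand-in for the image-scale bounds studied in the thesis, not its actual computation):

```python
import numpy as np

def crb_poisson(lam, n):
    """Cramer-Rao bound for estimating the Poisson rate lam from n counts:
    Fisher information per observation is 1/lam, so the bound is lam/n."""
    return lam / n

def empirical_variance(lam, n, n_trials=20_000, seed=0):
    """Variance of the unbiased estimator lam_hat = mean(y) over many trials."""
    rng = np.random.default_rng(seed)
    y = rng.poisson(lam, size=(n_trials, n))
    return y.mean(axis=1).var()

lam, n = 5.0, 50
bound = crb_poisson(lam, n)        # 0.1
var = empirical_variance(lam, n)   # close to the bound, never well below it
```

Here the sample mean is efficient, so its variance matches the bound; for PET images the Fisher information becomes a very large matrix, which is what motivates the dedicated estimation methods of the thesis.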
Doctorate in Engineering Sciences
Dejean-Viellard, Catherine. „Etude des techniques de régularisation en radiothérapie conformationnelle avec modulation d'intensité et évaluation quantitative des distributions de dose optimales“. Toulouse 3, 2003. http://www.theses.fr/2003TOU30195.
Prieux, Vincent. „Imagerie sismique des milieux visco-acoustiques et visco-élastiques à deux dimensions par stéréotomographie et inversion des formes d'ondes : applications au champ pétrolier de Valhall“. Phd thesis, Université de Nice Sophia-Antipolis, 2012. http://tel.archives-ouvertes.fr/tel-00722408.
Brossier, Romain. „Imagerie sismique à deux dimensions des milieux visco-élastiques par inversion des formes d'ondes : développements méthodologiques et applications“. Phd thesis, Université de Nice Sophia-Antipolis, 2009. http://tel.archives-ouvertes.fr/tel-00451138.
Der volle Inhalt der QuelleChen, Zhouye. „Reconstruction of enhanced ultrasound images from compressed measurements“. Thesis, Toulouse 3, 2016. http://www.theses.fr/2016TOU30222/document.
The interest of compressive sampling in ultrasound imaging has recently been extensively evaluated by several research teams. Depending on the application setup, it has been shown that the RF data may be reconstructed from a small number of measurements and/or using a reduced number of ultrasound pulse emissions. According to the model of compressive sampling, the resolution of ultrasound images reconstructed from compressed measurements mainly depends on three aspects: the acquisition setup, i.e. the incoherence of the sampling matrix; the image regularization, i.e. the sparsity prior; and the optimization technique. We mainly focused on the last two aspects in this thesis. Nevertheless, RF image spatial resolution, contrast and signal-to-noise ratio are affected by the limited bandwidth of the imaging transducer and by physical phenomena related to ultrasound wave propagation. To overcome these limitations, several deconvolution-based image processing techniques have been proposed to enhance ultrasound images. In this thesis, we first propose a novel framework for ultrasound imaging, named compressive deconvolution, which combines compressive sampling and deconvolution. Exploiting a unified formulation of the direct acquisition model, combining random projections and 2D convolution with a spatially invariant point spread function, the benefit of this framework is the joint reduction of the data volume and improvement of the image quality. An optimization method based on the Alternating Direction Method of Multipliers is then proposed to invert the linear model, including two regularization terms expressing the sparsity of the RF images in a given basis and a generalized Gaussian statistical assumption on the tissue reflectivity functions. It is subsequently improved by a method based on the Simultaneous Direction Method of Multipliers. Both algorithms are evaluated on simulated and in vivo data.
Finally, a novel approach based on alternating minimization, with regularization techniques, is developed to jointly estimate the tissue reflectivity function and the point spread function. A preliminary investigation is conducted on simulated data.
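The Alternating Direction Method of Multipliers mentioned above alternates a quadratic data-fit step, a proximal step enforcing the sparsity prior, and a dual update. A generic sketch on a small ℓ1-regularized least-squares problem (synthetic data; this is not the authors' compressive-deconvolution model, which couples random projections with a convolution operator):

```python
import numpy as np

def admm_lasso(A, y, lam, rho=1.0, n_iter=200):
    """Solve min 0.5*||Ax - y||^2 + lam*||z||_1  s.t. x = z  by ADMM."""
    n = A.shape[1]
    AtA, Aty = A.T @ A, A.T @ y
    L = np.linalg.cholesky(AtA + rho * np.eye(n))    # factor once, reuse every iteration
    x = z = u = np.zeros(n)
    for _ in range(n_iter):
        # x-step: quadratic data-fit subproblem (two triangular solves)
        x = np.linalg.solve(L.T, np.linalg.solve(L, Aty + rho * (z - u)))
        # z-step: proximal operator of (lam/rho)*||.||_1 (soft-thresholding)
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
        # dual update
        u = u + x - z
    return z

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 15))
x_true = np.zeros(15); x_true[[2, 7]] = [1.5, -2.0]
y = A @ x_true
x_hat = admm_lasso(A, y, lam=0.1)
```

The same splitting accommodates several regularization terms at once (one z-block per term), which is why ADMM-type methods suit the two-term model described in the abstract.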
Cornaggia, Rémi. „Développement et utilisation de méthodes asymptotiques d'ordre élevé pour la résolution de problèmes de diffraction inverse“. Thesis, 2016. http://www.theses.fr/2016SACLY012/document.
The purpose of this work was to develop new methods to address inverse problems in elasticity, taking advantage of the presence of a small parameter in the considered problems by means of higher-order asymptotic expansions. The first part is dedicated to the localization and size identification of a buried inhomogeneity B_true in a 3D elastic domain. To this end, we focused on the study of functionals J(B_a) quantifying the misfit between B_true and a trial inhomogeneity B_a. Such functionals are to be minimized with respect to some or all of the characteristics of the trial inclusion B_a (location, size, mechanical properties, ...) to find the best agreement with B_true. We therefore produced an expansion of J with respect to the size a of B_a, providing a polynomial approximation that is easier to minimize. This expansion, established up to O(a^6) in a volume integral equations framework, is justified by an estimate of the residual. A suitable identification procedure is then given and supported by numerical illustrations for simple obstacles in the full space R^3. The main purpose of the second part is to characterize a microstructured two-phase layered 1D inclusion of length L, supposing its low-frequency transmission eigenvalues (TEs) are already known. These are computed as the eigenvalues of the so-called interior transmission problem (ITP). To provide a convenient invertible model while accounting for the microstructure effects, we then relied on homogenized approximations of the exact ITP for the periodic inclusion. Focusing on the leading-order homogenized ITP, we first provide a straightforward method to recover the macroscopic parameters (L and material contrast) of such an inclusion. To access the period of the microstructure, higher-order homogenization is finally addressed, with emphasis on the need for suitable boundary conditions.
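Once the misfit functional has been replaced by its polynomial approximation in the inclusion size, the size-identification step reduces to minimizing a low-degree polynomial over the admissible sizes, which can be done exactly from its critical points. A sketch with purely illustrative coefficients (not derived from any elasticity problem):

```python
import numpy as np

# Hypothetical polynomial approximation J(a) ~ sum_k c_k a^k of a misfit
# functional, expanded up to O(a^6) as in the abstract.
coeffs = [0.0, 0.0, -3.0, 0.0, 1.0, 0.0, 0.5]   # c_0 .. c_6

def misfit(a):
    return sum(c * a ** k for k, c in enumerate(coeffs))

def minimize_poly(a_max=2.0):
    """Minimize the polynomial over admissible sizes [0, a_max] by
    checking the real critical points and the interval endpoints."""
    dJ = np.polynomial.Polynomial(coeffs).deriv()
    candidates = [0.0, a_max] + [r.real for r in dJ.roots()
                                 if abs(r.imag) < 1e-9 and 0 < r.real < a_max]
    return min(candidates, key=misfit)

a_star = minimize_poly()
```

This closed-form treatment of the size variable is what makes the polynomial surrogate attractive compared with minimizing the original functional, which requires a forward solve per evaluation.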