Dissertations / Theses on the topic 'Fusion de profondeur de champ'
Consult the top 50 dissertations / theses for your research on the topic 'Fusion de profondeur de champ.'
Ocampo, Blandon Cristian Felipe. "Patch-Based image fusion for computational photography." Electronic Thesis or Diss., Paris, ENST, 2018. http://www.theses.fr/2018ENST0020.
The most common computational techniques to deal with the limited high dynamic range and reduced depth of field of conventional cameras are based on the fusion of images acquired with different settings. These approaches require aligned images and motionless scenes, otherwise ghost artifacts and irregular structures can arise after the fusion. The goal of this thesis is to develop patch-based techniques in order to deal with motion and misalignment for image fusion, particularly in the case of variable illumination and blur. In the first part of this work, we present a methodology for the fusion of bracketed exposure images for dynamic scenes. Our method combines a carefully crafted contrast normalization, a fast non-local combination of patches and different regularization steps. This yields an efficient way of producing contrasted and well-exposed images from hand-held captures of dynamic scenes, even in difficult cases (moving objects, non-planar scenes, optical deformations, etc.). In the second part, we propose a multifocus image fusion method that also deals with hand-held acquisition conditions and moving objects. At the core of our methodology, we propose a patch-based algorithm that corrects local geometric deformations by relying on both color and gradient orientations. Our methods were evaluated on common and new datasets created for the purpose of this work. From the experiments we conclude that our methods are consistently more robust than alternative methods to geometric distortions, illumination variations and blur. As a byproduct of our study, we also analyze the capacity of the PatchMatch algorithm to reconstruct images in the presence of blur and illumination changes, and propose different strategies to improve such reconstructions.
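To make the exposure-fusion principle concrete, here is a minimal single-channel sketch using per-pixel well-exposedness weights in the style of Mertens et al.; it assumes pre-aligned frames and is a baseline illustration, not the patch-based, motion-robust method of this thesis (function name and sigma value are illustrative):

```python
# Minimal sketch of exposure-bracket fusion with per-pixel quality weights
# (Mertens-style well-exposedness; NOT the thesis's patch-based method).
import numpy as np

def fuse_exposures(images, sigma=0.2):
    """Fuse a list of aligned, normalized ([0, 1]) exposures of a static scene."""
    stack = np.stack(images).astype(np.float64)                 # (n, h, w)
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))  # favour mid-tones
    weights /= weights.sum(axis=0, keepdims=True) + 1e-12       # normalize per pixel
    return (weights * stack).sum(axis=0)

# Usage: fused = fuse_exposures([short_exp, mid_exp, long_exp])
```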
Aissaoui, Amel. "Reconnaissance bimodale de visages par fusion de caractéristiques visuelles et de profondeur." Thesis, Lille 1, 2014. http://www.theses.fr/2014LIL10030/document.
This work lies in the domain of face recognition. The objective is to automatically decide about a person's identity by analyzing his/her facial features. We introduce a 2D-3D bimodal approach that combines visual and depth features in order to provide better recognition accuracy and robustness than classical monomodal approaches. First, a 3D acquisition method dedicated to faces, based on stereoscopic reconstruction, is proposed. It relies on an active shape model to take into account the topology of the face. Then, a novel descriptor named DLBP (Depth Local Binary Patterns) is defined in order to characterize the depth information. This descriptor extends to depth images the traditional LBP originally used for texture description. Finally, a two-stage fusion strategy is proposed, which combines the modalities using both early and late fusion. The experiments conducted with different public datasets, as well as with a new dataset elaborated specifically for evaluation purposes, allowed us to validate the contributions introduced throughout this work. In particular, results have shown the quality of the data obtained using the reconstruction method, as well as a gain in precision obtained by using the DLBP descriptor and the two-stage fusion.
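Since DLBP extends the classical LBP operator to depth images, a minimal 8-neighbour LBP computed directly on a depth map conveys the idea; the exact DLBP definition in the thesis may differ (thresholding, sampling radius), so treat this as a generic sketch:

```python
# 8-neighbour LBP on a depth image, in the spirit of the DLBP descriptor
# described above (generic LBP; the thesis's exact definition may differ).
import numpy as np

def lbp_depth(depth):
    d = depth.astype(np.float64)
    c = d[1:-1, 1:-1]                                   # central pixels
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        neigh = d[1 + dy:d.shape[0] - 1 + dy, 1 + dx:d.shape[1] - 1 + dx]
        code |= ((neigh >= c).astype(np.uint8) << bit)  # 1 bit per neighbour
    return code  # a histogram of codes serves as the region descriptor
```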
Desaulniers, Pierre. "Augmentation de la profondeur de champs par encodage du front d'onde." Master's thesis, Université Laval, 2008. http://www.theses.ulaval.ca/2008/25227/25227.pdf.
Hadhri, Tesnim. "Single view depth estimation from train images." Master's thesis, Université Laval, 2021. http://hdl.handle.net/20.500.11794/70388.
Depth prediction is the task of computing the distance of different points in the scene from the camera. Knowing how far away a given object is from the camera makes it possible to understand its spatial layout. Early methods used stereo pairs of images to extract depth, which requires a calibrated pair of cameras. It is simpler, however, to work from a single image, since no calibration or synchronization is needed. For this reason, learning-based methods, which estimate depth from monocular images, have been introduced. Early learning-based solutions used ground-truth depth for training, usually acquired from sensors such as Kinect or Lidar. Acquiring ground-truth depth is expensive and difficult, which is why self-supervised methods, which do not require such ground truth, have appeared and have shown promising results for single-image depth estimation. In this work, we propose to estimate depth maps for images taken from the train driver's viewpoint. To do so, we propose to use geometric constraints and standard rail parameters to extract the depth map inside the rails and provide it as a supervisory signal to the network. To this end, we first gathered a dataset of train sequences and determined their focal lengths in order to compute the depth map inside the rails. We then used this dataset and the computed focal lengths to fine-tune an existing model, "Monodepth2", previously trained on the KITTI dataset. We show that the ground-truth depth map provided to the network solves the problem of the depth of the rail tracks, which otherwise appear as standing objects in front of the camera. It also improves the results of depth estimation on train sequences.
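The geometric supervision idea rests on the pinhole-camera relation between the known rail gauge and its apparent width in pixels; the sketch below shows this relation with illustrative names and a standard-gauge assumption (the thesis's actual pipeline is more involved):

```python
# Pinhole-model depth inside the rails: with gauge G (m), focal length f (px)
# and measured pixel distance w between the rails on an image row, Z = f*G/w.
# A sketch of the supervision signal; names and the gauge value are assumptions.
STANDARD_GAUGE_M = 1.435  # standard rail gauge

def row_depth(focal_px: float, rail_sep_px: float) -> float:
    return focal_px * STANDARD_GAUGE_M / rail_sep_px

# e.g. f = 1000 px and rails 50 px apart on a row -> Z = 1000*1.435/50 = 28.7 m
```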
Cormier, Geoffroy. "Analyse statique et dynamique de cartes de profondeurs : application au suivi des personnes à risque sur leur lieu de vie." Thesis, Rennes 1, 2015. http://www.theses.fr/2015REN1S146.
In France, falls are the leading cause of death for people aged 75 and over, and the second cause of death for people aged 65 and over. Falls are estimated to generate about 1 to 2 billion euros of health costs per year. The human and socio-economic stakes are crucial, knowing that for the mentioned populations, fall risk is multiplied by 20 after a first fall; that the death risk is multiplied by 4 in the year following a fall; that each year, 30% of people aged 65 and over and 50% of people aged 85 and over suffer falls; and that more than 30% of the French population is expected to be older than 65 by 2050. This thesis proposes a ground-lying event detection device based on the real-time analysis of depth maps, and also proposes an improvement of the device using an additional thermal sensor. Depth maps and thermal images make the device independent of the textures and lighting conditions of the observed scenes, and guarantee that it respects the privacy of those who pass into its field of view, since nobody can be recognized in such images. This thesis also proposes several methods to detect the ground plane in a depth map, the ground plane serving as a geometrical reference for the device. A psycho-social inquiry was conducted, which enabled the evaluation of the a priori acceptability of the proposed device. This inquiry demonstrated the good acceptability of the proposed device, and resulted in recommendations on points to be improved and on pitfalls to avoid. Last, a method to separate and track objects detected in a depth map is proposed, the measurement of the activity of observed individuals being a long-term objective for the device.
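Ground-plane detection in a depth map is commonly done by fitting a plane to the 3D points; the sketch below shows a generic RANSAC plane fit (one of several possible strategies, not necessarily the one retained in the thesis):

```python
# Generic RANSAC plane detection in a point cloud derived from a depth map.
# Illustrative parameter values; not necessarily the thesis algorithm.
import numpy as np

def ransac_plane(points, iters=200, tol=0.03, rng=np.random.default_rng(0)):
    best_inliers, best_model = 0, None
    for _ in range(iters):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                      # degenerate (collinear) sample
        n = n / norm
        dist = np.abs((points - p1) @ n)  # point-to-plane distances
        inliers = (dist < tol).sum()
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (n, p1)
    return best_model                     # (unit normal, point on plane)
```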
Terral, Philippe. "Structure du champ magnétique interstellaire dans le disque et le halo de notre galaxie." Thesis, Toulouse 3, 2016. http://www.theses.fr/2016TOU30234/document.
Characterization of the interstellar magnetic field of our Galaxy is a major challenge for astrophysics. A better understanding of its properties, particularly its structure, would be valuable in many research areas, from cosmic-ray studies to Galactic dynamics, including interstellar medium evolution and star formation. Recent radio observations uncovered common characteristics in the magnetic structure of nearby galaxies similar to the Milky Way. In face-on galaxies, magnetic field lines appear to form a spiral pattern similar to that observed in the optical. In edge-on galaxies, magnetic field lines appear to be parallel to the galactic plane in the disc and X-shaped in the halo. One may naturally wonder whether such an X-shaped structure is also present in the halo of our own Galaxy. The purpose of the work performed during my three years as a Ph.D. student was to try to provide some answers to this question. There are two major difficulties: on the one hand, our location within the Milky Way does not allow us to have a global view of its large-scale magnetic structure; on the other hand, the magnetic field is not directly observable, so it is necessary to implement indirect techniques, based on the effect the magnetic field has on a given observable, to estimate some of its characteristics. My own work is based on Faraday rotation. I first built an observational reference map of the Faraday depth of our Galaxy associated with the large-scale magnetic field. To that end, I had to develop a simple model of the turbulent magnetic field in order to subtract its contribution to the Galactic Faraday depth from that of the total magnetic field. I then constructed theoretical maps of Galactic Faraday depth based on a set of analytical models of the large-scale magnetic field that are consistent with various (theoretical and observational) constraints and depend on a reasonable number of free parameters. Finally, I fitted the values of these parameters through a challenging optimization phase. My manuscript is divided into four main chapters. In Chapter 1, I present the context of my work as well as various general results useful for my study. In Chapter 2, I review all the elements required for my modeling, with emphasis on the set of analytical models used. In Chapter 3, I describe my simulation and optimization procedures. In Chapter 4, I present my results. In this final chapter, I derive the parameter values of the different field models that lead to the best fit to the observations, I try to identify the role of each parameter and its impact on the theoretical map, and I discuss the different geometries allowed in the various cases. Finally, I show that the fit to the observational map is slightly better with a bisymmetric halo field than with an axisymmetric halo field, and that an X-shaped pattern in polarization maps naturally arises in the first case, whereas the field remains mainly horizontal in the second case.
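The observable at the heart of this work is the Faraday depth, the line-of-sight integral of the electron density times the parallel magnetic field; the following sketch evaluates it numerically with made-up uniform profiles (purely illustrative values):

```python
# Faraday depth along a line of sight: phi = 0.812 * Integral n_e * B_par dl,
# with phi in rad m^-2, n_e in cm^-3, B_par in microgauss, dl in parsec.
# Numerical sketch of the observable with made-up profiles.
import numpy as np

def faraday_depth(n_e, b_par, dl_pc):
    """Riemann sum over equal path steps dl_pc (pc)."""
    return 0.812 * np.sum(n_e * b_par) * dl_pc

# e.g. uniform n_e = 0.03 cm^-3 and B_par = 1 uG over 1000 pc in 100 steps:
# faraday_depth(np.full(100, 0.03), np.full(100, 1.0), 10.0) ~ 24.4 rad/m^2
```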
Papadopoulos, Orfanos Dimitri. "Numérisation géométrique automatique à l'aide d'un capteur 3D de précision à profondeur de champ réduite." Paris, ENST, 1997. http://www.theses.fr/1997ENST0002.
Papadopoulos-Orfanos, Dimitri. "Numérisation géométrique automatique à l'aide d'un capteur 3D de précision à profondeur de champ réduite /." Paris : École nationale supérieure des télécommunications, 1997. http://catalogue.bnf.fr/ark:/12148/cb36168752q.
Full textPannetier, Benjamin. "Fusion de données pour la surveillance du champ de bataille." Phd thesis, Université Joseph Fourier (Grenoble), 2006. http://tel.archives-ouvertes.fr/tel-00377247.
Côté, François. "Imagerie de neurones en profondeur par fibre optique avec champ de vue variable et imagerie à grand champ volumétrique rapide avec sectionnement optique HiLo." Master's thesis, Université Laval, 2020. http://hdl.handle.net/20.500.11794/38294.
Imaging cells and axons deep in the brain with minimal damage while keeping a sizable field of view remains a challenge, because it is difficult to optimize one without sacrificing the other. We propose a scanning method reminiscent of laser scanning microscopy to obtain a reasonable field of view with minimal damage deep in the brain. By using micro-optics at the tip of our 125 µm-diameter single-mode fiber inside a 250 µm capillary, we can create a focal spot on the side of the fiber at a distance of approximately 60 µm. The focal spot has a 2 µm diameter and can be scanned at up to 30 hertz by a custom scanning device over a 90-degree angular sweep on a single line. A piezoelectric actuator moves the fiber up and down to achieve a cylindrical scanning pattern. With this side illumination, there is no need for surgical exposure of the tissue, making our method simple and easy to implement. The field of view is controlled by the angular and vertical sweeps, independent of the fiber diameter. Furthermore, by modifying the length of the GRIN lens, we can directly increase or decrease the field of view of our optical system without any change in the probe diameter. We have succeeded in imaging microglia in the midbrain of a CX3CR1-GFP mouse. The system is also ready for calcium imaging on single pixel lines. Imaging whole mouse brains can provide a wealth of information for understanding neuronal development at both the microscopic and macroscopic scale. Furthermore, visualizing entire brain samples allows us to better conceptualize how different diseases affect the brain as a whole, rather than only investigating a certain structure. Currently, two main challenges exist in achieving whole mouse brain imaging: 1) long image acquisition sessions (on the order of several hours) and 2) big-data creation and management due to the large, high-resolution image volumes created. To overcome these challenges, we present a fast 1-photon system with a slightly decreased resolution allowing whole-brain, optically sectioned imaging on the order of minutes by using a mathematical algorithm termed "HiLo". Our large field of view (25 mm²) allows us to see an entire newborn mouse brain in a single snapshot with a resolution of about 2 µm in the lateral direction and 4 µm in the axial direction. This resolution still allows visualization of cells and some large axonal projections. This technological advancement will first and foremost allow us to rapidly image large-volume samples and store them in a smaller format without losing the essential information, which is mainly stained-cell quantity and location. Secondly, the design will allow for increased successful high-resolution imaging by screening ...
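HiLo optical sectioning combines low spatial frequencies extracted from a structured-illumination image with high spatial frequencies from a uniform-illumination image; the sketch below is a schematic approximation of that combination (filter sizes and the weight eta are illustrative, not the system's calibrated values):

```python
# Schematic HiLo combination: sectioned low frequencies from a structured-
# illumination image (via local-contrast demodulation) plus in-focus high
# frequencies from a uniform-illumination image. Approximate, illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter

def hilo(uniform, structured, sigma=8.0, eta=1.0):
    u = uniform.astype(np.float64)
    s = structured.astype(np.float64)
    # local contrast of the structured image rejects out-of-focus light
    ratio = s / (gaussian_filter(s, sigma) + 1e-9)
    contrast = gaussian_filter(np.abs(ratio - 1.0), sigma)
    lo = gaussian_filter(contrast * u, sigma)  # sectioned low frequencies
    hi = u - gaussian_filter(u, sigma)         # in-focus high frequencies
    return eta * lo + hi
```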
Thériault, Gabrielle. "Développement d'un microscope à grande profondeur de champ pour l'imagerie fonctionnelle de neurones dans des échantillons épais." Doctoral thesis, Université Laval, 2015. http://hdl.handle.net/20.500.11794/25740.
One of the greatest challenges of modern neuroscience, one that will lead to a better understanding and earlier diagnosis of brain diseases, is to decipher the details of neuronal interactions in the living brain. To achieve this goal, we must be capable of observing populations of living cells in their original matrix with good resolution, both spatial and temporal. Two-photon microscopy offers the right tools for this, since it provides a spatial resolution on the order of a micron. Unfortunately, this very good three-dimensional resolution lowers the temporal resolution, because the optical sectioning caused by the microscope's small depth of field forces us to scan thick samples repeatedly when acquiring data from a large volume. In this doctoral project, we have designed, built and characterized a two-photon microscope with an extended depth of field, with the goal of simplifying the functional imaging of neurons in thick samples. To increase the laser scanning microscope's depth of field, we shape the laser beam entering the optical system in such a way that a needle of light is generated inside the sample instead of a spot. We modify the laser beam with an axicon, a cone-shaped lens that transforms a Gaussian beam into a quasi-non-diffracting beam called a Bessel-Gauss beam. The excitation beam therefore maintains the same transverse resolution at different depths inside the sample, eliminating the need for many scans to probe the entire volume of interest. In this thesis, we demonstrate that the extended-depth-of-field microscope works as designed, and we use it to image calcium dynamics in a three-dimensional network of live neurons. We also present the different advantages of our system in comparison with standard two-photon microscopy.
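The depth-invariant transverse profile of an ideal Bessel beam produced by a thin axicon can be computed from the zeroth-order Bessel function; the numbers below (wavelength, index, apex angle) are illustrative, not the parameters of the actual microscope:

```python
# Radial intensity of an ideal Bessel beam, I(r) ~ J0(k_r * r)^2, where a thin
# axicon of base angle alpha and index n gives k_r = k * (n - 1) * alpha
# (small-angle approximation). Purely illustrative values.
import numpy as np
from scipy.special import j0

wavelength = 800e-9                     # two-photon excitation, m
n_glass, alpha = 1.45, np.deg2rad(2.0)  # axicon index and base angle
k_r = (2 * np.pi / wavelength) * (n_glass - 1) * alpha
r = np.linspace(0, 10e-6, 500)
intensity = j0(k_r * r) ** 2            # transverse profile, invariant with depth
```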
Hamdi, Feriel. "Interaction champ électrique cellule : conception de puces microfluidiques pour l'appariement cellulaire et la fusion par champ électrique pulsé." Phd thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00926219.
Hamdi, Feriel. "Interaction champ électrique cellule : conception de puces microfluidiques pour l'appariement cellulaire et la fusion par champ électrique pulsé." Thesis, Paris 11, 2013. http://www.theses.fr/2013PA112286/document.
Cell fusion is a method to generate a hybrid cell combining the specific properties of its progenitor cells. Initially developed for antibody production, it is now also investigated for cancer immunotherapy. Electrofusion consists of producing hybridomas using electric pulses. Compared to viral or chemical methods, electrofusion shows higher yields and is contaminant-free. Currently, electrofusion is performed in electroporation cuvettes, where the electric field is not precisely controllable and cell placement is impossible, resulting in low binuclear hybridoma yields. To improve the fusion quality and yield, cell capture and pairing are necessary. Our objective was the development and realization of biochips involving microelectrodes and microfluidic channels to place and pair cells prior to electrofusion. A first trapping structure based on insulators and the use of dielectrophoresis was achieved. In order to perform fluidic experiments, a biocompatible irreversible packaging was developed. Then, the experimental medium was optimized for electrofusion. Confronting the biological experiments with numerical simulations, we showed that the application of electric pulses leads to a decrease of the cytoplasmic conductivity. The microstructure was validated by cell electrofusion: a yield of 55%, with a membrane fusion duration of 6 s, was achieved. Secondly, we proposed two trapping microstructures for high-density electrofusion. The first one is based on fluidic trapping, while the second one uses dielectrophoresis, free of electric wiring, thanks to conductive pads. Up to 75% of paired cells were successfully electrofused with the conductive pads, and more than 97% of the hybridomas were binuclear. The trapping being reversible, the hybridomas can be collected for further analysis.
Devisme, Céline. "Etude de l'influence des disparités horizontales et verticales sur la perception de la profondeur en champ visuel périphérique." Phd thesis, Université Pierre et Marie Curie - Paris VI, 2008. http://tel.archives-ouvertes.fr/tel-00276430.
Manakli, Serdar. "Amélioration de la profondeur de champ de la lithographie CMOS sub 0,1um par des méthodes de double exposition." Grenoble INPG, 2004. http://www.theses.fr/2004INPG0122.
In microlithography, the reduction of dimensions is generally achieved with a shorter exposure wavelength or a larger numerical aperture. But the acceleration of integrated-circuit miniaturisation challenges lithographers to push the limits of optical lithography by resolving structures below the exposure wavelength. The standard process used today for the mass production of integrated circuits uses a 193 nm wavelength. This process presents severe limitations for sub-0.1 µm CMOS technologies. The goal of this study is therefore to propose other lithographic solutions for future generations. Among the different processes enhancing sub-wavelength resolution proposed over the last ten years, we are primarily interested in double exposure techniques, which mainly improve the depth of focus, one of the key parameters of microlithography. At the outcome of this work, we propose two methods improving the performance of interconnection holes and of poly lines. For the interconnection holes, the well-suited solution is the FLEX exposure, symmetrical in energy and in focus. With this technique, the depth of focus is on average multiplied by two compared to the standard process. The second proposal is the outcome of a preliminary study on the influence of different optical parameters, such as the isofocal point and the distribution of the diffracted orders in the pupil plane, on the depth of focus. This study gave us the opportunity to develop a new lithographic double-exposure method called CODE (COmplementary Double Exposure). The first encouraging results show that this technique could be a possible alternative for the present 90 nm and future 65 nm technologies.
Nguyen, Frederic. "TRANSPORT DANS UN PLASMA DE FUSION EN PRESENCE D'UN CHAMP MAGNETIQUE CHAOTIQUE." Phd thesis, Université Paris-Diderot - Paris VII, 1992. http://tel.archives-ouvertes.fr/tel-00011403.
Full textnumérique Mastoc, la topologie de la connexion magnétique sur la paroi est détenninée précisément. Il est ainsi possible de décrire le transport des particules et de l'énergie depuis le plasma confiné jusqu'aux éléments de paroi. Cette étude éclaire certaines des principales observations de l'expérience Tore Supra en configuration divertor ergodique : l'étalement du dépôt de puissance sur les différents éléments de la paroi sans concentration anormale, la robustesse de cette configuration vis-à-vis de défauts d'alignement, les structures visibles en lumière Ha lors de réattachement de plasma. Afin d'étudier le transport des ions impuretés, une approche variationnelle par minimum de production d'entropie a été développée. Ce principe variationnel est appliqué au calcul de la diffusion néoclassique des ions impuretés dans le champ électrique radial moyen. Ce champ électrique déconfine les ions si le profil de pression n'est pas équilibré par une force de Laplace, c'est-à-dire si le plasma est bloqué en rotation, poloïdale et toroïdale, par une perturbationmagnétiqueou une force friction.
Nguyen, Frédéric. "Transport dans un plasma de fusion en présence d'un champ magnétique chaotique." Paris 7, 1992. http://www.theses.fr/1992PA077293.
Full textCaron, Nicolas. "Optimisation par recuit simulé et fabrication de masques de phase pour l'augmentation de la profondeur de champ d'un microscope." Thesis, Université Laval, 2008. http://www.theses.ulaval.ca/2008/25222/25222.pdf.
Vaulchier, Louis-Anne de. "Transmission dans l'infrarouge lointain de couches minces d'YBa2Cu3O7. Détermination de la profondeur de pénétration du champ électromagnétique." Paris 6, 1995. http://www.theses.fr/1995PA066228.
Full textSong, Xiao. "Optimisation automatisée de scénarios pour le système de champ magnétique poloïdal dans les tokamaks." Thesis, Université Côte d'Azur (ComUE), 2019. http://www.theses.fr/2019AZUR4089.
This thesis is concerned with developing and applying numerical tools to optimize the operation of the poloidal magnetic field (PF) system in tokamaks. The latter consists of a set of coils and power supplies whose purpose is to control the plasma shape and position, as well as to drive the plasma current. The global context of our work is introduced in Chapter 1. Chapter 2 describes our approach, which consists in applying optimal control methods to the Free-Boundary plasma Equilibrium (FBE) problem, composed of a force balance equation in the plasma coupled to Maxwell's equations in the whole tokamak. The numerical tool employed here is the FEEQS.M code, which can be used either (in the "direct" mode) as a solver of the FBE problem or (in the "inverse" mode) to minimize a certain function under the constraint that the FBE equations be satisfied. Each of these two modes ("direct" and "inverse") subdivides into a "static" mode (which solves for a given instant only) and an "evolution" mode (which solves over a time window). The code is written in Matlab and based on the Finite Element Method. The non-linear nature of the FBE problem is dealt with by means of Newton iterations, and Sequential Quadratic Programming (SQP) is used for the inverse modes. We stress that the "inverse evolution" mode is, as far as we know, a unique feature of FEEQS.M. After describing the FBE problems, the numerical methods and some tests of the FEEQS.M code, we present two applications. The first one, described in Chapter 3, concerns the identification of the operating space in terms of plasma equilibrium in the ITER tokamak. This space is limited by the capabilities of the PF system, such as the maximum possible currents, fields or forces in the PF coils. We have implemented penalization terms in the "objective" function (i.e., the function to be minimized) of the "inverse static" mode of FEEQS.M in order to take some of these limits into account. This allows the operating space to be calculated in a fast, rigorous and automatic way, which represents substantial progress compared to "traditional" methods involving much heavier human intervention. The second application, presented in Chapter 4, regards the development of a fast transition from limiter to divertor plasma configuration at the beginning of a pulse in the WEST tokamak, with the motivation of reducing plasma contamination by tungsten impurities. Here, FEEQS.M is used in the "inverse evolution" mode. Data from a WEST experimental pulse is used to set up the simulation. The FEEQS.M calculation then provides optimized waveforms for the PF coil currents and power supply voltages to perform a fast limiter-to-divertor transition. These waveforms were first tested on the WEST magnetic control simulator (which embeds FEEQS.M in "direct evolution" mode coupled to a feedback control system identical to the one in the real machine) and then on the real machine. This allowed the transition to be sped up from ~1 s to 200 ms.
Brédy, Jhemson. "Prévision de la profondeur de la nappe phréatique d'un champ de canneberges à l'aide de deux approches de modélisation des arbres de décision." Master's thesis, Université Laval, 2019. http://hdl.handle.net/20.500.11794/37875.
Integrated groundwater management is a major challenge for industrial, agricultural and domestic activities. In some agricultural production systems, optimized water table management represents a significant factor in improving crop yields and water use. Therefore, predicting water table depth (WTD) becomes an important means of enabling real-time planning and management of groundwater resources. This study proposes a decision-tree-based modelling approach for WTD forecasting as a function of precipitation, previous WTD values and evapotranspiration, with applications in groundwater resources management for cranberry farming. First, two decision-tree-based models, namely Random Forest (RF) and Extreme Gradient Boosting (XGB), were parameterized and compared to predict the WTD up to 48 hours ahead for a cranberry farm located in Québec, Canada. Second, the importance of the predictor variables was analyzed to determine their influence on WTD simulation results. WTD measurements at three observation wells within a cranberry field, for the growing period from July 8, 2017 to August 30, 2017, were used for training and testing the models. Statistical parameters such as the mean squared error, the coefficient of determination and the Nash-Sutcliffe efficiency coefficient were used to measure model performance. The results show that the XGB algorithm outperformed the RF model for WTD predictions and was selected as the optimal model. Among the predictor variables, the antecedent WTD was the most important for water table depth simulation, followed by precipitation. Based on the most important variables and the optimal model, the prediction error for the entire WTD range was within ± 5 cm for 1-, 12-, 24-, 36- and 48-hour predictions. The XGB model can provide useful information on the WTD dynamics and a rigorous simulation for irrigation planning and management in cranberry fields.
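The lagged-feature setup behind such forecasting can be sketched as follows; scikit-learn's RandomForestRegressor stands in for the XGB model selected in the thesis, and the feature layout (12-hour lags) is illustrative:

```python
# Sketch of lagged-feature water table depth (WTD) forecasting. The feature
# layout and RandomForestRegressor (a stand-in for XGB) are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def make_features(wtd, precip, et, horizon=1, n_lags=12):
    """Predict WTD at t+horizon from the last n_lags hours of observations."""
    X, y = [], []
    for t in range(n_lags, len(wtd) - horizon):
        X.append(np.concatenate([wtd[t - n_lags:t],
                                 precip[t - n_lags:t],
                                 et[t - n_lags:t]]))
        y.append(wtd[t + horizon])
    return np.array(X), np.array(y)

# X, y = make_features(wtd_series, precip_series, et_series, horizon=24)
# model = RandomForestRegressor(n_estimators=300).fit(X, y)
```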
Al, Abram Ismail. "Etude sur modèle réduit bidimensionnel du champ de déplacement induit par le creusement d'un tunnel à faible profondeur : interaction avec les ouvrages existants." Lyon, INSA, 1998. http://www.theses.fr/1998ISAL0063.
The soil displacements were studied by experimentation on models and numerical simulation. An analogical soil of the Taylor-Schneebeli type was used in a 2-dimensional model. The experiments were analysed with the help of digital image techniques, which allowed us to identify the displacement field and the deformations in the soil mass, particularly around and above the tunnel as well as at the ground surface. The influence of various parameters is considered: variation of the tunnel diameter, depth of the tunnel, and presence of existing structures. The knowledge gained from the displacement field and the boundary conditions is used to study the validity of different constitutive laws describing the soil behaviour. The parameters used in these laws are obtained by performing biaxial and oedometric tests on Schneebeli rods. The soil behaviour is taken into account by two laws: the first is an adapted form of perfect elasto-plasticity (Mohr-Coulomb); the second consists of a hyperbolic part of the Duncan type into which a plasticity criterion (M-C) is incorporated. These models are used in plane strain and true 2D. The experimental results are compared to those obtained by modelling, especially concerning the displacement field in the soil mass and the settlement curve. This study allowed us to show: • the limits of use of a perfect elasto-plastic law with a single (loading/unloading) modulus; • the importance of the dilatancy of the material; • the domain of validity of these simple models, where good agreement between calculation and experiment was obtained.
Authié, Guillaume. "Convection naturelle sous champ magnétique en cavité verticale élancée : application aux couvertures des réacteurs de fusion." Grenoble INPG, 2002. http://www.theses.fr/2002INPG0019.
Sune, Jean-Luc. "Estimation de la profondeur d'une scène à partir d'une caméra en mouvement." Grenoble INPG, 1995. http://www.theses.fr/1995INPG0189.
Full textPantet, Anne. "Creusement de galeries à faible profondeur a l'aide d'un tunnelier a pression de boue : Mesures "in situ" et étude théorique du champ de déplacements." Lyon, INSA, 1991. http://www.theses.fr/1991ISAL0088.
The use of underground space, particularly in urban areas (underground railways in Lyon, Lille and Toulouse, railway tunnels in Villejust, the Channel Tunnel, the Val de Marne collector), has contributed, since the early 1980s, to the increasing use of confined shields in France. Confined shield technology, although very sophisticated, utilising slurry and earth pressure, does not prevent the settlement caused by the excavation of tunnels. The purpose of this study was to examine the displacement field caused by the excavation of a shallow tunnel in granular soft soils. The study comprises four main parts: a) shield technology; b) a general investigation of displacements around shield-excavated tunnels; c) a detailed study of the works at Villejust and Lille, excavated using a slurry shield; d) modelling of the excavation and comparison of different theoretical models. The first and second parts describe the field in which slurry shields are currently used and identify the main causes of displacements during excavation. The third part demonstrates the strong influence of the working methods and the geometry of the project, and the difficulty of estimating displacements. The fourth part shows that the theoretical determination of displacements remains difficult.
Andrade, Valente da Silva Michelle. "SLAM and data fusion for autonomous vehicles : from classical approaches to deep learning methods." Thesis, Paris Sciences et Lettres (ComUE), 2019. http://www.theses.fr/2019PSLEM079.
Self-driving cars have the potential to provoke a mobility transformation that will impact our everyday lives. They offer a novel mobility system that could provide more road safety, efficiency and accessibility to users. In order to reach this goal, the vehicles need to perform three main tasks autonomously: perception, planning and control. When it comes to urban environments, perception becomes a challenging task that needs to be reliable for the safety of the driver and of others. It is extremely important to have a good understanding of the environment and its obstacles, along with a precise localization, so that the other tasks are performed well. This thesis explores approaches ranging from classical methods to Deep Learning techniques to perform mapping and localization for autonomous vehicles in urban environments. We focus on vehicles equipped with low-cost sensors, with the goal of maintaining a reasonable price for future autonomous vehicles. Considering this, the proposed methods use sensors such as 2D laser scanners, cameras and standard IMUs. In the first part, we introduce model-based methods using evidential occupancy grid maps. First, we present an approach to perform sensor fusion between a stereo camera and a 2D laser scanner to improve the perception of the environment. Moreover, we add an extra layer to the grid maps to assign states to the detected obstacles. This state makes it possible to track an obstacle over time and to determine whether it is static or dynamic. Subsequently, we propose a localization system that uses this new layer along with classic image registration techniques to localize the vehicle while simultaneously creating the map of the environment. In the second part, we focus on the use of Deep Learning techniques for the localization problem. First, we introduce a learning-based algorithm to provide odometry estimation using only 2D laser scanner data. This method shows the potential of neural networks to analyse this type of data for the estimation of the vehicle's displacement. Subsequently, we extend the previous method by fusing the 2D laser scanner with a camera in an end-to-end learning system. The addition of camera images increases the accuracy of the odometry estimation and proves that we can perform sensor fusion without any sensor modelling using neural networks. Finally, we present a new hybrid algorithm to perform the localization of a vehicle inside a previously mapped region. This algorithm combines the advantages of evidential maps in dynamic environments with the ability of neural networks to process images. The results obtained in this thesis allowed us to better understand the challenges of vehicles equipped with low-cost sensors in dynamic environments. By adapting our methods to these sensors and fusing their information, we improved the general perception of the environment along with the localization of the vehicle. Moreover, our approaches allowed a comparison between the advantages and disadvantages of learning-based techniques and model-based ones. Finally, we proposed a way of combining these two types of approaches in a hybrid system that led to a more robust solution.
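Evidential occupancy grids typically fuse per-cell belief masses from each sensor with Dempster's rule; the sketch below shows that rule for a two-hypothesis frame {Free, Occupied} with ignorance, as a generic illustration rather than the exact combination rule used in the thesis:

```python
# Per-cell Dempster combination of two evidential occupancy masses over the
# frame {Free, Occupied} with ignorance 'FO'. Generic sketch of the
# evidential-grid fusion idea, not the thesis's exact rule.
def dempster(m1, m2):
    """Each mass is a dict with keys 'F', 'O', 'FO' summing to 1."""
    conflict = m1['F'] * m2['O'] + m1['O'] * m2['F']
    k = 1.0 - conflict  # mass remaining after removing conflict
    return {
        'F': (m1['F'] * m2['F'] + m1['F'] * m2['FO'] + m1['FO'] * m2['F']) / k,
        'O': (m1['O'] * m2['O'] + m1['O'] * m2['FO'] + m1['FO'] * m2['O']) / k,
        'FO': (m1['FO'] * m2['FO']) / k,
    }

# e.g. dempster({'F': .6, 'O': .1, 'FO': .3}, {'F': .2, 'O': .3, 'FO': .5})
```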
Falcon, Maimone Rafael. "Co-conception des systemes optiques avec masques de phase pour l'augmentation de la profondeur du champ : evaluation du performance et contribution de la super-résolution." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLO006/document.
Phase masks are wavefront-encoding devices typically situated at the aperture stop of an optical system to engineer its point spread function (PSF), in a technique commonly known as wavefront coding. These masks can be used to extend the depth of field (DoF) of imaging systems without reducing the light throughput, by producing a PSF that becomes more invariant to defocus; however, the larger the DoF, the more blurred the acquired raw image, so that deconvolution has to be applied to the captured images. Thus, the design of the phase masks has to take image processing into account in order to reach the optimal compromise between invariance of the PSF to defocus and the capacity to deconvolve the image. This joint design approach was introduced by Cathey and Dowski in 1995, refined in 2002 for continuous-phase DoF-enhancing masks, and generalized by Robinson and Stork in 2007 to correct other optical aberrations. In this thesis we study the different aspects of phase mask optimization for DoF extension, such as the different performance criteria and the relation of these criteria to the different mask parameters. We use the so-called image quality (IQ), a mean-square-error-based criterion defined by Diaz et al., to co-design different phase masks and evaluate their performance. We then compare the relevance of the IQ criterion against other optical design metrics, such as the Strehl ratio, the modulation transfer function (MTF) and others. We focus in particular on binary annular phase masks and their performance under various conditions, such as the desired DoF range, the number of optimization parameters, the presence of aberrations and others. We then apply the analysis tools used for the binary phase masks to continuous-phase masks that commonly appear in the literature, such as polynomial-phase masks. We extensively compare these masks to each other and to the binary masks, not only to assess their benefits, but also because analyzing their differences helps us understand their properties. Phase masks act as a low-pass filter on diffraction-limited systems, effectively reducing aliasing. On the other hand, the signal processing technique known as superresolution uses several aliased frames of the same scene to enhance the resolution of the final image beyond the sampling resolution of the original optical system. Practical examples come from the work done during a secondment with the industrial partner KLA-Tencor in Leuven, Belgium. At the end of the manuscript we study the relevance of using such a technique alongside phase masks for DoF extension.
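The wavefront-coding principle can be simulated by adding a mask phase term to the pupil function and computing the PSF as the squared magnitude of its Fourier transform; the sketch below uses the classic cubic mask of Cathey and Dowski as an example (alpha and psi values are illustrative, and the thesis focuses on other mask families such as binary annular masks):

```python
# Toy wavefront-coding simulation: PSF of a pupil carrying a cubic phase mask
# under defocus, PSF = |FFT{ A(x,y) exp(i[alpha(x^3+y^3) + psi(x^2+y^2)]) }|^2.
# alpha (mask strength) and psi (defocus coefficient) are illustrative.
import numpy as np

n = 256
x = np.linspace(-1, 1, n)
X, Y = np.meshgrid(x, x)
aperture = (X**2 + Y**2 <= 1.0).astype(float)  # circular pupil amplitude

def psf(alpha=20.0, psi=5.0):
    phase = alpha * (X**3 + Y**3) + psi * (X**2 + Y**2)
    pupil = aperture * np.exp(1j * phase)
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
    return np.abs(field) ** 2

# Compare psf(alpha=20, psi=0) with psf(alpha=20, psi=5): the cubic mask keeps
# the PSF nearly defocus-invariant, at the cost of a blur removed by deconvolution.
```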
Djordjevic, Sophie. "Profondeur de pénétration du champ électromagnétique dans des couches minces d'YBa2Cu3O7 et effet du temps de vie : étude par transmission dans l'infrarouge lointain." Paris 6, 1998. http://www.theses.fr/1998PA066468.
Full textVoisin, Yvon. "Détermination d'un critère pour la mise au point automatique des caméras pour des scènes à faible profondeur de champ : contribution à la mise au point des microscopes." Besançon, 1993. http://www.theses.fr/1993BESA2016.
Full textHarms, Fabrice. "Imagerie des tissus à haute résolution en profondeur par tomographie de cohérence optique plein champ : approches instrumentales et multimodales pour l'application au diagnostic per-opératoire du cancer." Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066702/document.
Among medical imaging techniques, optical imaging methods have developed significantly during the past decades. More specifically, among recently proposed optical imaging techniques, Full-Field Optical Coherence Tomography (FFOCT) provides unique capabilities, in particular regarding resolution and instrumental simplicity, which make it possible to consider its application to cancer diagnosis. This thesis describes the design and implementation of new FFOCT devices for use in a clinical context, targeting the improvement and optimization of the technique. Two major development efforts were carried out: a translational part, comprising the development of an FFOCT microscope adapted to clinical use for the intraoperative diagnosis of cancer on tissue biopsies, and the assessment of its diagnostic performance for several clinical cases (the intraoperative diagnosis of breast tissue, of brain resections, and the preoperative qualification of corneal grafts); and a research part, mainly instrumental, targeting the improvement of the diagnostic performance of the technique, based on new multimodal (fluorescence contrast, dynamic contrast) and multiscale approaches, and on the miniaturization of the device through the development of a handheld rigid endoscope for clinical use.
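A common way to extract the tomographic image in full-field OCT is phase-stepped demodulation of the interferograms; the sketch below shows the generic four-phase variant, as an illustration of the principle rather than the specific acquisition scheme of the devices built in this thesis:

```python
# Generic four-phase demodulation used in full-field OCT: with interferograms
# I1..I4 acquired at reference-mirror phase steps of pi/2, the tomographic
# amplitude is proportional to sqrt((I1-I3)^2 + (I2-I4)^2). Illustrative only.
import numpy as np

def ffoct_amplitude(i1, i2, i3, i4):
    i1, i2, i3, i4 = (np.asarray(x, dtype=np.float64) for x in (i1, i2, i3, i4))
    return np.sqrt((i1 - i3) ** 2 + (i2 - i4) ** 2)
```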
Nardon, Eric. "Contrôle des instabilités de bord par perturbations magnétiques résonnantes." Palaiseau, Ecole polytechnique, 2007. http://www.theses.fr/2007EPXX0022.
Full textChebbo, Manal. "Simulation fine d'optique adaptative à très grand champ pour des grands et futurs très grands télescopes." Thesis, Aix-Marseille, 2012. http://www.theses.fr/2012AIXM4733/document.
Refined simulation tools for wide-field AO systems on ELTs present new challenges. Increasing the number of degrees of freedom makes the standard simulation codes unusable due to the huge number of operations to be performed at each step of the AO loop process. The classical matrix inversion and the VMM have to be replaced by a cleverer iterative resolution of the Least Squares or Minimum Mean Square Error criterion. For this new generation of AO systems, the concepts themselves become more complex: data fusion coming from multiple LGS and NGS will have to be optimized; mirrors covering the whole field of view will have to be coupled, using split or integrated tomography schemes, with dedicated mirrors inside the scientific instrument itself; and differential pupil and/or field rotations will have to be considered. All these new inputs should be carefully simulated, analysed and quantified in terms of performance before any implementation in AO systems. For these reasons I developed, in collaboration with ONERA, a full simulation code based on the iterative solution of linear systems with many parameters (sparse matrices). On this basis, I introduced new concepts of filtering and data fusion to effectively manage modes such as tip, tilt and defocus in the entire tomographic reconstruction process. The code will also eventually help to develop and test complex control laws that have to manage a combination of an adaptive telescope and a post-focal instrument including dedicated DMs.
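The replacement of explicit matrix inversion by an iterative solve can be illustrated with a conjugate-gradient solution of the regularized normal equations on a sparse system; toy sizes and the Tikhonov weight below are illustrative, not the actual tomographic reconstructor of the thesis:

```python
# Sketch: replace explicit inversion by an iterative conjugate-gradient solve
# of the normal equations A^T A x = A^T b with a sparse interaction matrix.
# Toy sizes and regularization weight; not the thesis's actual reconstructor.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

A = sp.random(2000, 500, density=0.01, format='csr', random_state=0)
b = np.random.default_rng(0).standard_normal(2000)

# Apply A^T A (+ small Tikhonov term) without ever forming it explicitly.
AtA = LinearOperator((500, 500), matvec=lambda x: A.T @ (A @ x) + 1e-3 * x)
x, info = cg(AtA, A.T @ b)  # info == 0 on convergence
```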
Zanuttini, Antoine. "Du photoréalisme au rendu expressif en image 3D temps réel dans le jeu vidéo : programmation graphique pour la profondeur de champ, la matière, la réflexion, les fluides et les contours." Paris 8, 2012. http://octaviana.fr/document/171326563#?c=0&m=0&s=0&cv=0.
This study seeks to go beyond standardized video game aesthetics by adding new depiction techniques to real-time digital imagery. Photorealistic rendering is often limited in control and flexibility for an artist who searches beyond fidelity to the real. Achieving credibility and immersion will then often require stylization, for the view to be more convincing and aesthetic. Expressive rendering goes further by conveying the artist's personal vision, while being based on real-life attributes and phenomena, altering them at the same time. We show that photorealism and expressive rendering join and complement each other in numerous respects. Three themes related to photorealism are presented, then some original techniques are introduced. The theme of depth of field leads us to consider the shape of the virtual camera's lens through the Hexagonal Summed Area Table algorithm. We then look at material, light and especially ambient and specular reflections, and the importance of parallax correction with regard to them. Our third theme is the rendering of fluid motion and the advection of textures according to the flow, for easy and efficient detail addition. These three subjects are then integrated into expressive rendering and used as expression tools for the artist, through the creation of a dream effect, of screen-space rendered fluids, and of hatched material shading. Finally, we show our creations specifically dedicated to expressive rendering and stroke stylization.
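The summed-area table (integral image) is the building block behind SAT-based depth-of-field blurs such as the hexagonal variant mentioned above; the sketch below shows the rectangular version for constant-time box averaging (the hexagonal algorithm of the thesis combines several skewed tables):

```python
# Summed-area table (integral image) for constant-time box averaging, the
# building block of SAT-based depth-of-field blurs (rectangular version shown;
# the hexagonal variant combines several skewed tables).
import numpy as np

def sat(img):
    return img.cumsum(axis=0).cumsum(axis=1)

def box_mean(table, y0, x0, y1, x1):
    """Mean over the inclusive rectangle (y0, x0)-(y1, x1); requires y0, x0 > 0."""
    s = (table[y1, x1] - table[y0 - 1, x1]
         - table[y1, x0 - 1] + table[y0 - 1, x0 - 1])
    return s / ((y1 - y0 + 1) * (x1 - x0 + 1))
```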
Corbat, Lisa. "Fusion de segmentations complémentaires d'images médicales par Intelligence Artificielle et autres méthodes de gestion de conflits." Thesis, Bourgogne Franche-Comté, 2020. http://www.theses.fr/2020UBFCD029.
Nephroblastoma is the most common kidney tumour in children, and its diagnosis is based exclusively on imaging. This work, which is the subject of our research, is part of a larger project: the European project SAIAD (Automated Segmentation of Medical Images Using Distributed Artificial Intelligence). The aim of the project is to design a platform capable of performing different automatic segmentations from source images using Artificial Intelligence (AI) methods, and thus obtaining a faithful three-dimensional reconstruction. In this context, work carried out in a previous thesis of the research team led to the creation of a segmentation platform. It allows the segmentation of several structures individually, by methods such as Deep Learning, and more particularly Convolutional Neural Networks (CNNs), as well as Case-Based Reasoning (CBR). However, it is then necessary to automatically fuse the segmentations of these different structures in order to obtain a complete, relevant segmentation. When aggregating these structures, contradictory pixels may appear. These conflicts can be resolved by various methods, based or not on AI, and are the subject of our research. First, we propose a fusion approach not centered on AI, using the combination of six different methods based on different imaging and segmentation criteria. In parallel, two other fusion methods are proposed: one using a CNN coupled with CBR, and the other a CNN using a specific existing segmentation learning method. These different approaches were tested on a set of 14 nephroblastoma patients and demonstrated their effectiveness in resolving conflicting pixels and their ability to improve the resulting segmentations.
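A simple baseline for resolving conflicting pixels when aggregating segmentations is a per-pixel majority vote; the sketch below illustrates this baseline rule (the thesis combines several criteria and learning-based methods beyond it):

```python
# Per-pixel majority vote across candidate segmentations: a baseline
# conflict-resolution rule, not the full fusion strategy of the thesis.
import numpy as np

def majority_vote(label_maps):
    """label_maps: list of (h, w) integer label images of the same shape."""
    stack = np.stack(label_maps)                 # (n, h, w)
    n_labels = stack.max() + 1
    votes = np.zeros((n_labels,) + stack.shape[1:], dtype=np.int32)
    for lbl in range(n_labels):
        votes[lbl] = (stack == lbl).sum(axis=0)  # count votes per label
    return votes.argmax(axis=0)                  # winning label per pixel
```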
Sevrin, Loic. "Mesure et suivi d'activité de plusieurs personnes dans un Living Lab en vue de l'extraction d'indicateurs de santé et de bien-être." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSE1162/document.
The ageing of the population is a global phenomenon that comes with an increase in the number of patients suffering from chronic diseases. It forces us to rethink healthcare by bringing health monitoring and care into the home and the city. Considering activity as a visible indication of health status, this thesis seeks to provide technological means to monitor several people's activities in a living lab composed of an apartment and the surrounding city. Indeed, maintaining substantial physical activity, in particular social activity, accounts for an important part of a person's good health status. Hence, it must be studied, as well as the ability to perform the activities of daily living. This study enabled the implementation of a platform for collaborative design and full-scale experimentation concerning healthcare at home and in the city: the INL living lab. The latter was the theatre of first experiments, which highlighted the living lab's ability to perform activity data fusion from a set of heterogeneous sensors, and also to evolve by integrating new technologies and services. The collaborative scenarios studied enable a first approach to collaboration analysis by detecting the simultaneous presence of several people in the same room. These preliminary results are encouraging and will be completed by more precise measurements involving more sensors in the coming months.
Manceau, Jérôme. "Clonage réaliste de visage." Thesis, CentraleSupélec, 2016. http://www.theses.fr/2016SUPL0004/document.
3D face clones can be used in many areas, such as Human-Computer Interaction, and as preprocessing in applications such as emotion analysis. However, such clones should have a well-modeled facial shape while keeping the specificities of individuals, and they should be semantic. A clone is semantic when we know the position of the different parts of the face (eyes, nose...). In our technique, we use an RGB-D sensor to capture the specificities of individuals and a 3D Morphable Face Model to model the facial shape. For the reconstruction of the shape, we reverse the process classically used: we first perform fitting and then data fusion. For each depth frame, we keep the suitable parts of the data, called patches. Depending on the location, we merge either sensor data or 3D Morphable Face Model data. For the reconstruction of the texture, we use shape and texture patches to preserve the person's characteristics. They are detected using the depth frames of an RGB-D sensor. The tests we performed show the robustness and the accuracy of our method.
Scamps, Guillaume. "Effet de l'appariement sur la dynamique nucléaire." Caen, 2014. http://www.theses.fr/2014CAEN2009.
Pairing correlations are an essential component in the description of atomic nuclei. The effects of pairing on the static properties of nuclei are now well known. In this thesis, the effect of pairing on nuclear dynamics is investigated. Theories that include pairing are benchmarked in a model case. The TDHF+BCS theory turns out to be a good compromise between the physics taken into account and the numerical cost, and was therefore retained for realistic calculations. Nevertheless, applying pairing in the BCS approximation may induce new problems due to (1) the breaking of particle-number symmetry and (2) the non-conservation of the continuity equation. These difficulties are analysed in detail and solutions are proposed. In this thesis, a 3-dimensional TDHF+BCS code is developed to simulate nuclear dynamics. Applications to giant resonances show that pairing modifies only the low-lying peaks; the high-lying collective components are only affected by the initial conditions. An exhaustive study of giant quadrupole resonances with the TDHF+BCS theory is performed on more than 700 spherical or deformed nuclei. It is shown that the TDHF+BCS theory reproduces the collective energy of the resonance well. After validation in the small-amplitude limit, the approach was applied to study nucleon transfer in heavy-ion reactions. A new method to extract transfer probabilities is introduced, and it is demonstrated that pairing significantly increases the two-nucleon transfer probability.
Ettoumi, Wahb. "Dynamique hamiltonienne et phénomènes de relaxation: d'un modèle champ moyen au confinement magnétique." Phd thesis, Ecole Polytechnique X, 2013. http://tel.archives-ouvertes.fr/tel-00925491.
Thomas, Catherine. "Mesures du gradient accélérateur maximum dans des cavités supraconductrices en régime impulsionnel à 3 GHz." Phd thesis, Université Paris Sud - Paris XI, 2000. http://tel.archives-ouvertes.fr/tel-00006564.
Full textColin, Muriel. "Modélisation d'un réflectomètre mode X en vue de caractériser les fluctuations de densité et de champ magnétique : applications aux signaux de Tore Supra." Nancy 1, 2001. http://www.theses.fr/2001NAN10181.
This work deals with the interaction between a probing wave and plasma fluctuations. For all probing wave polarizations in reflectometry experiments, this interaction can be described by a Mathieu equation. In order to check the validity domain of our model, we developed software with new numerical schemes (both in O-mode and X-mode). After these validations, the ratio of the wave amplitude backscattered by density fluctuations to that backscattered by magnetic fluctuations was evaluated, and we confirmed that density fluctuations are dominant in most cases in tokamak experiments. The accuracy of the numerical methods is high enough to simulate the reflectometry experiments. The contribution of coherent fluctuations in 1D is now well determined, and a new connection between the spectrum of the phase variations and the turbulence spectrum has been shown.
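Since the wave-fluctuation interaction reduces to a Mathieu equation, it can be integrated numerically in its standard form y'' + (a - 2q cos 2t) y = 0; the parameter values below are illustrative only, not those of the Tore Supra simulations:

```python
# Numerical integration of Mathieu's equation, y'' + (a - 2 q cos 2t) y = 0,
# the standard form of the equation mentioned above. Illustrative parameters.
import numpy as np
from scipy.integrate import solve_ivp

a, q = 1.0, 0.2

def mathieu(t, s):
    y, dy = s
    return [dy, -(a - 2 * q * np.cos(2 * t)) * y]

sol = solve_ivp(mathieu, (0.0, 50.0), [1.0, 0.0], max_step=0.01)
# sol.y[0] gives the wave amplitude along the propagation variable t
```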
Dey, Nicolas. "Etude de la formation de l'image d'un objet microscopique 3D translucide - Application à la microscopie." Phd thesis, Université du Maine, 2002. http://tel.archives-ouvertes.fr/tel-00003309.
Full textFil, Nicolas. "Caractérisation et modélisation des propriétés d’émission électronique sous champ magnétique pour des systèmes RF hautes puissances sujets à l’effet multipactor." Thesis, Toulouse, ISAE, 2017. http://www.theses.fr/2017ESAE0025/document.
Full textSpace communication payloads, as well as magnetic confinement fusion devices, among other applications, are affected by the multipactor effect. This undesirable phenomenon can appear inside high-frequency (HF) components under vacuum and lead to an increase in the electron density in the vacuum within the system. The multipactor effect can thus disturb the wave signal and trigger local temperature increases or breakdowns. This PhD research aims to improve our understanding and prediction of the multipactor effect. The multipactor phenomenon is a resonant process which can appear above a certain RF power threshold. To determine this power threshold, experimental tests and/or simulations are commonly used. We carried out a study to evaluate the sensitivity of the multipactor power threshold to the total electron emission yield (TEEY). Two critical parameters were identified: the first cross-over energy, and the energies between the first cross-over energy and the energy of maximum yield. In some situations, the HF components are subjected to DC magnetic fields which might affect the electron emission properties and hence the multipactor power threshold. Current multipactor simulation codes do not take into account the effect of the magnetic field on the TEEY. A new experimental setup specially designed to investigate this effect was developed during this work. Our new experimental setup and the associated TEEY measurement technique were analysed and optimized thanks to measurements and SPIS simulations. We used the setup to study the influence of a magnetic field perpendicular to the sample surface on the TEEY of copper. We have demonstrated that the magnetic field affects the copper TEEY, and hence the multipactor power threshold
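To make the role of the first cross-over energy concrete, here is a minimal Python sketch using Vaughan's empirical electron-emission model; the model choice and the parameter values (delta_max, e_max_eV, e0_eV) are illustrative textbook assumptions, not the thesis's measured copper data or its simulation codes.

    # A minimal sketch of how a TEEY curve determines the first cross-over
    # energy (the lowest impact energy at which the yield reaches 1, one of
    # the two critical multipactor parameters identified in this thesis).
    # The curve uses Vaughan's empirical model; parameter values below are
    # illustrative, not the thesis's measured copper data.
    import math

    def teey_vaughan(energy_eV, delta_max=2.2, e_max_eV=300.0, e0_eV=12.5):
        """Vaughan-type empirical total electron emission yield."""
        w = (energy_eV - e0_eV) / (e_max_eV - e0_eV)
        if w <= 0.0:
            return 0.0
        k = 0.56 if w < 1.0 else 0.25  # commonly quoted shape exponents
        return delta_max * (w * math.exp(1.0 - w)) ** k

    def first_crossover(lo=12.5, hi=300.0, tol=0.01):
        """Bisection for the lowest energy where the TEEY reaches 1."""
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if teey_vaughan(mid) < 1.0:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    print(f"first cross-over energy ~ {first_crossover():.1f} eV")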
Vu, Dinh Toan. "Unification du système de hauteur et estimation de la structure lithosphérique sous le Vietnam utilisant la modélisation du champ de gravité et du quasigéoïde à haute résolution." Thesis, Toulouse 3, 2021. http://www.theses.fr/2021TOU30050.
Full textThe goal of this work was twofold. The first part was devoted to determining the size and physical shape of the Earth in Vietnam through a local gravimetric quasigeoid model. The second part aimed to better constrain the Earth's interior structure beneath Vietnam by determining Moho and Lithosphere-Asthenosphere Boundary (LAB) depth models. For the first objective, a high-resolution gravimetric quasigeoid model for Vietnam and its surrounding areas was determined based on new land gravity data combined with fill-in data where no gravity data existed. The resulting quasigeoid model was evaluated using 812 GNSS/levelling points in the study region. This comparison indicates that the quasigeoid model has a standard deviation of 9.7 cm and a mean bias of 50 cm. This new local quasigeoid model for Vietnam represents a significant improvement over the global models EIGEN-6C4 and EGM2008, which have standard deviations of 19.2 and 29.1 cm, respectively, when compared to the GNSS/levelling data. An essential societal and engineering application of the gravimetric quasigeoid is GNSS levelling, and a vertical offset model for Vietnam and its surrounding areas was determined for this purpose from the GNSS/levelling points and the gravimetric-only quasigeoid model. The offset model was evaluated by cross-validation against the GNSS/levelling data. Results indicate that the offset model has a standard deviation of 5.9 cm in the absolute sense. Thanks to this offset model, GNSS levelling can be carried out over most of Vietnam's territory in compliance with third-order levelling requirements, while the accuracy requirements for fourth-order levelling networks are met for the entire country. To unify the height system towards the International Height Reference Frame (IHRF), the zero-height geopotential value for the Vietnam Local Vertical Datum W_0^LVD was determined based on two approaches: 1) using high-quality GNSS/levelling data and the estimated gravimetric quasigeoid model; 2) using the Geodetic Boundary Value Problem (GBVP) approach based on the GOCE global gravity field model enhanced with terrestrial gravity data. This geopotential value can be used to connect the height system of Vietnam with those of neighbouring countries. Moreover, the GBVP approach was also used for direct determination of the gravity potential on the surface at three GNSS Continuously Operating Reference Station (CORS) stations at epoch 2018.0 in Vietnam. Based on time series of the vertical component derived from these GNSS observations as well as InSAR data, temporal variations in the geopotential were also estimated at these permanent GNSS stations. This enables monitoring of the vertical datum and detection of possible deformation. These stations may thus contribute to increasing the density of reference points in the IHRF for this region. For the second objective, the local quasigeoid model was first converted to a geoid. Then, high-resolution Moho and LAB depth models were determined beneath Vietnam under the local isostatic hypothesis, using the geoid height derived from the estimated geoid, elevation data and thermal analysis. From the new land gravity data, complete grids and maps of gravity anomalies (free-air, Bouguer and isostatic) were determined for the whole of Vietnam. The Moho depth was also computed by gravity inversion of the Bouguer gravity anomaly grid. All new models are computed at 1' resolution.
The resulting Moho and LAB depth models were evaluated using seismic data as well as global and local lithospheric models available in the study region. [...]
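The GNSS/levelling evaluation described in this abstract rests on the basic relation between the ellipsoidal height h, the normal height H and the height anomaly ζ. The Python sketch below shows that computation; the numerical values are made up for illustration and stand in for the thesis's 812 benchmarks.

    # Minimal sketch of GNSS-levelling validation of a quasigeoid model:
    # at each benchmark the misfit is eps = h - H - zeta, with h the GNSS
    # ellipsoidal height, H the levelled normal height and zeta the
    # modelled height anomaly. The numbers are made up for illustration;
    # the thesis uses 812 real GNSS/levelling points.
    import numpy as np

    h_gnss  = np.array([25.314, 102.880, 48.122])   # ellipsoidal heights (m)
    H_level = np.array([23.150, 100.705, 45.960])   # normal heights (m)
    zeta    = np.array([ 2.101,   2.114,  2.097])   # height anomalies (m)

    eps = h_gnss - H_level - zeta                   # per-point misfits (m)
    print(f"mean bias: {100 * eps.mean():.1f} cm")
    print(f"std dev:   {100 * eps.std(ddof=1):.1f} cm")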
Levy, Yoann. "Etude numérique et modélisation des instabilités hydrodynamiques dans le cadre de la fusion par confinement inertiel en présence de champs magnétiques auto-générés." Phd thesis, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-00742130.
Full textBchir, Aroussia. "Brian De Palma : une esthétique de la violence?" Thesis, Paris 1, 2016. http://www.theses.fr/2016PA01H321.
Full textAn aesthetic approach to Brian De Palma's cinematographic work. The issue revolves around the aesthetics of editing and the violence of the image. First, the text questions the cutting mode favoured by Brian De Palma, stressing the importance of the sequence shot and the split screen. The use of the sequence shot is examined in particular through the issue of the lack of vision, a phenomenon considered central. A second part of this thesis is devoted to the study of the characters: anti-heroes and dropouts. Emphasis is particularly placed on the female body, and the voyeuristic gaze brings us back to the question of cutting in De Palma's work. How does Brian De Palma use the eye to see violence? What does "looking" mean for Brian De Palma? How are De Palma's voyeuristic devices constructed from film language? Brian De Palma's cinema also proves to be clever and complex, between classicism and modernism. How does Brian De Palma draw on Hitchcock's work in order to offer a new design? How does he assault the image to reach its unseen part?
Chebbo, Manal. "Simulation fine d'optique adaptative à très grand champ pour des grands et futurs très grands télescopes." Phd thesis, Aix-Marseille Université, 2012. http://tel.archives-ouvertes.fr/tel-00742873.
Full textBarbut, Jean-Marc. "Texturation d'YBa2Cu3O(7-[delta]) par fusion de zone sous champ magnétique : détermination par mesure de courant critique de son diagramme de phase dans le plan [H,[THETA]] à 77 K : mise en évidence par mesures résistives de l'existence en champ nul d'une transition du 1er ordre dans l'état supraconducteur." Grenoble 1, 1994. http://www.theses.fr/1994GRE10049.
Full textPellan, Yves. "Etude de la metastabilite de la transition supraconductrice de films divises d'indium sous champ magnetique parallele et perpendiculaire." Rennes, INSA, 1987. http://www.theses.fr/1987ISAR0007.
Full textNardon, Eric. "Modélisation non-linéaire du transport en présence d'instabilité MHD du plasma périphérique de tokamak." Phd thesis, Ecole Polytechnique X, 2007. http://pastel.archives-ouvertes.fr/pastel-00003137.
Full textVorobiov, Serguei͏̈. "Observations de la méthode du Crabe de 1996 à 2002 avec le télescope à effet Tcherenkov atmosphérique CAT et mise en oeuvre d'une nouvelle méthode d'analyse des gerbes atmosphériques." Palaiseau, Ecole polytechnique, 2004. http://www.theses.fr/2004EPXX0001.
Full text