Theses on the topic "Calcul quantique par mesure"
Create an accurate citation in APA, MLA, Chicago, Harvard and other styles
Consult the top 50 theses for your research on the topic "Calcul quantique par mesure".
You can also download the full text of the academic publication in PDF format and read its abstract online whenever it is available in the metadata.
Explore theses on a wide variety of disciplines and organize your bibliography correctly.
Booth, Robert Ivan. "Measurement-based quantum computation beyond qubits". Electronic Thesis or Diss., Sorbonne université, 2022. http://www.theses.fr/2022SORUS022.
Measurement-based quantum computation (MBQC) is an alternative model for quantum computation, which makes careful use of the properties of the measurement of entangled quantum systems to perform transformations on an input. It differs fundamentally from the standard quantum circuit model in that measurement-based computations are naturally irreversible. This is an unavoidable consequence of the quantum description of measurements, but begets an obvious question: when does an MBQC implement an effectively reversible computation? The measurement calculus is a framework for reasoning about MBQC with the remarkable feature that every computation can be related in a canonical way to a graph. This allows one to use graph-theoretical tools to reason about MBQC problems, such as the reversibility question, and the resulting study of MBQC has had a large range of applications. However, the vast majority of the work on MBQC has focused on architectures using the simplest possible quantum system: the qubit. It remains an open question how much of this work can be lifted to other quantum systems. In this thesis, we begin to tackle this question, by introducing analogues of the measurement calculus for higher- and infinite-dimensional quantum systems. More specifically, we consider the case of qudits when the local dimension is an odd prime, and of continuous-variable systems familiar from the quantum physics of free particles. In each case, a calculus is introduced and given a suitable interpretation in terms of quantum operations. We then relate the resulting models to the standard circuit models, using graph-theoretical tools called "flow" conditions.
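For readers new to the measurement-based model that this abstract (and several entries below) refers to, the following minimal sketch — written for this listing, not taken from any of the cited theses — illustrates the elementary step behind the determinism question: an input qubit is entangled with a |+> ancilla by a CZ gate, the input is measured in the X basis, and the outcome-dependent byproduct correction X^s makes the result deterministic. This is the mechanism that "flow" conditions generalize to whole graph states.

```python
import numpy as np

# Illustrative sketch only (not code from the cited theses): the one-qubit MBQC
# teleportation step. CZ-entangle the input with a |+> ancilla, measure the
# input in the X basis, and apply the outcome-dependent byproduct correction
# X^s. The corrected output is H|psi> for both outcomes.

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
CZ = np.diag([1, 1, 1, -1])
plus = np.array([1, 1]) / np.sqrt(2)
minus = np.array([1, -1]) / np.sqrt(2)

rng = np.random.default_rng(0)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)   # arbitrary input state
psi /= np.linalg.norm(psi)

state = CZ @ np.kron(psi, plus)                      # entangle input with ancilla

for s, basis in enumerate([plus, minus]):            # the two X-basis outcomes
    # project qubit 1 onto the measurement outcome, keep qubit 2 (unnormalized)
    out = np.tensordot(basis.conj(), state.reshape(2, 2), axes=([0], [0]))
    out /= np.linalg.norm(out)
    corrected = np.linalg.matrix_power(X, s) @ out   # byproduct correction X^s
    # up to a global phase, both outcomes yield H|psi>
    overlap = abs(np.vdot(H @ psi, corrected))
    print(f"outcome s={s}: |<H psi | corrected>| = {overlap:.6f}")
```

Both outcomes reproduce H|psi> after correction, which is the sense in which a measurement pattern with flow implements an effectively reversible computation despite the intrinsic randomness of individual measurements.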
Lacour, Xavier. "Information Quantique par Passage Adiabatique : Portes Quantiques et Décohérence". Phd thesis, Université de Bourgogne, 2007. http://tel.archives-ouvertes.fr/tel-00180890.
The first part of this thesis presents adiabatic processes allowing the implementation of quantum logic gates, the elementary building blocks of quantum computers, through the interaction of pulsed laser fields with atoms. The use of adiabatic techniques allows robust implementations, i.e. insensitive to fluctuations of the experimental parameters. The processes described in this thesis only require precise control of the polarizations and relative phases of the laser fields. They allow the implementation of a universal set of quantum gates, from which any other quantum gate can be obtained by combination.
The second part of this thesis concerns the effects of decoherence by dephasing on adiabatic passage. The transition probability formula for a two-level system taking these decoherence effects into account is established. This formula is valid in the different regimes, diabatic and adiabatic, and makes it possible to determine the parameters of the elliptical trajectories that optimize the population transfer.
Ducher, Manoj. "Fractionnement isotopique du zinc à l'équilibre par calcul ab initio". Electronic Thesis or Diss., Paris 6, 2017. http://www.theses.fr/2017PA066318.
Isotopic compositions are used to study the biogeochemical cycle of Zn, which is greatly impacted by anthropic activities. However, the interpretation of the measurements performed on natural or synthetic samples requires the knowledge of Zn isotope properties in equilibrium conditions (as reference) and the understanding of the mechanisms that are at the origin of the isotopic composition variations. In this work, we determined, by performing quantum calculations, equilibrium Zn isotope fractionation constants in various phases including solids and liquids. We highlighted the crystal-chemical parameters controlling the isotopic properties: Zn interatomic force constant, Zn-first neighbours bond lengths and the electronic charge on atoms involved in the bonding. We carried out a methodological development in order to calculate isotopic properties in liquid phases from molecular dynamics trajectories at a reduced computational cost. We showed through the modelling of aqueous Zn that a reasonable description of van der Waals forces using a non-local exchange-correlation functional is required to stabilise the experimentally observed hexaaquo zinc complex over other complexes at room temperature. This work provides a consistent database of equilibrium Zn isotope fractionation constants for experimental works.
Gorczyca, Agnès. "Caractérisation de catalyseurs métalliques supportés par spectroscopie XANES, apports du calcul quantique dans l'interprétation des spectres expérimentaux". Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENI062/document.
The study of metallic nanoclusters supported on oxides is of paramount fundamental and technological importance, particularly in the field of energy. Nanoparticles based on platinum supported on gamma alumina are widely used as highly dispersed heterogeneous catalysts, especially under a reducing hydrogen atmosphere. Their reactivity and selectivity are intimately related to the local geometry and the electronic density of the active sites. These are particularly difficult to define, given the very small size of the studied particles (about 0.8 nm in diameter). XANES (X-Ray Absorption Near Edge Structure) spectroscopy, which requires synchrotron radiation, is one of the most appropriate tools to study these systems at the atomic scale, especially in situ. Indeed, XANES spectra are influenced by the geometry and symmetry of the atoms' local environment (especially angles between bonds), the degree of oxidation, the bond types involved, and the electronic structure of the system. All these factors are nevertheless difficult to differentiate and even to interpret. It is therefore impossible to infer accurately the structure of the metal particles from experiment alone, without comparison with simulated spectra; the establishment of theoretical models becomes necessary. We implement an approach that combines high-resolution in situ XANES experiments and quantum simulations, the latter aimed at proposing relevant structural models to quantify the reactivity of the particles and at calculating spectral characteristics for comparison with experiment. The identification of the cluster morphologies, the metal-support interaction and the hydrogen coverage is made possible by combining experiments and quantum calculations. The library of existing monometallic Pt particle models supported on gamma alumina, with or without adsorbed hydrogen, is refined. New models considering the two main surfaces of gamma alumina, the particle size and hydrogen adsorption are developed. This extended library of models enabled a study of the effect of particle size, morphology, electronic structure, the different alumina faces, and the hydrogen coverage on the signature of XANES spectra. This first study of monometallic platinum catalysts concludes with the discrimination of the morphologies, and especially with the quantification of the hydrogen coverage of the particles for each experimental condition of temperature and hydrogen pressure. Then, models of bimetallic platinum-tin particles supported on the (100) gamma alumina face are built with hydrogen adsorption. These models provide insights into the effect of tin on the morphology, the electronic properties and the interaction of these clusters with the support and with hydrogen. Different compositions were explored, which provided information on the dilution of platinum by tin. The adsorption of hydrogen was then studied on Pt10Sn3 clusters supported on the (100) face of alumina. Although many parameters are not yet included in these models, the comparison with experiment already provides a first approximation to the description of bimetallic systems.
Majdabadino, Massoud. "Contribution au calcul des champs dans les fours microondes chargés". Toulouse, INPT, 1992. http://www.theses.fr/1992INPT039H.
Ouraou, Ahmimed. "Mesure des fonctions de structure du proton par diffusion inélastique de muons sur cible d'hydrogène et tests de la chromodynamique quantique". Paris 11, 1988. http://www.theses.fr/1988PA112118.
Rogalewicz, Gilard Francoise. "Étude des interactions entre zinc(II) et acides aminés en phase gazeuse par spectrométrie de masse et calcul quantique". Paris 11, 1999. http://www.theses.fr/1999PA112358.
Mappe, Fogaing Irene. "Mesures par spectrométrie laser des flux de N2O et CH4 produits par les sols agricoles et viticoles". Thesis, Reims, 2013. http://www.theses.fr/2013REIMS017/document.
Since the industrial revolution, emissions of greenhouse gases (GHG) responsible for global warming, mainly of anthropogenic origin, have continued to increase. Among these gases, the main ones concerned are carbon dioxide (CO2), nitrous oxide (N2O) and methane (CH4). In this thesis, we focus mainly on N2O and CH4 which, despite their smaller quantities in the atmosphere, have a global warming potential higher than that of CO2. These anthropogenic gas emissions are sufficient to cause climatic change in the short or medium term. It is therefore necessary to understand the phenomena linked to these emissions. Many European networks such as Euroflux, CarboEuroflux, NitroEurope, CarboEurope, GHG-Europe and ICOS have actively contributed to the understanding and quantification of greenhouse gas emissions. However, considerable uncertainty remains about the inter-annual balance of these emissions. To better capture the temporal variability of N2O and CH4 emissions, it is necessary to measure continuously over time, across ecosystems and soil types, and to have efficient measurement tools. The GSMA, with its expertise in instrumentation, has developed a spectrometer using a quantum cascade laser, QCLAS (Quantum Cascade Laser Absorption Spectrometer), designed to measure in situ the gas fluxes produced by the soil. As in any experiment, QCLAS measurements may be contaminated by noise, which can bias the flux determination. This is why we focus on signal processing methods such as the wavelet transform and singular value decomposition, with the purpose of extracting useful signal information and significantly improving the signal-to-noise ratio and the dispersion of the measurements. This thesis is organized in three main parts. The first part is devoted to conventional techniques for gas measurements, where we introduce the QCLAS instrument; we then examine three usual flux measurement techniques, namely closed chambers, eddy correlation and relaxed eddy accumulation. The second part focuses on the different procedures and processing methods used to optimize the experimental measurements. The last part presents the various measurement campaigns made with the QCLAS. These applications demonstrate the robustness of the QCLAS as well as its ability to perform field measurements.
Virchaux, Marc. "Mesure de fonctions de structure par diffusion inelastique de muons sur cible de carbone : tests de la chromodynamique quantique". Paris 7, 1988. http://www.theses.fr/1988PA077207.
Texto completoVirchaux, Marc. "Mesure de fonctions de structure par diffusion inélastique de muons sur cible de carbone tests de la chromodynamique quantique /". Grenoble 2 : ANRT, 1988. http://catalogue.bnf.fr/ark:/12148/cb37619160w.
Texto completoBouchendira, Rym. "Mesure de l'effet de recul de l'atome de rubidium par interférométrie atomique : nouvelle détermination de la constante de structure fine pour tester l'électrodynamique quantique". Paris 6, 2012. http://www.theses.fr/2012PA066146.
Texto completoModica, Vincent. "Développement d'une mesure quantitative de concentration d'espèces dopées par fluorescence induite par laser : application aux conditions moteur". Paris 6, 2006. http://www.theses.fr/2006PA066068.
Texto completoSayrin, Clément. "Préparation et stabilisation d'un champ non classique en cavité par rétroaction quantique". Phd thesis, Université Pierre et Marie Curie - Paris VI, 2011. http://tel.archives-ouvertes.fr/tel-00654082.
Texto completoDucher, Manoj. "Fractionnement isotopique du zinc à l'équilibre par calcul ab initio". Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066318/document.
Cheaytani, Jalal. "Calcul par éléments finis des pertes supplémentaires dans les motorisations performantes". Thesis, Lille 1, 2016. http://www.theses.fr/2016LIL10007/document.
The stray load losses (SLL) in electrical machines represent a non-negligible contribution to the total losses and are a key point for an accurate evaluation of the energy efficiency of the considered device. The aim of this work is to investigate the SLL, and to determine and quantify their origins using precise models of the studied motors. The SLL calculation model developed in this thesis is based on the standardized input-output test, which requires models for the core losses and the harmonic eddy-current losses. The choice was made to calculate the losses in the post-processing step of a finite element code. These models were tested, first, on a permanent magnet synchronous machine (PMSM), where the influence of the carrier harmonics is studied. Then, the SLL were calculated for a 500 kW induction motor and for two 6 kW motors with skewed and non-skewed rotor bars. Several studies have been performed to investigate the origins of the SLL, such as the end-region leakage fluxes, the zig-zag leakage fluxes and the skew leakage fluxes, and to quantify their contributions. The comparison between the simulation results and those measured on the PMSM and on both 6 kW motors shows good agreement. This demonstrates that the core, eddy-current and stray load losses can be accurately estimated using the proposed post-processing calculation method, for different types of electrical machines under different operating conditions.
Favre, Jacques. "Etude des défauts d'irradiation par mesure in-situ de l'effet Hall dans les semi-conducteurs à faible largeur de bande interdite". Palaiseau, Ecole polytechnique, 1989. http://www.theses.fr/1989EPXX0004.
Texto completoMaioli, Paolo. "Détection non destructive d'un atome unique par interaction dispersive avec un champ mésoscopique dans une cavité". Phd thesis, Université Pierre et Marie Curie - Paris VI, 2004. http://tel.archives-ouvertes.fr/tel-00007691.
Texto completoOuraou, Ahmimed. "Mesure des fonctions de structure du proton par diffusion inélastique de muons sur cible d'hydrogène et tests de la chromodynamique quantique". Grenoble 2 : ANRT, 1988. http://catalogue.bnf.fr/ark:/12148/cb376172402.
Texto completoGiroud, Thomas. "Mesure et calcul des contraintes résiduelles dans les pièces injectées en thermoplastiques avec et sans fibres de renfort". Phd thesis, École Nationale Supérieure des Mines de Paris, 2001. http://tel.archives-ouvertes.fr/tel-00392610.
Texto completoHemadou-Artigues, Claude. "Calcul des charges en vol dues aux perturbations atmosphériques par la méthode Statistical Discrete Gust". Toulouse, INSA, 1992. http://www.theses.fr/1992ISAT0021.
Texto completoGiasson, Luc. "Développement des méthodes de calcul et de mesure de la courbe J-R d'un composite polymère particulaire propergol / : par Luc Giasson". Thèse, [Chicoutimi : Rimouski : Université du Québec à Chicoutimi] Université du Québec à Rimouski, 2003. http://theses.uqac.ca.
Bardet, Ivan. "Émergence de dynamiques classiques en probabilité quantique". Thesis, Lyon, 2016. http://www.theses.fr/2016LYSE1071/document.
This thesis focuses on the study of several bridges that exist between classical probability and the theory of open quantum systems. In the first part of the thesis, we consider open quantum systems with a classical environment. The environment then acts as a classical noise, so that the evolution of the system results in a mixture of unitary dynamics. My work consisted in defining a relevant von Neumann algebra on the environment which, in this situation, is commutative. In the general case, we show that this algebra leads to a decomposition of the environment into a classical and a quantum part. In the second part, we set the environment aside for a time in order to focus on the emergence of classical stochastic processes inside the system. This situation appears when the quantum Markov semigroup leaves invariant a commutative maximal von Neumann algebra. First, we develop a recipe to generate such semigroups, which emphasizes the role of a certain kind of classical dilation. We apply the recipe to prove the existence of a quantum extension of Lévy processes. In the same part of the thesis, we study a special kind of classical dynamics that can emerge on a bipartite quantum system; such walks are stochastic but display strong quantum behaviour. We define a Dirichlet problem associated with these walks and solve it using a variational approach and non-commutative Dirichlet forms. Finally, the last part is dedicated to the study of environment-induced decoherence for quantum Markov semigroups on finite von Neumann algebras. We prove that such decoherence always occurs when the semigroup has a faithful invariant state. We then focus on the fundamental problem of estimating the time of the process. To this end we define adapted non-commutative functional inequalities. The central interest of these definitions is to take into account entanglement effects, which are expected to lower the speed of decoherence.
Arcizet, Olivier. "Mesure optique ultrasensible et refroidissement par pression de radiation d'un micro-résonateur mécanique". Phd thesis, Université Pierre et Marie Curie - Paris VI, 2006. http://tel.archives-ouvertes.fr/tel-00175959.
We also demonstrated a self-cooling effect due to the modification of the resonator dynamics by radiation pressure in a detuned cavity. Depending on the detuning, cooling or heating of the resonator was observed, the latter leading at high power to a dynamical instability.
These cooling techniques, combined with passive cryogenics, should make it possible to cool the micro-resonator sufficiently to observe its quantum ground state.
Finally, we present an experimental study of the photothermal effect and a measurement of the expansions induced by the heating associated with light absorption in the optical coatings.
Platzer, Auriane. "Mécanique numérique en grandes transformations pilotée par les données : De la génération de données sur mesure à une stratégie adaptative de calcul multiéchelle". Thesis, Ecole centrale de Nantes, 2020. http://www.theses.fr/2020ECDN0041.
Computational mechanics is a field in which a large amount of data is both consumed and produced. On the one hand, the recent developments of experimental measurement techniques have provided rich data for the identification process of constitutive models used in finite element simulations. On the other hand, multiscale analysis produces a huge amount of discrete values of displacements, strains and stresses from which knowledge is extracted on the overall material behavior. The constitutive model then acts as a bottleneck between upstream and downstream material data. In contrast, Kirchdoerfer and Ortiz (Computer Methods in Applied Mechanics and Engineering, 304, 81-101) proposed a model-free computing paradigm, called data-driven computational mechanics. The material response is then only represented by a database of raw material data (strain-stress pairs). The boundary value problem is thus reformulated as a constrained distance minimization between (i) the mechanical strain-stress state of the body, and (ii) the material database. In this thesis, we investigate the question of material data coverage, especially in the finite strain framework. The data-driven approach is first extended to a geometrically nonlinear setting: two alternative formulations are considered and a finite element solver is proposed for both. Second, we explore the generation of tailored databases using a mechanically meaningful sampling method. The approach is assessed by means of finite element analyses of complex structures exhibiting large deformations. Finally, we propose a prototype multiscale data-driven solver, in which the material database is adaptively enriched.
Lachance-Quirion, Dany. "Étude par transport électrique de points quantiques colloïdaux". Thesis, Université Laval, 2012. http://www.theses.ulaval.ca/2012/29510/29510.pdf.
Van Schenk Brill, Kees. "Réhabiliter la Résonance Magnétique Nucléaire comme réalisation physique pour des ordinateurs quantiques et Résoudre des équations de Pell simultanées par des techniques de calcul quantique". Phd thesis, Université de Strasbourg, 2010. http://tel.archives-ouvertes.fr/tel-00534864.
Golebiowski, Jérôme. "Modélisation d'extractants spécifiques de cations métalliques par des méthodes ab initio et hybrides mécanique quantique / mécanique moléculaire". Nancy 1, 2000. http://www.theses.fr/2000NAN10120.
Farah, Jad. "Amélioration des mesures anthroporadiamétriques personnalisées assistées par calcul Monte Carlo : optimisation des temps de calculs et méthodologie de mesure pour l'établissement de la répartition d'activité". Phd thesis, Université Paris Sud - Paris XI, 2011. http://tel.archives-ouvertes.fr/tel-00638153.
Texto completoFarah, Jad. "Amélioration des mesures anthroporadiamétriques personnalisées assistées par calcul Monte Carlo : optimisation des temps de calculs et méthodologie de mesure pour l’établissement de la répartition d’activité". Thesis, Paris 11, 2011. http://www.theses.fr/2011PA112183/document.
To optimize the monitoring of female workers using in vivo spectrometry measurements, it is necessary to correct the typical calibration coefficients obtained with the Livermore male physical phantom. To do so, numerical calibrations based on the use of Monte Carlo simulations combined with anthropomorphic 3D phantoms were used. Such computational calibrations require on the one hand the development of representative female phantoms of different sizes and morphologies, and on the other hand rapid and reliable Monte Carlo calculations. A library of female torso models was hence developed by fitting the weight of internal organs and breasts according to the body height and to relevant plastic surgery recommendations. This library was next used to realize a numerical calibration of the AREVA NC La Hague in vivo counting installation. Moreover, the morphology-induced counting efficiency variations with energy were put into equations, and recommendations were given to correct the typical calibration coefficients for any monitored female worker as a function of body height and breast size. Meanwhile, variance reduction techniques and geometry simplification operations were considered to accelerate the simulations. Furthermore, to determine the activity mapping in the case of complex contaminations, a method that combines Monte Carlo simulations with in vivo measurements was developed. This method consists in performing several spectrometry measurements with different detector positions. Next, the contribution of each contaminated organ to the count is assessed from Monte Carlo calculations. The in vivo measurements performed at LEDI, CIEMAT and KIT have demonstrated the effectiveness of the method and highlighted the valuable contribution of Monte Carlo simulations to a more detailed analysis of spectrometry measurements. Thus, a more precise estimate of the activity distribution is given in the case of an internal contamination.
Nasserdine, Mohamed M'Madi. "Mesure de la distribution du champ en chambre réverbérante par la théorie des perturbations : application à l'étude des directions d'arrivée". Thesis, Paris Est, 2015. http://www.theses.fr/2015PESC1026/document.
This work deals with field measurement techniques in large electromagnetic enclosures, namely reverberation chambers. Because the field distribution within a resonant cavity is perturbed by the presence of an introduced object, conventional field measurement techniques employing an antenna suffer from limited accuracy. We therefore propose a new measurement technique for the electric field distribution based on the perturbation theory: it consists in measuring the variation of the cavity resonant frequency as a small perturbing object is displaced within the cavity, from which the electric field distribution is deduced. The choice of the perturbing object shape, dimensions and material is discussed with the help of simulation and measurement results in a canonical case, in order to adapt the measurement setup to the studied case. This technique is then successfully employed in a reverberation chamber equipped with a mode stirrer, as well as to measure the field within a metallic box placed in the cavity. Using post-processing based on the MUSIC algorithm, this approach has made it possible to determine accurately the field directions of arrival in the reverberation chamber.
Jacob, Dominique R. "Principe de la mesure simultanée de distance et d'épaisseur de dépôts métalliques par capteur à courants de Foucault : conception et réalisation d'un dispositif". Paris 11, 1988. http://www.theses.fr/1988PA112148.
This thesis presents the design of a system, using an eddy current sensor, for measuring the thickness of metallic deposits on metallic sheets. The sensor allows the thickness measurement without requiring precise positioning of the sensor with respect to the sheet. A model which takes account of the distribution of the flux in space and of the physical properties (resistivity, permeability) of the materials has been developed and experimentally validated. It makes it possible to take the temperature into account and to calculate the dimensions of the sensor. An application concerning the thickness measurement of a zinc deposit on a steel sheet is presented. The sensor signals are digitized and processed by a microcomputer to calculate simultaneously the distance between the sensor and the coated sheet, and the thickness of the deposit.
Marchand, Dominique. "Calcul des corrections radiatives à la diffusion compton virtuelle. Mesure absolue de l'énergie du faisceau d'électrons de Jefferson Lab. (Hall A) par une méthode magnétique : projet ARC". Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 1998. http://tel.archives-ouvertes.fr/tel-00298382.
Virtual Compton scattering experiments give access to new observables of the proton: the generalized polarizabilities. Since these polarizabilities are extracted by comparing experimental and theoretical cross sections, the systematic errors and the radiative effects associated with the experiment must be controlled with great precision. A complete calculation of the internal radiative corrections was therefore carried out within the framework of quantum electrodynamics. This original calculation takes into account all graphs contributing at order alpha^4 to the cross section, except those involving the exchange of two photons between the leptonic and hadronic arms and those related to the radiation of the proton. Dimensional regularization was used to treat the ultraviolet and infrared divergences. After applying an addition-subtraction procedure, the infrared cancellation is verified. Analytical calculation was favoured for the innermost integrals, followed by a specific numerical treatment. The results presented correspond to the different kinematics of the VCS experiment which took place at TJNAF in 1998.
The absolute energy measurement method we developed relies on the magnetic bend, made of eight identical dipoles, that steers the accelerator beam into experimental Hall A. The energy is determined from the absolute measurement of the beam deflection angle in the horizontal plane and from the absolute measurement of the magnetic field integral along the bend. The deflection angle measurement combines a one-time measurement of a reference angle (by an optical autocollimation method) and an "online" measurement of the angular deviations of the beam with respect to this reference angle (using four wire profilers: one pair upstream and one pair downstream of the arc). The absolute field integral along the bend is obtained from a one-time measurement of the sum of the relative field integrals of the eight arc dipoles with respect to a reference magnet, and from the "online" measurement of the field integral of this reference magnet, powered in series with the eight arc dipoles.
Marchand, Dominique. "Calcul des corrections radiatives a la diffusion compton virtuelle. Mesure absolue de l'energie du faisceau d'electrons de jefferson lab. (hall a) par une methode magnetique : projet arc". Clermont-Ferrand 2, 1998. https://tel.archives-ouvertes.fr/tel-00298382.
Barre, Olivier. "Contribution à l'étude des formulations de calcul de la force magnétique en magnétostatique, approche numérique et validation expérimentale". Phd thesis, Université des Sciences et Technologie de Lille - Lille I, 2003. http://tel.archives-ouvertes.fr/tel-00005921.
Texto completoMétillon, Valentin. "Tomographie par trajectoires d'états délocalisés du champ micro-onde de deux cavités". Thesis, Paris Sciences et Lettres (ComUE), 2019. http://www.theses.fr/2019PSLEE051.
Quantum state estimation, or tomography, is a key component of quantum technologies, making it possible to characterise quantum operations and to extract information on the results of quantum information processes. The usual tomography techniques rely on ideal, single-shot measurements of the unknown state. In this work, we use a new approach, called trajectory quantum tomography, where the quantum trajectory of each realization of the state is recorded through a series of measurements, including experimental imperfections and decoherence. This strategy increases the amount of extracted information and allows new measurements to be built for a set of feasible measurements. Using the tools of cavity quantum electrodynamics, we have prepared entangled states of microwave photons spread over two separate modes. We have then performed a trajectory tomography of these states, in a large Hilbert space. We have shown that this method makes it possible to estimate the state, to develop faster strategies for extracting information on specific coherences of the state, and to compute error bars on the components of the estimated density matrix.
De, Sousa Marie-Carmen. "Contribution à l'optimisation de la radioprotection du patient en radiologie : de la mesure en temps réel de la dose en radiologie conventionnelle au calcul du risque de vie entière de décès par cancer radio-induit spécifique par sexe et par âge". Toulouse 3, 2002. http://www.theses.fr/2002TOU30003.
Corfdir, Alain. "Analyse de la stabilité d'ouvrages en gabions cellulaires par la théorie du calcul à la rupture". Phd thesis, Marne-la-vallée, ENPC, 1997. http://www.theses.fr/1997ENPC9704.
Cellular cofferdams consist of a shell of steel sheet piles filled with sand or gravel backfill. They are used in harbour or fluvial locations as earth- or water-retaining structures. Although they have been in use for more than 80 years, their mechanical behaviour is still poorly understood and accidents still occur, even during construction. The use of design methods based on yield design theory can help place the design of cellular cofferdams on rigorous foundations. The yield design of cellular cofferdams has some particular characteristics: true three-dimensional geometry, and mixed modelling of the structure (the sheet piles are modelled as a shell, the backfill as a three-dimensional continuous medium). Modelling the sheet piles as a shell makes it possible to consider kinematic fields with flexural strain of the sheet piles. Flexural strain has been observed in some accidents and in some model tests. The present work opens with an introductory chapter dealing with the construction of cellular cofferdams, their applications and design methods. Chapter 2 deals with the modelling of cellular cofferdams. Chapters 3, 4, 5 and 6 deal with the application of the static and kinematic methods to a single cofferdam cell and to cellular cofferdams. In chapter 7, the results are compared to data from different sources: field measurements, accident cases, and model tests.
Fournier, Pierre-Luc. "Développement d'un modèle de mesure de la performance d'un processus administratif séquencé et non cadencé dans une organisation du réseau de la santé du Québec par le calcul du degré d'articulation". Thèse, Université du Québec à Trois-Rivières, 2012. http://depot-e.uqtr.ca/6918/1/030586134.pdf.
Texto completoMezher, Rawad. "Randomness for quantum information processing". Electronic Thesis or Diss., Sorbonne université, 2019. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2019SORUS244.pdf.
This thesis is focused on the generation and understanding of particular kinds of quantum randomness. Randomness is useful for many tasks in physics and information processing, from randomized benchmarking to black hole physics, as well as for demonstrating a so-called quantum speedup, and many other applications. On the one hand we explore how to generate a particular form of random evolution known as a t-design. On the other we show how this can also give instances of quantum speedup, where classical computers cannot simulate the randomness efficiently. We also show that this is still possible in noisy, realistic settings. More specifically, this thesis is centered around three main topics. The first of these is the generation of epsilon-approximate unitary t-designs. In this direction, we first show that non-adaptive, fixed measurements on a graph state composed of poly(n,t,log(1/epsilon)) qubits, and with a regular structure (that of a brickwork state), effectively give rise to a random unitary ensemble which is an epsilon-approximate t-design. This work is presented in Chapter 3. Before this work, it was known that non-adaptive fixed XY measurements on a graph state give rise to unitary t-designs; however, the graph states used there had a complicated structure and were therefore not natural candidates for measurement-based quantum computing (MBQC), and the circuits to make them were complicated. The novelty in our work is showing that t-designs can be generated by fixed, non-adaptive measurements on graph states whose underlying graphs are regular 2D lattices. These graph states are universal resources for MBQC. Therefore, our result allows the natural integration of unitary t-designs, which provide a notion of quantum pseudorandomness that is very useful in quantum algorithms, into quantum algorithms running in MBQC. Moreover, in the circuit picture this construction for t-designs may be viewed as a constant-depth quantum circuit, albeit with a polynomial number of ancillas. We then provide new constructions of epsilon-approximate unitary t-designs, both in the circuit model and in MBQC, which are based on a relaxation of technical requirements in previous constructions. These constructions are presented in Chapters 4 and 5.
Cohen, Joachim. "Autonomous quantum error correction with superconducting qubits". Thesis, Paris Sciences et Lettres (ComUE), 2017. http://www.theses.fr/2017PSLEE008/document.
In this thesis, we develop several tools in the direction of autonomous quantum error correction (QEC) with superconducting qubits. We design an autonomous QEC scheme based on quantum reservoir engineering, in which transmon qubits are coupled to lossy modes. Through an engineered interaction between these systems, the entropy created by eventual errors is evacuated via the dissipative modes. The second part of this work focuses on the recently developed cat codes, in which the logical information is encoded in the large Hilbert space of a harmonic oscillator. We propose a scheme to perform continuous and quantum non-demolition measurements of photon-number parity in a microwave cavity, which corresponds to the error syndrome in the cat code. In our design, we exploit the strongly nonlinear Hamiltonian of a high-impedance Josephson circuit, coupling a high-Q storage cavity mode to a low-Q readout one. Last, as a follow-up of the above results, we present several continuous and/or autonomous QEC schemes using the cat code. These schemes provide robust protection against dominant error channels in the presence of multi-photon driven dissipation.
Bouhara, Ammar. "Etude théorique et expérimentale de la mesure par thermocouples de la température dans un flux gazeux instationnaire : application aux gaz d'échappement d'un moteur". Paris 6, 1987. http://www.theses.fr/1987PA066149.
Texto completoKuhn, Matthieu. "Calcul parallèle et méthodes numériques pour la simulation de plasmas de bords". Thesis, Strasbourg, 2014. http://www.theses.fr/2014STRAD023/document.
The main goal of this work is to significantly reduce the computational cost of the scientific application Emedge3D, which simulates the edge of tokamaks. Improvements to this code are made along two axes. First, innovations in numerical methods have been implemented. The advantages of semi-implicit time schemes are described: their unconditional stability allows larger timestep values to be considered, and hence lowers the number of temporal iterations required for a simulation. The benefits of high order in time and space are also presented. Second, solutions for the parallelization of the code are proposed. This study addresses the more general nonlinear advection-diffusion problem. The hot spots of the application have been optimized sequentially and parallelized with OpenMP. Then, a hybrid MPI/OpenMP parallel algorithm for the memory-bound part of the code is described and analyzed. Good scaling is observed up to 384 cores. This Ph.D. thesis is part of the interdisciplinary project ANR E2T2 (CEA/IRFM, University of Aix-Marseille/PIIM, University of Strasbourg/ICube).
Ferrari, Luca Alberto Davide. "Approximations par champs de phases pour des problèmes de transport branché". Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLX049/document.
In this thesis we devise phase field approximations of some branched transportation problems. Branched transportation is a mathematical framework for modeling supply-demand distribution networks which exhibit tree-like structures. In particular the network, the supply factories and the demand locations are modeled as measures, and the problem is cast as a constrained optimization problem. The transport cost of a mass m along an edge of length L is h(m)·L, and the total cost of a network is defined as the sum of the contributions of all its edges. The branched transportation case corresponds to the specific choice h(m) = |m|^α, where α is a value in [0,1). The sub-additivity of the cost function ensures that transporting two masses jointly is cheaper than doing it separately. In this work we introduce various variational approximations of the branched transport optimization problem. The approximating functionals are based on a phase field representation of the network and are smoother than the original problem, which allows for efficient numerical optimization methods. We introduce a family of functionals inspired by the Ambrosio-Tortorelli one to model affine transport cost functions. This approach is first used to study the problem for any affine cost function h in the ambient space R^2. For this case we produce a full Gamma-convergence result and combine it with an alternating minimization procedure to obtain numerical approximations of the minimizers. We then generalize this approach to any ambient space and obtain a full Gamma-convergence result in the case of k-dimensional surfaces. In particular, we obtain a variational approximation of the Plateau problem in any dimension and co-dimension. In the last part of the thesis we propose two models for general concave cost functions. In the first one we introduce a multiphase field approach and recover any piecewise affine cost function. Finally we propose and study a family of functionals allowing to recover in the limit any concave cost function h.
Crépé-Renaudin, S. "Outils et analyse en physique des particules : morceaux choisis. La grille de calcul et de stockage pour le LHC : de la mise en place d'un nœud de grille à l'utilisation de la grille par l'expérience ATLAS. Mesure de la section efficace de production top-antitop avec l'expérience D0 auprès du Tevatron". Habilitation à diriger des recherches, Université de Grenoble, 2013. http://tel.archives-ouvertes.fr/tel-00919967.
Texto completoMagnin, Loïck. "Two-player interaction in quantum computing : cryptographic primitives & query complexity". Thesis, Paris 11, 2011. http://www.theses.fr/2011PA112275/document.
This dissertation studies two different aspects of two-player interaction in the model of quantum communication and quantum computation. First, we study two cryptographic primitives that are used as basic blocks to construct sophisticated cryptographic protocols between two players, e.g. identification protocols. The first primitive is "quantum bit commitment". This primitive cannot be implemented in an unconditionally secure way. However, security can be obtained by restraining the power of the two players. We study this primitive when the two players can only create quantum Gaussian states and perform Gaussian operations. These operations are a subset of what is allowed by quantum physics, and play a central role in quantum optics; hence, this is an accurate model of communication through optical fibers. We show that unfortunately this restriction does not allow secure bit commitment. The proof of this result is based on the notion of "intrinsic purification" that we introduce to circumvent the use of Uhlmann's theorem when the quantum states are Gaussian. We then examine a weaker primitive, "quantum weak coin flipping", in the standard model of quantum computation. Mochon showed that there exists such a protocol with arbitrarily small bias. We give a clear and meaningful interpretation of his proof, which allows us to present a drastically shorter and simplified proof. The second part of the dissertation deals with different methods of proving lower bounds on the quantum query complexity. This is a very important model in quantum complexity in which numerous results have been proved. In this model, an algorithm has restricted access to the input: it can only query individual bits. We consider a generalization of the standard model, where an algorithm does not compute a classical function, but generates a quantum state. This generalization allows us to compare the strength of the different methods used to prove lower bounds in this model. We first prove that the "multiplicative adversary method" is stronger than the "additive adversary method". We then show a reduction from the "polynomial method" to the multiplicative adversary method. Hence, we prove that the multiplicative adversary method is the strongest one. Adversary methods are usually difficult to use since they involve the computation of norms of matrices of very large size. We show how studying the symmetries of a problem can largely simplify these computations. Last, using these principles we prove the tight lower bound for the INDEX-ERASURE problem. This is a quantum state generation problem that has links with the famous GRAPH-ISOMORPHISM problem.
Richmond, Tania. "Implantation sécurisée de protocoles cryptographiques basés sur les codes correcteurs d'erreurs". Thesis, Lyon, 2016. http://www.theses.fr/2016LYSES048/document.
The first cryptographic protocol based on error-correcting codes was proposed in 1978 by Robert McEliece. Code-based cryptography is called post-quantum because, until now, no algorithm able to attack this kind of protocol in polynomial time, even using a quantum computer, has been proposed. This is in contrast with protocols based on number-theoretic problems such as the factorization of large numbers, for which Shor's efficient algorithm can be used on a quantum computer. Nevertheless, the security of the McEliece cryptosystem is not based only on mathematical problems; its implementation (in software or hardware) is also very important for its security. The study of side-channel attacks against the McEliece cryptosystem began in 2008, and improvements can still be made. In this thesis, we propose new attacks against decryption in the McEliece cryptosystem used with classical Goppa codes, together with corresponding countermeasures. The proposed attacks are based on evaluating the execution time of the algorithm or on analysing its power consumption. The associated countermeasures are based on mathematical and algorithmic properties of the underlying algorithm. We show that it is necessary to secure the decryption algorithm by considering it as a whole and not only step by step.
Zhu, Ni. "GNSS propagation channel modeling in constrained environments : contribution to the improvement of the geolocation service quality". Thesis, Lille 1, 2018. http://www.theses.fr/2018LIL1I057/document.
In recent decades, Global Navigation Satellite System (GNSS)-based positioning systems have become increasingly widespread in urban environments, where great challenges exist for GNSS because of local effects. Yet, for new GNSS land applications, knowing the certainty of one's localization is of great importance, especially for liability- and safety-critical applications. The concept of GNSS integrity, which is defined as a measure of the trust to be placed in the correctness of the information supplied by the total system, can help to meet this requirement. The main objective of this PhD research work is to improve the accuracy and integrity performance of GNSS positioning in urban environments. Two research directions were investigated. The first consists in characterizing GNSS measurement errors in order to improve the positioning accuracy: several error models existing in the literature are evaluated, and a new hybrid model involving a digital map is proposed. The second contributes to Fault Detection and Exclusion (FDE) techniques so as to improve GNSS integrity: different FDE methods are investigated and compared on real GPS data collected in urban canyons. The computation of the Horizontal Protection Level (HPL) is added as a next step, and a new method of HPL computation taking the potential prior fault into consideration is proposed. Then, these two research directions are combined so that a complete integrity monitoring scheme is constructed. The results with real GPS data collected in urban canyons show that the positioning accuracy and integrity can be improved by the proposed scheme compared to traditional approaches.
Souhassou, Mohamed. "Densité d'électrons dans les composés peptidiques par méthode X-X et calcul théorique : la (N)-acétyl-L-tryptophane-méthylamide et la (N)-acétyl-alpha-béta-déhydrophénylalanine-méthylamide, influence de la alpha-béta-déhydrogénation et étude critique des modèles multipolaires". Nancy 1, 1988. http://www.theses.fr/1988NAN10418.
Texto completoBiglione, Jordan. "Simulation et optimisation du procédé d'injection soufflage cycle chaud". Thesis, Lyon, INSA, 2015. http://www.theses.fr/2015ISAL0079/document.
The single-stage injection blow molding process, without preform storage and reheat, can be run on a standard injection molding machine, with the aim of producing short series of specific hollow parts. The polypropylene bottles are blown right after being injected. The preform has to remain sufficiently malleable to be blown while being viscous enough to avoid being pierced during the blow molding stage. These constraints lead to a small processing window, and so the process takes place between the melting temperature and the crystallization temperature, where the polypropylene is in its molten state but cool enough to enhance its viscosity without crystallizing. This single-stage process introduces temperature gradients, molecular orientation, high stretch rates and high cooling rates. Melt rheometry tests, as well as differential scanning calorimetry, were performed to characterize the polymer behavior in the temperature range of the process. A viscous Cross model is used, with the thermal dependence described by an Arrhenius law. The process is simulated with a finite element code (POLYFLOW) in the Ansys Workbench framework. The geometry allows an axisymmetric approach. The transient simulation is run under non-isothermal conditions and viscous heating is taken into account. Thickness measurements are made using image analysis and the simulation results are compared to the experimental ones; the experimental measurements are obtained by analysing tomography data. The simulation shows good agreement with the experimental results. The existence of elongational strain as well as shear strain during the blowing after contact with the mold is discussed. An optimization loop is run to determine an optimal initial thickness distribution by the use of a predictor/corrector method minimizing a given objective function. Design points are defined along the preform and the optimization modifies the thickness at these locations. This method is compared to the downhill simplex method and shows better efficiency.
Balakrishnan, Arjun. "Integrity Analysis of Data Sources in Multimodal Localization System". Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASG060.
Intelligent vehicles are a key component in humanity's vision for safer, more efficient, and accessible transportation systems across the world. Due to the multitude of data sources and processes associated with intelligent vehicles, the reliability of the total system is greatly dependent on the possibility of errors or poor performance in its components. In our work, we focus on the critical task of localization of intelligent vehicles and address the challenges in monitoring the integrity of the data sources used in localization. The primary contribution of our research is the proposition of a novel integrity protocol, obtained by combining integrity concepts from information systems with the existing integrity concepts in the field of Intelligent Transport Systems (ITS). An integrity monitoring framework based on this integrity protocol, able to handle multimodal localization problems, is formalized. As a first step, a proof of concept for this framework is developed, based on cross-consistency estimation of data sources using polynomial models. Based on the observations from the first step, a 'Feature Grid' data representation is proposed in the second step and a generalized prototype for the framework is implemented. The framework is tested on highways as well as in complex urban scenarios to demonstrate that it is capable of providing continuous integrity estimates of the multimodal data sources used in intelligent vehicle localization.