Theses on the topic "Méthode à entropie maximale"
Consult the top 50 theses for your research on the topic "Méthode à entropie maximale".
Touboul, Jacques. "Extension de la méthode de poursuite de projection et applications". Paris 6, 2011. http://www.theses.fr/2011PA066186.
Michel, Julien. "Un principe de maximum d'entropie pour les mesures de Young : applications". Lyon 1, 1993. http://www.theses.fr/1993LYO10225.
Champion, Julie. "Sur les algorithmes de projections en entropie relative avec contraintes marginales". Toulouse 3, 2013. http://thesesups.ups-tlse.fr/2036/.
This work is focused on an algorithm for the construction of probability measures with prescribed marginal laws, called Iterative Proportional Fitting (IPF). Arising from statistical problems, this algorithm is based on successive projections on probability spaces for the Kullback-Leibler relative entropy pseudometric. This thesis consists in a survey of the current results on this subject and gives some extensions and subtleties. The first part deals with the study of projections in relative entropy, namely existence, uniqueness criteria, and characterization properties related to the closedness of sum spaces. Under certain assumptions, the problem becomes a problem of maximisation of the entropy under graphical marginal constraints. In the second part, we study the iterative procedure IPF. Introduced initially for an estimation problem on contingency tables, it corresponds in a more general setting to an analogue of a classic algorithm of alternating projections on Hilbert spaces. After presenting the IPF properties, we look for convergence results in the finite discrete case, the Gaussian case, and the more general continuous case with two marginals, for which some extensions are given. The thesis then focuses on the Gaussian case with two prescribed marginals, for which we obtain a rate of convergence using a new formulation of the IPF. Moreover, we prove optimality in the 2-dimensional case.
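A minimal sketch of the IPF iteration discussed above, assuming a small synthetic contingency table and target marginals (all values hypothetical): each pass rescales rows and then columns, which corresponds to alternating relative-entropy projections onto the two marginal constraint sets.

```python
import numpy as np

def ipf(table, row_targets, col_targets, n_iter=100, tol=1e-10):
    """Iterative Proportional Fitting: alternately rescale rows and columns
    until the table matches the prescribed marginals."""
    x = table.astype(float).copy()
    for _ in range(n_iter):
        x *= (row_targets / x.sum(axis=1))[:, None]   # projection on the row-marginal constraint
        x *= (col_targets / x.sum(axis=0))[None, :]   # projection on the column-marginal constraint
        if (np.abs(x.sum(axis=1) - row_targets).max() < tol
                and np.abs(x.sum(axis=0) - col_targets).max() < tol):
            break
    return x

# Hypothetical seed table and prescribed marginals
seed = np.array([[40.0, 10.0], [20.0, 30.0]])
fitted = ipf(seed, row_targets=np.array([60.0, 40.0]),
             col_targets=np.array([50.0, 50.0]))
print(fitted, fitted.sum(axis=1), fitted.sum(axis=0))
```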
Gamboa, Fabrice. "Méthode du maximum d'entropie sur la moyenne et applications". Paris 11, 1989. http://www.theses.fr/1989PA112346.
An explicit solution to the problem of probability reconstruction when only the averages of random variables are known is given by the maximum entropy method. We use this method to reconstruct a function constrained to a convex set C (a non-linear constraint), using a finite number of its generalized moments (linear constraints). A sequence of entropy maximization problems is considered. The nth problem consists in the reconstruction of a probability distribution on Cn, the projection of C on Rⁿ, whose mean satisfies a constraint approximating the initial linear constraint (generalized moments). When n approaches infinity this gives a solution to the initial problem as the limit of the sequence of means of the maximum entropy distributions on Cn. We call this technique the maximum entropy method on the mean (M.E.M.) because the linear constraints bear only on the mean of the distribution to be reconstructed. We mainly study the case where C is a band of continuous functions. We find a reconstruction family in which each element depends only on the reference measures used for the sequence of entropy problems. We show that the M.E.M. method is equivalent to the maximization of a concave criterion. We then use the M.E.M. method to construct a numerically computable criterion to solve the generalized moments problem on a bounded band of continuous functions. In the last chapter we discuss statistical applications of the method.
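The finite-dimensional building block behind this construction, maximum entropy reconstruction under a finite number of linear (moment) constraints, can be sketched numerically by solving the associated concave dual; the grid, moment functions and target values below are hypothetical, and the sketch is not the functional M.E.M. construction of the thesis itself.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical setup: reconstruct a density on C = [0, 1] from two generalized
# moments by maximum entropy with respect to a uniform reference measure.
x = np.linspace(0.0, 1.0, 201)
dx = x[1] - x[0]
phi = np.vstack([x, x ** 2])        # moment functions
targets = np.array([0.3, 0.15])     # prescribed generalized moments (hypothetical)

def dual(lam):
    # Log-partition of exp(<lam, phi(x)>) minus <lam, targets>: minimizing this
    # convex dual is equivalent to maximizing the concave entropy criterion.
    logz = np.log(np.sum(np.exp(lam @ phi)) * dx)
    return logz - lam @ targets

lam = minimize(dual, x0=np.zeros(2), method="BFGS").x
density = np.exp(lam @ phi)
density /= np.sum(density) * dx     # maximum entropy density matching the moments
print(np.sum(density * x) * dx, np.sum(density * x ** 2) * dx)   # ≈ 0.3, 0.15
```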
Mounsif, Mostafa. "Le problème des moments par la méthode de l'entropie maximale". Montpellier 2, 1992. http://www.theses.fr/1992MON20171.
Ugalde-Saldaña, Edgardo. "Modèles dynamiques pour la turbulence locale". Aix-Marseille 1, 1996. http://www.theses.fr/1996AIX11044.
Jaeger, Sébastien. "Indicateurs statistiques pour l'analyse de séquences génétiques". Aix-Marseille 2, 2002. http://www.theses.fr/2002AIX22060.
Chauvet, Guillaume. "Méthodes de Bootstrap en population finie". Phd thesis, Rennes 2, 2007. http://tel.archives-ouvertes.fr/tel-00267689.
Ruette, Sylvie. "Chaos en dynamique topologique, en particulier sur l'intervalle, mesures d'entropie maximale". Phd thesis, Université de la Méditerranée - Aix-Marseille II, 2001. http://tel.archives-ouvertes.fr/tel-00001144.
Friedel, Paul. "Étude par photoémission X et UV du nettoyage et de la passivation en plasma multipolaire d'une surface (100) de semiconducteur III-V". Paris 11, 1987. http://www.theses.fr/1987PA112139.
Lecompte, Matthieu. "Etude expérimentale des sprays d’huile dans un moteur à combustion interne : Influence de l’écoulement de blow-by et participation à la consommation d’huile". Rouen, 2007. http://www.theses.fr/2007ROUES069.
In the face of customer requirements in terms of oil consumption reduction and pollutant emissions, it becomes important to understand the behavior of oil in the combustion chamber environment. The work carried out in the present Ph.D. concentrates mainly on the drop-size distribution characterization of oil sprays produced through the ring gap by the blow-by flow (air flow taking place between the piston and the liner during the compression stroke) and on the determination of their participation in the total lubricant consumption. To achieve this, a two-step experimental study was conducted. An atomizer reproducing the atomizing phenomena through a ring gap was first realized and studied in the laboratory. The drop-size distributions of the produced oil sprays were measured using several measurement techniques as a function of the lubricant flow rate and the upper air pressure. Second, an experimental campaign conducted on a car engine measured the oil-spray drop-size distribution at several locations. Furthermore, for each working condition, measurements of oil consumption in real time were achieved using an SO2 tracer technique. In these two studies, the oil-spray drop-size distributions were successfully modeled by an approach based on the Maximum Entropy Formalism. Among the most important results, this study reports that the drop-size distribution of oil sprays produced by blow-by depends on the engine regime and load. Furthermore, for some working conditions, this oil-spray production occurs mainly in the return flow, whose influence on the total oil consumption was demonstrated.
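For reference, a generic instance of the Maximum Entropy Formalism invoked above: maximizing the Shannon entropy of the number-based drop-size distribution under normalization and a single volume ($D^3$) constraint yields an exponential-family form; the actual constraint set retained in the thesis may be larger (e.g. momentum or surface-energy constraints).
$$
\max_{f}\; -\int_0^\infty f(D)\,\ln f(D)\,\mathrm{d}D
\quad\text{subject to}\quad
\int_0^\infty f(D)\,\mathrm{d}D = 1,
\qquad
\int_0^\infty D^{3} f(D)\,\mathrm{d}D = D_{30}^{3},
$$
whose solution takes the exponential form
$$
f(D) = \exp\!\left(-\lambda_0 - \lambda_3 D^{3}\right),
$$
with the Lagrange multipliers $\lambda_0$ and $\lambda_3$ fixed by the two constraints.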
Pougaza, Doriano-Boris. "Utilisation de la notion de copule en tomographie". Phd thesis, Université Paris Sud - Paris XI, 2011. http://tel.archives-ouvertes.fr/tel-00684637.
Venditti, Véronique. "Aspects du principe de maximum d'entropie en modélisation statistique". Université Joseph Fourier (Grenoble), 1998. http://www.theses.fr/1998GRE10108.
Ramos, Fernando Manuel. "Résolution d'un problème inverse multidimensionnel de diffusion par la méthode des éléments analytiques et par le principe de l'entropie maximale : contribution à la caractérisation de défauts internes". Toulouse, ENSAE, 1992. http://www.theses.fr/1992ESAE0015.
Texto completoBarrat-Charlaix, Pierre. "Comprendre et améliorer les modèles statistiques de séquences de protéines". Electronic Thesis or Diss., Sorbonne université, 2018. http://www.theses.fr/2018SORUS378.
In the last decades, progress in experimental techniques has given rise to a vast increase in the number of known DNA and protein sequences. This has prompted the development of various statistical methods in order to make sense of this massive amount of data. Among those are pairwise co-evolutionary methods, using ideas coming from statistical physics to construct a global model for protein sequence variability. These methods have proven to be very effective at extracting relevant information from sequences, such as structural contacts or effects of mutations. While co-evolutionary models are for the moment used as predictive tools, their success calls for a better understanding of their functioning. In this thesis, we propose developments on existing methods while also asking the question of how and why they work. We first focus on the ability of the so-called Direct Coupling Analysis (DCA) to reproduce statistical patterns found in the sequences of a protein family. We then discuss the possibility of including other types of information such as mutational effects in this method, as well as potential corrections for the phylogenetic biases present in available data. Finally, considerations about limitations of current co-evolutionary models are presented, along with suggestions on how to overcome them.
Domps, Baptiste. "Identification et détection de phénomènes transitoires contenus dans des mesures radar à faible rapport signal à bruit : Applications conjointes aux problématiques océanographique et atmosphérique". Electronic Thesis or Diss., Toulon, 2021. http://www.theses.fr/2021TOUL0001.
Observations of atmospheric and ocean surface dynamics can be performed via radar remote sensing. The usual approach consists, in both cases, in numerically calculating the Doppler spectrum of the received temporal echoes using a discrete Fourier transform. Although satisfactory for most applications, this method is not suitable for observing transient phenomena, which are shorter than the integration time required for radar observations. We use an alternative technique based on an autoregressive representation of the radar time series combined with the maximum entropy method. This approach is applied to coastal radar measurements of surface currents in the high frequency band as well as to L-band radar measurements of wind in the lower atmosphere. For both cases, through numerical simulations and case studies, we compare our approach with others that use different instruments. We show that for short integration times, where conventional methods fail, our proposed approach leads to reliable estimates of geophysical quantities (ocean currents and wind speeds).
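An illustrative sketch of the autoregressive (maximum entropy) spectral estimate mentioned above, here implemented with Burg's recursion and applied to a short synthetic record; the signal, model order and frequency grid are hypothetical and unrelated to the radar data of the thesis.

```python
import numpy as np

def burg_max_entropy_psd(x, order, n_freq=512):
    """Fit an AR model with Burg's method and return the maximum entropy spectrum."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    f_err, b_err = x.copy(), x.copy()     # forward / backward prediction errors
    a = np.array([1.0])                   # AR polynomial coefficients, a[0] = 1
    power = np.dot(x, x) / len(x)         # prediction error power
    for m in range(1, order + 1):
        fo, bo = f_err[m:], b_err[m - 1:-1]
        k = -2.0 * np.dot(fo, bo) / (np.dot(fo, fo) + np.dot(bo, bo))  # reflection coefficient
        new_f, new_b = fo + k * bo, bo + k * fo
        f_err[m:], b_err[m:] = new_f, new_b
        a_ext = np.concatenate([a, [0.0]])
        a = a_ext + k * a_ext[::-1]       # Levinson-type update of the AR polynomial
        power *= 1.0 - k * k
    freqs = np.linspace(0.0, 0.5, n_freq)                       # normalized frequencies
    z = np.exp(-2j * np.pi * np.outer(freqs, np.arange(order + 1)))
    return freqs, power / np.abs(z @ a) ** 2

# Hypothetical short record: a noisy sinusoid observed over a short integration window
rng = np.random.default_rng(0)
t = np.arange(64)
signal = np.cos(2 * np.pi * 0.12 * t) + 0.3 * rng.standard_normal(t.size)
freqs, psd = burg_max_entropy_psd(signal, order=8)
print(freqs[np.argmax(psd)])   # spectral peak close to the true frequency 0.12
```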
Lutton, Jean-Luc. "Mécanique statistique et théorie des systèmes : utilisation de méthodes de mécanique statistique pour étudier des systèmes de télécommunication et traiter des problèmes de recherche opérationnelle". Paris 11, 1985. http://www.theses.fr/1985PA112172.
The aim of this work is to use statistical mechanics ideas for studying engineering problems. First we analyze the performance of a class of connecting networks: Clos connecting networks with 2k + 1 stages. We show that these networks, like physical systems, exhibit the thermodynamical limit property. We deduce analytical expressions which give values of macroscopic system performance parameters (system with loss, system with rearrangement or system with queueing) in terms of the offered traffic. In particular, we estimate the probability distribution of the number of rearrangements and the waiting time distribution using the maximum entropy principle. All these analytical results are in good agreement with numerical simulations. We then apply the simulated annealing procedure to some combinatorial optimization problems (travelling salesman problem, minimum weighted matching problem, quadratic sum assignment problem). In fact, we use the Metropolis algorithm to determine a quasi-optimal solution to the problem we consider. We deduce a good heuristic with better performance than other classical methods, especially for large problems. For example we obtain a "good" solution for a 10000-city travelling salesman problem. Using the statistical mechanics formalism, we also estimate the asymptotic behaviour of the optimal solution.
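A compact sketch of the simulated annealing heuristic described above (Metropolis acceptance with a decreasing temperature), applied to a small random travelling salesman instance; the instance size, 2-opt move and cooling schedule are illustrative choices, not those of the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
cities = rng.random((30, 2))                       # hypothetical 30-city instance

def tour_length(order):
    pts = cities[order]
    return np.linalg.norm(pts - np.roll(pts, -1, axis=0), axis=1).sum()

order = np.arange(len(cities))
current_len = tour_length(order)
best_len = current_len
T = 1.0                                            # initial "temperature"
for step in range(20000):
    i, j = sorted(rng.integers(0, len(cities), size=2))
    if i == j:
        continue
    candidate = order.copy()
    candidate[i:j + 1] = candidate[i:j + 1][::-1]  # 2-opt move: reverse a segment
    cand_len = tour_length(candidate)
    delta = cand_len - current_len
    if delta < 0 or rng.random() < np.exp(-delta / T):   # Metropolis acceptance rule
        order, current_len = candidate, cand_len
        best_len = min(best_len, current_len)
    T *= 0.9995                                    # geometric cooling schedule
print(best_len)
```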
Tonus, Florent. "Étude de la chimie redox d'oxydes Ruddlesden-Popper n=1 par diffraction de neutrons in situ à haute température sous flux de H₂ et O₂". Rennes 1, 2011. http://www.theses.fr/2011REN1S023.
This study is concerned with the synthesis and crystal chemistry of new Ruddlesden-Popper n=1 oxides, which are of interest because of their potential as electrode materials in SOFCs. The sol-gel synthesis of two new families having the compositions has been undertaken, along with their characterisation by thermogravimetric analysis, magnetometry, and X-ray, neutron and electron diffraction. A novel reaction cell for in situ neutron thermodiffraction studies of redox behaviour under different gas flows has been designed. The evolution of the structure, particularly the disorder of the oxygen atoms, has been followed in situ and in real time under the working conditions of a SOFC anode under hydrogen by neutron thermodiffraction (instrument D20, ILL / Grenoble) as a function of the value of δ. Sequential Rietveld refinements showed a deintercalation of oxygen from the equatorial site controlled by the reduction of the cation M. These materials are promising at the technological level given their excellent chemical and structural stability under operating conditions, which is likely attributable to the presence of Cr³⁺ ions. This in situ monitoring was also applied to similar compositions synthesized at the University of Birmingham, UK. Data analysis by a combination of Rietveld refinements and maximum entropy clarified the details of the average nuclear density of the oxygens for certain values of δ and T, and suggested possible anisotropic diffusion pathways of the oxide ions in these compositions.
Boukerbout, Hassina. "Analyse en ondelettes et prolongement des champs de potentiel. Développement d'une théorie 3-D et application en géophysique". Rennes 1, 2004. http://www.theses.fr/2004REN10095.
Texto completoSeghier, Abdellatif. "Matrices de Toeplitz dans le cas d-dimensionnel : développement asymptotique à l'ordre d.Extension de fonctions de type positif dans le cas d-dimensionnel et maximum d'entropie : application à la reconstruction de densités". Paris 11, 1988. http://www.theses.fr/1988PA112038.
In the first two chapters we are concerned with the prediction of second-order stationary processes. Here the information depends on a part of the past. The main aspect of these papers is the use of Hilbertian techniques based on Toeplitz and Hankel operators. In the following three papers, we deal with an old problem of Szegö on the expansion of the determinant of Toeplitz matrices. In the multidimensional case we give a more precise expansion of the trace of the inverse (to order d). Moreover, the new coefficients which appear are strongly related to geometrical invariants of the domain on which the Toeplitz operators are truncated. In the last two papers new results about the reconstruction of spectral densities in the multidimensional case are given. The methods are based on extensions of positive definite functions and the maximum entropy principle. This work is motivated by the problem of the determination of the phases of the electron density function in crystal analysis. Nevertheless, there is still a great amount of work to be done in order to solve this problem.
Fischer, Richard. "Modélisation de la dépendance pour des statistiques d'ordre et estimation non-paramétrique". Thesis, Paris Est, 2016. http://www.theses.fr/2016PESC1039/document.
In this thesis we consider the modelling of the joint distribution of order statistics, i.e. random vectors with almost surely ordered components. The first part is dedicated to the probabilistic modelling of order statistics of maximal entropy with marginal constraints. Given the marginal constraints, the characterization of the joint distribution can be given by the associated copula. Chapter 2 presents an auxiliary result giving the maximum entropy copula with a fixed diagonal section. We give a necessary and sufficient condition for its existence, and derive an explicit formula for its density and entropy. Chapter 3 provides the solution for the maximum entropy problem for order statistics with marginal constraints by identifying the copula of the maximum entropy distribution. We give explicit formulas for the copula and the joint density. An application for modelling physical parameters is given in Chapter 4. In the second part of the thesis, we consider the problem of nonparametric estimation of maximum entropy densities of order statistics in Kullback-Leibler distance. Chapter 5 presents an aggregation method for probability density and spectral density estimation, based on the convex combination of the logarithms of these functions, and gives non-asymptotic bounds on the aggregation rate. In Chapter 6, we propose an adaptive estimation method based on a log-additive exponential model to estimate maximum entropy densities of order statistics which achieves the known minimax convergence rates. The method is applied to estimating flaw dimensions in Chapter 7.
Cousin, Jean. "Prédiction des distributions granulométriques par le formalisme d'entropie maximum. Applications à plusieurs injecteurs mécaniques". Rouen, 1996. http://www.theses.fr/1996ROUES037.
Lacerda Neto, Raul Liberato de. "Exploiting the wireless channel for communication". Nice, 2008. http://www.theses.fr/2008NICE4049.
Texto completoLe récent développement des communications mobiles a fait des réseaux sans fil un des secteurs technologiques les plus prometteurs. Stimulé par les avancées en traitement de l'information, le haut débit est récemment devenu le centre de la recherche dans le domaine des communications. La croissance des flux Internet et l'introduction d'une multitude d'applications ont abouti à une nouvelle ère des communications dans lesquelles les réseaux sans fil jouent un rôle très important. Cependant, l'environnement sans fil offre toujours quelques défis qui doivent être adressés avant d'atteindre les pré-requis nécessaires pour les futurs réseaux sans fils. En raison de la caractérisation imprécise du canal de transmission, une grande partie du potentiel du canal sans fil est gaspillé. En outre, les schémas de communications multi-utilisateurs actuels n'exploitent pas convenablement les différents degrés de liberté (espace et bande) du canal. Le but de cette thèse est de traiter précisément ces deux points. La première partie de cette thèse est consacrée à l'utilisation des outils de théorie des probabilités bayésiens qui permettent de générer des modèles basés seulement sur la connaissance partielle de l'environnement. A partir du principe de maximisation d'entropie, nous présentons une approche qui permet d'inférer sur les caractéristiques des canaux en choisissant les distributions de probabilité qui maximisent l'entropie sous des contraintes qui représentent l'état de la connaissance de l'environnement. Cette technique est à la base des procédés d'inférences et a connu un large succès dans différentes disciplines. Des modèles pour deux différents types d'environnement sont analysés et proposés: canaux à large bande et canaux multi-antennes (MIMO). Dans la deuxième partie, le problème d'accès multiple pour les systèmes ultra large bande (UWB) est étudié. Malgré la grande quantité de travaux effectués ces dernières années sur la technologie d'UWB, aucune technique d'accès multiple n'exploite actuellement la dispersion élevée des canaux d'UWB, ce qui engendre une faible efficacité spectrale. Un schéma innovant qui exploite les canaux des utilisateurs s'intitulant "Channel Division Multiple Access" (ChDMA) est proposé. Ce schéma fournit une solution très simple tout en obtenant une efficacité spectrale élevée
Triballier, Kaëlig. "Etude énergétique des processus d'atomisation primaire : application au comportement des injecteurs essence basse pression à triple disque". Rouen, 2003. http://www.theses.fr/2003ROUES032.
This study describes the atomization processes generated by triple-disk nozzles dedicated to port-fuel injected engines. The approach consists in considering all the stages of the spray production, from the internal flow characterization (FLUENT) to the spray drop size distribution analysis (Maximum Entropy Formalism). The study of the intermediate phase, the disintegration of the liquid flow, relies on an original image analysis approach. The atomization process is seen as a growth of the interface between the liquid and gas phases. Thus, a local interface measurement technique was developed and applied to images of the flows that issue from the injector. It led to the determination of the local stretching of the interface, allowing a better understanding of the atomization processes and of their origin. In parallel, it was shown that the interfaces are fractal objects. This characteristic of the atomization processes finds its origin in the turbulent nature of the flows involved. All these results led to the establishment of a two-dimensional map, allowing a classification of all the observed behaviours. These two dimensions are the non-axial kinetic energy components and the turbulent kinetic energy. This classification organizes in a coherent way the evolutions of the exit flow expansion, the stretching and fractal dimension of the interface, as well as the mathematical characteristics of the drop size distributions. Moreover, it clearly delimits a zone to be avoided, characterized by poor atomization.
Chen, Zhou. "Simulation virtuelle des essais de validation pour l'ameublement - meubles à base de plaques". Thesis, Paris Est, 2019. http://www.theses.fr/2019PESC2071.
In the furniture industry, numerical simulation allows the design and optimization of wood-based structures, thus avoiding expensive experimental campaigns. Most wood-based furniture presents some particular features in terms of material properties and geometries. On the one hand, the properties of timber materials (such as particle boards) are strongly heterogeneous and anisotropic. On the other hand, furniture is often made of simply-shaped elements and can therefore be represented by an assembly of plates and/or beams. The present work deals with those specific features and presents the identification of the elastic properties of particle boards from digital image correlation (DIC) [Chevalier et al. 2001] as well as the simulation of the mechanical behavior of furniture. First, three-point bending tests based on Timoshenko’s beam theory are performed on different samples cut from a prototype desk for the identification of the material properties using DIC techniques. Secondly, a probabilistic model for the uncertain material parameters is constructed by using the Maximum Entropy (MaxEnt) principle [Soize 2017] combined with a Markov Chain Monte-Carlo (MCMC) method based on the Metropolis-Hastings algorithm for generating realizations of the underlying random variables. Thirdly, numerical virtual tests are performed to propagate the uncertainties in the material properties through the model and assess the impact of such variabilities on the response of the structure. Lastly, several real tests conducted on the desk are used to validate the proposed numerical approach. Quite good agreement is observed between the numerical computations and the experimental measurements.
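A hedged sketch of the general recipe named above: a maximum entropy (Gamma-type) probability model for a positive elastic modulus with prescribed mean and dispersion, sampled by random-walk Metropolis-Hastings; the mean value, coefficient of variation and proposal scale are hypothetical, not the values identified in the thesis.

```python
import numpy as np

# Maximum entropy model for a positive elastic modulus E with prescribed mean
# and a repulsion from zero (constraint on E[ln E]), which yields a Gamma-type density.
mean_E = 3000.0          # hypothetical mean modulus (MPa)
delta = 0.2              # hypothetical coefficient of variation
shape = 1.0 / delta**2   # Gamma shape implied by the MaxEnt constraints
scale = mean_E / shape

def log_density(e):
    if e <= 0.0:
        return -np.inf
    return (shape - 1.0) * np.log(e) - e / scale   # unnormalized MaxEnt (Gamma) log-density

# Random-walk Metropolis-Hastings sampling of realizations of the modulus
rng = np.random.default_rng(2)
samples, current = [], mean_E
for _ in range(50000):
    proposal = current + 200.0 * rng.standard_normal()   # random-walk proposal
    if np.log(rng.random()) < log_density(proposal) - log_density(current):
        current = proposal
    samples.append(current)
samples = np.array(samples[5000:])                       # discard burn-in
print(samples.mean(), samples.std() / samples.mean())    # ≈ mean_E and ≈ delta
```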
Nasser, Hassan. "Analyse des trains de spike à large échelle avec contraintes spatio-temporelles : application aux acquisitions multi-électrodes rétiniennes". Phd thesis, Université Nice Sophia Antipolis, 2014. http://tel.archives-ouvertes.fr/tel-00990744.
Texto completoGallón, Gómez Santiago Alejandro. "Template estimation for samples of curves and functional calibration estimation via the method of maximum entropy on the mean". Toulouse 3, 2013. http://thesesups.ups-tlse.fr/2000/.
One of the main difficulties in functional data analysis is the extraction of a meaningful common pattern that summarizes the information conveyed by all functions in the sample. The problem of finding a meaningful template function that represents this pattern is considered in Chapter 2, assuming that the functional data lie on an intrinsically low-dimensional smooth manifold with an unknown underlying geometric structure embedded in a high-dimensional space. Under this setting, an approximation of the geodesic distance is developed based on a robust version of the Isomap algorithm. This approximation is used to compute the corresponding empirical Fréchet median function, which provides a robust intrinsic estimator of the template. Chapter 3 investigates the asymptotic properties of the quantile normalization method by Bolstad et al. (2003), which is one of the most popular methods to align density curves in microarray data analysis. The properties are proved by considering the method as a particular case of the structural mean curve alignment procedure by Dupuy, Loubes and Maza (2011). However, the method fails in some cases of mixtures, and a new methodology to cope with this issue is proposed via the algorithm developed in Chapter 2. Finally, the problem of calibration estimation for the finite population mean of a survey variable under a functional data framework is studied in Chapter 4. The functional calibration sampling weights of the estimator are obtained by matching the calibration estimation problem with the maximum entropy on the mean (MEM) principle. In particular, the calibration estimation is viewed as an infinite-dimensional linear inverse problem following the structure of the MEM approach. A precise theoretical setting is given and the estimation of functional calibration weights assuming, as prior measures, the centered Gaussian and compound Poisson random measures is carried out.
Li, Songzi. "W-entropy formulas on super Ricci flows and matrix Dirichlet processes". Toulouse 3, 2015. http://www.theses.fr/2015TOU30365.
This PhD thesis consists of five parts, which are closely related. In part 1, we prove the Harnack inequality and the logarithmic Sobolev inequalities for the heat semigroup of the Witten Laplacian on the K-super Ricci flows and the (K, m)-super Ricci flows. In part 2, we introduce the W-entropy for the heat equation of the weighted Laplacian on the K-super Ricci flows and the (K, m)-super Ricci flows, and prove its variational formula and monotonicity property. In part 3, we introduce the Langevin deformation of geometric flows on the Wasserstein space over compact Riemannian manifolds, which interpolates between the geodesic flow and the gradient flows on the Wasserstein space. The corresponding W-entropy formula is proved. In part 4, we study the Dyson Brownian motion on the octonion algebra, and give two specific models on which the invariant measure and the algebraic multiplicity can be determined. In part 5, we introduce matrix Dirichlet processes and the matrix Dirichlet distribution as their invariant measure.
Stoven, Véronique. "Nouvelles techniques pour la reconstruction par RMN de la structure tridimensionnelle de molécules en solution". Paris 11, 1989. http://www.theses.fr/1989PA112190.
Texto completoDe, bortoli Valentin. "Statistiques non locales dans les images : modélisation, estimation et échantillonnage". Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASN020.
In this thesis we study two non-local statistics in images from a probabilistic point of view: spatial redundancy and convolutional neural network features. More precisely, we are interested in the estimation and detection of spatial redundancy in natural images. We also aim at sampling images with neural network constraints. We start by giving a definition of spatial redundancy in natural images. This definition relies on two concepts: a Gestalt analysis of the notion of similarity in images, and a hypothesis testing framework (the a contrario method). We propose an algorithm to identify this redundancy in natural images. Using this methodology we can detect similar patches in images and, with this information, we propose new algorithms for diverse image processing tasks (denoising, periodicity analysis). The rest of this thesis deals with sampling images with non-local constraints. The image models we consider are obtained via the maximum entropy principle. The target distribution is then obtained by minimizing an energy functional. We use tools from stochastic optimization to tackle this problem. More precisely, we propose and analyze a new algorithm: the SOUL (Stochastic Optimization with Unadjusted Langevin) algorithm. In this methodology, the gradient is estimated using Markov Chain Monte Carlo methods. In the case of the SOUL algorithm we use an unadjusted Langevin algorithm. The efficiency of the SOUL algorithm is related to the ergodic properties of the underlying Markov chains. Therefore we are interested in the convergence properties of a certain class of functional autoregressive models. We characterize precisely the dependency of the convergence rates of these models with respect to their parameters (dimension, smoothness, convexity). Finally, we apply the SOUL algorithm to the problem of exemplar-based texture synthesis with a maximum entropy approach. We draw links between our model and other entropy maximization procedures (macrocanonical models, microcanonical models). Using convolutional neural network constraints we obtain state-of-the-art visual results.
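A toy one-dimensional sketch of the stochastic-optimization-with-unadjusted-Langevin idea behind SOUL: an inner unadjusted Langevin chain approximates samples from the current exponential-family model, whose statistic is compared with an observed value to update the parameter by stochastic gradient. The feature function stands in for neural network statistics and every constant is illustrative; this is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy macrocanonical-style model: p_theta(x) ∝ exp(theta * f(x) - x**2 / 2),
# fitted so that the model statistic E_theta[f(X)] matches an observed value.
f = lambda x: np.tanh(x)            # hypothetical feature (stands in for CNN features)
target = 0.4                        # hypothetical observed statistic

def grad_potential(x, theta):
    # gradient of U(x) = x**2/2 - theta*f(x)
    return x - theta * (1.0 - np.tanh(x) ** 2)

theta, x = 0.0, np.zeros(256)       # parameter and Langevin particles
gamma, step = 0.01, 0.05            # Langevin step and parameter step sizes
for n in range(2000):
    # inner loop: a few unadjusted Langevin steps under the current parameter
    for _ in range(5):
        x = x - gamma * grad_potential(x, theta) \
            + np.sqrt(2.0 * gamma) * rng.standard_normal(x.size)
    # stochastic gradient of the log-likelihood: observed minus model statistic
    theta += step * (target - f(x).mean())
print(theta, f(x).mean())           # model statistic should end up close to the target
```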
Krysta, Monika. "Modélisation numérique et assimilation de données de la dispersion de radionucléides en champ proche et à l'échelle continentale". Phd thesis, Université Paris XII Val de Marne, 2006. http://tel.archives-ouvertes.fr/tel-00652840.
Bochard, Pierre. "Vortex, entropies et énergies de ligne en micromagnétisme". Thesis, Paris 11, 2015. http://www.theses.fr/2015PA112119/document.
This thesis is motivated by mathematical questions arising from micromagnetism. A central topic is that of curl-free vector fields taking values in the sphere. Such fields naturally arise as minimizers of micromagnetic-type energies. The first part of this thesis is motivated by the following question: can we find a kinetic formulation characterizing curl-free vector fields taking values in the sphere in dimension greater than 2? Such a formulation was found in two dimensions by Jabin, Otto and Perthame. De Lellis and Ignat used this formulation to characterize curl-free vector fields taking values in the sphere with a given regularity. The main result of this part is the generalization of their kinetic formulation to any dimension and the proof that, if $d>2$, this formulation characterizes only constant vector fields and vortices, i.e. vector fields of the form $\pm \frac{x}{|x|}$. The second part of this thesis is devoted to a generalization of the notion of entropy, which plays a key role in the article of De Lellis and Ignat mentioned above. We give a definition of entropy in any dimension, and prove properties quite similar to those enjoyed by the classical two-dimensional entropy. The third part of this thesis, which is the result of joint work with Antonin Monteil, is about the study of an Aviles-Giga type energy. The main point of this part is a necessary condition for such an energy to be lower semicontinuous. We give in particular an example of an energy of this type for which the viscosity solution of the eikonal equation is not a minimizer. The last part, finally, is devoted to the study of a Ginzburg-Landau type energy in which the boundary condition of the classical Ginzburg-Landau energy introduced by Béthuel, Brezis and Hélein is replaced by a penalization within the energy, at the critical scaling, depending on a parameter. The core result of this part is the description of the asymptotics of the minimal energy, which, depending on the parameter, favours vortex-like configurations as in the classical Ginzburg-Landau case, or configurations that are singular along a line.
Zheng, Huicheng. "Modèles de maximum d'entropie pour la détection de la peau : application au filtrage de l'internet". Lille 1, 2004. https://pepite-depot.univ-lille.fr/LIBRE/Th_Num/2004/50376-2004-Zheng.pdf.
In a third step, we add constraints on the colours of neighbouring pixels. The resulting model is again a Markov field, but one containing a very large number of parameters. In order both to estimate the parameters of these models and to perform inference, that is, given a colour image, to compute for each pixel the probability that it corresponds to skin, we propose to approximate, locally, the graph associated with the pixels by a tree. The "iterative scaling" algorithm is then available for parameter estimation and the "belief propagation" algorithm for inference. We carried out numerous experimental studies to evaluate the respective performance of the different models, in particular by varying the size and geometry of the trees. Within the European project Poesia, we used our skin detector as the input of a classification system based on neural networks to block web pages that are unsuitable for children. We obtained extremely encouraging results.
Le, Tien-Thinh. "Modélisation stochastique, en mécanique des milieux continus, de l'interphase inclusion-matrice à partir de simulations en dynamique moléculaire". Thesis, Paris Est, 2015. http://www.theses.fr/2015PESC1172/document.
This work is concerned with the stochastic modeling and identification of the elastic properties in the so-called interphase region surrounding the inclusions in nanoreinforced composites. For the sake of illustration, a prototypical nanocomposite made up of a model polymer matrix filled with a silica nanoinclusion is considered. Molecular Dynamics (MD) simulations are first performed in order to gain physical insight into the local conformation of the polymer chains in the vicinity of the inclusion surface. In addition, a virtual mechanical testing procedure is proposed so as to estimate realizations of the apparent stiffness tensor associated with the MD simulation box. An information-theoretic probabilistic representation is then proposed as a surrogate model for mimicking the spatial fluctuations of the elasticity field within the interphase. The hyperparameters defining the aforementioned model are subsequently calibrated by solving, in a sequential manner, two inverse problems involving a computational homogenization scheme. The first problem, related to the mean model, is formulated in a deterministic framework, whereas the second one involves a statistical metric allowing the dispersion parameter and the spatial correlation lengths to be estimated. It is shown in particular that the spatial correlation length in the radial direction is roughly equal to the interphase thickness, hence showing that the scales under consideration are not well separated. The calibration results are finally refined by taking into account, by means of a random matrix model, the MD finite-sampling noise.
Durantel, Florent. "Mesure de luminescence induite par faisceaux d'ions lourds rapides résolue à l'echelle picoseconde". Thesis, Normandie, 2018. http://www.theses.fr/2018NORMC261/document.
We developed an instrument for measuring the luminescence induced by a heavy-ion beam (nucleons 12) with energy in the MeV/nucleon range. Based on a single photon counting method obtained by coincidences, the device can provide in the same run a 16-channel energy spectrum in the UV-visible-IR region (185-920 nm) and a time-resolved response in the range of ns up to µs for each channel. Temperature measurements can be performed from room temperature down to 30 K. This work places particular emphasis on data extraction methods: once the need to deconvolve the signals is established, the evaluation of different instrument profiles (simulated and reconstructed from measurements) leads to a systematic temporal characterization of each component of the device. Then, these instrumental profiles are used in two deconvolution methods: least squares first, followed by the maximum entropy method. Two typical materials are tested: strontium titanate, for the study of the dynamics of electronic excitation, and a commercial scintillator, BC400, for the study of ageing and the decrease in performance with fluence. In both cases, we have been able to highlight the presence of an ultrafast component with a subnanosecond time constant.
Rabenoro, Dimbihery. "Distribution asymptotique de vecteurs aléatoires indépendants non identiquement distribués conditionnés par leur somme. Lois limites fonctionnelles pour les incréments d’un processus de Lévy". Thesis, Sorbonne université, 2018. http://www.theses.fr/2018SORUS572.
In the first part of this work, we develop conditional limit theorems for independent, not necessarily identically distributed, random vectors. We thus extend classical theorems, such as the Gibbs conditioning principle, obtained in the i.i.d. case. We use, among other tools, some saddlepoint approximations. In the second part, we obtain a functional form of Erdös-Renyi theorems for the increments of Lévy processes. The main tools here are functional large deviation principles.
Caron, Jérôme. "Etude et validation clinique d'un modèle aux moments entropique pour le transport de particules énergétiques : application aux faisceaux d'électrons pour la radiothérapie externe". Thesis, Bordeaux, 2016. http://www.theses.fr/2016BORD0452/document.
In the radiotherapy field, dose deposition simulations in patients are performed on Treatment Planning Systems (TPS) equipped with specific algorithms that differ in the way they model the physical interaction processes of electrons and photons. Although those clinical TPS are fast, they show significant discrepancies in the neighbourhood of inhomogeneous tissues. My work consisted in validating, for clinical electron beams, an entropic moment-based algorithm called M1. Developed in CELIA for warm and dense plasma simulations, M1 relies on the resolution of the linearized Boltzmann kinetic equation for particle transport according to a moment decomposition. The M1 system of equations requires a closure based on the H-theorem (entropy maximisation). M1 dose deposition maps of 9 and 20 MeV electron beam simulations were compared to those extracted from reference code simulations: a clinical macro Monte-Carlo code (eMC) and full Monte-Carlo codes (GEANT4, MCNPX), as well as to experimental data. The different test cases consisted of homogeneous and complex inhomogeneous phantoms with bone and lung inserts. We found that the M1 model provided a dose deposition accuracy better than some Pencil Beam Kernel algorithms and close to that provided by the clinical macro and academic full Monte-Carlo codes, even in the worst inhomogeneous cases. Computation times were also investigated and found to be better than those of the Monte-Carlo codes.
Bennani, Youssef. "Caractérisation de la diversité d'une population à partir de mesures quantifiées d'un modèle non-linéaire. Application à la plongée hyperbare". Thesis, Nice, 2015. http://www.theses.fr/2015NICE4128/document.
This thesis proposes a new method for nonparametric density estimation from censored data, where the censoring regions can have arbitrary shape and are elements of partitions of the parametric domain. This study has been motivated by the need for estimating the distribution of the parameters of a biophysical model of decompression, in order to be able to predict the risk of decompression sickness. In this context, the observations correspond to quantified counts of bubbles circulating in the blood of a set of divers having explored a variety of diving profiles (depth, duration); the biophysical model predicts the gas volume produced along a given diving profile for a diver with known biophysical parameters. In a first step, we point out the limitations of the classical nonparametric maximum-likelihood estimator. We propose several methods for its calculation and show that it suffers from several problems: in particular, it concentrates the probability mass in a few regions only, which makes it inappropriate for the description of a natural population. We then propose a new approach relying both on the maximum-entropy principle, to ensure suitable regularity of the solution, and on the maximum-likelihood criterion, to guarantee a good fit to the data. It consists in searching for the probability law with maximum entropy whose maximum deviation from the empirical averages is set by maximizing the data likelihood. Several examples illustrate the superiority of our solution compared to the classical nonparametric maximum-likelihood estimator, in particular concerning generalisation performance.
Krysta, Monika, François-Xavier Le Dimet, and Marc Bocquet. "Modélisation numérique et assimilation de données de la dispersion de radionucléides en champ proche et à l'échelle continentale". Créteil : Université de Paris-Val-de-Marne, 2006. http://doxa.scd.univ-paris12.fr:80/theses/th0243186.pdf.
Tixier, Eliott. "Variability modeling and numerical biomarkers design in cardiac electrophysiology". Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066325/document.
Texto completoThis PhD thesis is dedicated to the study of the variability observed in cardiac electrophysiology (i.e. the electrical activity of biological tissues) measurements and to the design of numerical biomarkers extracted from these measurements. The potential applications are numerous, ranging from a better understanding of existing electrophysiology models to the assessment of adverse effects of drugs or the diagnosis of cardiac pathologies. The cardiac electrophysiology models considered in the present work are either ODEs or PDEs depending on whether we focus on the cell scale or the tissue scale. In both cases, these models are highly non-linear and computationally intensive. We proceed as follows: first we develop numerical tools that address general issues and that are applicable beyond the scope of cardiac electrophysiology. Then, we apply those tools to synthetic electrophysiology measurements in various realistic scenarios and, when available, to real experimental data. In the first part of this thesis, we present a general method for estimating the probability density function (PDF) of uncertain parameters of models based on ordinary differential equations (ODEs) or partial differential equations (PDEs). The method is non-instrusive and relies on offline evaluations of the forward model, making it computationally cheap in practice compared to more sophisticated approaches. The method is illustrated with generic PDE and ODE models. It is then applied to synthetic and experimental electrophysiology measurements. In the second part of this thesis, we present a method to extract and select biomarkers from models outputs in view of performing classication tasks or solving parameter identification problems. The method relies on the resolution of a sparse optimization problem. The method is illustrated with simple models and then applied to synthetic measurements, including electrocardiogram recordings, and to experimental data obtained from micro-electrode array measurements
Bosché, Aurélien. "Flots géodésiques expansifs sur les variétés compactes sans points conjugués". Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAM085/document.
This thesis is divided into two independent parts. In the first part we investigate dynamical properties of expansive geodesic flows on compact manifolds without conjugate points, using the work of R. O. Ruggiero. More precisely, we show that such a flow admits a unique measure of maximal entropy and construct this measure. This extends results known for non-positively curved manifolds of rank one (and our construction is analogous). We then show, using this measure of maximal entropy, that the asymptotics of Margulis (known for compact negatively curved manifolds) on the number of geodesic loops still hold in this framework. In the second part we study isometries of finite-dimensional symmetric cones for both the Thompson and the Hilbert metric. More precisely, we show that the isometry group induced by the linear automorphisms preserving such a cone is a subgroup of finite index in the full group of isometries for those two metrics, and we give a natural set of representatives of the quotient. This extends results of L. Molnar (who studied such isometries for the symmetric irreducible cone of symmetric positive definite operators on a complex Hilbert space).
Chauvet, Guillaume Carbon Michel. "Méthodes de Bootstrap en population finie". Rennes : Université Rennes 2, 2008. http://tel.archives-ouvertes.fr/tel-00267689/fr.
Broux, Thibault. "Caractérisations structurales in situ avancées d'oxydes dérivées de la pérovskite pour des applications électrochimiques à haute température". Thesis, Rennes 1, 2014. http://www.theses.fr/2014REN1S125/document.
This thesis is focused on oxides related to the perovskite, such as the K2NiF4 structure type, the double perovskite and the brownmillerite, with mixed conduction properties. This ability to conduct both oxygen ions and electrons is relevant for electrochemical devices operating at high temperature, particularly as electrodes for solid oxide fuel cells. Specifically, this thesis deals with the synthesis and advanced crystal-structure characterization of the reactivity of these materials, mainly at large-scale facilities by means of neutron powder diffraction (NPD) and synchrotron X-ray diffraction. Preliminary work in these studies involves inorganic synthesis by solid-state or sol-gel routes, thermogravimetric analysis and iodometric titration. Original reactivity cells have been developed at the ISCR to study redox behavior under different gas flows and as a function of temperature for both neutron diffraction and synchrotron X-ray experiments. The in situ NPD study of La2-xSrxMnO4±δ compounds, where x = 2.0 and x = 0.8, which derive from the reference cathode compound La1-xSrxMnO3, made it possible to follow the structural evolution as a function of δ under reducing conditions for x = 2.0 and oxidizing conditions for x = 0.8. The synchrotron study of Pr2NiO4.22 highlighted the monoclinic symmetry at room temperature, whereas previous studies reported an orthorhombic symmetry. Besides, structural changes including the transition to the HTT phase are accompanied by an incommensurate modulation that persists at least up to 900 °C. The study of the double perovskites NdBaCo2-xMnxO5+δ, where 0 ≤ x ≤ 2, showed that these materials exhibit promising electrical conductivities for SOFC cathode applications. In addition, the comparison of molecular dynamics and NPD combined with MEM for the x = 0 compound has elucidated the oxygen diffusion mechanism in these compounds. The NPD study, under reducing conditions, of the reduction of LaSrFeCoO6 to the brownmillerite LaSrFeCoO5 showed that the reduced structure persists at high temperatures and made it possible to follow the evolution of the ordering of the magnetic moments on cooling LaSrFeCoO5.
Labsy, Zakaria. "Méthode indirecte de détermination de la vitesse maximale aérobie sur le terrain par un test spécifique au football". Paris 11, 2002. http://www.theses.fr/2002PA112318.
The aim of the study is to test the accuracy of two versions of a specific soccer field test for assessing Maximal Aerobic Velocity (MAV). The original Probst field test consists of the repetition of successive 280 m runs including changes in direction, separated by a 30 s rest, with an initial speed of 8.4 km/h and a 0.6 km/h increment at each stage. The adapted version is carried out with the same protocol but with constant stages of 2 min length and a 1.2 km/h increment at each stage. The first study demonstrates that the original version of the Probst field test significantly overestimates the maximal aerobic velocity in soccer players, but not in middle-distance runners, in comparison with two classical tests: an aligned version, and a laboratory test with the same protocol as the Probst test. The second study confirms this overestimation and demonstrates that our adapted version is much closer to, and highly correlated with, the maximal aerobic velocity obtained by a laboratory treadmill test in soccer players. In conclusion, our adapted version of the Probst test, but not the original version, appears to be a reliable and easily conducted specific soccer field test for estimating the maximal aerobic velocity in soccer players.
Caputo, Jean-Guy. "Dimension et entropie des attracteurs associés à des écoulements réels : estimation et analyse de la méthode". Grenoble 1, 1986. http://www.theses.fr/1986GRE10057.
Minoukadeh, Kimiya. "Deterministic and stochastic methods for molecular simulation". Phd thesis, Université Paris-Est, 2010. http://tel.archives-ouvertes.fr/tel-00597694.
Barbut, Charlotte. "Méthode d'évaluation de l'intérêt d'un tri basée sur les bilans de valeurs". Compiègne, 2005. http://www.theses.fr/2005COMP1605.
We present a method to estimate the benefit of sorting. This method is based on value assessments. We apply it to sorting operations producing two types of mixtures of materials, for recycling or for incineration. We study three properties: the energy provided by the operators, the recyclability of a mixture of materials, and its combustibility. To define the recyclability of a mixture of materials, we choose entropy, as a measure of the level of order. By analogy with thermodynamics, we define the energy provided by the operators. Next, we use exergy to estimate the energy contained in a fuel. Finally, we assign values to these three properties. Only a total value of a sorting system above the value of the initial mixture indicates a worthwhile sorting. However, if this total value peaks during sorting, continuing to sort after this maximum becomes pointless.
Poinsotte, Patrice. "Pluie, crue et inondation maximale probable en Versilia (Toscane, Alpes Apuane) : une nouvelle méthode d'évaluation des aléas pluvio-hydrologiques extrêmes". Dijon, 2004. http://www.theses.fr/2004DIJOL009.
The purpose of this work is to build a new methodology for flood management and damage mitigation with accepted standards, especially for risk map implementation. The objective is to implement, in models and tools, a new synthetic approach based on the local-storm Probable Maximum Precipitation (PMP) and the Probable Maximum Flood (PMF). These concepts are devoted to hazard mitigation and thermodynamic improvement in the Apuane Alps. Research is done in the field of regionalisation in hydrology and in the evaluation of extreme rainfalls and discharges. First results have been obtained and are consistent with Mediterranean meteorology. The quantification and localization of the maximum hazard, at the watershed scale, gives a good estimation of the objectives of protection against floods. Thus, the methodology gives effective answers to help decision-makers and engineers develop solutions to specific problems in flood risk prevention.
Billat, Véronique. "Puissance critique déterminée par la lactatémie en régime continu d'exercice musculaire pour la validation d'une méthode d'évaluation de la capacité maximale aérobie". Grenoble 1, 1988. http://www.theses.fr/1988GRE10140.
Boussandel, Sahbi. "Méthodes de résolution d'équations algébriques et d'évolution en dimension finie et infinie". Thesis, Metz, 2010. http://www.theses.fr/2010METZ027S/document.
In this work, we solve algebraic and evolution equations in finite- and infinite-dimensional spaces. In the first chapter, we use the Galerkin method to study existence and maximal regularity of solutions of an abstract gradient system, with applications to non-linear diffusion equations and to non-degenerate quasilinear parabolic equations with nonlocal coefficients. In the second chapter, we study local existence, uniqueness and maximal regularity of solutions of the curve shortening flow equation by using the local inverse function theorem. Finally, in the third chapter, we solve an algebraic equation between two Banach spaces by using the continuous Newton method, and we apply this result to solve a non-linear ordinary differential equation with periodic boundary conditions.
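A minimal sketch of the continuous Newton method mentioned above, for a finite-dimensional algebraic system: the Newton flow $x'(t) = -Df(x)^{-1} f(x)$ is integrated with an explicit Euler scheme; the system and the starting point are hypothetical.

```python
import numpy as np

# Continuous Newton method: along the exact flow x'(t) = -Df(x)^{-1} f(x),
# the residual decays exponentially, f(x(t)) = exp(-t) f(x(0)).
def f(x):
    return np.array([x[0] ** 2 + x[1] ** 2 - 4.0, x[0] - x[1]])

def jac(x):
    return np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])

x = np.array([3.0, 1.0])      # hypothetical starting point
dt = 0.05
for _ in range(400):
    x = x - dt * np.linalg.solve(jac(x), f(x))   # explicit Euler step on the Newton flow
print(x, f(x))                 # converges to (sqrt(2), sqrt(2)) with a tiny residual
```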