Theses on the topic "Modèles de calcul non standards"
Below are the top 50 dissertations (master's and doctoral theses) on the research topic "Modèles de calcul non standards".
Emmanuel, Aurélien. "Courbes d'accumulations des machines à signaux". Electronic Thesis or Diss., Orléans, 2023. http://www.theses.fr/2023ORLE1079.
This thesis studies a geometric model of computation: signal machines. We show how to draw graphs of functions using binary trees. In the world of cellular automata, one often considers particles or signals: structures that are periodic in time and space, that is, structures that move at constant speed. When several signals meet, a collision occurs, and the incoming signals can continue, disappear, or give rise to new signals, depending on the rules of the cellular automaton. Signal machines are a model of computation that takes these signals as basic building blocks. Visualized in a space-time diagram, with space on the horizontal axis and time running upwards, this model computes by drawing segments and half-lines: we extend segments upwards until two or more intersect, and then start new segments according to predefined rules. Compared to cellular automata, signal machines allow a new phenomenon to emerge: the density of signals can be arbitrarily large, even infinite, even when starting from a finite initial configuration. Points of the space-time diagram whose every neighborhood contains infinitely many signals are called accumulation points. This new phenomenon allows new problems to be stated geometrically. For example, which isolated accumulation points can be reached using rational initial positions and rational speeds? Can the set of accumulation points be made a segment? A Cantor set? In this thesis, we tackle the problem of characterizing the graphs of functions that can be drawn as an accumulation set. This work is part of the exploration of the computational power of signal machines, which in turn belongs to the study of the computational power of non-standard models. We show that the functions defined on a compact segment of the real line whose graph coincides with the accumulation set of a signal machine are exactly the continuous functions.
More generally, we show how signal machines can draw any lower semicontinuous function. We also study the question under computability constraints, with the following result: if a computable signal-machine diagram coincides with the graph of a Lipschitz function with a sufficiently small Lipschitz constant, then that function is the limit of an increasing computable sequence of rational step functions.
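As a rough illustration (not taken from the thesis itself), the basic collision mechanic of a signal machine can be sketched by computing where signals of constant speed first meet in the space-time diagram; the function name and representation below are illustrative assumptions:

```python
def first_collision(signals, t_now=0.0):
    """Earliest future meeting point of any two signals.

    Each signal is a pair (x, v): its position at time t_now and its
    constant speed. Returns (t, x, i, j) for the earliest collision,
    or None if no two signals ever meet (all parallel).
    """
    best = None
    for i, (xi, vi) in enumerate(signals):
        for j, (xj, vj) in enumerate(signals):
            if j <= i or vi == vj:
                continue  # same signal, or parallel trajectories
            t = t_now + (xj - xi) / (vi - vj)
            if t > t_now and (best is None or t < best[0]):
                best = (t, xi + vi * (t - t_now), i, j)
    return best
```

A full simulator would, at each collision, replace the incoming signals by outgoing ones according to the machine's rule table and repeat; accumulation points arise when these collision times converge.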
Pégny, Maël. "Sur les limites empiriques du calcul : calculabilité, complexité et physique". Thesis, Paris 1, 2013. http://www.theses.fr/2013PA010673/document.
Recent years have seen a surge of interest in non-standard computational models, inspired by physical, biological or chemical phenomena. The exact properties of some of these models have been a topic of somewhat heated discussion: what do they compute? And how fast do they compute? The stakes of these questions were heightened by the claim that these models would exceed the accepted limits of computation, by violating the Church-Turing Thesis or the Extended Church-Turing Thesis. To answer these questions, the physical realizability of some of those models, or lack thereof, has often been put at the center of the argument. It thus seems that empirical considerations have been introduced into the very foundations of computability and computational complexity theory, two subjects that would previously have been considered purely a priori parts of logic and computer science. Consequently, this dissertation is dedicated to the following question: do computability and computational complexity theory rest on empirical foundations, and if so, what are these foundations? We first examine the precise meaning of those limits of computation, and articulate a philosophical conception of computation able to make sense of this variety of models. We then answer the first question in the affirmative, through a careful examination of current debates around non-standard models. We show the various difficulties surrounding the second question, and study how they stem from the complex translation of computational concepts into physical limitations.
Bizouard, Vincent. "Calculs de précision dans un modèle supersymétrique non minimal". Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAY075/document.
Although the Standard Model has been very successful so far, it presents several limitations showing that it is only an effective low-energy theory. For example, neutrino masses and dark matter are not predicted by this model. Gravity is also not taken into account, and we expect it to play a quantum role at energies around the Planck mass. Moreover, radiative corrections to the Higgs boson mass suffer from quadratic divergences. All these problems underline the fact that new physics should appear, described by an extension of the Standard Model. One well-motivated possibility is to add a new space-time symmetry, called Supersymmetry, which links bosons and fermions. In its minimal extension, Supersymmetry can already solve the dark matter paradox with a natural candidate, the neutralino, and provides a cancellation of the dangerous quadratic corrections to the Higgs boson mass. In this thesis, we focused on the Next-to-Minimal SuperSymmetric extension of the Standard Model, the NMSSM. To compare theoretical predictions with experiments, physical observables must be computed precisely. Since these calculations are long and complex, automation is desirable. This was done by developing SloopS, a program that computes decay widths and cross-sections at one-loop order in Supersymmetry. With this code, we first analysed the decay of the Higgs boson into a photon and a Z boson. This decay mode is induced at the quantum level and is thus an interesting probe of new physics. Its measurement started during Run 1 of the LHC and continues in Run 2. The possibility of a deviation between the measured signal strength and the one predicted by the Standard Model motivates a careful theoretical analysis in models beyond the Standard Model, which we carried out within the NMSSM. Our goal was to compute radiative corrections for any process in this model.
To cancel the ultraviolet divergences appearing in higher-order computations, we carried out and implemented the renormalisation of the NMSSM in SloopS. Finally, the renormalised model could be used to compute radiative corrections to the masses and decay widths of Higgs bosons and supersymmetric particles in the NMSSM, and to compare results between different renormalisation schemes.
Boughattas, Sedki. "L'Arithmétique ouverte et ses modèles non-standards". Paris 7, 1987. http://www.theses.fr/1987PA077044.
Ren, Chengfang. "Caractérisation des performances minimales d'estimation pour des modèles d'observations non-standards". Thesis, Paris 11, 2015. http://www.theses.fr/2015PA112167/document.
In the context of parametric estimation, estimator performance can be characterized, inter alia, by the mean square error (MSE) and the resolution limit. The first quantifies the accuracy of the estimated values and the second defines the ability of the estimator to correctly resolve distinct parameters. This thesis deals first with the prediction of the "optimal" MSE by using lower bounds in the hybrid estimation context (i.e. when the parameter vector contains both random and non-random parameters), second with the extension of Cramér-Rao bounds to non-standard estimation problems, and finally with the characterization of estimator resolution. The manuscript is accordingly divided into three parts. First, we fill some gaps in hybrid lower bounds on the MSE by using two existing Bayesian lower bounds: the Weiss-Weinstein bound and a particular form of the Ziv-Zakai family of lower bounds. We show that these extended lower bounds are tighter than the existing hybrid lower bounds for predicting the optimal MSE. Second, we extend Cramér-Rao lower bounds to uncommon estimation contexts, namely: (i) when the non-random parameters are subject to equality constraints (linear or nonlinear); (ii) for discrete-time filtering problems when the evolution of the states is defined by a Markov chain; (iii) when the observation model differs from the true data distribution. Finally, we study the resolution of estimators whose probability distributions are known. This approach extends the work of Oh and Kashyap and the work of Clark to multidimensional parameter estimation problems.
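For context (standard textbook material, not a result of this thesis), the classical Cramér-Rao bound that the manuscript extends states that the error covariance of any unbiased estimator of a deterministic parameter vector is bounded below by the inverse of the Fisher information matrix:

```latex
\mathrm{MSE}(\hat{\boldsymbol{\theta}})
  = \mathbb{E}\!\left[(\hat{\boldsymbol{\theta}}-\boldsymbol{\theta})
    (\hat{\boldsymbol{\theta}}-\boldsymbol{\theta})^{T}\right]
  \succeq \mathbf{F}^{-1}(\boldsymbol{\theta}),
\qquad
[\mathbf{F}(\boldsymbol{\theta})]_{ij}
  = -\,\mathbb{E}\!\left[\frac{\partial^{2}\ln p(\mathbf{x};\boldsymbol{\theta})}
    {\partial\theta_{i}\,\partial\theta_{j}}\right]
```

The hybrid and constrained contexts studied in the thesis arise precisely where this standard form no longer applies directly.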
Kechid, Mounir. "La programmation dynamique non-serial dans les modèles de calcul parallèle BSP/CGM". Amiens, 2009. http://www.theses.fr/2009AMIE0110.
This decade has seen a migration of parallel hardware towards coarse-grain multiprocessor systems. However, the majority of traditional parallel software is designed for fine-grain systems and for shared-memory machines. One of the main current challenges in the design of parallel algorithms is to reduce this incompatibility, known as the "software-hardware gap". Much interest is therefore focused on the design of efficient parallel algorithms for coarse-grain multiprocessor systems, and it is in this context that this thesis contributes. We use the BSP/CGM parallel computing model (Bulk Synchronous Parallel / Coarse Grained Multicomputer) to design solutions for problems solved by the dynamic programming approach. We are interested in a typical sample of non-serial polyadic dynamic programming, which is characterized by very strong dependencies between computations. This is an important class of problems widely used in high-performance applications (MCOP: the matrix chain ordering problem; OBST: the optimal binary search tree problem; CTP: the convex polygon triangulation problem). We first present a detailed study of the design tools, i.e. the BSP/CGM model, as well as a proposed refinement of the BSP cost model to improve its prediction accuracy. We then present a generic BSP/CGM solution for the aforementioned problem class. Finally, after a study of the constraints on accelerating this generic solution in the BSP/CGM model for the MCOP and OBST problems, two accelerated BSP/CGM algorithms are proposed.
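As a point of reference (the sequential textbook algorithm, not the BSP/CGM parallelization developed in the thesis), the MCOP recurrence whose split over all intermediate positions k creates the strong non-serial dependencies can be sketched as:

```python
def matrix_chain_cost(dims):
    """Minimum number of scalar multiplications to evaluate a matrix chain.

    Matrix i has shape dims[i-1] x dims[i]. Classic O(n^3) dynamic program:
    cost[i][j] depends on every pair (cost[i][k], cost[k+1][j]), which is
    the polyadic, non-serial dependency structure discussed in the abstract.
    """
    n = len(dims) - 1  # number of matrices in the chain
    cost = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):            # subchain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            cost[i][j] = min(
                cost[i][k] + cost[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                for k in range(i, j)          # all split positions
            )
    return cost[1][n]
```

For example, for three matrices of shapes 10x30, 30x5 and 5x60, the optimal parenthesization (A1 A2) A3 costs 1500 + 3000 = 4500 multiplications.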
Pétrino, Paule. "Mise au point d'une méthode prédictive de calcul des viscosités des mélanges liquides non ioniques". Aix-Marseille 3, 1991. http://www.theses.fr/1991AIX3A004.
Daouia, Abdelaati. "Analyse non-paramétrique des frontières de production et des mesures d'efficacité à l'aide de quantiles conditionnels non-standards". Toulouse 1, 2003. http://www.theses.fr/2003TOU10049.
The present thesis is part of the literature on nonparametric frontier and efficiency estimation. Chapter 1 introduces a mathematical formulation of the problem of efficiency measurement, showing its connection with the problem of estimating the support of a multivariate distribution, and summarizes the contributions of this dissertation.
Cois, Olivier. "Systèmes linéaires non entiers et identification par modèle non entier : application en thermique". Bordeaux 1, 2002. http://www.theses.fr/2002BOR12534.
This thesis deals with system representation and identification using fractional models. Chapter 1 recalls the definitions and main properties of fractional operators. Chapter 2 studies a continuous MIMO complex-fractional system through a generalized state-space representation; a stability theorem is established from the analytical expression of the output. Chapter 3 deals with the modeling of diffusive processes using fractional differentiation operators: heat transfer through six different finite and semi-infinite media is studied, and approximations using integer-order or fractional transfer functions are then established and compared. Chapter 4 is devoted to system identification by fractional models; equation-error methods as well as output-error methods are developed. Finally, chapter 5 applies system identification to the solving of a thermal inverse problem, with an example consisting of the estimation of the thermal cut conditions.
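To illustrate what a fractional differentiation operator computes (a generic textbook discretization, not the specific operators developed in this thesis), the Grünwald-Letnikov scheme approximates a derivative of arbitrary order alpha:

```python
def gl_derivative(f, t, alpha, h=1e-3, n=2000):
    """Grunwald-Letnikov approximation of the fractional derivative
    (d^alpha f)(t), truncated to n terms of step h.

    For alpha = 1 the weights collapse to (1, -1, 0, ...), recovering the
    ordinary backward difference (f(t) - f(t - h)) / h.
    """
    w = 1.0              # w_0 = 1; recurrence w_k = w_{k-1} * (k - 1 - alpha) / k
    total = f(t)
    for k in range(1, n + 1):
        w *= (k - 1 - alpha) / k
        total += w * f(t - k * h)
    return total / h ** alpha
```

Intermediate orders (e.g. alpha = 0.5) interpolate between the identity (alpha = 0) and the first derivative (alpha = 1), which is what makes these operators natural for diffusion, where half-order behavior appears.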
BERGA, ABDELMAJID. "Calcul elastoplastique des sols a lois non associees par elements finis base sur l'approche des materiaux standards implicites". Compiègne, 1993. http://www.theses.fr/1993COMP595S.
Riga, Candia. "Calcul fonctionnel non-anticipatif et applications en finance". Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066737/document.
This thesis develops a mathematical framework for the analysis of continuous-time trading strategies which, in contrast to the classical setting of continuous-time finance, does not rely on stochastic integrals or other probabilistic notions. Using the "non-anticipative functional calculus", we first develop a pathwise definition of the gain process for a large class of continuous-time trading strategies, including delta-hedging strategies, as well as a pathwise definition of the self-financing condition. Using these concepts, we propose a framework for analyzing the performance and robustness of delta-hedging strategies for path-dependent derivatives across a given set of scenarios. Our setting allows for general path-dependent payoffs and does not require any probabilistic assumption on the dynamics of the underlying asset, thereby extending previous results on the robustness of hedging strategies in the setting of diffusion models. We obtain a pathwise formula for the hedging error of a general path-dependent derivative and provide sufficient conditions ensuring the robustness of the delta-hedge. We show in particular that robust hedges may be obtained in a large class of continuous exponential martingale models under a vertical convexity condition on the payoff functional. We also show that discontinuities in the underlying asset always deteriorate the hedging performance. These results are applied to the case of Asian options and barrier options. The last chapter, independent of the rest of the thesis, proposes a novel method, jointly developed with Andrea Pascucci and Stefano Pagliarani, for analytical approximations in local volatility models with Lévy jumps.
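For intuition (a hypothetical discretization, not the thesis's continuous-time construction), the pathwise gain of a non-anticipative strategy along a discretized price path is simply a Riemann sum in which the position over each interval may only depend on prices observed so far:

```python
def pathwise_gain(path, phi):
    """Gain of a trading strategy along a discretized price path.

    path: observed prices S(t_0), ..., S(t_n)
    phi:  function returning the position held over [t_i, t_{i+1}),
          computed from the path observed up to t_i (non-anticipative).
    No probability model for the path is needed: the gain is defined
    path by path.
    """
    gain = 0.0
    for i in range(len(path) - 1):
        gain += phi(path[: i + 1]) * (path[i + 1] - path[i])
    return gain
```

A buy-and-hold strategy (position always 1) recovers the total price change of the path, the simplest sanity check of the self-financing bookkeeping.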
Le, Bodic Pierre. "Variantes non standards de problèmes d'optimisation combinatoire". Thesis, Paris 11, 2012. http://www.theses.fr/2012PA112190.
This thesis is composed of two parts, each belonging to a sub-domain of combinatorial optimization a priori distant from the other. The first research subject is stochastic bilevel programming. This term joins two research subjects rarely studied together, namely stochastic programming on the one hand and bilevel programming on the other. Mathematical Programming (MP) is a set of modeling and resolution methods that can be used to tackle practical problems and help take decisions. Stochastic programming and bilevel programming are two sub-domains of MP, each able to model a specific aspect of these practical problems. Starting from a practical problem, we design a mathematical model in which the bilevel and stochastic aspects are used together, then apply a series of transformations to this model. A resolution method is proposed for the resulting MP. We then theoretically prove and numerically verify that this method converges. This algorithm can be used to solve bilevel programs other than the ones we study. The second research subject of this thesis is "partial cut and cover problems in graphs". Cut and cover problems are among the most studied from the complexity and algorithmic points of view. We consider some of these problems in a partial variant, meaning that the cut or cover property under consideration must be satisfied only partially, according to a given parameter, and not completely, as in the original problems. More precisely, the problems that we study are the partial multicut, the partial multiterminal cut, and the partial dominating set. Versions of these problems where vertices are…
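To illustrate the "partial" notion on the simplest covering problem (a generic greedy sketch for partial set cover, not one of the thesis's algorithms, which address the harder graph variants):

```python
def greedy_partial_cover(sets, k):
    """Greedily choose sets until at least k elements are covered.

    A partial variant of set cover: the requirement is relaxed from
    'cover all elements' to 'cover at least k elements', with k as
    the partiality parameter.
    """
    covered = set()
    chosen = []
    while len(covered) < k:
        # pick the set contributing the most not-yet-covered elements
        best = max(range(len(sets)), key=lambda i: len(sets[i] - covered))
        if not sets[best] - covered:
            raise ValueError("cannot cover k elements with the given sets")
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered
```

With k equal to the universe size this is ordinary greedy set cover; smaller k may admit much cheaper solutions, which is what makes the partial variants algorithmically interesting.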
Rota, Nodari Simona. "Etude mathématique de modèles non linéaires issus de la physique quantique relativiste". Paris 9, 2011. http://basepub.dauphine.fr/xmlui/handle/123456789/7233.
This thesis is devoted to the study of two nonlinear relativistic quantum models. In the first part, we prove by a perturbation method the existence of solutions of the coupled Einstein–Dirac–Maxwell equations for a static, spherically symmetric system of two fermions in a singlet spinor state and for a weak electromagnetic coupling. In the second part, we study a relativistic mean-field model that describes the behavior of nucleons in the atomic nucleus. We provide a condition that ensures the existence of a ground-state solution of the relativistic mean-field equations in a static case; in particular, we relate the existence of critical points of a strongly indefinite energy functional to strict concentration-compactness inequalities.
Dolejší, Vít. "Sur les méthodes combinant des volumes finis et des éléments finis pour le calcul d'écoulements compressibles sur des maillages non structurés". Aix-Marseille 2, 1998. http://www.theses.fr/1998AIX22123.
Munch, Guillaume. "Syntaxe et modèles d'une composition non-associative des programmes et des preuves". Paris 7, 2013. http://www.theses.fr/2013PA077130.
This thesis is a contribution to the understanding of the nature, role, and mechanisms of polarisation in programming languages, proof theory, and categorical models. Polarisation corresponds to the idea that we can relax the associativity of composition, as we show by relating duploids, our direct model of polarisation, to adjunctions. As a consequence, polarisation underlies many models of computation, which we further show by decomposing continuation-passing-style models of delimited control into three fundamental steps, allowing us to reconstruct four call-by-name variants of the shift and reset operators. It also explains constructiveness-related phenomena in proof theory, which we illustrate by providing a formulae-as-types interpretation of polarisation in general and of an involutive negation in particular. The cornerstone of our approach is an interactive term-based representation of proofs and programs (L calculi) which exposes the structure of polarities. It is based on the correspondence between abstract machines and sequent calculi, and it aims at synthesising various trends: the modelling of control, evaluation order and effects in programming languages, the quest for a relationship between categorical duality and continuations, and the interactive notion of construction in proof theory. We give a gentle introduction to our approach which only assumes elementary knowledge of the simply-typed lambda calculus and rewriting.
Baron, Céline. "Contribution à l'estimation de paramètres physiques à l'aide de modèles d'ordre réduit". Poitiers, 2003. http://www.theses.fr/2003POIT2349.
The work presented in this dissertation deals with a methodology for the determination of parametric uncertainty domains and for the estimation of physical parameters using reduced-order models. First, we present a methodology for the determination of parametric uncertainty domains under a bounded-error hypothesis. This method is based on the global identification approach defined by J. Richalet; the uncertainty domains correspond to iso-criterion curves given by the minimization of a quadratic criterion. We then present a methodology for the estimation of physical parameters using reduced-order models. The modelling error, responsible for a deterministic bias of the estimator, is taken into account using a constrained black-box model, the constraints being introduced through time or frequency moments. Application examples are used to validate this methodology.
Bensalem, Hadjira. "Calcul non linéaire des structures en béton : cas particulier du béton soumis au feu". Electronic Thesis or Diss., Université Grenoble Alpes, 2022. https://thares.univ-grenoble-alpes.fr/2022GRALI014.pdf.
The serious fires of recent decades involving concrete structures, such as those in the Channel Tunnel (1996) or the Mont Blanc tunnel (1999), have brought to the fore the issue of concrete subjected to fire and its consequences. Ordinary concrete is renowned for its good fire resistance, but the appearance and use of high-performance and ultra-high-performance concretes, while solving unprecedented technical problems, simultaneously pose the problem of their greater sensitivity to fire due to their greater compactness. This study is a contribution to the understanding of thermal degradation processes through an experimental and numerical approach. In the experimental approach, high-performance concrete slab specimens, formulated from standardized sand and instrumented with thermocouples, were subjected to thermal spalling tests following the ISO 834 standard fire temperature curve of Eurocode 2. Various thermal and event data collected during and after the tests were analyzed: the time to first burst, the furnace temperature and the concrete temperature at first burst, as well as the debris produced, were characterized. Using some of the experimental results as input data, numerical modeling allowed the temperature field in the specimens to be determined satisfactorily during testing; in some cases, inverse-method analysis was necessary. By subsequently using a thermomechanical approach relying on Mazars' isotropic damage law, it was possible to calculate the damage field in the slab specimens during the various tests. The comparison of the numerical modeling results with those of the experiments showed that the modeled damage field accounts for the thermal degradation observed during the tests, both in extent and in depth.
Noiret, Céline. "Utilisation du calcul formel pour l'identifiabilité de modèles paramètriques et nouveaux algorithmes en estimation de paramètres". Compiègne, 2000. http://www.theses.fr/2000COMP1303.
HOANG, HUY KHANH. "Apport du non-determinisme dans les langages de requetes". Paris 1, 1997. http://www.theses.fr/1997PA010062.
Khouaja, Anis. "Modélisation et identification de systèmes non-linéaires à l'aide de modèles de volterra à complexité réduite". Phd thesis, Université de Nice Sophia-Antipolis, 2005. http://tel.archives-ouvertes.fr/tel-00090557.
Payet, Jimmy. "États fondamentaux dans l'approximation quasi-classique pour des modèles d'électrodynamique quantique non relativiste". Electronic Thesis or Diss., Université de Lorraine, 2023. http://www.theses.fr/2023LORR0045.
In this thesis, we study quantum field theory models that describe the interactions between a non-relativistic particle and a quantized radiation field. In particular, we focus on the minimization of the quasi-classical energy of the considered models, i.e. the energy of the system when the field is in a coherent state. A first result concerns the spin-boson model, a simple (but non-trivial) model where the non-relativistic particle is described by a finite-dimensional system and is linearly coupled to a quantized scalar field. We obtain an explicit expression for the quasi-classical ground-state energy and the set of minimizers for this model, for any value of the coupling constant. We also prove that the set of minimizers is trivial when the coupling constant is below a critical value, and we obtain the existence of a ground state for the energy when the field is in a superposition of two coherent states. Next, we consider models where the non-relativistic particle is described by a Schrödinger operator. In the case where the coupling between the particle and the field is linear in the creation and annihilation operators (the Nelson model or the polaron model, for instance), we show the existence and uniqueness, up to a phase symmetry, of a quasi-classical ground state associated with the quasi-classical energy. We consider a general external potential, either binding or confining, and do not impose an ultraviolet cutoff in the definition of the energy functional. We then obtain an asymptotic expansion of the quasi-classical ground-state energy as the coupling parameter goes to 0. Finally, by making the energy depend on the ultraviolet parameter, we prove that the ground states and the associated ground-state energies converge in the ultraviolet limit. For the standard model of non-relativistic quantum electrodynamics with spin, under similar assumptions, we show the existence of a quasi-classical ground state.
We also obtain an asymptotic expansion as the coupling parameter tends to 0 and the convergence of the ground-state energies in the ultraviolet limit.
Benchellal, Amel. "Modélisation des interfaces de diffusion à l'aide d'opérateurs d' intégration fractionnaires". Poitiers, 2008. http://www.theses.fr/2008POIT2256.
The research presented in this thesis deals with the approximation of diffusion systems by fractional models. This work is based on the definition of a fractional integration operator, which makes it possible to generalise the numerical simulation procedure of conventional differential systems to fractional ones. The first part is devoted to the study of a model with two fractional integration operators; this model offers a very good approximation in both the frequency and time domains. The identification results obtained from heat transfer simulations show that this model is able to adapt to the geometry of the diffusion processes studied. For a more compact and parsimonious model, a new fractional integrator whose order varies with frequency has been defined. It allows the definition of a model with a single fractional integrator that offers the same approximation quality as the previous model. Finally, a simplification of this last operator is proposed that retains the same approximation quality. The models defined were validated on real data: the modelling of skin-effect machines and of a pilot heat-transfer process.
Avalos, Marta. "Modèles additifs parcimonieux". Phd thesis, Université de Technologie de Compiègne, 2004. http://tel.archives-ouvertes.fr/tel-00008802.
Munch-Maccagnoni, Guillaume. "Syntaxe et modèles d'une composition non-associative des programmes et des preuves". Phd thesis, Université Paris-Diderot - Paris VII, 2013. http://tel.archives-ouvertes.fr/tel-00918642.
Iacomi, Marius. "Calculs non-perturbatifs en théorie des champs bidimensionnelle". Montpellier 2, 1997. http://www.theses.fr/1997MON20240.
Derfoul, Ratiba. "Intégration des données de sismique 4D dans les modèles de réservoir : recalage d'images fondé sur l'élasticité non linéraire". Phd thesis, INSA de Rouen, 2013. http://tel.archives-ouvertes.fr/tel-00924825.
Botella, Arnaud. "Génération de maillages non structurés volumiques de modèles géologiques pour la simulation de phénomènes physiques". Thesis, Université de Lorraine, 2016. http://www.theses.fr/2016LORR0097/document.
The main goals of geomodeling are to represent and understand the subsurface. Geological structures play an important role in understanding and predicting its physical behavior; we define a geological model as a set of structures and their connections. Meshes are the numerical supports used to solve the equations modeling subsurface physics, so it is important to build a mesh representing a geological model in order to take into account the impact of these structures on subsurface phenomena. The objective of this thesis is to develop volumetric meshing methods for geological models. We propose a volumetric unstructured meshing method to build two types of mesh: an adaptive tetrahedral mesh and a hex-dominant mesh (i.e. made of tetrahedra, triangular prisms, quadrilateral pyramids and hexahedra). This method first generates a tetrahedral mesh that can respect different types of data: (1) a geological model defined by its boundaries, to capture the structures in the volumetric mesh; (2) well paths represented as sets of segments; (3) a mesh size property to control element edge lengths; and (4) a direction field to control vertex/element alignments inside the mesh, favoring features such as elements with right angles. This tetrahedral mesh can then be transformed into a mixed-element mesh: the method recognizes combinatorial relationships between tetrahedra to identify new elements such as prisms, pyramids and hexahedra. The method is then used to generate meshes whose features correspond to a given application, in order to reduce errors during numerical computation. Several application domains are considered, such as geomechanical, flow and wave propagation simulations.
Bouby, Céline. "Adaptation élastoplastique de structures sous chargements variables avec règle d'écrouissage cinématique non linéaire et non associée". Phd thesis, Université des Sciences et Technologie de Lille - Lille I, 2006. http://tel.archives-ouvertes.fr/tel-00109296.
A first example, concerning a specimen under constant tension and alternating torsion, is treated using the step-by-step method and then within the implicit standard materials (ISM) framework. The comparison of results covers both the shakedown factor and the internal stresses. The comparison between the predictions of the incremental computation and those of the analytical solution, then of the mathematical programming formulation, both constructed via the bipotential approach, shows very good agreement.
The second part of the study is devoted to thin-shell structures. After observing that implementing the static bound problem in the case of limited linear kinematic hardening requires explicitly imposing the limitation of the internal stresses for the results to be mechanically acceptable, it is shown that the ISM framework allows a kinematic bound problem to be constructed. Its implementation yields a very tight bracketing of the shakedown factor between the kinematic and static factors. Finally, a nonlinear kinematic hardening rule and a bipotential are constructed for thin shells.
Hay, Alexander. "Étude des stratégies d'estimation d'erreur numérique et d'adaptation locale de maillages non-structurés pour les équations de Navier-Stokes en moyenne de Reynolds". Nantes, 2004. http://www.theses.fr/2004NANT2002.
Gagnaire-Renou, Elodie. "Amélioration de la modélisation spectrale des états de mer par un calcul quasi-exact des interactions non-linéaires vague-vague". Phd thesis, Université du Sud Toulon Var, 2009. http://tel.archives-ouvertes.fr/tel-00595353.
Bouilloc, Thomas. "Applications de décompositions tensorielles à l'identification de modèles de Volterra et aux systèmes de communication MIMO-CDMA". Nice, 2011. http://www.theses.fr/2011NICE4048.
This thesis concerns both the theoretical and constructive resolution of inverse problems for the isotropic diffusion equation in planar domains, simply and doubly connected. From partial Cauchy boundary data (potential, flux), we recover those quantities on the remaining part of the boundary, where no information is available, as well as inside the domain. The proposed approach proceeds by considering solutions of the diffusion equation as real parts of complex-valued solutions of a conjugated Beltrami equation. These particular generalized analytic functions allow Hardy classes to be introduced, in which the inverse problem is stated as a best constrained approximation issue (bounded extremal problem) and is thereby regularized. Existence and smoothness properties, together with density results for traces on the boundary, ensure well-posedness. An application is studied: a free-boundary problem for magnetically confined plasma in the tokamak Tore Supra (CEA-IRFM, Cadarache). The resolution of the approximation problem on a suitable basis of functions (toroidal harmonics) leads to a qualification criterion for the estimated plasma boundary; a descent algorithm makes it decrease and refines the estimate. The method does not require any integration of the solution over the whole domain. It furnishes very accurate numerical results and could be extended to other devices, such as JET or ITER.
Hinojosa, Rehbein Jorge Andrés. "Sur la robustesse d'une méthode de décomposition de domaine mixte avec relocalisation non linéaire pour le traitement des instabilités géométriques dans les grandes structures raidies". Thesis, Cachan, Ecole normale supérieure, 2012. http://www.theses.fr/2012DENS0010/document.
The thesis work focuses on the evaluation and the robustness of strategies adapted to the simulation of large structures with nonlinearities that are not evenly distributed, like local buckling, together with global nonlinearities, on aeronautical structures. The nonlinear relocalization strategy allows the introduction of nonlinear solving schemes in the sub-structures of classical domain decomposition methods. As a first step, the performance and the robustness of the method are analysed on academic examples. Then, the strategy is parallelized and studies of speed-up and extensibility are carried out. Finally, the method is tested on larger and more realistic structures.
Deng, Xiaoguang. "Développement d'un outil d'aide à la conception des matériaux fibreux multifonctionnels par les techniques de calcul avancé". Thesis, Lille 1, 2008. http://www.theses.fr/2008LIL10111/document.
Faced with ever more intense international competition, how to develop new products with a minimal design life cycle is a question of interest for many industries. Rapid product development meeting varying consumer requirements is a critical key to the success of a company. In this context, « rapid prototyping » tools have been developed and used for forecasting the performance of products and processes. Using computer modelling techniques, these tools make it possible to produce representative prototypes close to the quality features predefined in the functional specifications, with a limited number of trials and measurements. This PhD thesis presents research work in the field of developing a design support tool for multifunctional materials. It makes it possible to produce prototypes adapted to the client's needs within a limited time. This tool aims at measuring the relevance of design factors (raw material, process parameters and structural parameters), determining the relevant operating settings (acceptable range of design factors) related to the manufacturing process, and evaluating the global quality of prototypes using multicriteria decision support methods.
Caire, François. "Les équations de Maxwell covariantes pour le calcul rapide des champs diffractés par des conducteurs complexes. Application au Contrôle Non Destructif par courants de Foucault". Thesis, Paris 11, 2014. http://www.theses.fr/2014PA112284/document.
This PhD work concerns the development of fast numerical tools dedicated to the computation of the electromagnetic interaction between a low-frequency 3D current source and a complex conductor presenting rough interfaces and/or conductivity (and/or permeability) variations. The main application is the simulation of the eddy current non-destructive testing process applied to complex specimens. Indeed, the existing semi-analytical models currently available in the CIVA simulation platform are limited to canonical geometries. The method we propose here is based on the covariant Maxwell equations, which allow us to consider the physical equations and relationships in a non-orthogonal coordinate system depending on the geometry of the specimen. Historically, this method (the so-called C-method) was developed in the framework of optical applications, particularly for the characterization of diffraction gratings. Here, we transpose this formalism into the quasi-static regime and thus develop an innovative formulation of the Second Order Vector Potential (SOVP) formalism, widely used for the computation of quasi-static fields in canonical geometries. We then determine numerically a set of modal solutions of the source-free Maxwell equations in the coordinate system introduced, which allows us to represent the unknown fields as modal expansions in source-free domains. The coefficients of these expansions are then computed by introducing the source fields and enforcing the boundary conditions that the total fields must satisfy at the interfaces between media. In order to tackle the case of a layered conductor presenting rough interfaces, the generalized SOVP formalism is coupled with a recursive algorithm known as the S-matrix algorithm. On the other hand, the application case of a complex-shape specimen with depth-varying physical properties is treated by coupling the modal method we developed with a high-order numerical method: the pseudo-spectral method.
The validation of these codes is carried out numerically by comparison with a commercial finite element software package in some particular configurations. Besides, the homogeneous case is also validated by comparison with experimental data.
Le, Treust Loïc. "Méthodes variationnelles et topologiques pour l'étude de modèles non liénaires issus de la mécanique relativiste". Phd thesis, Université Paris Dauphine - Paris IX, 2013. http://tel.archives-ouvertes.fr/tel-00908953.
Chmaycem, Ghada. "Étude des équations des milieux poreux et des modèles de cloques". Thesis, Paris Est, 2014. http://www.theses.fr/2014PEST1080/document.
In this thesis, we study two completely independent problems. The first one focuses on a simple mathematical model of thin film delamination and blistering; in the second one, we are interested in the study of the porous medium equation, motivated by seawater intrusion problems. In the first part of this work, we consider a simple one-dimensional variational model describing the delamination of thin films under cooling. We characterize the global minimizers, which correspond to films of three possible types: non delaminated, partially delaminated (called blisters), or fully delaminated. Two parameters play an important role: the length of the film and the cooling parameter. In the phase plane of those two parameters, we classify all the minimizers. As a consequence of our analysis, we identify explicitly the smallest possible blisters for this model. In the second part, we answer a long-standing open question about the existence of new contractions for porous medium type equations. For m > 0, we consider nonnegative solutions U(t,x) of the equation U_t = ΔU^m. For 0
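The porous medium equation lends itself to a compact numerical illustration. The following sketch is ours, not from the thesis; the grid, the exponent m and the time step are arbitrary illustrative choices. It advances U_t = ΔU^m in one space dimension with an explicit finite-difference scheme:

```python
def porous_medium_step(u, m, dx, dt):
    """One explicit Euler step of u_t = (u^m)_xx; endpoints held at their values."""
    v = [ui ** m for ui in u]           # nonlinear "pressure-like" variable u^m
    new = u[:]
    for i in range(1, len(u) - 1):
        new[i] = u[i] + dt / dx ** 2 * (v[i - 1] - 2 * v[i] + v[i + 1])
    return new

def solve(u0, m, dx, dt, steps):
    """March the explicit scheme for a given number of steps."""
    u = u0[:]
    for _ in range(steps):
        u = porous_medium_step(u, m, dx, dt)
    return u
```

With compactly supported initial data away from the boundary, the discrete mass sum(u)*dx stays conserved over moderate times, and the support spreads with finite speed, a hallmark of the porous medium equation for m > 1.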
Aymard, Benjamin. "Simulation numérique d'un modèle multi-échelle de cinétique cellulaire formulé à partir d'équations de transport non conservatives". Thesis, Paris 6, 2014. http://www.theses.fr/2014PA066254/document.
The thesis focuses on the numerical simulation of a biomathematical, multiscale model explaining the phenomenon of selection within the population of ovarian follicles, and grounded on a cellular basis. The PDE model consists of a hyperbolic quasilinear system of large dimension governing the evolution of cell density functions for a cohort of follicles (around twenty in practice). The equations are coupled in a nonlocal way by control terms involving moments of the solution, defined on either the mesoscopic or macroscopic scale. Three chapters of the thesis, presented in the form of articles, develop the method used to simulate the model numerically. The numerical code is implemented on a parallel architecture. PDEs are discretized with a Finite Volume scheme on an adaptive mesh driven by a multiresolution analysis. Flux discontinuities, at the interfaces between different cellular states, require a specific treatment to be compatible with the high order numerical scheme and mesh refinement. A chapter of the thesis is devoted to the calibration method, which translates the biological knowledge into constraints on the parameters and model outputs. The multiscale character is crucial, since parameters are used at the microscopic level in the equations governing the evolution of the density of cells within each follicle, whereas quantitative biological data are rather available at the mesoscopic and macroscopic levels. The last chapter of the thesis focuses on the analysis of the computational performance of the parallel code, based on statistical methods inspired from the field of uncertainty quantification.
Lacour, Catherine. "Analyse et résolution numérique de méthodes de sous-domaines non conformes pour des problèmes de plaques". Phd thesis, Université Pierre et Marie Curie - Paris VI, 1997. http://tel.archives-ouvertes.fr/tel-00369578.
Parisot, Martin. "Modélisation intermédiaire entre équations cinétiques et limites hydrodynamiques : dérivation, analyse et simulations". Thesis, Lille 1, 2011. http://www.theses.fr/2011LIL10052/document.
This work is devoted to the study of a problem arising from plasma physics: the heat transfer of electrons in a plasma close to Maxwellian equilibrium. Firstly, the asymptotic Spitzer-Harm regime is studied. A model proposed by Schurtz and Nicolai is analyzed and situated in the context of hydrodynamic limits outside the strictly asymptotic regime. The link to the non-local models of Luciani and Mora is established, as well as mathematical properties such as the maximum principle and entropy dissipation. Then, a formal derivation from the Vlasov equations is proposed. A hierarchy of intermediate models between the kinetic equations and the hydrodynamic limit is described. In particular, a new hydrodynamic system, integro-differential in nature, is proposed. The Schurtz-Nicolai system appears as a simplification of the system resulting from the derivation. The existence and uniqueness of the solution of the nonstationary system are established in a simplified framework. The last part is devoted to the implementation of a dedicated numerical scheme for solving these models. We propose a finite volume approach that can be effective on unstructured grids. The ability of this scheme to capture specific kinetic effects, which may not be reproduced by the asymptotic Spitzer-Harm model, is demonstrated. The consistency of this scheme with that of the Spitzer-Harm equation is highlighted, paving the way for a coupling strategy between the two models.
Kardous, Zohra. "Sur la modélisation et la commande multimodèle des processus complexes et/ou incertains". Ecole Centrale de Lille, 2004. http://www.theses.fr/2004ECLI0006.
Monteil, François. "Étude d'une structure multicapteur à courants de Foucault : utilisation en capteur proximétrique pour un système de suivi de profil". Paris 11, 1987. http://www.theses.fr/1987PA112315.
Tournier, Pierre-Henri. "Absorption de l'eau et des nutriments par les racines des plantes : modélisation, analyse et simulation". Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066030/document.
In the context of the development of sustainable agriculture aiming at preserving natural resources and ecosystems, it is necessary to improve our understanding of underground processes and interactions between soil and plant roots. In this thesis, we use mathematical and numerical tools to develop explicit mechanistic models of soil water and solute movement, accounting for root water and nutrient uptake and governed by nonlinear partial differential equations. An emphasis is put on resolving the geometry of the root system as well as the small-scale processes occurring in the rhizosphere, which play a major role in plant root uptake. The first study is dedicated to the mathematical analysis of a model of phosphorus (P) uptake by plant roots. The evolution of the concentration of P in the soil solution is governed by a convection-diffusion equation with a nonlinear boundary condition at the root surface, which is included as a boundary of the soil domain. A shape optimization problem is formulated, aiming at finding root shapes that maximize P uptake. The second part of this thesis shows how we can take advantage of recent advances in scientific computing, in the fields of unstructured mesh adaptation and parallel computing, to develop numerical models of soil water and solute movement with root water and nutrient uptake at the plant scale, while taking into account local processes at the single-root scale.
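As a toy illustration of the kind of nonlinear root-surface boundary condition involved (our sketch, not the thesis model; the Michaelis-Menten uptake parameters Vmax and Km are hypothetical), consider 1D nutrient diffusion toward a root surface placed at x = 0:

```python
def uptake(c, vmax, km):
    """Michaelis-Menten uptake flux at the root surface (hypothetical law)."""
    return vmax * c / (km + c)

def step(c, D, dx, dt, vmax, km):
    """One explicit step of c_t = D c_xx with nonlinear uptake at x = 0."""
    new = c[:]
    for i in range(1, len(c) - 1):
        new[i] = c[i] + D * dt / dx ** 2 * (c[i - 1] - 2 * c[i] + c[i + 1])
    # Root surface: flux condition D dc/dx = uptake(c), via ghost-node elimination
    flux = uptake(c[0], vmax, km)
    new[0] = c[0] + 2 * D * dt / dx ** 2 * (c[1] - c[0]) - 2 * dt / dx * flux
    # Far boundary: fixed bulk-soil concentration
    new[-1] = c[-1]
    return new
```

Running this from a uniform profile produces the expected depletion zone: the concentration drops near the root and increases monotonically toward the bulk value.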
Ruoppolo, Domenico. "Relational graph models and Morris's observability : resource-sensitive semantic investigations on the untyped λ-calculus". Thesis, Sorbonne Paris Cité, 2016. http://www.theses.fr/2016USPCD069/document.
This thesis is a contribution to the study of Church's untyped λ-calculus, a term rewriting system having the β-reduction (the formal counterpart of the idea of execution of programs) as its main rule. The focus is on denotational semantics, namely the investigation of mathematical models of the λ-calculus giving the same denotation to β-convertible λ-terms. We investigate relational semantics, a resource-sensitive semantics interpreting λ-terms as relations, with their inputs grouped together in multisets. We define a large class of relational models, called relational graph models (rgm's), and we study them in a type/proof-theoretical way, using non-idempotent intersection type systems. Firstly, we find the minimal and maximal λ-theories (equational theories extending β-conversion) represented by the class. Then we use rgm's to solve the full abstraction problem for Morris's observational λ-theory, the contextual equivalence of programs that one gets by taking the β-normal forms as observable outputs. We solve the problem in different ways. Through a type-theoretical characterization of β-normalizability, we find infinitely many fully abstract rgm's, which we call uniformly bottomless. We then give an exhaustive answer to the problem, by showing that an rgm is fully abstract for Morris's observability if and only if it is extensional (a model of η-conversion) and λ-König. Intuitively, an rgm is λ-König when every infinite computable tree has an infinite branch witnessed by some type of the model, where the witnessing is a non-well-foundedness property on the type.
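To make the rewriting system concrete, here is a toy normal-order β-reducer, unrelated to the thesis's semantic constructions; the substitution uses naive freshening and is only safe on small examples. Terms are tuples: ("var", name), ("lam", name, body) or ("app", fun, arg).

```python
def subst(term, name, value):
    """Capture-avoiding substitution term[name := value] (naive freshening)."""
    kind = term[0]
    if kind == "var":
        return value if term[1] == name else term
    if kind == "app":
        return ("app", subst(term[1], name, value), subst(term[2], name, value))
    _, x, body = term                      # lambda abstraction
    if x == name:
        return term                        # the name is shadowed: stop here
    fresh = x + "'"                        # naive renaming to avoid capture
    return ("lam", fresh, subst(subst(body, x, ("var", fresh)), name, value))

def reduce_once(term):
    """One leftmost-outermost (normal-order) beta-step, or None if normal."""
    kind = term[0]
    if kind == "app" and term[1][0] == "lam":
        _, x, body = term[1]
        return subst(body, x, term[2])     # fire the outermost beta-redex
    if kind == "app":
        r = reduce_once(term[1])
        if r is not None:
            return ("app", r, term[2])
        r = reduce_once(term[2])
        return None if r is None else ("app", term[1], r)
    if kind == "lam":
        r = reduce_once(term[2])
        return None if r is None else ("lam", term[1], r)
    return None                            # a variable is already normal

def normalize(term, fuel=1000):
    """Iterate to the beta-normal form, if one is reached within the fuel bound."""
    for _ in range(fuel):
        nxt = reduce_once(term)
        if nxt is None:
            return term
        term = nxt
    raise RuntimeError("no normal form found within fuel")
```

For instance, the K combinator λx.λy.x applied to two arguments reduces to the first one, the kind of observable β-normal form on which Morris's equivalence is based.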
Khemane, Firas. "Estimation fréquentielle par modèle non entier et approche ensembliste : application à la modélisation de la dynamique du conducteur". Thesis, Bordeaux 1, 2011. http://www.theses.fr/2010BOR14282/document.
This thesis deals with system identification and the modeling of fractional transfer functions using bounded and uncertain frequency responses. To this end, the definitions of both fractional differentiation and fractional integration are extended to intervals. Set-membership approaches are then applied to estimate coefficients and derivative orders as intervals. These methods are applied to estimate certain linear time-invariant (LTI) systems, uncertain LTI systems and linear parameter-varying (LPV) systems. They are notably adopted to model driver dynamics, since most studies on one or several individuals have shown that the collected reactions are not identical and vary from one experiment to another.
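The interval extension mentioned above rests on standard interval arithmetic. A minimal sketch of ours, not the toolbox used in the thesis: each operation returns an interval guaranteed to enclose every pointwise result, which is what allows coefficients and derivative orders to be propagated as guaranteed bounds.

```python
class Interval:
    """A closed real interval [lo, hi] with outward-enclosing arithmetic."""

    def __init__(self, lo, hi):
        assert lo <= hi
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # The product range is the min/max over the four endpoint products
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def __contains__(self, x):
        return self.lo <= x <= self.hi

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"
```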
Brasco, Lorenzo. "Geodesics and PDE methods in transport models". Phd thesis, Université Paris Dauphine - Paris IX, 2010. http://tel.archives-ouvertes.fr/tel-00578447.
Brasco, Lorenzo. "Geodesics and PDE methods in transport models". Phd thesis, Paris 9, 2010. https://bu.dauphine.psl.eu/fileviewer/index.php?doc=2010PA090033.
This thesis is devoted to the study of optimal transport problems, alternative to the so-called Monge-Kantorovich one: they naturally arise in some real-world applications, like the design of optimal transportation networks or urban traffic modeling. More precisely, we consider problems where the transport cost has a nonlinear dependence on the mass: typically in this type of problem, moving a mass m over a distance ℓ costs φ(m) ℓ, where φ is a given function, thus giving rise to a total cost of the type Σ φ(m) ℓ. Two interesting cases are widely addressed in this work: the case where φ is subadditive (branched transport), so that masses have an interest in travelling together in order to lower the total cost; and the case of φ being superadditive (congested transport), where on the contrary the mass tends to be as spread out as possible. In the case of branched transport, we introduce two new dynamical models: in the first one, the transport is described through curves of probability measures minimizing a length-type functional (with a weight function penalizing non-atomic measures). The second model, on the other hand, is much more in the spirit of the celebrated Benamou-Brenier formulation for Wasserstein distances: in particular, the transport is described by means of pairs ``curve of measures--velocity vector field'', satisfying the continuity equation and minimizing a suitable dynamical energy (which is actually non convex). For both models we prove the existence of minimal configurations and the equivalence with other models existing in the literature. Concerning the case of congested transport, we review in great detail two already existing models, proving their equivalence: while the first one can be viewed as a Lagrangian approach to the problem and has some interesting links with traffic equilibrium issues, the second one is a divergence-constrained convex optimization problem.
The proof of this equivalence represents the central core of the second part of the work and contains various points of interest: among them, the DiPerna-Lions theory of flows of weakly differentiable vector fields, the Dacorogna-Moser construction for transport maps and, above all, some regularity estimates (that we derive here) for a very degenerate elliptic equation which seems to be quite unexplored.
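The effect of a subadditive versus a superadditive cost φ can be seen on a two-mass example (a small illustration of ours, not taken from the thesis):

```python
import math

def cost(phi, edges):
    """Total cost of a transport network given as (mass, length) edges."""
    return sum(phi(m) * l for m, l in edges)

# Two unit masses travelling a unit distance, either separately or merged:
separate = [(1.0, 1.0), (1.0, 1.0)]
merged = [(2.0, 1.0)]

sub = lambda m: math.sqrt(m)   # subadditive: branched transport
sup = lambda m: m ** 2         # superadditive: congested transport
```

Under the subadditive cost, merging the two masses costs √2 < 2, so travelling together is cheaper and branching emerges; under the superadditive cost, merging costs 4 > 2, so the mass prefers to spread out.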
Xia, Liang. "Towards optimal design of multiscale nonlinear structures : reduced-order modeling approaches". Thesis, Compiègne, 2015. http://www.theses.fr/2015COMP2230/document.
High-performance heterogeneous materials are increasingly used nowadays for their advantageous overall characteristics, resulting in superior structural mechanical performance. The pronounced heterogeneities of such materials have a significant impact on the structural behavior, so that one needs to account for both the material's microscopic heterogeneities and the constituent behaviors to achieve reliable structural designs. Meanwhile, the fast progress of material science and the latest developments in 3D printing techniques make it possible to generate more innovative, lightweight, and structurally efficient designs through controlling the composition and the microstructure of material at the microscopic scale. In this thesis, we have made first attempts towards topology optimization design of multiscale nonlinear structures, including the design of highly heterogeneous structures, material microstructural design, and simultaneous design of structure and materials. We have primarily developed a multiscale design framework constituted of two key ingredients: multiscale modeling for structural performance simulation and topology optimization for structural design. With regard to the first ingredient, we employ the first-order computational homogenization method FE2 to bridge the structural and material scales. With regard to the second ingredient, we apply the Bi-directional Evolutionary Structural Optimization (BESO) method to perform topology optimization. In contrast to the conventional nonlinear design of homogeneous structures, this design framework provides an automatic design tool for nonlinear, highly heterogeneous structures in which the underlying material model is governed directly by the realistic microstructural geometry and the microscopic constitutive laws. Note that the FE2 method is extremely expensive in terms of computing time and storage requirements.
The dilemma of heavy computational burden is even more pronounced when it comes to topology optimization: the time-consuming multiscale problem has to be solved not just once, but for many different realizations of the structural topology. Meanwhile, we note that the optimization process requires multiple design loops involving similar or even repeated computations at the microscopic scale. For these reasons, we introduce a third ingredient to the design framework: reduced-order modeling (ROM). We develop an adaptive surrogate model using snapshot Proper Orthogonal Decomposition (POD) and Diffuse Approximation to substitute for the microscopic solutions. The surrogate model is initially built from the first design iteration and updated adaptively in the subsequent design iterations. This surrogate model has shown promising performance in terms of reduced computing cost and modeling accuracy when applied to the design framework for nonlinear elastic cases. For more severe material nonlinearity, we directly employ an established method, the potential-based Reduced Basis Model Order Reduction (pRBMOR). The key idea of pRBMOR is to approximate the internal variables of the dissipative material by a precomputed reduced basis obtained from snapshot POD. To drastically accelerate the computing procedure, pRBMOR has been implemented with parallelization on modern Graphics Processing Units (GPUs). The implementation of pRBMOR with GPU acceleration enables us to realize the design of multiscale elastoviscoplastic structures using the previously developed design framework in realistic computing time and with affordable memory requirements. We have so far assumed a fixed material microstructure at the microscopic scale. The remaining part of the thesis is dedicated to the simultaneous design of both the macroscopic structure and the microscopic materials. In the previously established multiscale design framework, topology variables and volume constraints are defined at both scales.
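The snapshot-POD ingredient can be sketched in a few lines (our illustration, not the thesis code). The method of snapshots builds the small k×k correlation matrix of the k snapshot vectors and extracts dominant modes from it, here by plain power iteration:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def pod_dominant_mode(snapshots, iters=200):
    """Dominant POD mode of a list of snapshot vectors (method of snapshots)."""
    k = len(snapshots)
    # Small k x k correlation matrix C[i][j] = <s_i, s_j>
    C = [[dot(si, sj) for sj in snapshots] for si in snapshots]
    # Power iteration for the dominant eigenvector of C
    v = [1.0] * k
    for _ in range(iters):
        w = [sum(C[i][j] * v[j] for j in range(k)) for i in range(k)]
        norm = math.sqrt(dot(w, w))
        v = [wi / norm for wi in w]
    # Lift back to the full space: phi = sum_j v_j s_j, then normalise
    phi = [sum(v[j] * snapshots[j][i] for j in range(k))
           for i in range(len(snapshots[0]))]
    nrm = math.sqrt(dot(phi, phi))
    return [p / nrm for p in phi]
```

A surrogate then approximates new microscopic solutions as combinations of a few such modes instead of re-solving the full fine-scale problem.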
Scotti, Simone. "Applications of the error theory using Dirichlet forms". Phd thesis, Université Paris-Est, 2008. http://tel.archives-ouvertes.fr/tel-00349241.
Bonnet, Benoît. "Optimal control in Wasserstein spaces". Electronic Thesis or Diss., Aix-Marseille, 2019. http://www.theses.fr/2019AIXM0442.
A wealth of mathematical tools allowing to model and analyse multi-agent systems has been brought forth as a consequence of recent developments in optimal transport theory. In this thesis, we extend for the first time several of these concepts to the framework of control theory. We prove several results on this topic, including Pontryagin optimality necessary conditions in Wasserstein spaces, intrinsic regularity properties of optimal solutions, sufficient conditions for different kinds of pattern formation, and an auxiliary result pertaining to singularity arrangements in Sub-Riemannian geometry
Mourad, Aya. "Identification de la conductivité hydraulique pour un problème d'intrusion saline : Comparaison entre l'approche déterministe et l'approche stochastique". Thesis, Littoral, 2017. http://www.theses.fr/2017DUNK0465/document.
This thesis is concerned with the identification, from observations or field measurements, of the hydraulic conductivity K for the saltwater intrusion problem involving a nonhomogeneous, isotropic and free aquifer. The PDE model involved is a coupled system of nonlinear parabolic equations completed by boundary and initial conditions, as well as compatibility conditions on the data. The main unknowns are the saltwater/freshwater interface depth and the elevation of the upper surface of the aquifer. The inverse problem is formulated as an optimization problem whose cost function is a least-squares functional measuring the discrepancy between experimental interface depths and those provided by the model. Considering the exact problem as a constraint for the optimization problem and introducing the Lagrangian associated with the cost function, we prove that the optimality system has at least one solution. The main difficulties are to find the set of all admissible parameters and to prove the differentiability of the operator associating the state variables (h, h₁) with the hydraulic conductivity K. This is the first result of the thesis. The second result concerns the numerical implementation of the optimization problem. We first note that, concretely, we only have specific observations (in space and in time) corresponding to the number of monitoring wells; we then adapt the previous results to the case of discrete observation data. The gradient of the cost function is computed thanks to an approximate formula in order to take the discrete observation data into account. The cost function is then minimized using a method based on the BLMVM algorithm. On the other hand, the exact problem and the adjoint problem are discretized in space by a P₁-Lagrange finite element method combined with a semi-implicit time discretization scheme. Some numerical results are presented to illustrate the ability of the method to determine the unknown parameters.
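As a schematic of such an identification loop (a toy of ours with a hypothetical scalar model; the actual thesis couples a finite element solver with the bound-constrained BLMVM minimizer), one can fit a single conductivity-like parameter by projected gradient descent on a least-squares misfit:

```python
import math

def model(K, times):
    """Hypothetical interface-depth response: exponential relaxation with rate K."""
    return [math.exp(-K * t) for t in times]

def misfit(K, times, obs):
    """Least-squares discrepancy between model outputs and observations."""
    return sum((m - o) ** 2 for m, o in zip(model(K, times), obs))

def identify(times, obs, K0=0.1, lo=0.0, hi=10.0, lr=0.5, iters=2000):
    """Projected finite-difference gradient descent (a stand-in for BLMVM)."""
    K, h = K0, 1e-6
    for _ in range(iters):
        g = (misfit(K + h, times, obs) - misfit(K - h, times, obs)) / (2 * h)
        K = min(hi, max(lo, K - lr * g))   # project back onto the bounds
    return K
```

Given synthetic observations generated by the model itself, the loop recovers the parameter used to generate them, mirroring the twin-experiment validation common for such inverse problems.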
In the third part of the thesis, we consider the hydraulic conductivity as a stochastic parameter. To perform a rigorous numerical study of stochastic effects on the saltwater intrusion problem, we use a spectral decomposition, and the stochastic variational problem is reformulated as a set of deterministic variational problems to be solved for each Wiener polynomial chaos mode.