Dissertations / Theses on the topic 'Méthodes second ordre'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 43 dissertations / theses for your research on the topic 'Méthodes second ordre.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Etcheverlepo, Adrien. "Développement de méthodes de domaines fictifs au second ordre." Phd thesis, Université Sciences et Technologies - Bordeaux I, 2013. http://tel.archives-ouvertes.fr/tel-00821897.
Gesbert, David. "Egalisation et identification multi-voies : méthodes auto-adaptatives au second-ordre." Paris, ENST, 1997. http://www.theses.fr/1997ENST0003.
Halard, Matthieu. "Méthodes du second ordre pour la conception optimale en élasticité non-linéaire." Paris 9, 1999. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=1999PA090029.
Lamoulie, Laurence. "Couplage de méthodes primale et duale d'éléments finis pour des problèmes elliptiques du second ordre." Pau, 1993. http://www.theses.fr/1993PAUU3014.
González-Contreras, Brian Manuel. "Contribution à la tolérance aux défauts des systèmes linéaires : Synthèse de méthodes d'accommodation fondée sur l'information du second ordre." Thesis, Nancy 1, 2009. http://www.theses.fr/2009NAN10008/document.
This thesis is devoted to the synthesis of accommodation methods founded on second order information (SOI) assignment in the context of fault tolerance for deterministic linear systems. The major contribution of this research concerns using this information in the reconfigurability analysis (the capability of the system to respond to faults) and developing strategies for fault accommodation in order to recover nominal performance in terms of system dynamics and to guarantee the assigned second order information. Firstly, approaches for measuring the SOI from the system's input/output data are proposed. A first approach based on the initial-condition response is considered. An interesting alternative, which treats the problem as one of identification, computes the SOI indirectly but online from input/output data. An index based on reconfigurability, directly related to the SOI, is also proposed. Based on this online SOI computation, the index is applied to networked control systems affected by network-induced delays in order to quantify their impact on the system. Secondly, fault accommodation strategies for loss-of-effectiveness actuator faults are proposed under feedback SOI synthesis. SISO systems are considered first, with an approach based on the modified pseudo-inverse method; a strategy for MIMO systems based on the pseudo-inverse method is then developed. Examples illustrating the application of the approaches are also presented. All the developed approaches are applied and illustrated on a well-known process benchmark: the three-tank hydraulic system. The simulations highlight the results obtained and bring out the contribution of the developed approaches.
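The pseudo-inverse method mentioned in this abstract admits a compact sketch. The matrices below are hypothetical, and the snippet only illustrates the classical (unmodified) pseudo-inverse idea: choose a reconfigured gain K_f so that the faulty closed-loop contribution B_f K_f matches the nominal one B K in the least-squares sense.

```python
import numpy as np

# Illustrative sketch of the classical pseudo-inverse method (PIM) for
# actuator-fault accommodation; all matrices below are hypothetical.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[-4.0, -1.0]])           # nominal state feedback u = K x

gamma = 0.5                            # 50% loss of actuator effectiveness
B_f = gamma * B                        # faulty input matrix

# PIM: pick K_f so that B_f K_f is as close as possible (least squares)
# to the nominal closed-loop contribution B K.
K_f = np.linalg.pinv(B_f) @ (B @ K)

A_nom = A + B @ K                      # nominal closed-loop dynamics
A_rec = A + B_f @ K_f                  # reconfigured closed-loop dynamics
print(np.allclose(A_nom, A_rec))       # True: range(B K) lies in range(B_f)
```

Exact recovery holds here because the fault only scales the input matrix; in general the pseudo-inverse gives the best least-squares match, which is precisely what the modified variants refine.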
Jerad, Sadok. "Approches du second ordre et d'ordre élevé pour l'optimisation nonconvexe avec variantes sans évaluation de la fonction objectif." Electronic Thesis or Diss., Université de Toulouse (2023-....), 2024. http://www.theses.fr/2024TLSEP024.
Even though nonlinear optimization seems (a priori) to be a mature field, new minimization schemes are proposed or rediscovered for modern large-scale problems. As an example, looking back over the last decade, we have seen a surge of first-order methods with differing analyses, even though the well-known theoretical limitations of such methods have been thoroughly discussed. This thesis explores two main lines of research in the field of nonconvex optimization, with a particular focus on second- and higher-order methods. In the first series of works, we focus on algorithms that do not compute function values and operate without knowledge of any parameters, since the most popular first-order methods currently in use fall into the latter category. We start by recasting the well-known Adagrad algorithm in a trust-region framework and use that paradigm to study two first-order deterministic OFFO (Objective-Free Function Optimization) classes. To enable faster exact OFFO algorithms, we then propose a pth-order deterministic adaptive regularization method that avoids the computation of function values. This approach recovers the well-known convergence rate of the standard framework when searching for stationary points, while using significantly less information. In the second set of papers, we analyze adaptive algorithms in the more classical framework where function values are used to adapt parameters. We extend adaptive regularization methods to a specific class of Banach spaces by developing a Hölder gradient descent algorithm. In addition, we investigate a second-order algorithm that alternates between negative-curvature and Newton steps with a near-optimal convergence rate.
To handle large problems, we propose subspace versions of the algorithm that show promising numerical performance. Overall, this research covers a wide range of optimization techniques and provides valuable insights and contributions to both parameter-free and adaptive optimization algorithms for nonconvex functions. It also opens the door for subsequent theoretical developments and the introduction of faster numerical algorithms.
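The OFFO idea summarized above (methods that never evaluate the objective) can be illustrated with Adagrad, which the thesis revisits in a trust-region framework. The sketch below uses the plain first-order version on a hypothetical quadratic; the step constants are arbitrary choices, not values from the thesis.

```python
import numpy as np

# Minimal sketch of an objective-function-free (OFFO) method: Adagrad uses
# only gradients, never function values, and needs no problem-dependent
# step-size tuning. The quadratic test problem is hypothetical.
def adagrad(grad, x0, eta=1.0, eps=1e-8, iters=2000):
    x = np.asarray(x0, dtype=float)
    g2 = np.zeros_like(x)                 # running sum of squared gradients
    for _ in range(iters):
        g = grad(x)
        g2 += g * g
        x -= eta * g / (np.sqrt(g2) + eps)  # per-coordinate adaptive step
    return x

# f(x) = 0.5 * (x1^2 + 10 x2^2): unique minimizer at the origin
grad = lambda x: np.array([1.0, 10.0]) * x
x_star = adagrad(grad, [5.0, -3.0])
print(np.linalg.norm(grad(x_star)) < 1e-3)
```

Note that no call to f itself appears anywhere: the adaptive denominator plays the role that a line search or trust-region radius update would normally play.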
González-Contreras, Brian. "Contribution à la Tolérance aux Défauts des Systèmes Linéaires : Synthèse de Méthodes d'Accommodation Fondée sur l'Information du Second Ordre." Phd thesis, Université Henri Poincaré - Nancy I, 2009. http://tel.archives-ouvertes.fr/tel-00377104.
Firstly, approaches are proposed to measure the second order information from the input/output signals of linear systems. In a first approach, the response (output data) to the initial condition is considered. An interesting alternative, which treats the problem as one of identification based on the impulse response (Markov parameters), is proposed in order to evaluate the second order information indirectly but online, using input/output data. An index resulting from this evaluation is proposed as a contribution to the online reconfigurability analysis of a faulty system. This real-time estimation of the second order information is extended to networked control systems in order to evaluate the impact of delays on the reconfigurability of the system.
Secondly, strategies for accommodating actuator loss-of-effectiveness faults are proposed, considered in the context of second order information synthesis by state feedback. The single-input case is addressed first, with an approach based on the modified pseudo-inverse method. The multivariable case is then considered, with an approach based on the pseudo-inverse method. Examples are presented to illustrate the application of the proposed approaches.
The developments of the thesis are illustrated on an application commonly studied in process control: the three-tank hydraulic system. The simulations highlight the results obtained and the contribution of the developed methods.
Rigaux, Clémence. "Méthodes de Monte Carlo du second ordre et d’inférence bayésienne pour l’évaluation des risques microbiologiques et des bénéfices nutritionnels dans la transformation des légumes." Thesis, Paris, AgroParisTech, 2013. http://www.theses.fr/2013AGPT0015/document.
The aim of this work is to set up microbiological risk and nutritional benefit assessment methods for the transformation of vegetables, in view of a risk-benefit analysis. The considered (industrial) risk is the alteration of green bean cans due to the thermophilic bacterium Geobacillus stearothermophilus, and the nutritional benefit is the vitamin C content of appertized green beans. Reference parameters were first acquired, by a meta-analysis using Bayesian inference for the risk part. The thermal resistance parameters D at 121.1°C and pH 7, zT and zpH of G. stearothermophilus were estimated at 3.3 min, 9.1°C and 4.3 pH units on average in aqueous media, respectively. The risk and benefit models were then analyzed by a two-dimensional Monte Carlo simulation method, allowing a separate propagation of uncertainty and variability. The vitamin C losses between fresh and appertized green beans predicted by the model are 86% on average, and the predicted rate of non-stability at 55°C is 0.5% on average, in good accordance with reality. A risk-benefit analysis was then carried out to optimize the benefit while keeping the risk at an acceptable level, by exploring possible intervention scenarios based on sensitivity analysis results. Finally, a risk analysis model involving the pathogenic bacterium Bacillus cereus in a courgette puree was confronted with contamination data from incubated products, by means of Bayesian inference.
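The two-dimensional Monte Carlo approach described in this abstract separates uncertainty (outer loop) from variability (inner loop). A minimal sketch follows, with entirely hypothetical distributions loosely inspired by the 86% average vitamin C loss reported above:

```python
import numpy as np

# Sketch of a two-dimensional (second-order) Monte Carlo simulation:
# the outer loop samples epistemic *uncertainty* (imperfectly known
# parameters), the inner loop samples *variability* (true heterogeneity),
# so the two are propagated separately. Distributions are hypothetical.
rng = np.random.default_rng(0)

n_unc, n_var = 200, 500
losses = np.empty((n_unc, n_var))
for i in range(n_unc):
    # uncertainty: the mean vitamin C loss is known only approximately
    mean_loss = rng.normal(0.86, 0.02)
    # variability: can-to-can fluctuation around that uncertain mean
    losses[i] = rng.normal(mean_loss, 0.05, size=n_var)

# summarize variability per uncertainty draw, then look at the spread of
# those summaries across uncertainty draws
medians = np.median(losses, axis=1)
print(round(float(np.mean(medians)), 2))
```

Keeping the two loops distinct is the point: collapsing them into a single loop would make it impossible to say how much of the output spread could be reduced by better knowledge (uncertainty) versus how much is irreducible (variability).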
Rigaux, Clémence. "Méthodes de Monte Carlo du second ordre et d'inférence bayésienne pour l'évaluation des risques microbiologiques et des bénéfices nutritionnels dans la transformation des légumes." Phd thesis, AgroParisTech, 2013. http://pastel.archives-ouvertes.fr/pastel-00967496.
Eng, Catherine. "Développement de méthodes de fouille de données basées sur les modèles de Markov cachés du second ordre pour l'identification d'hétérogénéités dans les génomes bactériens." Thesis, Nancy 1, 2010. http://www.theses.fr/2010NAN10041/document.
Second-order Hidden Markov Models (HMM2) are stochastic processes with a high efficiency in exploring bacterial genome sequences. Different types of HMM2 (M1M2, M2M2, M2M0) combined with combinatorial methods were developed in a new approach to discriminate genomic regions without a priori knowledge of their genetic content. This approach was applied to two bacterial models in order to validate its achievements: Streptomyces coelicolor and Streptococcus thermophilus. These bacterial species exhibit distinct genomic traits (base composition, global genome size) in relation to their ecological niche: soil for S. coelicolor and dairy products for S. thermophilus. In S. coelicolor, a first HMM2 architecture allowed the detection of short discrete DNA heterogeneities (5-16 nucleotides in size), mostly localized in intergenic regions. The application of the method to a biologically known gene set, the SigR regulon (involved in the oxidative stress response), demonstrated its efficiency in identifying bacterial promoters. S. coelicolor shows a complex regulatory network (up to 12% of the genes may be involved in gene regulation) with more than 60 sigma factors involved in the initiation of transcription. A classification method coupled with a search algorithm (R'MES) was developed to automatically extract box1-spacer-box2 composite DNA motifs, a structure corresponding to the typical bacterial promoter -35/-10 boxes. Among the 814 DNA motifs described for the whole S. coelicolor genome, those of sigma factors (B, WhiG) could be retrieved from the raw data. We could show that this method can be generalized, applying it successfully in a preliminary attempt to the genome of Bacillus subtilis.
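A second-order Markov model conditions each symbol on the two preceding ones; a standard way to handle this, sketched below on a toy DNA sequence, is to augment the state to ordered pairs so that first-order machinery applies. The training sequence and smoothing constant are hypothetical, and only the (visible) transition part of an HMM2 is shown.

```python
import itertools
import numpy as np

# Sketch: a second-order Markov chain over the DNA alphabet can be handled
# with first-order machinery by augmenting states to ordered pairs, since
# P(x_t | x_{t-1}, x_{t-2}) is a first-order transition on pair states.
alphabet = "ACGT"
pairs = ["".join(p) for p in itertools.product(alphabet, repeat=2)]
idx = {p: i for i, p in enumerate(pairs)}

def order2_transitions(seq, pseudo=1.0):
    """Estimate P(next base | previous two bases) with add-one smoothing."""
    counts = np.full((len(pairs), len(alphabet)), pseudo)
    for a, b, c in zip(seq, seq[1:], seq[2:]):
        counts[idx[a + b], alphabet.index(c)] += 1
    return counts / counts.sum(axis=1, keepdims=True)

P = order2_transitions("ACGTACGTACGTACGT")       # toy training sequence
print(P.shape)                                   # (16, 4): one row per pair
print(alphabet[np.argmax(P[idx["AC"]])])         # most likely base after "AC"
```

The same pair-state trick is what makes the M1M2/M2M2 variants tractable: the hidden layer stays first-order while the emission or transition context is widened.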
Borhani, Alamdari Bijan. "Nouvelles méthodes de calcul de la charge de ruine et des déformations associées." Compiègne, 1990. http://www.theses.fr/1990COMPD254.
Cots, Olivier. "Contrôle optimal géométrique : méthodes homotopiques et applications." Phd thesis, Université de Bourgogne, 2012. http://tel.archives-ouvertes.fr/tel-00742927.
Vie, Jean-Léopold. "Second-order derivatives for shape optimization with a level-set method." Thesis, Paris Est, 2016. http://www.theses.fr/2016PESC1072/document.
The main purpose of this thesis is the definition of a shape optimization method which combines second-order differentiation with the representation of a shape by a level-set function. A second-order method is first designed for simple shape optimization problems: a thickness parametrization and a discrete optimization problem. This work is divided into four parts. The first one is bibliographical and contains the different background material necessary for the rest of the work. Chapter 1 presents the classical results for general optimization, notably the quadratic rate of convergence of second-order methods in well-suited cases. Chapter 2 is a review of the different modelings for shape optimization, while Chapter 3 details two particular modelings: the thickness parametrization and the geometric modeling. The level-set method is presented in Chapter 4, and Chapter 5 recalls the basics of the finite element method. The second part opens with Chapter 6 and Chapter 7, which detail the calculation of second-order derivatives for the thickness parametrization and for the geometric shape modeling. These chapters also focus on the particular structure of the second-order derivative. Chapter 8 is then concerned with the computation of discrete derivatives for shape optimization. Finally, Chapter 9 deals with different methods for approximating a second-order derivative and the definition of a second-order algorithm in a general modeling. It is also the occasion for a few numerical experiments with the thickness modeling (defined in Chapter 6) and the discrete modeling (defined in Chapter 8). The third part is then devoted to the geometric modeling for shape optimization. It starts with the definition of a new framework for shape differentiation in Chapter 10, and a resulting second-order method. This new framework for shape derivatives deals with normal evolutions of a shape given by an eikonal equation, as in the level-set method.
Chapter 11 is dedicated to the numerical computation of shape derivatives, and Chapter 12 contains different numerical experiments. Finally, the last part of this work is about the numerical analysis of shape optimization algorithms based on the level-set method. Chapter 13 is concerned with a complete discretization of a shape optimization algorithm. Chapter 14 then analyses the numerical schemes of the level-set method and the numerical error they may introduce. Finally, Chapter 15 completely details a one-dimensional shape optimization example, with an error analysis of the rates of convergence.
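The fast local convergence of second-order methods recalled in Chapter 1 can be demonstrated in a few lines; the test function below is a hypothetical one-dimensional example, used only to make the rate visible.

```python
import numpy as np

# Toy illustration of the fast local convergence of second-order methods:
# Newton's method applied to f(x) = sin(x), whose stationary point
# x* = pi/2 is nondegenerate. The example is hypothetical.
f_grad = lambda x: np.cos(x)
f_hess = lambda x: -np.sin(x)

x = 1.0                                # start near x* = pi/2
errors = [abs(x - np.pi / 2)]
for _ in range(4):
    x -= f_grad(x) / f_hess(x)         # Newton step on f'(x) = 0
    errors.append(abs(x - np.pi / 2))

# four Newton steps reduce the error from ~0.57 to below 1e-10
print(errors[0] > 0.5 and errors[-1] < 1e-10)
```

A first-order method on the same problem would need orders of magnitude more iterations to reach that accuracy, which is the trade-off the thesis weighs against the cost of assembling second derivatives.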
Rouot, Jérémy. "Méthodes géométriques et numériques en contrôle optimal et applications au transfert orbital à poussée faible et à la nage à faible nombre de Reynolds." Thesis, Université Côte d'Azur (ComUE), 2016. http://www.theses.fr/2016AZUR4103/document.
The first part of this work is devoted to the study of swimming at low Reynolds number, where we consider a 2-link swimmer to model the motion of a copepod, as well as the seminal model of the Purcell three-link swimmer. We propose a geometric and numerical approach using optimal control theory, assuming that the motion occurs while minimizing the energy dissipated by the drag forces of the fluid, related to a concept of efficiency of a stroke. The Maximum Principle is used to compute periodic controls, considered as minimizing controls using proper transversality conditions in relation with periodicity, minimizing the energy dissipated for a fixed displacement or maximizing the efficiency of a stroke. These problems fall into the framework of sub-Riemannian geometry, which provides efficient techniques to tackle them: the nilpotent approximation is used to compute strokes with small amplitudes, which are continued numerically for the true system. Second-order optimality conditions, necessary or sufficient, are presented to select weak minimizers in the framework of periodic optimal controls. In the second part, we study the motion of a controlled spacecraft in a central field, taking into account the gravitational interaction of the Moon and the oblateness of the Earth. Our purpose is to study the time-minimal orbital transfer problem with low thrust. Due to the small control amplitude, our approach is to define an averaged system from the Maximum Principle and study how it approximates the non-averaged system. We provide proofs of convergence and give numerical results where the averaged system is used to solve the non-averaged system by an indirect method.
Ramirez-Cabrera, Hector. "Aspects théoriques et algorithmiques de l'optimisation semidéfinie." Phd thesis, Ecole Polytechnique X, 2005. http://pastel.archives-ouvertes.fr/pastel-00001048.
Yurenko, Yevgen. "Etude des propriétés énergétiques, conformationnelles et vibrationnelles des déoxyribonucléosides canoniques et modifiés à l'aide des méthodes de la chimie quantique ab initio." Paris 6, 2007. http://www.theses.fr/2007PA066526.
Gorsse, Yannick. "Approximation numérique sur maillage cartésien de lois de conservation : écoulements compressibles et élasticité non linéaire." Phd thesis, Université Sciences et Technologies - Bordeaux I, 2012. http://tel.archives-ouvertes.fr/tel-00796722.
Wangermez, Maxence. "Méthode de couplage surfacique pour modèles non-compatibles de matériaux hétérogènes : approche micro-macro et implémentation non-intrusive." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASN001.
One of the priority objectives of the aeronautics industry is to reduce the mass of structures while improving their performance. This involves the use of composite materials and the increasing use of digital simulation to optimize structures. The major challenge of this project is to be able to accurately compute the effect of local variations of the microstructure (for instance detected by tomography and modelled directly from the tomogram) on the behavior of a part made of an architectured material. In order to take the whole structure and its load effects into account, a multi-scale approach seems a natural choice. Indeed, the models related to the part and to its microstructure may use different formalisms at each scale. In this context, a coupling formulation was proposed in order to replace, in a non-intrusive way, part of a homogenized macroscopic finite-element model by a local model described at the microscopic level. It is based on a micro-macro separation of interface quantities in the coupling area between the two models. To simplify its use in design offices, a non-intrusive iterative resolution procedure has also been proposed. It allows the implementation of the proposed coupling method in an industrial software environment that often relies on closed commercial finite-element codes. Different mechanical problems under the linear elasticity assumption are addressed. The proposed method is systematically compared with other coupling methods from the literature, and the quality of the solutions is quantified against a reference obtained by direct numerical simulation at a fine scale. The main results are promising: for representative test cases under the linear elasticity assumption in two and three dimensions, they show solutions consistent with first- and second-order homogenization theories.
The solutions obtained with the proposed method are systematically the best approximations of the reference solution, whereas the methods from the literature are less accurate and shown to be unsuitable for coupling non-compatible models. Finally, many perspectives arise from the different variants of the method, which could become, in an industrial context, a genuine analysis tool for introducing a local model described at a fine scale into a homogenized macroscopic global one.
Cheaytou, Rima. "Etude des méthodes de pénalité-projection vectorielle pour les équations de Navier-Stokes avec conditions aux limites ouvertes." Thesis, Aix-Marseille, 2014. http://www.theses.fr/2014AIXM4715.
Motivated by solving the incompressible Navier-Stokes equations with open boundary conditions, this thesis studies the Vector Penalty-Projection method, denoted VPP, which is a time-splitting method. We first present a literature review of the projection methods addressing the velocity-pressure coupling in the incompressible Navier-Stokes system. We then focus on the case of Dirichlet conditions on the entire boundary. The numerical tests show second-order convergence in time for both the velocity and the pressure. They also show that the VPP method is fast and cheap in terms of the number of iterations at each time step. In addition, we established optimal error estimates for the velocity and pressure for the Stokes problem, and the numerical experiments are in perfect agreement with the theoretical results. However, the incompressibility constraint is not exactly zero and scales as O(ε δt), where ε is a penalty parameter chosen small enough and δt is the time step. Moreover, we deal with the natural outflow boundary condition. Three types of outflow boundary conditions are presented and numerically tested for the projection step. We perform quantitative comparisons of the results with those obtained by other methods in the literature. Besides, a theoretical study of the VPP method with outflow boundary conditions is stated, and the numerical tests prove to be in good agreement with the theoretical results. In the last chapter, we focus on the numerical study of the VPP scheme with a nonlinear open artificial boundary condition modelling a singular load for the unsteady incompressible Navier-Stokes problem.
Seeger, Alberto. "Analyse du second ordre de problèmes non différentiables." Toulouse 3, 1986. http://www.theses.fr/1986TOU30118.
Pham, Anh Tu. "Détermination numérique des propriétés de résistance de roches argileuses." Thesis, Paris Est, 2017. http://www.theses.fr/2017PESC1237/document.
The strength capacities of the Callovo-Oxfordian (COx) argillite, which is a potential host rock for the deep underground repository of high-level radioactive waste in France, are investigated. At the micro-scale, micro-pores can be observed in the matrix. A first strength homogenization step has been performed in order to evaluate the matrix strength criterion. The microstructure analysis of this material at the scale of a few hundred micrometers, referred to as the meso-scale, shows a clay matrix and a random distribution of mineral inclusions (quartz and calcite). Aiming at the determination of the COx argillite strength domain, an FEM numerical tool has been developed in the context of the elastoplastic behavior of the matrix. Several morphological patterns of the representative elementary volume have been considered and subjected to incremental loading in periodic conditions until collapse occurs. As a result of such an elastoplastic calculation, one point of the boundary of the strength domain is obtained; the boundary can then be reached by successive elastoplastic calculations. As an alternative to direct elastoplastic simulations, kinematic and static approaches of limit analysis are performed. The stress-based (static approach) and velocity-based (kinematic approach) finite element methods are used to develop a numerical tool able to derive a lower bound and an upper bound of the strength domain, respectively.
Abdallah, Fahed. "Noyaux reproduisants et critères de contraste pour l'élaboration de détecteurs à structure imposée." Troyes, 2004. http://www.theses.fr/2004TROY0002.
In this thesis, we consider statistical learning machines which try to infer rules from a given set of observations in order to make correct predictions on unseen examples. Building upon the theory of reproducing kernels, we develop a generalized linear detector in transformed spaces of high dimension, without explicitly doing any calculation in these spaces. The method is based on the optimization of the second-order criterion best suited to the problem to solve. Indeed, theoretical results show that second-order criteria are able, under mild conditions, to guarantee the best solution in the sense of classical detection theories. Achieving good generalisation performance with a receiver requires matching its complexity to the amount of available training data. This problem, known as the curse of dimensionality, has been studied theoretically by Vapnik and Chervonenkis. In this dissertation, we propose complexity control procedures in order to improve the performance of these receivers when few training data are available. Simulation results on real and synthetic data clearly show the competitiveness of our approach compared with other state-of-the-art kernel methods such as Support Vector Machines.
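A detector of the kind described above (a linear structure in a reproducing-kernel feature space, fitted by a second-order criterion) can be sketched with kernel ridge regression on synthetic data. The kernel, regularization constant, and data below are hypothetical choices for illustration, not the thesis's exact receiver.

```python
import numpy as np

# Sketch of a kernelized receiver built from a second-order (least-squares)
# criterion: with a reproducing kernel k, the detector f(x) = sum_i a_i
# k(x_i, x) is obtained from the Gram matrix alone, with no explicit
# computation in the high-dimensional feature space. Data are synthetic.
rng = np.random.default_rng(1)

def rbf(X, Y, gamma=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# two Gaussian classes, labels +/- 1
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
y = np.r_[-np.ones(50), np.ones(50)]

lam = 1e-2                              # ridge term controls complexity
K = rbf(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

decide = lambda Xt: np.sign(rbf(Xt, X) @ alpha)
Xt = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
yt = np.r_[-np.ones(100), np.ones(100)]
print((decide(Xt) == yt).mean() > 0.8)
```

The ridge parameter lam plays the complexity-control role discussed in the abstract: increasing it shrinks the solution and trades training fit for generalisation when few samples are available.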
Duvallet, Jeanne. "Etude de systèmes différentiels du second ordre avec conditions aux deux bornes et résolution par la méthode homotopique simpliciale." Pau, 1986. http://www.theses.fr/1986PAUU3018.
Cheng, Jianqiang. "Stochastic Combinatorial Optimization." Thesis, Paris 11, 2013. http://www.theses.fr/2013PA112261.
In this thesis, we study three types of stochastic problems: chance constrained problems, distributionally robust problems, and simple recourse problems. For stochastic programming problems there are two main difficulties. One is that the feasible sets of stochastic problems are in general not convex. The other challenge arises from the need to compute conditional expectations or probabilities, both of which involve multi-dimensional integration. Because of these two difficulties, all three problems are solved with approximation approaches. We first study two types of chance constrained problems: the linear program with joint chance constraints (LPPC) and the maximum probability problem (MPP). For both problems, we assume that the random matrix is normally distributed and that its row vectors are independent. We first deal with LPPC, which is generally not convex. We approximate it with two second-order cone programming (SOCP) problems. Furthermore, under mild conditions, the optimal values of the two SOCP problems are respectively a lower and an upper bound of the original problem. For the second problem, we study a variant of the stochastic resource constrained shortest path problem (SRCSP for short), which is to maximize the probability of satisfying the resource constraints. To solve the problem, we propose a branch-and-bound framework to compute the optimal solution. As the corresponding linear relaxation is generally not convex, we give a convex approximation. Finally, numerical tests on random instances were conducted for both problems. For LPPC, the numerical results show that the proposed approach outperforms the Bonferroni and Jagannathan approximations.
For the MPP, the numerical results on generated instances substantiate that the convex approximation outperforms the individual approximation method. We then study a distributionally robust stochastic quadratic knapsack problem, where only partial information about the random variables is known, such as their first and second moments. We prove that the single knapsack problem (SKP) becomes a semidefinite program (SDP) after applying the SDP relaxation scheme to the binary constraints. Although this is not the case for the multidimensional knapsack problem (MKP), two good approximations of the relaxed version of the problem are provided, yielding upper and lower bounds that appear numerically close to each other for a range of problem instances. Our numerical experiments also indicate that the proposed lower-bounding approximation outperforms the approximations based on Bonferroni's inequality and on the work by Zymler et al. Besides, an extensive set of experiments was conducted to illustrate how the conservativeness of the robust solutions pays off in ensuring that the chance constraint is satisfied (or nearly satisfied) under a wide range of distribution fluctuations. Moreover, our approach can be applied to a large number of stochastic optimization problems with binary variables. Finally, a stochastic version of the shortest path problem is studied. We prove that in some cases the stochastic shortest path problem can be greatly simplified by reformulating it as the classic shortest path problem, which can be solved in polynomial time. To solve the general problem, we propose a branch-and-bound framework to search the set of feasible paths. Lower bounds are obtained by solving the corresponding linear relaxation, which in turn is done using a stochastic projected gradient algorithm involving an active-set method. Numerical examples illustrate the effectiveness of the obtained algorithm.
Concerning the resolution of the continuous relaxation, our stochastic projected gradient algorithm clearly outperforms the Matlab optimization toolbox on large graphs.
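The SOCP reformulation of an individual Gaussian chance constraint, which underlies the bounds discussed above, can be checked numerically. The two-dimensional instance below is hypothetical.

```python
import numpy as np

# Sketch of the second-order cone reformulation behind the SOCP
# approximations above: for a single Gaussian chance constraint
# P(a^T x <= b) >= 1 - eps with a ~ N(mu, Sigma), the deterministic
# equivalent is mu^T x + z_{1-eps} * sqrt(x^T Sigma x) <= b.
rng = np.random.default_rng(2)

mu = np.array([1.0, 2.0])
Sigma = np.array([[0.5, 0.1], [0.1, 0.3]])
x = np.array([1.0, 1.0])               # a fixed candidate decision
z95 = 1.6449                           # standard normal 95% quantile

b = mu @ x + z95 * np.sqrt(x @ Sigma @ x)   # tightest feasible b at eps = 5%

# Monte Carlo check that the constraint holds with probability ~95%
a = rng.multivariate_normal(mu, Sigma, size=200_000)
print(round(float((a @ x <= b).mean()), 2))
```

The left-hand side is affine in x plus a Euclidean norm of a linear map of x, which is exactly a second-order cone constraint; joint chance constraints do not decompose this way, which is why the thesis resorts to bounding SOCPs.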
Hermann, Odile. "Mécanisation de la recherche de preuves et de programmes en arithmétique fonctionnelle du second ordre." Nancy 1, 1995. http://www.theses.fr/1995NAN10054.
Zhou, Chao. "Model Uncertainty in Finance and Second Order Backward Stochastic Differential Equations." Palaiseau, Ecole polytechnique, 2012. https://pastel.hal.science/docs/00/77/14/37/PDF/Thesis_ZHOU_Chao_Pastel.pdfcc.
The main objective of this PhD thesis is to study some problems of financial mathematics in an incomplete market with model uncertainty. In recent years, the theory of second order backward stochastic differential equations (2BSDEs for short) has been developed by Soner, Touzi and Zhang on this topic. In this thesis, we adopt their point of view. This thesis contains four key parts related to 2BSDEs. In the first part, we generalize the 2BSDE theory, initially introduced in the case of Lipschitz continuous generators, to quadratic growth generators. This new class of 2BSDEs then allows us to consider the robust utility maximization problem in non-dominated models. In the second part, we study this problem for exponential utility, power utility and logarithmic utility. In each case, we give a characterization of the value function and an optimal investment strategy via the solution to a 2BSDE. In the third part, we provide an existence and uniqueness result for second order reflected BSDEs with lower obstacles and Lipschitz generators, and we then apply this result to study the pricing of American contingent claims under uncertain volatility. In the fourth part, we define a notion of 2BSDEs with jumps, for which we prove the existence and uniqueness of solutions in appropriate spaces. We can interpret these equations as standard BSDEs with jumps, under uncertainty on both the volatility and the jump measure. As an application of these results, we study a robust exponential utility maximization problem under model uncertainty, where the uncertainty affects both the volatility process and the jump measure.
Nezamabadi, Saeid. "Méthode asymptotique numérique pour l'étude multi échelle des instabilités dans les matériaux hétérogènes." Thesis, Metz, 2009. http://www.theses.fr/2009METZ046S/document.
The multiscale modelling of heterogeneous materials is a challenge in computational mechanics. In the nonlinear case, the effective properties of heterogeneous materials cannot be obtained by the techniques used for linear media because the superposition principle is no longer valid. Hence, in the context of the finite element method, an alternative to meshing the whole structure, including all heterogeneities, is the multiscale finite element method (FE2). These techniques have many advantages, such as taking into account large deformations at the micro and macro scales, nonlinear constitutive behavior of the material, and microstructure evolution. The nonlinear problems at the micro and macro scales are often solved by classical Newton-Raphson procedures, which are generally suitable for nonlinear problems but run into difficulties in the presence of instabilities. In this thesis, the combination of the multiscale finite element method (FE2) and the asymptotic numerical method (ANM), called Multiscale-ANM, yields an effective numerical technique for dealing with instability problems in the context of heterogeneous materials. These instabilities can occur at both the micro and macro levels. Different classes of material constitutive relations have been implemented within our procedure. To improve the conditioning of the multiscale problem, a second-order homogenization technique was also adapted in the Multiscale-ANM framework. Furthermore, to reduce the computational time, some techniques have been proposed in this work.
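The asymptotic numerical method mentioned above replaces Newton iterations with a truncated power-series expansion of the solution branch. A scalar sketch on a hypothetical equation R(u, t) = u + u^3 - t = 0, where the series coefficients follow from identifying powers of t order by order:

```python
import numpy as np

# Sketch of the asymptotic numerical method (ANM) idea on a scalar toy
# problem: the branch of u + u^3 - t = 0 through (0, 0) is sought as a
# truncated series u(t) = sum_k a_k t^k. Identifying the coefficient of
# t^k gives a_k + [t^k](u^3) = delta_{k,1}, a triangular recurrence.
N = 20
a = np.zeros(N + 1)                     # a[k] = coefficient of t^k
for k in range(1, N + 1):
    # coefficient of t^k in (sum_j a_j t^j)^3; only indices < k appear
    cube = sum(a[i] * a[j] * a[k - i - j]
               for i in range(1, k) for j in range(1, k - i))
    a[k] = (1.0 if k == 1 else 0.0) - cube

u = lambda t: np.polyval(a[::-1], t)    # evaluate the series branch
t = 0.1                                 # well inside the convergence radius
residual = u(t) + u(t) ** 3 - t
print(abs(residual) < 1e-8)
```

One series solve gives the whole local branch u(t) at once, which is why the ANM handles limit points and instabilities more gracefully than step-by-step Newton continuation.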
Rharif, Nour-Eddine. "Fermetures au second ordre de la turbulence : Modèle réalisable anisotherme appliqué à un jet impactant une paroi, et mise en oeuvre en éléments finis." Ecully, Ecole centrale de Lyon, 1995. http://www.theses.fr/1995ECDL0010.
In the first part, a second moment closure turbulence model called "cubic" (Craft & Launder, 1991) has been implemented in a finite volume code. The numerical approach is based on a fractional step method with a structured, non-orthogonal, semi-staggered grid. The cubic model has been applied to a round heated jet impinging normally on a flat plate. Results obtained with both the cubic model and the standard Reynolds stress model of Gibson & Launder (1978) have been compared with experimental data; the cubic model improves on the Gibson & Launder predictions. The second part deals with the introduction and feasibility of a second moment closure turbulence model (Reynolds stress model) in an industrial finite element code. The Gibson & Launder model has been implemented in this code. The numerical approach is based on a time discretization using a fractional step method. The diffusion step is solved by a classical Galerkin finite element method and a semi-implicit method coupling all the variables. The Gibson & Launder model, in the finite element code, has been applied to the flow in a sub-channel of a staggered tube bundle. Results obtained with both the Reynolds stress model and the k-epsilon model have been compared with the experimental data of Simonin and Barcouda (1988) at L.N.H. Weaknesses of the closure models are pointed out. Numerical tests (preconditioning of the algorithm) have allowed a substantial reduction of CPU time.
ElNady, Khaled. "Modèles de comportement non linéaire des matériaux architecturés par des méthodes d'homogénéisation discrètes en grandes déformations. Application à des biomembranes et des textiles." Thesis, Université de Lorraine, 2015. http://www.theses.fr/2015LORR0032/document.
The present thesis deals with the development of micromechanical schemes for computing the homogenized response of architectured materials, focusing on periodic lattice materials. Architectured and micro-architectured materials cover a wide range of mechanical properties, according to the nodal connectivity, the geometrical arrangement of the structural elements, their moduli, and a possible structural hierarchy. The principal objective of the thesis is the consideration of geometrical nonlinearities accounting for the large changes of the initial lattice geometry, due to the small bending stiffness of the structural elements in comparison to their tensile rigidity. The so-called discrete homogenization method is extended to the geometrically nonlinear setting for periodic lattices; incremental schemes are constructed based on a staggered localization-homogenization computation of the lattice response over a repetitive unit cell submitted to a controlled deformation loading. The obtained effective medium is a micropolar anisotropic continuum, the effective properties of which account for the geometrical arrangement of the structural elements within the lattice and their mechanical properties. The non-affine response of the lattice leads to possible size effects, which can be captured by enriching the classical Cauchy continuum, either with rotational degrees of freedom, as in the micropolar effective continuum, or with second order gradients of the displacement field. Both strategies are followed in this work, the construction of second order grade continua by discrete homogenization being done in a small perturbation framework. We show that the two enrichment strategies are complementary, due to the existing analogy between the construction of micropolar and second order grade continua by homogenization.
The combination of both schemes further delivers tension, bending and torsion internal lengths, which reflect the lattice topology and the mechanical properties of its structural elements. Applications to textiles and biological membranes, described as quasi-periodic networks of filaments, are considered. The computed effective response is validated by comparison with FE simulations performed over a representative unit cell of the lattice. The homogenization schemes have been implemented in a dedicated code written in combined symbolic and numerical language, taking as input the lattice geometry and the microstructural mechanical properties. The developed predictive micromechanical schemes offer a design tool for conceiving new architectured materials that expand the boundaries of the 'material-property' space.
Cisternino, Marco. "A parallel second order Cartesian method for elliptic interface problems and its application to tumor growth model." Phd thesis, Université Sciences et Technologies - Bordeaux I, 2012. http://tel.archives-ouvertes.fr/tel-00690743.
Debroux, Noémie. "Mathematical modelling of image processing problems : theoretical studies and applications to joint registration and segmentation." Thesis, Normandie, 2018. http://www.theses.fr/2018NORMIR02/document.
In this thesis, we study and jointly address several important image processing problems: registration, which aims at aligning images through a deformation; image segmentation, whose goal is to find the edges delineating the objects inside an image; and image decomposition, closely related to image denoising, which attempts to partition an image into a smoother version of itself, named cartoon, and its complementary oscillatory part, called texture. Both local and nonlocal variational approaches are used. The first proposed model addresses the topology-preserving segmentation-guided registration problem in a variational framework. A second joint segmentation and registration model is introduced, studied theoretically and numerically, then tested on various numerical simulations. The last model presented in this work answers a more specific need expressed by the CEREMA (Centre for studies and expertise on risks, environment, mobility and planning), namely automatic crack detection on images of bituminous surfaces. Due to the complexity of the images, a joint fine-structure decomposition and segmentation model is proposed to deal with this problem. It is then theoretically and numerically justified and validated on the provided images.
Janin, David. "Contribution aux fondements des méthodes formelles : jeux, logique et automates." Habilitation à diriger des recherches, Université Sciences et Technologies - Bordeaux I, 2005. http://tel.archives-ouvertes.fr/tel-00659990.
Plaquet, Aurélie. "Design of molecular switches exhibiting second-order nonlinear optical responses : ab initio investigations and hyper Rayleigh scattering characterizations." Thesis, Bordeaux 1, 2011. http://www.theses.fr/2011BOR14268/document.
Molecular switches are compounds presenting the ability to commute reversibly between two or more states in response to external stimuli. The goal of the work is the design of molecular switches exhibiting contrasts of their second-order nonlinear optical (NLO) properties, and the identification of the structural and electronic parameters leading to large contrasts of the first hyperpolarizability (β), via a multidisciplinary approach combining the synthesis of new compounds, the characterization of their linear (by UV-Visible absorption spectroscopy) and nonlinear optical properties (by hyper Rayleigh scattering), and theoretical simulations in order to predict and interpret molecular properties. These reversible switching processes and the resulting variations of molecular properties have many applications in technological areas such as the development of molecular computers, and in life sciences, since many biological functions are based on commutation mechanisms. The major results of our investigations are the interpretation of the NLO responses and contrasts as a function of the nature, the positioning, and the donor/acceptor character of the substituents.
Maltey, Fanton Isabelle. "Hyperpolarisabilité de premier ordre de molécules organiques : complexes organométalliques, photochromes, molécules en lambda." Cachan, Ecole normale supérieure, 1997. http://www.theses.fr/1997DENS0016.
Nguyen, Nho Gia Hien. "The role of the microstructure in granular material instability." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEI062/document.
Granular materials consist of dense packings of solid grains and a pore-filling element such as a fluid or a solid matrix. The grains interact via elastic repulsion, friction, adhesion and other surface forces. External loading leads to grain deformations as well as cooperative particle rearrangements. The particle deformations are of particular importance in many industrial applications and research subjects, such as powder metallurgy and soil mechanics. The response of granular materials to external loading is complex, especially when failure occurs: the failure mode can be diffuse or localized, and the deformation pattern of the specimen can be drastically different when the specimen can no longer sustain the external loading. In this thesis, a thorough numerical analysis based on a discrete element method is carried out to investigate the macroscopic and microscopic behaviour of granular materials when failure occurs. The numerical simulations use the vanishing of the second-order work as an instability criterion to detect failure. Furthermore, it is shown that the vanishing of the second-order work coincides with the change from a quasi-static to a dynamic regime in the response of the specimen. The microstructure evolution is then investigated: the evolution of force chains and grain loops is tracked during the deformation process up to failure. The second-order work is once again used to elucidate the local mechanisms governing failure at the particle scale. The collapse of the discrete specimen, when it turns from the quasi-static to the dynamic regime, is accompanied by a burst of kinetic energy. This rise in kinetic energy occurs when the internal stress can no longer balance the external loading once a small perturbation is added at the boundary, resulting in a difference between the internal and external second-order works of the system.
The mesostructures have a symbiotic relationship with each other, and their evolution determines the macroscopic behaviour of the discrete system. The distribution of force-chain collapses correlates with the vanishing of the second-order work at the grain scale. The mesostructures thus play an important role in the instability of granular media, and the second-order work can be used as an effective criterion to detect the instability of the system at both the macroscale and the microscale (grain scale).
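As a minimal numerical sketch of Hill's second-order work criterion invoked above (not code from the thesis; the Voigt-vector layout and the increment values are purely illustrative), instability is flagged when the inner product of the incremental stress and incremental strain vanishes or becomes negative:

```python
import numpy as np

def second_order_work(d_sigma, d_epsilon):
    """Hill's second-order work d2W = d_sigma : d_epsilon, with the
    incremental stress and strain given as (illustrative) Voigt vectors."""
    return float(np.dot(d_sigma, d_epsilon))

# Stable step: stress and strain increments are aligned, so d2W > 0.
d_sig = np.array([1.0, 0.5, 0.0])
d_eps = np.array([0.002, 0.001, 0.0])
w_stable = second_order_work(d_sig, d_eps)

# Potentially unstable step: softening, the major stress component drops
# while the conjugate strain keeps growing, so d2W <= 0.
d_sig_soft = np.array([-0.5, 0.1, 0.0])
d_eps_soft = np.array([0.003, 0.001, 0.0])
w_unstable = second_order_work(d_sig_soft, d_eps_soft)

print(w_stable > 0, w_unstable <= 0)  # → True True
```

In a discrete element simulation the same scalar product would be accumulated over contacts or grains, which is how the thesis relates the macroscopic criterion to its grain-scale counterpart.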
Cettour-Janet, Raphael. "Modelling the vibrational response and acoustic radiation of the railway tracks." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLC040/document.
In a context of densification of urban areas and transport networks, people are increasingly exposed to noise. Consequently, vibro-acoustic impact assessment plays a pivotal role in rail network expansion. One of the main sources is rolling noise: roughness on the wheel and rail surfaces produces an imposed displacement on both, which in turn generates the vibrational response of the wheels and the railway track and their acoustic radiation. This PhD thesis presents several improvements to vibro-acoustic railway track modelling. Concerning the vibrational response, the infinite dimension of the track in the longitudinal direction and its three-dimensional deformation make analytical models and standard finite elements non-optimal. The semi-analytical finite element method (SAFEM), used in this thesis, is particularly well adapted to this case. It is first used to model a railway track on a continuous support, and then coupled with the Floquet theorem to model tracks with a periodic support. However, this technique suffers from numerical problems that call for an adapted algorithm; the second-order Arnoldi method (SOAR) is used to tackle them. This reduction eliminates critical values and improves the robustness of the method. Comparison with existing techniques and experimental results validates this model. Concerning acoustic radiation, simulations of large domains at high frequencies are almost infeasible with conventional techniques (FEM, BEM, etc.). The method used in this thesis, the Variational Theory of Complex Rays (VTCR), is particularly well adapted to these cases. The principal features of the VTCR approach are the use of a weak formulation of the acoustic problem, which automatically handles the boundary conditions between sub-domains, and the use of an integral distribution of plane waves over all directions to represent the acoustic field, the unknowns of the problem being their amplitudes.
This method, well established for closed domains, has been extended to open domains and coupled to the vibrational response of the rail. Comparisons with analytic solutions and with FEM simulations at low frequency validate the method. Coupling these two methods makes it possible to simulate complex, real-life vibro-acoustic scenarios. Results for different railway tracks are presented and validated.
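The plane-wave ansatz underlying the VTCR can be sketched as follows (two-dimensional notation assumed for illustration, not quoted from the thesis): the pressure in a sub-domain is sought as an integral superposition of propagative plane waves over all directions, the angular amplitude distribution A being the unknown,

```latex
p(\mathbf{x}) = \int_0^{2\pi} A(\theta)\,
  e^{\,\mathrm{i}\,k\,(x\cos\theta + y\sin\theta)}\,\mathrm{d}\theta .
```

Each plane wave satisfies the Helmholtz equation Δp + k²p = 0 exactly, so only the boundary and transmission conditions between sub-domains remain to be enforced, which the weak formulation does in an averaged sense.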
Picot, Gautier. "Contrôle optimal géométrique et numérique appliqué au problème de transfert Terre-Lune." Thesis, Dijon, 2010. http://www.theses.fr/2010DIJOS067/document.
This PhD thesis provides a numerical study of space trajectories in the Earth-Moon system when low thrust is applied. Our computations are based on fundamental results from geometric control theory. The spacecraft's motion is modelled by the equations of the controlled restricted three-body problem. We focus on minimizing the energy cost and the transfer time. Optimal trajectories are found among a set of extremal curves, solutions of Pontryagin's maximum principle, which can be computed by solving a shooting equation with a Newton algorithm. In this framework, initial conditions are found using homotopic methods or by studying the linearized control system. We check the local optimality of the trajectories using the second order optimality conditions related to the concept of conjugate points. For the energy minimization problem, we also describe the principle of approximating Earth-Moon optimal transfers by concatenating optimal Keplerian trajectories around the Earth and the Moon with an energy-minimal solution of the linearized system in the neighbourhood of the equilibrium point L1.
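The shooting principle mentioned above (Newton iteration on a shooting function) can be illustrated on a toy two-point boundary value problem rather than the controlled three-body problem; everything below is the editor's generic sketch, not code from the thesis. We solve y'' = -y with y(0) = 0 and y(π/2) = 1 by adjusting the unknown initial slope s so that the integrated trajectory hits the target:

```python
import math

def integrate(slope, n=2000):
    """RK4 integration of y'' = -y on [0, pi/2] with y(0)=0, y'(0)=slope.
    Returns the terminal value y(pi/2)."""
    h = (math.pi / 2) / n
    y, v = 0.0, slope
    f = lambda y, v: (v, -y)  # state derivative (y', v')
    for _ in range(n):
        k1 = f(y, v)
        k2 = f(y + 0.5 * h * k1[0], v + 0.5 * h * k1[1])
        k3 = f(y + 0.5 * h * k2[0], v + 0.5 * h * k2[1])
        k4 = f(y + h * k3[0], v + h * k3[1])
        y += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        v += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return y

def shoot(target=1.0, s=0.5, tol=1e-10):
    """Newton iteration on the shooting residual S(s) = y(pi/2; s) - target,
    with a finite-difference approximation of S'(s)."""
    for _ in range(50):
        r = integrate(s) - target
        if abs(r) < tol:
            break
        eps = 1e-6
        dr = (integrate(s + eps) - integrate(s)) / eps
        s -= r / dr
    return s

slope = shoot()
print(round(slope, 6))  # exact solution y = sin(x) has y'(0) = 1 → 1.0
```

In the thesis setting the shooting unknown is the initial adjoint vector of Pontryagin's maximum principle rather than a scalar slope, but the structure (integrate the extremal system, apply Newton to the terminal mismatch) is the same.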
Hermant, Audrey. "Sur l'algorithme de tir pour les problèmes de commande optimale avec contraintes sur l'état." Phd thesis, Ecole Polytechnique X, 2008. http://tel.archives-ouvertes.fr/tel-00348227.
Chronopoulos, Dimitrios. "Prediction of the vibroacoustic response of aerospace composite structures in a broadband frequency range." Phd thesis, Ecole Centrale de Lyon, 2012. http://tel.archives-ouvertes.fr/tel-00787864.
Piffet, Loïc. "Décomposition d’image par modèles variationnels : débruitage et extraction de texture." Thesis, Orléans, 2010. http://www.theses.fr/2010ORLE2053/document.
The first part of this thesis is devoted to the elaboration of a second order variational model for image denoising, using the space BV² of functions of bounded hessian. We take a leaf out of the well-known Rudin, Osher and Fatemi (ROF) model, replacing the minimization of the total variation of the function with the minimization of its second order total variation, that is to say, the total variation of its partial derivatives. The goal is to obtain a competitive model free of the staircasing effect that the ROF model generates. The model we study appears to be efficient, but produces a blurring effect; to deal with it, we introduce a mixed model that yields solutions with neither staircasing nor blur on details. In a second part, we turn to the texture extraction problem. One of the most efficient models is the TV-L1 model, which simply replaces the L² norm of the data-fitting term with the L¹ norm. We propose an original way to solve this problem using augmented Lagrangian methods. For the same reason as in the denoising case, we also consider the TV²-L1 model, again replacing the total variation of the function by its second order total variation. A mixed model for texture extraction is finally briefly introduced. The manuscript ends with an extensive chapter of numerical tests.
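For reference, the functionals discussed above can be written as follows (standard formulations from the literature, with f the observed image and λ > 0 a tuning parameter; notation assumed, not quoted verbatim from the thesis):

```latex
% ROF denoising:
\min_{u}\; \mathrm{TV}(u) + \frac{1}{2\lambda}\,\| u - f \|_{L^2}^2
% Second order variant in BV^2 (total variation of the gradient):
\min_{u}\; \mathrm{TV}^2(u) + \frac{1}{2\lambda}\,\| u - f \|_{L^2}^2,
\qquad \mathrm{TV}^2(u) := \mathrm{TV}(\nabla u)
% TV-L1 texture extraction (L2 fitting term replaced by L1):
\min_{u}\; \mathrm{TV}(u) + \lambda\,\| u - f \|_{L^1}
```

The second order penalty favours piecewise affine rather than piecewise constant solutions, which is why it suppresses staircasing; the L¹ fitting term is contrast invariant, which is why it separates texture better than the L² one.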
Ngodock, Hans Emmanuel. "Assimilation de données et analyse de sensibilité : une application à la circulation océanique." Phd thesis, Université Joseph Fourier (Grenoble), 1996. http://tel.archives-ouvertes.fr/tel-00005006.
Sabbagh, Wissal. "Some Contributions on Probabilistic Interpretation For Nonlinear Stochastic PDEs." Thesis, Le Mans, 2014. http://www.theses.fr/2014LEMA1019/document.
The objective of this thesis is to study the probabilistic representation (Feynman-Kac formula) of different classes of stochastic nonlinear PDEs (semilinear, fully nonlinear, reflected in a domain) by means of backward doubly stochastic differential equations (BDSDEs). The thesis contains four parts. In the first part, we deal with second order BDSDEs (2BDSDEs). We show the existence and uniqueness of solutions of 2BDSDEs using quasi-sure stochastic control techniques. The main motivation of this study is the probabilistic representation of solutions of fully nonlinear SPDEs. First, under regularity assumptions on the coefficients, we give a Feynman-Kac formula for classical solutions of fully nonlinear SPDEs, generalizing the work of Soner, Touzi and Zhang (2010-2012) on deterministic fully nonlinear PDEs. Then, under weaker assumptions on the coefficients, we prove the probabilistic representation for stochastic viscosity solutions of fully nonlinear SPDEs. In the second part, we study the Sobolev solution of the obstacle problem for partial integro-differential equations (PIDEs). Specifically, we establish the Feynman-Kac formula for PIDEs via reflected backward stochastic differential equations with jumps (BSDEs), together with the existence and uniqueness of the solution of the obstacle problem, which is regarded as a pair consisting of the solution and the measure of reflection. The approach is based on the stochastic flow techniques developed in Bally and Matoussi (2001), but the proofs are more technical. In the third part, we discuss existence and uniqueness for RBDSDEs in a convex domain D without any regularity condition on the boundary. In addition, using the same stochastic-flow approach, we provide the probabilistic interpretation of the Sobolev solution of a class of reflected SPDEs in a convex domain via RBDSDEs. Finally, we are interested in the numerical solution of BDSDEs with random terminal time.
The main motivation is to give a probabilistic representation of the Sobolev solution of semilinear SPDEs with null Dirichlet condition. In this part, we study the strong approximation of this class of BDSDEs when the random terminal time is the first exit time of an SDE from a cylindrical domain, and we give bounds for the discrete-time approximation error. We conclude this part with numerical tests showing that this approach is effective.
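As background, the backward doubly stochastic equations referred to throughout can be sketched in the Pardoux-Peng form (generic notation, assumed rather than quoted from the thesis): W and B are independent Brownian motions, the integral in B is a backward Itô integral, and the pair (Y, Z) is the unknown,

```latex
Y_t = \xi + \int_t^T f(s, Y_s, Z_s)\,\mathrm{d}s
          + \int_t^T g(s, Y_s, Z_s)\,\overleftarrow{\mathrm{d}B_s}
          - \int_t^T Z_s\,\mathrm{d}W_s .
```

The extra backward integral in B is what produces the stochastic forcing of the associated SPDE; taking g = 0 recovers a standard BSDE and the deterministic PDE case.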
Manzagol, Pierre-Antoine. "TONGA : un algorithme de gradient naturel pour les problèmes de grande taille." Thèse, 2007. http://hdl.handle.net/1866/7226.