Dissertations / Theses on the topic 'Méthodes des points intérieurs'
Segalat, Philippe. "Méthodes de points intérieurs et de quasi-Newton." Phd thesis, Université de Limoges, 2002. http://tel.archives-ouvertes.fr/tel-00005478.
Orban, Dominique. "Méthodes de points intérieurs pour l'optimisation non-linéaire." Toulouse, INPT, 2001. http://www.theses.fr/2001INPT012H.
Segalat, Philippe. "Méthodes de points intérieurs et de quasi-Newton." Limoges, 2002. http://www.theses.fr/2002LIMO0041.
This thesis studies interior point and quasi-Newton methods in nonlinear optimization, together with their implementation. It presents NOPTIQ, a code that uses limited-memory BFGS formulas to solve large-scale problems. The originality of the approach lies in using these formulas within an interior point framework, so that both the storage requirement and the computing cost of one iteration remain low. NOPTIQ is shown to be robust, with performance comparable to the reference codes l-BFGS-B and LANCELOT. The thesis also presents an infeasible algorithm that uses the preceding methods to solve a nonlinear problem with inequality constraints and linear equality constraints. The idea is to penalize the problem using shift variables and a variant of the big-M method of linear programming. The q-superlinear convergence of the inner iterates and the global convergence of the outer iterates are established
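The interior-point mechanism that NOPTIQ builds on can be sketched in miniature. Below is a generic logarithmic-barrier method on a hypothetical one-dimensional problem, minimizing (x - 2)^2 subject to x <= 1; it only illustrates the framework, not the thesis's limited-memory BFGS code:

```python
def barrier_newton(fp, fpp, mu, x, iters=50):
    # damped Newton on phi(x) = f(x) - mu*log(1 - x), over the interior x < 1
    for _ in range(iters):
        g = fp(x) + mu / (1.0 - x)          # phi'(x)
        h = fpp(x) + mu / (1.0 - x) ** 2    # phi''(x) > 0
        step = g / h
        while x - step >= 1.0:              # halve the step to stay strictly feasible
            step *= 0.5
        x -= step
    return x

def solve(mu0=1.0, shrink=0.2, outer=12):
    # minimize f(x) = (x - 2)^2 subject to x <= 1 by following the
    # central path as the barrier parameter mu decreases to zero
    fp = lambda x: 2.0 * (x - 2.0)
    fpp = lambda x: 2.0
    x, mu = 0.0, mu0
    for _ in range(outer):
        x = barrier_newton(fp, fpp, mu, x)
        mu *= shrink
    return x
```

As the barrier parameter mu is driven to zero, the iterates approach the constrained minimizer x = 1 while remaining strictly inside the feasible region.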
Hadjou, Tayeb. "Analyse numérique des méthodes de points intérieurs : simulations et applications." Rouen, 1996. http://www.theses.fr/1996ROUES062.
Bouhtou, Mustapha. "Méthodes de points intérieurs pour l'optimisation des systèmes de grande taille." Paris 9, 1993. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=1993PA090061.
Bouafia, Mousaab. "Étude asymptotique des méthodes de points intérieurs pour la programmation linéaire." Thesis, Le Havre, 2016. http://www.theses.fr/2016LEHA0019/document.
In this research, we are interested in the asymptotic study of interior point methods for linear programming. Building on the works of Schrijver and Padberg, we propose two new displacement steps to accelerate the convergence of Karmarkar's algorithm and reduce its algorithmic complexity. The first step yields a moderate improvement in the behaviour of this algorithm; the second is the best fixed displacement step obtained to date. We also propose two parameterized variants of the central-trajectory algorithm based on a kernel function. The first function generalizes the kernel function given by Y. Q. Bai et al.; the second is the first trigonometric kernel function to achieve the best algorithmic complexity known so far. These proposals constitute new algorithmic, theoretical and numerical contributions
Zerari, Amina. "Méthodes de points intérieurs et leurs applications sur des problèmes d'optimisation semi-définis." Thesis, Normandie, 2020. http://www.theses.fr/2020NORMLH24.
Interior point methods are widely recognized as the most efficient methods for solving optimization problems: they have polynomial convergence and good practical behavior. In this research, we carry out a theoretical, algorithmic and numerical study of interior point methods for semidefinite programming. In the first part, we present a primal-dual projective interior point algorithm of polynomial type with two phases, in which we introduce three new, effective alternatives for computing the displacement step. In the second part, we consider a primal-dual central-trajectory method based on a kernel function, and we propose two new kernel functions with a logarithmic term that attain the best-known complexity results
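For readers unfamiliar with kernel functions, the classical logarithmic kernel below is the baseline that such central-trajectory methods generalize (the trigonometric kernels proposed in the thesis replace the logarithmic term; this sketch shows only the common mechanism):

```python
import math

def psi_log(t):
    # classical logarithmic kernel: psi(1) = 0 and psi'(1) = 0, with
    # psi(t) -> infinity as t -> 0+ (barrier term) and as t -> infinity
    return (t * t - 1.0) / 2.0 - math.log(t)

def proximity(v):
    # kernel-based proximity of a scaled iterate v to the central path;
    # it vanishes exactly on the central path, where every v_i = 1
    return sum(psi_log(t) for t in v)
```

The proximity measure steers the iterates: a large value of Psi(v) signals that the current point is far from the central path and dictates the search direction and step size.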
Roumili, Hayet. "Méthodes de points intérieurs non réalisables en optimisation : théorie, algorithmes et applications." Le Havre, 2007. http://www.theses.fr/2007LEHA0013.
In this study, we are interested in the initialization problem for central-path-following interior point methods, taking Y. Zhang's work on linear programming (LP) as a benchmark. After applying an appropriate algorithm to linear programming, we propose extensions to convex quadratic programming and to semidefinite programming
Veiga, Géraldo. "Sur l'implantation des méthodes de points intérieurs pour la programmation linéaire." Paris 13, 1997. http://www.theses.fr/1997PA132010.
Kebbiche, Zakia. "Étude et extensions d'algorithmes de points intérieurs pour la programmation non linéaire." Le Havre, 2007. http://www.theses.fr/2007LEHA0014.
In this thesis, we present an algorithmic and numerical study of the central path method for the linear complementarity problem, which provides a unifying framework for linear and quadratic programming. We then propose two interesting variants, namely the central path method and the projective method with linearization, for minimizing a convex differentiable function over a polyhedral set. The algorithms are well defined, and the corresponding theoretical results are established
Ouriemchi, Mohammed. "Résolution de problèmes non linéaires par les méthodes de points intérieurs : théorie et algorithmes." Phd thesis, Université du Havre, 2005. http://tel.archives-ouvertes.fr/tel-00011376.
In this thesis, we used a logarithmic barrier function. At each outer iteration, an SQP technique produces a series of quadratic subproblems whose solutions form a so-called inner sequence of descent directions for solving the penalized nonlinear problem.
We introduced a change of variable on the displacement step, which yields optimality conditions that are more numerically stable.
We carried out numerical simulations to compare the performance of the conjugate gradient method with that of the D.C. method, both applied to solving trust-region quadratic subproblems.
We adapted the D.C. method to solve the vertical subproblems, which allowed us to reduce their dimension from $n+m$ to $m+p$ ($p < n$).
The progress of the algorithm is controlled by a merit function. Numerical tests compare the advantages of different forms of the merit function, and we introduced new rules to improve this progress.
Numerical experiments show a gain in the number of problems solved. A study of the convergence of our SDC method closes this work.
Kadiri, Abderrahim. "Analyse des méthodes des points intérieurs pour les problèmes de complémentarité linéaire et la programmation quadratique convexe." INSA de Rouen, 2001. http://www.theses.fr/2001ISAM0008.
Seguin, Jean-Philippe. "Simulation thermomécanique de structures en alliages à mémoire de forme par la méthode des points intérieurs." Versailles-St Quentin en Yvelines, 2010. http://www.theses.fr/2010VERS0016.
Shape Memory Alloys (SMA) are materials whose mechanical behaviour depends on thermal loading, so models must couple thermal and mechanical effects to describe this kind of material properly. This PhD thesis concerns the development of numerical tools for simulating the thermomechanical evolution of 3D SMA structures. In the approach presented, a crucial point is the reformulation of the incremental problem as a linear complementarity problem, which makes it possible to use interior point algorithms to solve the discretized evolution equations. Test simulations at fixed temperature validated this approach. Finally, further simulations were run to study the influence of the thermomechanical coupling on the structural response
Halard, Matthieu. "Méthodes du second ordre pour la conception optimale en élasticité non-linéaire." Paris 9, 1999. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=1999PA090029.
Akoa, François Bertrand. "Approches de points intérieurs et de la programmation DC en optimisation non convexe. Codes et simulations numériques industrielles." Rouen, INSA, 2005. http://www.theses.fr/2005ISARA001.
Jonsson, Xavier. "Méthodes de points intérieurs et de régions de confiance en optimisation non linéaire et application à la conception de verres ophtalmiques progressifs." Paris 6, 2002. http://www.theses.fr/2002PA066191.
Davy, Guillaume. "Génération de codes et d'annotations prouvables d'algorithmes de points intérieurs à destination de systèmes embarqués critiques." Thesis, Toulouse, ISAE, 2018. http://www.theses.fr/2018ESAE0034/document.
In industry, the use of optimization is ubiquitous. Optimization consists of computing the best solution subject to a number of constraints. However, this computation is complex, long and not always reliable. This is why the task has long been confined to the design stages, which left time to run the computation, check that the solution is correct and, if necessary, redo the computation. In recent years, thanks to the ever-increasing power of computers, industry has begun to integrate optimization at the heart of its systems: optimization computations are carried out continuously within the system, sometimes dozens of times per second, so it is impossible to check the solution a posteriori or to restart a calculation. It is therefore important to verify that the optimization program is perfectly correct and bug-free. The objective of this thesis was to develop tools and methods to meet this need. To do so, we used the theory of formal proof, which treats a program as a mathematical object: the object takes input data and produces a result, and under certain conditions on the inputs one can prove that the result meets the requirements. Our job was to choose an optimization program and formally prove that the result of this program is correct
Vu, Duc Thach Son. "Numerical resolution of algebraic systems with complementarity conditions : Application to the thermodynamics of compositional multiphase mixtures." Electronic Thesis or Diss., université Paris-Saclay, 2020. http://www.theses.fr/2020UPASG006.
In reservoir simulators, it is usually delicate to take into account the laws of thermodynamic equilibrium for multiphase hydrocarbon mixtures. The difficulty lies in handling the appearance and disappearance of phases for the different species. The traditional dynamic approach, known as variable switching, consists in considering only the unknowns and equations of the phases present; it is cumbersome and costly, insofar as "switching" occurs constantly, even from one Newton iteration to the next. An alternative approach, called the unified formulation, allows a fixed set of unknowns and equations to be maintained during the calculations. From a theoretical point of view, this is a major advance. On the practical level, because of the nonsmoothness of the complementarity equations involved in the new formulation, the discretized equations have to be solved by the semismooth Newton-min method, whose behavior is often pathological. In order to fully exploit the interest of the unified approach, this thesis aims at circumventing this numerical obstacle by means of more robust resolution algorithms with better convergence. To this end, we draw inspiration from methods that have proven their worth in constrained optimization and transpose them to general systems. This gives rise to interior-point methods, of which we propose a nonparametric version called NPIPM, whose results appear superior to those of Newton-min. Another contribution of this doctoral work is the understanding and (partial) resolution of another obstruction to the proper functioning of the unified formulation, hitherto unidentified in the literature: the limited domain of definition of the Gibbs functions associated with cubic equations of state. To remedy the possible non-existence of a solution of the system, we advocate a natural extension of the Gibbs functions
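The Newton-min method whose pathological behavior motivates this thesis operates on a min-reformulation of the complementarity conditions. A one-dimensional sketch of the iteration follows; the test functions are made up for illustration, and this is far simpler than the NPIPM alternative the thesis proposes:

```python
def newton_min(F, dF, x0, iters=30):
    # semismooth Newton on H(x) = min(x, F(x)) = 0, which encodes the
    # complementarity conditions x >= 0, F(x) >= 0, x * F(x) = 0
    x = x0
    for _ in range(iters):
        if x < F(x):            # active branch: H(x) = x, slope 1
            h, dh = x, 1.0
        else:                   # active branch: H(x) = F(x), slope F'(x)
            h, dh = F(x), dF(x)
        if abs(h) < 1e-12:
            break
        x -= h / dh             # Newton step on the active branch
    return x
```

On benign examples the iteration lands on the complementary solution in a few steps; the thesis documents how, on realistic thermodynamic systems, this branch switching can cycle or stall, which NPIPM is designed to avoid.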
Taha, Khaled. "Analyse numérique d'algorithmes pour la programmation linéaire-quadratique généralisée." Rouen, 1995. http://www.theses.fr/2000ROUES035.
In this work, we present a theoretical and numerical study of extended linear-quadratic programming. We start by bringing out the various properties of the objective and relating optimality to variational inequalities and linear complementarity problems. To solve the problem numerically, we first adapt an SQP variant of the quasi-Newton method BFGS, and we suggest the proximal point algorithm for the nondifferentiable case. We then turn to interior point methods and propose a new method based on solving a sequence of quasi-definite systems, taking advantage of the particular structure of these systems. Afterwards, we generalize our study to the minimax problem, analysing two important cases: polyhedral constraints and linear matrix inequalities. Finally, we apply our results to problems of dynamic and stochastic optimisation. The numerical simulations carried out in this work confirm the efficiency of our method
Malisani, Paul. "Pilotage dynamique de l'énergie du bâtiment par commande optimale sous contraintes utilisant la pénalisation intérieure." Phd thesis, Ecole Nationale Supérieure des Mines de Paris, 2012. http://pastel.archives-ouvertes.fr/pastel-00740044.
Corbineau, Marie-Caroline. "Proximal and interior point optimization strategies in image recovery." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLC085/document.
Inverse problems in image processing can be solved by diverse techniques, such as classical variational methods, recent deep learning approaches, or Bayesian strategies. Although relying on different principles, these methods all require efficient optimization algorithms. The proximity operator appears as a crucial tool in many iterative solvers for nonsmooth optimization problems. In this thesis, we illustrate the versatility of proximal algorithms by incorporating them within each one of the aforementioned resolution methods.First, we consider a variational formulation including a set of constraints and a composite objective function. We present PIPA, a novel proximal interior point algorithm for solving the considered optimization problem. This algorithm includes variable metrics for acceleration purposes. We derive convergence guarantees for PIPA and show in numerical experiments that it compares favorably with state-of-the-art algorithms in two challenging image processing applications.In a second part, we investigate a neural network architecture called iRestNet, obtained by unfolding a proximal interior point algorithm over a fixed number of iterations. iRestNet requires the expression of the logarithmic barrier proximity operator and of its first derivatives, which we provide for three useful types of constraints. Then, we derive conditions under which this optimization-inspired architecture is robust to an input perturbation. We conduct several image deblurring experiments, in which iRestNet performs well with respect to a variational approach and to state-of-the-art deep learning methods.The last part of this thesis focuses on a stochastic sampling method for solving inverse problems in a Bayesian setting. We present an accelerated proximal unadjusted Langevin algorithm called PP-ULA. This scheme is incorporated into a hybrid Gibbs sampler used to perform joint deconvolution and segmentation of ultrasound images. 
PP-ULA employs the majorize-minimize principle to address non log-concave priors. As shown in numerical experiments, PP-ULA leads to a significant time reduction and to very satisfactory deconvolution and segmentation results on both simulated and real ultrasound data
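The proximity operator central to these algorithms has a closed form for many functions; the canonical example is soft-thresholding, the prox of the scaled l1 norm (a generic illustration, not the logarithmic-barrier prox derived in the thesis):

```python
def prox_l1(v, lam):
    # prox of lam * ||.||_1 at v: componentwise soft-thresholding, i.e.
    # the unique minimizer of lam * ||x||_1 + 0.5 * ||x - v||^2
    def soft(x):
        if x > lam:
            return x - lam
        if x < -lam:
            return x + lam
        return 0.0
    return [soft(x) for x in v]
```

Iterative solvers such as the proximal interior point or Langevin schemes above apply an operator of this kind once per iteration, which is why a cheap closed form matters.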
Rebai, Raja. "Optimisation de réseaux de télécommunications avec sécurisation." Phd thesis, Université Paris Dauphine - Paris IX, 2000. http://tel.archives-ouvertes.fr/tel-00010841.
Kallel, Emna. "Une synthèse sur les méthodes du point intérieur." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ35688.pdf.
Takouda, Pawoumodom Ledogada. "Problèmes d'approximation matricielle linéaires coniques : approches par projections et via optimisation sous contraintes de semidéfinie positivité." Toulouse 3, 2003. http://www.theses.fr/2003TOU30129.
Le, Manh Hung. "Études mathématiques et numériques de la complémentarité aux valeurs propres et des problèmes d'accélération dans l'optimisation du premier ordre." Electronic Thesis or Diss., Limoges, 2023. http://www.theses.fr/2023LIMO0104.
In this thesis, we explore two key topics. First, we carry out a mathematical and numerical study of the Pareto eigenvalue complementarity problem and its inverse counterpart. Our approach employs interior point methods, supplemented by a nonparametric smoothing technique, and the efficacy of the proposed methodologies is demonstrated through an array of numerical experiments. Shifting our focus to continuous optimization, we adopt a dynamical-systems perspective. Specifically, we study various inertial proximal gradient algorithms, discretized from a nonsmooth inertial dynamical system featuring dry friction and Hessian-driven damping. Additionally, we examine a doubly nonlinear evolution equation governed by two potentials, and the acceleration of its convergence through time scaling and averaging techniques, which results in inertial dynamics featuring dry friction and implicit Hessian-driven damping. The numerical tests corroborate the superior performance of the inertial systems over their first-order counterparts, in line with the theoretical results
Al, Kharboutly Mira. "Résolution d’un problème quadratique non convexe avec contraintes mixtes par les techniques de l’optimisation D.C." Thesis, Normandie, 2018. http://www.theses.fr/2018NORMLH06/document.
Our objective in this work is to solve a binary quadratic problem under mixed constraints using the techniques of DC optimization. Since DC optimization has proved its efficiency on large-scale problems in various domains, we chose this approach. The most important part of DC optimization is the choice of an adequate decomposition, one that eases the construction of, and speeds the convergence of, two sequences: the first converges to the optimal solution of the primal problem and the second to the optimal solution of the dual problem. In this work, we propose two decompositions that are efficient and simple to handle. Applying the DC Algorithm (DCA) leads us to solve, at each iteration, a convex quadratic problem with mixed linear and quadratic constraints, so we need an efficient and fast method for this subproblem. We apply three different methods: Newton's method, semidefinite programming and an interior point method, and we present comparative numerical results on the same benchmarks to justify our choice of the fastest method for solving this problem effectively
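The DCA scheme described in this abstract can be sketched on a toy problem. The decomposition below, f(x) = x^4 - 4x^2 with g(x) = x^4 and h(x) = 4x^2, is a hypothetical choice for illustration, not the authors' decomposition of the binary quadratic problem:

```python
def dca(grad_g_inv, grad_h, x0, iters=60):
    # generic DCA for f = g - h (g, h convex): linearize the concave part
    # -h at x_k and solve the convex subproblem, i.e. choose x_{k+1}
    # such that grad g(x_{k+1}) = grad h(x_k)
    x = x0
    for _ in range(iters):
        x = grad_g_inv(grad_h(x))
    return x

# decomposition f(x) = x^4 - 4x^2 as g(x) = x^4, h(x) = 4x^2:
# grad g(x) = 4x^3, so its inverse is y -> sign(y) * (|y|/4)^(1/3)
def grad_g_inv(y):
    return (1 if y >= 0 else -1) * (abs(y) / 4.0) ** (1.0 / 3.0)

def grad_h(x):
    return 8.0 * x
```

Starting from x0 = 0.5, the iterates converge to sqrt(2), a critical point of f since f'(x) = 4x^3 - 8x vanishes there; starting from a negative point yields the symmetric critical point -sqrt(2).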
Chatel, Gweltaz. "Comptage de points : application des méthodes cristallines." Rennes 1, 2007. http://www.theses.fr/2007REN1S023.
This thesis deals with computing the number of points of algebraic curves over finite fields. Using the stability of rigid cohomology with compact support under finite étale descent, we show that the computation of the cohomology groups of such a curve can be reduced to that of the cohomology groups of an isocrystal over an open subset of the affine line, and we build an algorithm achieving this operation in polynomial time. We then show that, using a lifting of Frobenius for an algebraic curve over a finite field computed by an algorithm presented in Gerkmann's thesis, we can count the points of the curve by applying the trace formula for rigid cohomology, finally obtaining a polynomial-time algorithm that works for a large class of curves. We furthermore derive complexity bounds for our algorithms, using techniques introduced by Lauder to control the absolute values of the elements of the cohomology basis we handle
Djellali, Assia. "Optimisation technico-économique d'un réseau d'énergie électrique dans un environnement dérégulé." Paris 11, 2003. http://www.theses.fr/2003PA112211.
The electric utility industry is undergoing a process of liberalization and deregulation. In this context, new difficulties arise in transmission network management and optimization: in addition to the classical difficulties encountered in a monopolistic context, such as the nature of the network constraints, the considerable size of the problem to be solved and the nonlinearity of the network equations, the optimization procedure has to take into account the new constraints related to the deregulation of the electrical energy market. The nature of this problem requires mathematical models that allow the optimization of a nonlinear criterion subject to nonlinear constraints. In this thesis we investigate two methods in order to identify, on the one hand, the difficulties related to solving a nonlinear optimization problem and, on the other hand, those related to network operation in a deregulated environment. The first is the so-called Newton-Lagrange method, applied to a simplified 5-bus network in a monopolistic context to achieve a technico-economical optimization; the goal is to determine the optimal power generation of each producer so as to ensure secure system operation and minimize operating costs. Even though convergence time can be considerable because of the inequality constraints, the method provides satisfactory results and serves as a basis for the second part. A second optimization tool, based on the primal-dual interior point method, is then developed and applied to a 12-bus test network to investigate and resolve the difficulties specific to a competitive environment, such as congestion and energy-loss management, the control of generation deviations, and the impact of new independent power producers entering an established network. An important advantage of this method is its capacity to treat inequality constraints easily. The reliable and robust optimization tool provides very satisfactory results
Primet, Maël. "Méthodes probabiliste pour le suivi de points et l'analyse d'images biologiques." Phd thesis, Université René Descartes - Paris V, 2011. http://tel.archives-ouvertes.fr/tel-00669220.
Demarche, Cyril. "Méthodes cohomologiques pour l’étude des points rationnels sur les espaces homogènes." Paris 11, 2009. http://www.theses.fr/2009PA112146.
Primet, Maël. "Méthodes probabilistes pour le suivi de points et l'analyse d'images biologiques." Paris 5, 2011. http://www.theses.fr/2011PA05S009.
The subject of this thesis is object tracking, approached using statistical methods. The first contribution of this work is a tracking algorithm for bacterial cells in a sequence of images that recovers their lineage; this work led to a software suite currently in use in a research laboratory. The second contribution is a theoretical study of the detection of trajectories in a cloud of points. We define a trajectory detector using the a-contrario statistical framework, which requires essentially no parameters. This detector yields remarkable results and is in particular able to detect trajectories in sequences containing a large number of noise points while keeping the number of false detections very low. We then study more specifically the correspondence problem between two point clouds, a problem often encountered in trajectory detection and in the matching of stereographic images. We first introduce a theoretically optimal model of the point correspondence problem that makes it possible to study the performance of several classical algorithms under a variety of conditions. We then formulate a parameterless point correspondence algorithm in the a-contrario framework, which enables us to define a new trajectory tracking algorithm
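The a-contrario framework used by this detector can be summarized in a few lines: a configuration is detected only if its expected frequency under a pure-noise model, the Number of False Alarms (NFA), falls below 1. The binomial form and the numbers below are a standard textbook instance, not the thesis's exact detector:

```python
from math import comb

def binom_tail(n, k, p):
    # P(X >= k) for X ~ Binomial(n, p)
    return sum(comb(n, i) * p ** i * (1.0 - p) ** (n - i) for i in range(k, n + 1))

def nfa(n_tests, n, k, p):
    # a-contrario Number of False Alarms: expected number, under the noise
    # model, of tested configurations at least as extreme as the observed
    # one; the configuration is declared meaningful (detected) if NFA < 1
    return n_tests * binom_tail(n, k, p)
```

With, say, 1000 candidate trajectories tested, 18 supporting points out of 20 when chance alignment has probability 0.1 gives an NFA far below 1 (detected), whereas 3 out of 20 gives an NFA far above 1 (rejected). Thresholding the NFA at 1 is what makes the detector essentially parameterless.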
Calenge, Clément. "Des outils statistiques pour l'analyse des semis de points dans l'espace écologique." Lyon 1, 2005. http://www.theses.fr/2005LYO10264.
Castro, Pedro Machado Manhães de. "Méthodes pour accélérer les triangulations de Delaunay." Nice, 2010. https://tel.archives-ouvertes.fr/tel-00531765.
This thesis proposes several new practical ways to speed up some of the most important operations on a Delaunay triangulation. We propose two approaches to compute a Delaunay triangulation for points on or close to a sphere: the first computes the Delaunay triangulation of points placed exactly on the sphere; the second directly computes the convex hull of the input set and gives some guarantees on the output. Both approaches are based on the regular triangulation on the sphere, and the second outperforms previous solutions. Updating a Delaunay triangulation when its vertices move is a bottleneck in several application domains. Rebuilding the whole triangulation from scratch is, surprisingly, a viable option compared to relocating the vertices; however, when all points move with a small magnitude, or when only a fraction of the vertices move, rebuilding is no longer the best option. We propose a filtering scheme based on the concept of vertex tolerances, and we conducted several experiments to showcase the behavior of the algorithm on a variety of data sets. The experiments showed that the algorithm is particularly relevant for convergent schemes such as Lloyd iterations: in two dimensions, it performs up to an order of magnitude faster than rebuilding, and in three dimensions, although rebuilding the whole triangulation at each time step when all vertices move can be as fast as our algorithm, our solution is fully dynamic and outperforms previous dynamic solutions. This makes it possible to run more iterations and thus produce higher-quality meshes. Point location in a spatial subdivision is one of the most studied problems in computational geometry. In the case of triangulations of R^d, we revisit the problem to exploit possible coherence between the query points. We analyze, implement, and evaluate a distribution-sensitive point location algorithm based on the classical Jump & Walk, called Keep, Jump, & Walk: for a batch of query points, the main idea is to use previous queries to improve the retrieval of the current one. Regarding point location in a Delaunay triangulation, we show how the Delaunay hierarchy can be used to answer, under some hypotheses, a query q with a O(log #(pq)) randomized expected complexity, where p is a previously located query and #(s) denotes the number of simplices crossed by the line segment s. We combine the good distribution-sensitive behavior of Keep, Jump, & Walk with the good complexity of the Delaunay hierarchy into a novel point location algorithm called Keep, Jump, & Climb. To the best of our knowledge, Keep, Jump, & Climb is the first practical distribution-sensitive algorithm that works both in theory and in practice for Delaunay triangulations: in our experiments, it is faster than the Delaunay hierarchy regardless of the spatial coherence of queries, and significantly faster when queries have reasonable spatial coherence
Surroca, Ortiz Andrea. "Méthodes de transcendance et géométrie diophantienne." Paris 6, 2003. http://www.theses.fr/2003PA066313.
Akoa, François. "Approches de points intérieurs et de la programmation DC en optimisation non convexe. Codes et simulations numériques industrielles." Phd thesis, INSA de Rouen, 2005. http://tel.archives-ouvertes.fr/tel-00008475.
The thesis comprises three parts:
The first part is devoted to local optimization techniques and is organized around interior point methods and DC programming; we develop two algorithms there. After a non-exhaustive presentation of DC programming, of interior point methods and of the essential properties of the class of quasi-definite matrices in chapter one, we present in chapter two a new algorithm based on a reformulation of the Karush-Kuhn-Tucker optimality conditions. The third chapter is devoted to integrating DC optimization techniques into an interior point scheme: this is the IPDCA algorithm.
The second part of the thesis is devoted to global solutions of quadratic programming problems. In the first chapter of this part we explore the integration of IPDCA into a branch-and-bound scheme. The second chapter addresses the solution of quadratic problems with 0-1 variables by a branch-and-bound scheme in which IPDCA is involved. The third chapter is devoted to monotone optimization, due to Professor Tuy; we examine in particular its integration into a branch-and-bound scheme in which DCA is called to improve the upper bound.
The fourth and last chapter of this part is devoted to a restart procedure for DCA.
The last part of the thesis is devoted to industrial applications. We apply the two algorithms developed in the first part to a large-scale structural mechanics problem and to a data mining problem.
Calvet, Lilian. "Méthodes de reconstruction tridimensionnelle intégrant des points cycliques : application au suivi d’une caméra." Phd thesis, Toulouse, INPT, 2014. http://oatao.univ-toulouse.fr/11901/1/Calvet.pdf.
Calvet, Lilian. "Méthodes de reconstruction tridimensionnelle intégrant des points cycliques : application au suivi d'une caméra." Phd thesis, Institut National Polytechnique de Toulouse - INPT, 2014. http://tel.archives-ouvertes.fr/tel-00981191.
Czarnecki, Marc-Olivier. "Méthodes d’approximation pour des problèmes non différentiables : application à l’existence de points fixes." Paris 1, 1996. http://www.theses.fr/1996PA010075.
Bouju, Alain. "Etiquetage et poursuite de points caractéristiques d'un objet 3D par des méthodes connexionistes." Toulouse, ENSAE, 1993. http://www.theses.fr/1993ESAE0017.
Bornschlegell, Augusto Salomao. "Optimisation aérothermique d'un alternateur à pôles saillants pour la production d'énergie électrique décentralisée." Thesis, Valenciennes, 2012. http://www.theses.fr/2012VALE0023/document.
This work concerns the thermal optimization of an electrical machine. A lumped-parameter method is used to simulate the temperature field: the model solves the heat equation in three dimensions, in cylindrical coordinates, in transient or steady state, and considers two transport mechanisms, conduction and convection. The model is driven by 13 design variables that correspond to the main flow rates of the equipment, and we analyse the machine's cooling performance by varying these 13 flow rates. Before tackling such a complicated geometry, we studied a simpler case in order to better understand the variety of available optimization tools. The experience gained on the simpler case is then applied to the thermal optimization of the electrical machine, which is evaluated from the thermal point of view by combining two criteria, the maximum and the mean temperature, with constraints used to keep the problem consistent. We solved the problem using gradient-based methods (active-set and interior-point) and genetic algorithms
Bornschlegell, Augusto. "Optimisation aérothermique d'un alternateur à pôles saillants pour la production d'énergie électrique décentralisée." Phd thesis, Université de Valenciennes et du Hainaut-Cambresis, 2012. http://tel.archives-ouvertes.fr/tel-00768249.
Seghir, Rachid. "Méthodes de dénombrement de points entiers de polyèdres et applications à l'optimisation de programmes." Université Louis Pasteur (Strasbourg) (1971-2008), 2006. https://publication-theses.unistra.fr/public/theses_doctorat/2006/SEGHIR_Rachid_2006.pdf.
The polyhedral model is a well-known framework in the field of automatic program optimization. Iterations and array references in affine loop nests are represented by integer points in bounded polyhedra, or (parametric) Z-polytopes. In this thesis, three new counting algorithms are developed: counting the integer points in a parametric Z-polytope, in a union of parametric Z-polytopes, and in their images by affine functions. The result of such a count is given by one or more multivariate polynomials whose coefficients may be periodic numbers. These polynomials, known as Ehrhart quasipolynomials, are defined on subsets of the parameter values called validity domains or chambers. Many affine loop nest analysis and optimization methods require such counting algorithms. We applied them to array linearization, which achieves memory compression and improves the spatial locality of accessed data. Beyond program optimization, the proposed algorithms have many other applications, for example in mathematics and economics.
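The central object of this abstract, an Ehrhart quasipolynomial with periodic coefficients, can be shown on a small example of my own choosing (the polytope below is illustrative, not taken from the thesis). For P(n) = {(x, y) : x >= 0, y >= 0, 2x + y <= n}, a brute-force count agrees with a degree-2 quasipolynomial whose constant coefficient is periodic in n with period 2:

```python
def count_points(n):
    # Brute-force count of integer points in P(n) = {(x, y) : x >= 0, y >= 0, 2x + y <= n}.
    return sum(1 for x in range(n + 1) for y in range(n + 1) if 2 * x + y <= n)

def ehrhart(n):
    # Ehrhart quasipolynomial of P(n): n^2/4 + n + c0(n), where the constant
    # coefficient c0 is a periodic number of period 2 (1 if n even, 3/4 if n odd).
    c0 = 1.0 if n % 2 == 0 else 0.75
    return n * n / 4 + n + c0

# The closed form matches the enumeration for every parameter value.
for n in range(12):
    assert count_points(n) == ehrhart(n)
```

Production tools compute such quasipolynomials symbolically (e.g. via Barvinok-style algorithms) rather than by enumeration; the enumeration here only verifies the closed form.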
Itier, Vincent. "Nouvelles méthodes de synchronisation de nuages de points 3D pour l'insertion de données cachées." Thesis, Montpellier, 2015. http://www.theses.fr/2015MONTS017/document.
This thesis addresses issues relating to the protection of 3D object meshes. For instance, these objects can be created using CAD tools developed by the company STRATEGIES. In an industrial context, creators of 3D meshes need tools to verify mesh integrity or to check permission for 3D printing, for example. In this context we study data hiding on 3D meshes. This approach allows us to insert information in a secure and imperceptible way in a mesh. This may be an identifier, a meta-information or third-party content, for instance in order to transmit a texture secretly. Data hiding can address these problems by adjusting the trade-off between capacity, imperceptibility and robustness. Generally, data-hiding methods consist of two stages: synchronization and embedding. The synchronization stage consists of finding and ordering the components available for insertion. One of the main challenges is to propose an effective synchronization method that defines an order on mesh components. In our work, we propose to use mesh vertices, specifically their geometric representation in space, as the basic components for synchronization and embedding. We present three new synchronization methods based on the construction of a Hamiltonian path in a vertex cloud. Two of these methods perform the synchronization and embedding stages jointly. This is possible thanks to two new high-capacity embedding methods (from 3 to 24 bits per vertex) that rely on coordinate quantization. In this work we also highlight the constraints of this kind of synchronization. We analyze the proposed approaches with several experimental studies. Our work is assessed on various criteria, including the capacity and imperceptibility of the embedding method. We also pay attention to the security aspects of the proposed methods.
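The embedding-by-coordinate-quantization idea mentioned in this abstract can be sketched in a few lines. This is one plausible reading only, not the thesis's actual scheme: the parity-of-quantization-index rule, the step size, and the function names below are hypothetical, and the Hamiltonian-path synchronization stage (which decides vertex order) is not shown.

```python
def embed_bit(z, bit, step=0.001):
    # Quantize the coordinate on a grid of size `step` and force the parity
    # of the quantization index to match the message bit (hypothetical rule).
    q = round(z / step)
    if q % 2 != bit:
        q += 1          # flip parity; distortion stays below 1.5 * step
    return q * step

def extract_bit(z, step=0.001):
    # Recover the bit from the parity of the quantization index.
    return round(z / step) % 2

# Embed one bit per vertex in the z coordinate, vertex order being assumed
# fixed by a prior synchronization stage.
vertices = [(0.12345, 0.5, 0.98765), (0.2, 0.3, 0.11111)]
message = [1, 0]
marked = [(x, y, embed_bit(z, b)) for (x, y, z), b in zip(vertices, message)]
recovered = [extract_bit(z) for _, _, z in marked]
```

A finer grid lowers the geometric distortion (imperceptibility) at the cost of robustness to coordinate noise; embedding several bits per coordinate, as in the 3-to-24-bits-per-vertex methods cited, would use more of the quantization index than its parity.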
Keraghel, Abdelkrim. "Étude adaptative et comparative des principales variantes dans l'algorithme de Karmarkar." Phd thesis, Grenoble 1, 1989. http://tel.archives-ouvertes.fr/tel-00332749.
De Castro, Pedro. "Méthodes pour accélérer les triangulations de Delaunay." Phd thesis, Université de Nice Sophia-Antipolis, 2010. http://tel.archives-ouvertes.fr/tel-00531765.
Arnaud, Elise. "Méthodes de filtrage pour du suivi dans des séquences d'images. Application au suivi de points caractéristiques." Rennes 1, 2004. http://www.theses.fr/2004REN10101.
Guillemot, Thierry. "Méthodes et structures non locales pour la restauration d'images et de surfaces 3D." Thesis, Paris, ENST, 2014. http://www.theses.fr/2014ENST0006/document.
In recent years, digital technologies for acquiring real-world objects or scenes have improved significantly, making it possible to obtain high-quality datasets. However, the acquired signal is corrupted by defects which cannot be corrected at the hardware level and require the use of suitable restoration methods. Until the mid-2000s, these approaches were based only on a local process applied to the damaged signal. With the improvement of computing performance, the neighborhood used by the filter has been extended to the entire acquired dataset by exploiting its self-similar nature. These non-local approaches have mainly been used to restore regular and structured data such as images, but in the extreme case of irregular and unstructured data such as 3D point sets, their adaptation has so far been little investigated. With the increasing amount of data exchanged over communication networks, new non-local methods have recently been proposed. These can improve the quality of the restoration by using an a priori model extracted from large datasets; however, this kind of method is time- and memory-consuming. In this thesis, we first propose to extend non-local methods to 3D point sets by defining a point-set surface which exploits the self-similarity of the point cloud. We then introduce a new flexible and generic data structure, called the CovTree, allowing the distribution of a large set of samples to be learned with a limited memory capacity. Finally, we generalize collaborative restoration methods applied to 2D and 3D data by using our CovTree to learn a statistical a priori model from a large dataset.
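The non-local principle this abstract builds on, replacing each sample by a weighted average of all samples with similar neighborhoods rather than only nearby ones, can be sketched on a 1D signal. This is the classic non-local means idea in a toy setting, not the thesis's point-set surface or CovTree; the function name, parameters, and signals below are illustrative assumptions.

```python
import math

def nl_means_1d(signal, patch=1, h=0.5):
    # Non-local means on a 1D signal: each sample becomes a weighted average
    # of ALL samples, weighted by how similar their surrounding patches are.
    n = len(signal)
    out = []
    for i in range(n):
        num = den = 0.0
        for j in range(n):
            # Squared distance between the patches centred at i and j
            # (indices clamped at the borders).
            d = 0.0
            for k in range(-patch, patch + 1):
                a = signal[min(max(i + k, 0), n - 1)]
                b = signal[min(max(j + k, 0), n - 1)]
                d += (a - b) ** 2
            w = math.exp(-d / (h * h))   # similar patches get large weights
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out

# Noisy step signal: samples on the same plateau average together, because
# cross-plateau patches are dissimilar and receive negligible weight.
noisy = [0.0, 0.1, -0.1, 0.05, 1.0, 0.9, 1.1, 0.95]
denoised = nl_means_1d(noisy)
```

The quadratic cost in the number of samples is exactly what motivates acceleration structures such as the CovTree described in the abstract, which summarizes the sample distribution instead of scanning it exhaustively.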
Atallah, Nabil. "Analyse des méthodes itératives par points pour les problèmes de diffusion-convection approchés par les schémas compacts." Toulouse 3, 2002. http://www.theses.fr/2002TOU30010.
Pouliquen, Mathieu. "De l'identification des systèmes : points de vue sur les méthodes des sous-espaces, identification pour la commande." Caen, 2003. http://www.theses.fr/2003CAEN2074.
Guillemot, Thierry. "Méthodes et structures non locales pour la restauration d'images et de surfaces 3D." Electronic Thesis or Diss., Paris, ENST, 2014. http://www.theses.fr/2014ENST0006.