Dissertations / Theses on the topic 'Algorithme de réduction d’erreur'
Consult the top 41 dissertations / theses for your research on the topic 'Algorithme de réduction d’erreur.'
Khoder, Jihan. "Nouvel Algorithme pour la Réduction de la Dimensionnalité en Imagerie Hyperspectrale." Phd thesis, Université de Versailles-Saint Quentin en Yvelines, 2013. http://tel.archives-ouvertes.fr/tel-00939018.
Khoder, Jihan Fawaz. "Nouvel algorithme pour la réduction de la dimensionnalité en imagerie hyperspectrale." Versailles-St Quentin en Yvelines, 2013. http://www.theses.fr/2013VERS0037.
In hyperspectral imaging, the volumes of data acquired often reach the gigabyte for a single observed scene. The analysis of these data of complex physical content must therefore be preceded by a dimensionality reduction step. This reduction has two objectives: the first is to reduce redundancy, and the second is to facilitate post-processing (extraction, classification and recognition) and therefore the interpretation of the data. Automatic classification is an important step in the process of knowledge extraction from data. It aims to discover the intrinsic structure of a set of objects by forming groups that share similar characteristics. In this thesis, we focus on dimensionality reduction in the context of unsupervised classification of spectral bands. Different approaches exist, such as those based on the (linear or nonlinear) projection of high-dimensional data onto representation subspaces, or on spectral band selection techniques exploiting criteria of complementarity and redundancy of information, but they do not preserve the full wealth of information provided by this type of data
Allier, Pierre-Eric. "Contrôle d’erreur pour et par les modèles réduits PGD." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLN063/document.
Many structural mechanics problems require the resolution of several similar numerical problems. An iterative model reduction approach, the Proper Generalized Decomposition (PGD), enables the control of the main solutions at once, by the introduction of additional parameters. However, a major drawback to its use in the industrial world is the absence of a robust error estimator to measure the quality of the solutions obtained. The approach used is based on the concept of constitutive relation error. This method consists in constructing admissible fields, thus ensuring the conservative and guaranteed aspect of the error estimate while reusing the maximum number of tools from the finite element framework. The ability to quantify the importance of the different sources of error (reduction and discretization) allows controlling the main PGD resolution strategies. Two strategies have been proposed in this work. The first was limited to post-processing a PGD solution to construct an estimate of the error committed, in a non-intrusive way for existing PGD codes. The second consists of a new PGD strategy providing an improved approximation associated with an estimate of the error committed. The various comparative studies are carried out in the context of linear thermal and elasticity problems. This work also allowed us to optimize the admissible field construction methods by substituting the resolution of many similar problems by a PGD solution, exploited as a virtual chart
Zapien, Durand-Viel Karina. "Algorithme de chemin de régularisation pour l'apprentissage statistique." Phd thesis, INSA de Rouen, 2009. http://tel.archives-ouvertes.fr/tel-00557888.
Driant, Thomas. "Réduction de la traînée aérodynamique et refroidissement d'un tricycle hybride par optimisation paramétrique." Thèse, Université de Sherbrooke, 2015. http://hdl.handle.net/11143/6990.
Full textDelestre, Barbara. "Reconstruction 3D de particules dans un écoulement par imagerie interférométrique." Electronic Thesis or Diss., Normandie, 2022. http://www.theses.fr/2022NORMR116.
For many industrial or environmental applications, it is important to measure the size and volume of irregularly shaped particles. This is for example the case in the context of aircraft icing which occurs during flights, where it is necessary to measure in situ the water content and the ice content in the troposphere in order to detect and avoid risk areas. Our interest has been in interferometric out-of-focus imaging, an optical technique offering many advantages (wide measurement field, extended range of sizes studied [50 μm to a few millimeters], particle-to-device distance of several tens of centimeters, ...). During this thesis, we showed that the 3D reconstruction of a particle can be done from a set of three interferometric images of this particle (under three perpendicular viewing angles). This can be done using the error reduction (ER) algorithm, which allows obtaining the function f(x,y) from measurements of the modulus of its 2D Fourier transform |F(u,v)|, by reconstructing the phase of its 2D Fourier transform. The implementation of this algorithm allowed us to reconstruct the shape of irregular particles from their interferometric images. Experimental demonstrations were carried out using a specific setup based on the use of a micro-mirror array (DMD) which generates the interferometric images of programmable rough particles. The results obtained are very encouraging. The volumes obtained remain quite close to the real volume of the particle and the reconstructed 3D shapes give us a good idea of the general shape of the particle studied, even in the most extreme cases where the orientation of the particle is arbitrary. Finally, we showed that an accurate 3D reconstruction of a "programmed" rough particle can be performed from a set of 120 interferometric images
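The error reduction (ER) algorithm cited in this abstract is the classical iterative Fourier-transform scheme that alternates between the measured Fourier modulus and object-domain constraints. The sketch below is a minimal, generic version assuming a non-negative object and a known boolean support mask; function and parameter names are illustrative and not taken from the thesis.

```python
import numpy as np

def error_reduction(fourier_modulus, support, n_iter=200, seed=0):
    """Recover f(x, y) from |F(u, v)| by alternating projections (ER algorithm)."""
    rng = np.random.default_rng(seed)
    # Start from the measured modulus with a random phase guess.
    phase = np.exp(2j * np.pi * rng.random(fourier_modulus.shape))
    f = np.fft.ifft2(fourier_modulus * phase).real
    for _ in range(n_iter):
        F = np.fft.fft2(f)
        # Fourier-domain constraint: keep the current phase, impose the measured modulus.
        F = fourier_modulus * np.exp(1j * np.angle(F))
        f = np.fft.ifft2(F).real
        # Object-domain constraint: non-negativity inside the support, zero outside.
        f = np.where(support & (f > 0), f, 0.0)
    return f
```

In the interferometric setting described above, the modulus would be derived from the out-of-focus interferogram of the particle; here it is simply an input array.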
Karina, Zapien. "Algorithme de Chemin de Régularisation pour l'apprentissage Statistique." Phd thesis, INSA de Rouen, 2009. http://tel.archives-ouvertes.fr/tel-00422854.
The usual approach for determining these hyperparameters is to use a "grid": one chooses a set of possible values and, for each of these values, estimates the generalization error of the best model. This thesis focuses on an alternative approach consisting in computing the whole set of possible solutions for all values of the hyperparameters; this is what is called the regularization path. It turns out that, for the learning problems of interest here, parametric quadratic programs, the regularization path associated with certain hyperparameters is piecewise linear, and its computation has a numerical complexity of the order of an integer multiple of the cost of computing a model for a single set of hyperparameters.
The thesis is organized in three parts. The first gives the general framework of SVM-type learning problems (Support Vector Machines) together with the theoretical and algorithmic tools needed to address them. The second part deals with supervised learning for classification and ranking within the SVM framework; we show that the regularization path of these problems is piecewise linear, a result that allows us to develop original classification and ranking algorithms. The third part successively addresses semi-supervised and unsupervised learning. For semi-supervised learning, we introduce a sparsity criterion and propose the associated regularization path algorithm. For unsupervised learning, we use a dimensionality reduction approach: unlike similarity-graph methods that use a fixed number of neighbors, we introduce a new method allowing an adaptive and appropriate choice of the number of neighbors.
Nouet, Christophe. "Réduction de l'ordre des systèmes continus, linéaires, via un processus d'orthogonalisation et un algorithme de gauss newton." Brest, 1994. http://www.theses.fr/1994BRES2040.
Simon, Frank. "Contrôle actif appliqué à la réduction du bruit interne d'aéronefs." Toulouse, ENSAE, 1997. http://www.theses.fr/1997ESAE0002.
Zapién, Arreola Karina. "Algorithme de chemin de régularisation pour l'apprentissage statistique." Thesis, Rouen, INSA, 2009. http://www.theses.fr/2009ISAM0001/document.
The selection of a proper model is an essential task in statistical learning. In general, for a given learning task, a set of parameters has to be chosen, each parameter corresponding to a different degree of "complexity". In this situation, the model selection procedure becomes a search for the optimal "complexity", allowing us to estimate a model that assures a good generalization. This model selection problem can be summarized as the calculation of one or more hyperparameters defining the model complexity, in contrast to the parameters that allow specifying a model in the chosen complexity class. The usual approach to determine these parameters is to use a "grid search". Given a set of possible values, the generalization error for the best model is estimated for each of these values. This thesis is focused on an alternative approach consisting in calculating the complete set of possible solutions for all hyperparameter values. This is what is called the regularization path. It can be shown that for the problems we are interested in, parametric quadratic programming (PQP), the corresponding regularization path is piecewise linear. Moreover, its calculation is no more complex than calculating a single PQP solution. This thesis is organized in three chapters; the first one introduces the general setting of a learning problem under the Support Vector Machines (SVM) framework together with the theory and algorithms that allow us to find a solution. The second part deals with supervised learning problems for classification and ranking using the SVM framework. It is shown that the regularization path of these problems is piecewise linear, and alternative proofs to the one of Rosset [Ross 07b] are given via the subdifferential. These results lead to the corresponding algorithms to solve the mentioned supervised problems. The third part deals with semi-supervised learning problems followed by unsupervised learning problems. For semi-supervised learning, a sparsity constraint is introduced along with the corresponding regularization path algorithm. Graph-based dimensionality reduction methods are used for unsupervised learning problems. Our main contribution is a novel algorithm that allows choosing the number of nearest neighbors in an adaptive and appropriate way, contrary to classical approaches based on a fixed number of neighbors
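The piecewise-linear regularization path discussed in this abstract is specific to parametric quadratic programs such as SVMs. As a readily runnable illustration of the same idea, the sketch below computes the piecewise-linear LASSO path with LARS via scikit-learn, rather than the SVM and ranking paths developed in the thesis; the data are synthetic and all names are illustrative.

```python
import numpy as np
from sklearn.linear_model import lars_path

# Synthetic regression problem with a sparse ground truth.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
true_coef = np.array([3.0, -2.0, 0, 0, 1.5, 0, 0, 0, 0, 0])
y = X @ true_coef + 0.1 * rng.standard_normal(100)

# The whole path is returned at once: between two consecutive alphas the
# coefficients vary linearly, so computing it costs little more than one fit.
alphas, active, coefs = lars_path(X, y, method="lasso")
print(coefs.shape)  # (n_features, n_kinks): coefficient values at each breakpoint
```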
Oulefki, Abdelhakim. "Réduction de modèles thermiques par amalgame modal." Phd thesis, Ecole Nationale des Ponts et Chaussées, 1993. http://tel.archives-ouvertes.fr/tel-00523620.
Audoux, Yohann. "Développement d'une nouvelle méthode de réduction de modèle basée sur les hypersurfaces NURBS (Non-Uniform Rational B-Splines)." Thesis, Paris, ENSAM, 2019. http://www.theses.fr/2019ENAM0016/document.
Despite undeniable progress achieved in computer sciences over the last decades, some problems remain intractable, either because of their numerical complexity (optimisation problems, ...) or because they are subject to specific constraints such as real-time processing (virtual and augmented reality, ...). In this context, metamodeling techniques can minimise the computational effort needed to realize complex multi-field and/or multi-scale simulations. The metamodeling process consists of setting up a metamodel that needs fewer resources to be evaluated than the complex model it is extracted from, while guaranteeing a minimal accuracy. Current methods generally require either the user's expertise or arbitrary choices. Moreover, they are often tailored for a specific application and can hardly be transposed to other fields. Thus, even if it is not the best, our approach aims at obtaining a metamodel that remains a good one for whatever problem is at hand. The developed strategy relies on NURBS hypersurfaces and stands out from existing ones by avoiding the use of empirical criteria to set its parameters. To do so, a metaheuristic (a genetic algorithm) able to deal with optimisation problems defined over a variable number of optimisation variables sets all the hypersurface parameters automatically, so that the complexity is not transferred to the user
Gama, Nicolas. "Géométrie des nombres et cryptanalyse de NTRU." Paris 7, 2008. http://www.theses.fr/2008PA077199.
Public-key cryptography, invented by Diffie and Hellman in 1976, is now part of everyday life: credit cards, game consoles and electronic commerce use public-key schemes. The security of certain cryptosystems, like NTRU, is based on problems arising from the geometry of numbers, including the shortest vector problem and the closest vector problem in Euclidean lattices. While these problems are mostly NP-hard, it is still possible to compute good approximations in practice. In this thesis, we study approximation algorithms for these lattice reduction problems, which operate either in proved polynomial time, or more generally in reasonable time. We first analyze the functioning of these algorithms from a theoretical point of view, which allows us to build, for example, the algorithm with the best proved complexity and quality of results. But we also study the practical aspects, through many simulations, which allows us to highlight an important difference between the complexity and quality properties that we can prove and those (much better) that can be achieved in practice. These simulations also allow us to correctly predict the actual behavior of lattice reduction algorithms. We study these algorithms first in the general case, and then we show how to make specialized versions for the very particular lattices drawn from the NTRU cryptosystem
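For intuition about what lattice reduction computes, here is a minimal sketch of the two-dimensional Lagrange-Gauss reduction, the simplest instance of the shortest-vector computations mentioned above; it is a textbook illustration rather than code from the thesis, whose LLL-type algorithms in higher dimension are considerably more involved.

```python
import numpy as np

def gauss_reduce(u, v):
    """Lagrange-Gauss reduction of a 2-D lattice basis: the returned first
    vector is a shortest non-zero vector of the lattice spanned by u and v."""
    u, v = np.array(u, dtype=np.int64), np.array(v, dtype=np.int64)
    if u @ u > v @ v:
        u, v = v, u
    while True:
        # Size reduction: subtract the rounded Gram-Schmidt coefficient.
        m = int(round((u @ v) / (u @ u)))
        v = v - m * u
        if u @ u <= v @ v:
            return u, v
        u, v = v, u

print(gauss_reduce([201, 37], [1648, 297]))  # reduced basis of the same lattice
```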
Vezard, Laurent. "Réduction de dimension en apprentissage supervisé : applications à l’étude de l’activité cérébrale." Thesis, Bordeaux 1, 2013. http://www.theses.fr/2013BOR15005/document.
The aim of this work is to develop a method able to automatically determine the alertness state of humans. Such a task is relevant to diverse domains where a person is expected or required to be in a particular state. For instance, pilots, security personnel or medical personnel are expected to be in a highly alert state, and this method could help to confirm this or detect possible problems. In this work, electroencephalographic data (EEG) of 58 subjects in two distinct vigilance states (high and low alertness) were collected via a cap with 58 electrodes. Thus, a binary classification problem is considered. In order to use this work in real-world applications, it is necessary to build a prediction method that requires only a small number of sensors (electrodes), in order to minimize the cap installation time and cost. During this thesis, several approaches have been developed. A first approach involves a pre-processing method for EEG signals based on a discrete wavelet decomposition, used to extract the energy of each frequency in the signal. Then, a linear regression is performed on the energies of some of these frequencies and the slope of this regression is retained. A genetic algorithm (GA) is used to optimize the selection of frequencies on which the regression is performed. Moreover, the GA is used to select a single electrode. A second approach is based on the use of the Common Spatial Pattern (CSP) method. This method allows defining linear combinations of the original variables to obtain synthetic signals useful for the classification task. In this work, a GA and a sequential search method have been proposed to select a subset of electrodes which are kept in the CSP calculation. Finally, a sparse CSP algorithm, based on existing work in sparse principal component analysis, was developed. The results of the different approaches are detailed and compared. This work allows us to obtain a reliable model for fast prediction of the alertness of a new individual
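The Common Spatial Pattern step described above is usually computed from a generalized eigendecomposition of the two classes' average covariance matrices. The sketch below is a minimal, generic CSP implementation (without the electrode-selection GA or the sparse variant proposed in the thesis); array shapes and names are illustrative.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=2):
    """Spatial filters maximizing variance for one class while minimizing it
    for the other; trials_x has shape (n_trials, n_channels, n_samples)."""
    def mean_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)

    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigenproblem Ca w = lambda (Ca + Cb) w, eigenvalues ascending.
    vals, vecs = eigh(Ca, Ca + Cb)
    # Filters at both ends of the spectrum are the most discriminative.
    idx = np.r_[np.arange(n_pairs), np.arange(len(vals) - n_pairs, len(vals))]
    return vecs[:, idx]
```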
Hadjee, Gino Eric. "Gain environnemental lié à une gestion coordonnée de la charge sur les réseaux de distribution d'énergie électrique." Paris 11, 2006. http://www.theses.fr/2006PA112044.
The electricity demand has increased with new developments in the world. This has put pressure on power utilities to meet the increasing demand of customers. In electric power distribution, one simple way to meet this demand is to reinforce the power system capacity. However, this solution not only requires expensive investments from electricity distributors but also increases the cost for customers. An innovative alternative is to operate at the customer level. It is therefore interesting to study management strategies for smoothing the electricity demand curve without compromising consumers' comfort. Generally, load control is located at the level of a house or a group of houses. At this level, it consists in analyzing how a coordinated load management, based on the typology of the principal electrical appliances, would allow reducing the load peak at the distribution network level. At the distributor level, it consists in studying the influence of the tariff profile and proposing an optimization method to modify consumers' behaviour so as to shift part of the peak load to off-peak periods
Vezard, Laurent. "Réduction de dimension en apprentissage supervisé : applications à l'étude de l'activité cérébrale." Phd thesis, Université Sciences et Technologies - Bordeaux I, 2013. http://tel.archives-ouvertes.fr/tel-00944790.
Delaroque, Aurélie. "Élaboration d'un outil numérique pour la réduction et l'optimisation des mécanismes cinétiques pour les systèmes de combustion." Thesis, Sorbonne université, 2018. http://www.theses.fr/2018SORUS417.
In the modeling of a combustion process, obtaining global data such as the flame speed can, under certain circumstances, be achieved through extremely reduced mechanisms. On the contrary, the prediction of detailed data such as pollutant species requires the use of detailed kinetic mechanisms involving many chemical species. Due to their size and to the presence of many different time scales, the integration of those models into complex numerical simulations is a non-trivial task. A reduction tool based on Directed Relation Graph and sensitivity analysis methods is proposed to tackle this issue. Reduced mechanisms fitting user-defined tolerances for quantities of interest such as global (flame speed, ignition delay, etc.) and detailed data (concentration profiles) are automatically generated. The reduction process is paired with an optimisation of reaction rates through a genetic algorithm to make up for the error induced by the loss of information. This process can use both numerical and experimental reference data. The complete numerical tool has been tested on several canonical configurations for several fuels (methane, ethane and n-heptane), and reduction rates up to 90% have been observed
Stehlé, Damien. "Algorithmique de la réduction de réseaux et application à la recherche de pires cas pour l'arrondi de fonctions mathématiques." Phd thesis, Université Henri Poincaré - Nancy I, 2005. http://tel.archives-ouvertes.fr/tel-00011150.
Lattice reduction comes into play in several areas of algorithmics, for example in cryptography and in algorithmic number theory. The purpose of this dissertation is twofold: we improve lattice reduction algorithms, and we develop a new application in the field of computer arithmetic. On the algorithmic side, we consider the case of small dimensions (dimension one, where it amounts to gcd computation, and also dimensions 2 to 4), as well as the description of a new variant of the LLL algorithm in arbitrary dimension. On the application side, we use Coppersmith's method for finding the small roots of multivariate modular polynomials in order to compute the worst cases for the rounding of mathematical functions, when the function, the rounding mode and the precision are given. We also adapt our technique to simultaneous bad cases for two functions. Both methods are expensive pre-computations which, once performed, make it possible to speed up implementations of elementary mathematical functions in fixed precision, for instance in double precision. Most of the algorithms described in this dissertation have been validated experimentally by implementations, which are available at http://www.loria.fr/~stehle.
Vezard, Laurent. "Réduction de dimension en apprentissage supervisé. Application à l'étude de l'activité cérébrale." Phd thesis, Université Sciences et Technologies - Bordeaux I, 2013. http://tel.archives-ouvertes.fr/tel-00926845.
Thai, Hoang Phuong. "Sur l'utilisation de l'analyse isogéométrique en mécanique linéaire ou non-linéaire des structures : certification des calculs et couplage avec la réduction de modèle PGD." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLN017/document.
The topic of the PhD thesis deals with the construction of advanced numerical approaches for the simulation and optimization of mechanical structures with complex geometry. It focuses on the Isogeometric Analysis (IGA) technology, which has received much attention over the last decade due to its increased flexibility, accuracy and robustness in many engineering simulations compared to classical Finite Element Analysis (FEA). In particular, IGA enables a direct link with CAD software (the same functions are used for both analysis and geometry) and facilitates meshing procedures. In this framework, and as a first part of the work, a verification method based on duality and the concept of Constitutive Relation Error (CRE) is proposed. It enables deriving guaranteed and fully computable a posteriori error estimates on the numerical solution provided by IGA. Such estimates, which are valid for a wide class of linear or nonlinear structural mechanics models, thus constitute efficient and useful tools to quantitatively control the numerical accuracy and drive adaptive procedures. The focus here is on the construction of equilibrated flux fields, which is a key ingredient of the CRE concept and which was until now almost exclusively developed in the FEA framework alone. The extension to IGA requires addressing some technical issues due to the use of B-Spline/NURBS basis functions. The CRE concept is also implemented together with adjoint techniques in order to perform goal-oriented error estimation. In a second part, IGA is coupled with model reduction in order to get certified real-time solutions to problems with parameterized geometry. After defining the parametrization of the mapping from the IGA parametric space to the physical space, a reduced model based on the Proper Generalized Decomposition (PGD) is introduced to solve the multi-dimensional problem. From an offline/online strategy, the procedure then enables describing the manifold of parametric solutions with reduced CPU cost, and further performing shape optimization in real time. Here again, a posteriori estimation of the various error sources stemming from discretization and PGD model reduction is performed from the CRE concept. It enables controlling the quality of the approximate PGD solution (globally or on outputs of interest), for any geometry configuration, and feeding a robust greedy algorithm that optimizes the computational effort for a prescribed error tolerance. The overall research work thus provides reliable and practical tools for mechanical engineering simulation activities. Capabilities and performance of these tools are shown on several numerical experiments with academic and engineering problems, and with linear and nonlinear (damage) models
Salazar, Veronica. "Etude des propriétés physiques des aérosols de la moyenne et haute atmosphère à partir d'une nouvelle analyse des observations du GOMOS-ENVISAT pour la période 2002-2006." Phd thesis, Université d'Orléans, 2010. http://tel.archives-ouvertes.fr/tel-00608052.
Giacofci, Joyce. "Classification non supervisée et sélection de variables dans les modèles mixtes fonctionnels. Applications à la biologie moléculaire." Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENM025/document.
More and more scientific studies lead to the collection of a large amount of data that consist of sets of curves recorded on individuals. These data can be seen as an extension of longitudinal data in high dimension and are often modeled as functional data in a mixed-effects framework. In a first part we focus on performing unsupervised clustering of these curves in the presence of inter-individual variability. To this end, we develop a new procedure based on a wavelet representation of the model, for both fixed and random effects. Our approach follows two steps: a dimension reduction step, based on wavelet thresholding techniques, is first performed; then a clustering step is applied on the selected coefficients. An EM algorithm is used for maximum likelihood estimation of parameters. The properties of the overall procedure are validated by an extensive simulation study. We also illustrate our method on high-throughput molecular data (omics data) such as microarray CGH or mass spectrometry data. Our procedure is available through the R package "curvclust", available on the CRAN website. In a second part, we concentrate on estimation and dimension reduction issues in the mixed-effects functional framework. Two distinct approaches are developed according to these issues. The first approach deals with parameter estimation in a nonparametric setting. We demonstrate that the functional fixed-effects estimator based on wavelet thresholding techniques achieves the expected rate of convergence toward the true function. The second approach is dedicated to the selection of both fixed and random effects. We propose a method based on a penalized likelihood criterion with SCAD penalties for the estimation and the selection of both fixed effects and random-effects variances. In the context of variable selection, we prove that the penalized estimators enjoy the oracle property when the signal size diverges with the sample size. A simulation study is carried out to assess the behaviour of the two proposed approaches
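The dimension reduction step mentioned above relies on wavelet thresholding of the curves. The sketch below shows the generic recipe on a single signal using PyWavelets (soft thresholding with the universal threshold and a MAD noise estimate); the wavelet, decomposition level and threshold rule are common defaults, not necessarily the thesis's exact choices.

```python
import numpy as np
import pywt

def wavelet_shrink(signal, wavelet="db4", level=4):
    """Discrete wavelet decomposition, soft-thresholding of detail coefficients,
    and reconstruction of the denoised signal."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Noise level estimated from the finest detail scale (median absolute deviation).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))
    shrunk = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(shrunk, wavelet)[: len(signal)]
```

In the mixed-effects setting of the thesis, the retained non-zero coefficients, rather than the reconstructed curve, feed the subsequent clustering step.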
Fahlaoui, Tarik. "Réduction de modèles et apprentissage de solutions spatio-temporelles paramétrées à partir de données : application à des couplages EDP-EDO." Thesis, Compiègne, 2020. http://www.theses.fr/2020COMP2535.
In this thesis, an algorithm for learning an accurate reduced-order model from data generated by a high-fidelity solver (HF solver) is proposed. To achieve this goal, we use both Dynamic Mode Decomposition (DMD) and Proper Orthogonal Decomposition (POD). Anomaly detection during the learning process can be easily done by performing an a posteriori spectral analysis on the learnt reduced-order model. Several extensions are presented to make the method as general as possible. Thus, we handle the case of coupled ODE/PDE systems and the case of second-order hyperbolic equations. The method is also extended to the case of switched control systems, where the switching rule is learnt by using an Artificial Neural Network (ANN). The learnt reduced-order model allows predicting the time evolution of the POD coefficients. However, the POD coefficients have no interpretable meaning. To tackle this issue, we propose an interpretable reduction method using the Empirical Interpolation Method (EIM). This reduction method is then adapted to the case of third-order tensors, and combined with Kernel Ridge Regression (KRR) we can learn the solution manifold in the case of parametrized PDEs. In this way, we can learn a parametrized reduced-order model. The case of non-linear PDEs or disturbed data is finally presented as an outlook
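Dynamic Mode Decomposition, used above to learn the reduced dynamics, has a compact "exact DMD" formulation. The following generic sketch operates on a snapshot matrix whose columns are successive states; the truncation rank is a placeholder, and this is not the thesis's full algorithm, which couples DMD with POD and the extensions listed in the abstract.

```python
import numpy as np

def exact_dmd(snapshots, rank):
    """Fit a rank-limited linear operator advancing snapshot x_k to x_{k+1}."""
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    U, s, Vt = U[:, :rank], s[:rank], Vt[:rank]
    # Reduced operator expressed in the POD basis of the snapshots.
    A_tilde = U.conj().T @ Y @ Vt.conj().T / s
    eigvals, W = np.linalg.eig(A_tilde)
    # DMD modes lifted back to the full state space.
    modes = Y @ Vt.conj().T / s @ W
    return eigvals, modes
```

The eigenvalues carry the discrete-time growth rates and frequencies of the learnt model; the a posteriori spectral analysis mentioned in the abstract works with this kind of spectrum.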
Giacomini, Matteo. "Quantitative a posteriori error estimators in Finite Element-based shape optimization." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLX070/document.
Gradient-based shape optimization strategies rely on the computation of the so-called shape gradient. In many applications, the objective functional depends both on the shape of the domain and on the solution of a PDE which can only be solved approximately (e.g. via the Finite Element Method). Hence, the direction computed using the discretized shape gradient may not be a genuine descent direction for the objective functional. This Ph.D. thesis is devoted to the construction of a certification procedure to validate the descent direction in gradient-based shape optimization methods using a posteriori estimators of the error due to the Finite Element approximation of the shape gradient. By means of a goal-oriented procedure, we derive a fully computable certified upper bound of the aforementioned error. The resulting Certified Descent Algorithm (CDA) for shape optimization is able to identify a genuine descent direction at each iteration and features a reliable stopping criterion based on the norm of the shape gradient. Two main applications are tackled in the thesis. First, we consider the scalar inverse identification problem of Electrical Impedance Tomography and we investigate several a posteriori estimators. A first procedure is inspired by the complementary energy principle and involves the solution of additional global problems. In order to reduce the computational cost of the certification step, an estimator which depends solely on local quantities is derived via an equilibrated fluxes approach. The estimators are validated for a two-dimensional case and some numerical simulations are presented to test the discussed methods. A second application focuses on the vectorial problem of optimal design of elastic structures. Within this framework, we derive the volumetric expression of the shape gradient of the compliance using both H1-based and dual mixed variational formulations of the linear elasticity equation. Some preliminary numerical tests are performed to minimize the compliance under a volume constraint in 2D using the Boundary Variation Algorithm, and an a posteriori estimator of the error in the shape gradient is obtained via the complementary energy principle
Barkouki, Houda. "Rational Lanczos-type methods for model order reduction." Thesis, Littoral, 2016. http://www.theses.fr/2016DUNK0440/document.
The numerical solution of dynamical systems has been a successful means for studying complex physical phenomena. However, in a large-scale setting, the system dimension makes the computations infeasible due to memory and time limitations, and ill-conditioning. The remedy for this problem is model reduction. This dissertation focuses on projection methods to efficiently construct reduced-order models for large linear dynamical systems. In particular, we are interested in projection onto unions of Krylov subspaces, which leads to a class of reduced-order models known as rational interpolation. Based on this theoretical framework that relates Krylov projection to rational interpolation, four rational Lanczos-type algorithms for model reduction are proposed. First, an adaptive rational block Lanczos-type method for reducing the order of large-scale dynamical systems is introduced, based on a rational block Lanczos algorithm and an adaptive approach for choosing the interpolation points. A generalization of the first algorithm is also given, where different multiplicities are considered for each interpolation point. Next, we propose another extension of the standard Krylov subspace method for Multiple-Input Multiple-Output (MIMO) systems, namely the global Krylov subspace, and we also obtain equations that describe this process. Finally, an extended block Lanczos method is introduced and new algebraic properties for this algorithm are given. The accuracy and the efficiency of all proposed algorithms when applied to the model order reduction problem are tested by means of different numerical experiments that use a collection of well-known benchmark examples
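To make the Krylov-projection idea concrete, here is a minimal one-sided rational Krylov (moment-matching) reduction at a single interpolation point, written with dense linear algebra; the rational block Lanczos algorithms of the thesis, with adaptively chosen interpolation points, are considerably more refined, and all names below are illustrative.

```python
import numpy as np

def rational_krylov_reduction(A, B, C, sigma, m):
    """Project the system (A, B, C) onto
    span{(sigma*I - A)^{-1} B, ..., (sigma*I - A)^{-m} B}."""
    n = A.shape[0]
    blocks, W = [], B
    for _ in range(m):
        W = np.linalg.solve(sigma * np.eye(n) - A, W)   # next rational Krylov block
        blocks.append(W)
    V, _ = np.linalg.qr(np.hstack(blocks))              # orthonormal basis of the subspace
    # Reduced model; its transfer function interpolates the original one at sigma.
    return V.T @ A @ V, V.T @ B, C @ V
```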
Luu, Thi Hieu. "Amélioration du modèle de sections efficaces dans le code de cœur COCAGNE de la chaîne de calculs d'EDF." Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066120/document.
In order to optimize the operation of its nuclear power plants, the EDF R&D department is currently developing a new calculation chain to simulate nuclear reactor cores with state-of-the-art tools. These calculations require a large amount of physical data, especially the cross-sections. In the full core simulation, the number of cross-section values is of the order of several billions. These cross-sections can be represented as multivariate functions depending on several physical parameters. The determination of cross-sections is a long and complex calculation; we can therefore pre-compute them for some values of the parameters (offline calculations), then evaluate them at all desired points by interpolation (online calculations). This process requires a model of cross-section reconstruction between the two steps. In order to perform a more faithful core simulation in the new EDF chain, the cross-sections need to be better represented by taking into account new parameters. Moreover, the new chain must be able to handle the reactor in more extensive situations than the current one. Multilinear interpolation is currently used to reconstruct cross-sections and to meet these goals. However, with this model, the number of points in its discretization increases exponentially as a function of the number of parameters, or significantly when adding points to one of the axes. Consequently, the number and time of offline calculations as well as the storage size for these data become problematic. The goal of this thesis is therefore to find a new model that responds to the following requirements: (i) (offline) reduce the number of pre-calculations, (ii) (offline) reduce the stored data size for the reconstruction, and (iii) (online) maintain (or improve) the accuracy obtained by multilinear interpolation. From a mathematical point of view, this problem involves approximating multivariate functions from their pre-calculated values. We based our research on the Tucker format - a low-rank tensor approximation - in order to propose a new model called the Tucker decomposition. With this model, a multivariate function is approximated by a linear combination of tensor products of one-variable functions. These one-variable functions are constructed by a technique called higher-order singular value decomposition (a "matricization" combined with an extension of the Karhunen-Loeve decomposition). The so-called greedy algorithm is used to select the points involved in determining the coefficients of the Tucker decomposition. The results obtained show that our model satisfies the criteria required for the reduction of the data as well as for the accuracy. With this model, we can eliminate a posteriori and a priori coefficients of the Tucker decomposition in order to further reduce the data storage used in the online steps, without significantly reducing the accuracy
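The higher-order singular value decomposition mentioned above (a matricization per mode followed by an SVD) can be sketched in a few lines. The generic truncated HOSVD below illustrates the construction only; it is not the thesis's cross-section model, and the ranks are placeholders.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n matricization of a tensor."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Truncated HOSVD: T is approximated by a small core tensor multiplied
    along each mode by an orthonormal factor matrix."""
    factors = []
    for mode, r in enumerate(ranks):
        # Dominant left singular vectors of each unfolding give the factor matrices.
        u, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(u[:, :r])
    core = T
    for mode, u in enumerate(factors):
        # Mode-n product with u.T projects onto the dominant subspace of that mode.
        core = np.moveaxis(np.tensordot(u.T, core, axes=(1, mode)), 0, mode)
    return core, factors
```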
Giacofci, Madison. "Classification non supervisée et sélection de variables dans les modèles mixtes fonctionnels. Applications à la biologie moléculaire." Phd thesis, Université de Grenoble, 2013. http://tel.archives-ouvertes.fr/tel-00987441.
Girardet, Brunilde. "Trafic aérien : détermination optimale et globale des trajectoires d'avion en présence de vent." Thesis, Toulouse, INSA, 2014. http://www.theses.fr/2014ISAT0027/document.
In the context of the future Air Traffic Management system (ATM), one objective is to reduce the environmental impact of air traffic. With respect to this criterion, the "free-route" concept, introduced in the mid-1990s, is well suited to improving on today's airspace-based ATM. Aircraft will no longer be restricted to fly along airways and may fly along fuel-optimal routes. The objective of this thesis is to introduce a novel pre-tactical trajectory planning methodology which aims at minimizing airspace congestion while taking into account weather conditions, so as to also minimize fuel consumption. The development of the method was divided into two steps. The first step is dedicated to computing a time-optimal route for one aircraft taking into account wind conditions. This optimization is based on an adaptation of the Ordered Upwind Method on the sphere. The second step introduces a hybrid algorithm, based on simulated annealing and on the deterministic algorithm developed in the first step, in order to minimize congestion. Thus the algorithm combines the ability to reach a globally optimal solution with a local-search procedure that speeds up the convergence
Joubert, Christophe. "Vérification distribuée à la volée de grands espaces d'états." Phd thesis, Grenoble INPG, 2005. http://tel.archives-ouvertes.fr/tel-00011939.
Do, Thanh Ha. "Sparse representations over learned dictionary for document analysis." Thesis, Université de Lorraine, 2014. http://www.theses.fr/2014LORR0021/document.
In this thesis, we focus on how sparse representations can help to increase the performance of noise removal, text region extraction, pattern recognition and symbol spotting in graphical documents. To do that, first of all, we give a survey of sparse representations and their applications in image processing. Then, we present the motivation for building a learned dictionary and efficient algorithms for constructing it. After describing the general idea of sparse representations and learned dictionaries, we bring some contributions in the field of symbol recognition and document processing that achieve better performances compared to the state of the art. These contributions begin by finding the answers to the following questions. The first question is how we can remove the noise of a document when we have no assumptions about the model of noise found in these images. The second question is how sparse representations over a learned dictionary can separate the text and graphic parts in a graphical document. The third question is how we can apply sparse representations to symbol recognition. We complete this thesis by proposing an approach for spotting symbols that uses sparse representations for the coding of a visual vocabulary
Zniyed, Yassine. "Breaking the curse of dimensionality based on tensor train : models and algorithms." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS330.
Massive and heterogeneous data processing and analysis have been clearly identified by the scientific community as key problems in several application areas. The topic was popularized under the generic terms of "data science" or "big data". Processing large volumes of data, extracting their hidden patterns, while performing prediction and inference tasks, has become crucial in economy, industry and science. Treating each set of measured data independently is clearly a reductive approach; by doing so, "hidden relationships" or inter-correlations between the datasets may be totally missed. Tensor decompositions have received particular attention recently due to their capability to handle a variety of mining tasks applied to massive datasets, being a pertinent framework that takes into account the heterogeneity and multi-modality of the data. In this case, data can be arranged as a D-dimensional array, also referred to as a D-order tensor. In this context, the purpose of this work is to ensure that the following properties are present: (i) having stable factorization algorithms (not suffering from convergence problems), (ii) having a low storage cost (i.e., the number of free parameters must be linear in D), and (iii) having a formalism in the form of a graph allowing a simple but rigorous mental visualization of tensor decompositions of high order, i.e., for D > 3. Therefore, we rely on the tensor train (TT) decomposition to develop new TT factorization algorithms and new equivalences in terms of tensor modeling, allowing a new dimensionality reduction strategy and the optimization of a coupled least-squares criterion for parameter estimation, named JIRAFE. This methodological work has had applications in the context of multidimensional spectral analysis and relay telecommunication systems
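The tensor train format referred to above factorizes a D-order tensor into a chain of three-way cores, so the number of parameters grows linearly with D. Below is a minimal TT-SVD sketch (sequential SVDs of reshaped unfoldings) given as a generic illustration; the maximum rank is a placeholder, and the JIRAFE-type algorithms of the thesis go beyond this basic construction.

```python
import numpy as np

def tt_svd(T, max_rank):
    """Decompose a D-order tensor into tensor-train cores of shape (r_prev, n_k, r_k)."""
    dims = T.shape
    cores, r_prev = [], 1
    M = T.reshape(r_prev * dims[0], -1)
    for k in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r = min(max_rank, len(s))                      # truncate the TT-rank
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        M = (np.diag(s[:r]) @ Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(M.reshape(r_prev, dims[-1], 1))
    return cores
```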
Hijazi, Samah. "Semi-supervised Margin-based Feature Selection for Classification." Thesis, Littoral, 2019. http://www.theses.fr/2019DUNK0546.
Feature selection is a preprocessing step crucial to the performance of machine learning algorithms. It allows reducing computational costs, improving classification performance and building simple and understandable models. Recently, the use of pairwise constraints, a cheaper kind of supervision information that does not need to reveal the class labels of data points, has received a great deal of interest in the domain of feature selection. Accordingly, we first proposed a semi-supervised margin-based constrained feature selection algorithm called Relief-Sc. It is a modification of the well-known Relief algorithm from its optimization perspective. It utilizes cannot-link constraints only to solve a simple convex problem in closed form, giving a unique solution. However, we noticed that in the literature these pairwise constraints are generally provided passively and generated randomly over multiple algorithmic runs over which the results are averaged. This leads to the need for a large number of constraints that might be redundant, unnecessary, and under some circumstances even inimical to the algorithm's performance. It also masks the individual effect of each constraint set and introduces a human labor cost. Therefore, we suggested a framework for actively selecting and then propagating constraints for feature selection. For that, we made use of the similarity matrix based on the graph Laplacian. We assumed that when a small perturbation of the similarity value between a data couple leads to a better-separated cluster indicator based on the second eigenvector of the graph Laplacian, this couple is expected to be a pairwise query of higher and more significant impact. Constraint propagation, on the other side, ensures increasing supervision information while decreasing the cost of human labor. Besides, for the sake of handling feature redundancy, we proposed extending Relief-Sc to a feature selection approach that combines feature clustering and hypothesis-margin maximization. This approach is able to deal with the two core aspects of feature selection, i.e. maximizing relevancy while minimizing redundancy (maximizing diversity) among features. Eventually, we experimentally validated our proposed algorithms in comparison to other known feature selection methods on multiple well-known UCI benchmark datasets, with prominent results. With only little supervision information, the proposed algorithms proved to be comparable to supervised feature selection algorithms and superior to unsupervised ones
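Relief-Sc builds on the classical Relief weighting scheme, whose hypothesis-margin idea is to reward features that push a sample away from its nearest neighbor of the other class (near miss) more than from its nearest neighbor of the same class (near hit). The sketch below is the textbook supervised Relief for binary labels, not the constrained Relief-Sc variant of the thesis; all names are illustrative.

```python
import numpy as np

def relief(X, y, n_iter=100, seed=0):
    """Classical Relief feature weights for a binary classification problem."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        i = rng.integers(n)
        xi, yi = X[i], y[i]
        dists = np.abs(X - xi).sum(axis=1)
        dists[i] = np.inf                      # a sample cannot be its own near hit
        same, other = (y == yi), (y != yi)
        near_hit = X[np.where(same)[0][np.argmin(dists[same])]]
        near_miss = X[np.where(other)[0][np.argmin(dists[other])]]
        # Hypothesis margin: separation from the miss minus spread within the class.
        w += np.abs(xi - near_miss) - np.abs(xi - near_hit)
    return w / n_iter
```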
Ribault, Clément. "Méthode d'optimisation multicritère pour l'aide à la conception des projets de densification urbaine." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSEI084/document.
The world's population is facing expansive urbanization. This urban sprawl, which is often not well managed, is endangering the environment as well as human health, quality of life and food security. It can be controlled by favouring urban densification. Nonetheless, the complexity of the phenomena involved in such a context leads us to think that supervisors of urban densification operations need tools to help them make the most relevant choices. This thesis begins with a literature review that shows that the ideal tool does not exist, and explains why multi-objective optimization using a genetic algorithm is a suitable technique for building design aid. Then we clarify the desirable features of an assistance method for designers of urban densification projects. We recommend basing this method on the coupling of a genetic algorithm with a district-scale dynamic thermal simulation (DTS) tool. We compare the capabilities of the EnergyPlus (E+) and Pleiades+COMFIE (P+C) DTS software with these requirements, then we present a first urban densification project optimization test associating EnergyPlus with a genetic algorithm. The platform under development in the ANR MERUBBI project can offset certain shortcomings of this method. Hence, in a second phase we analyze the results of a comparative study of P+C, E+ and the MERUBBI tool, carried out using a high-density district densification project as a test case. It shows that the latter is reliable and particularly relevant to precisely assess interactions between buildings. In a third phase we address the problem of reducing the computing time, a major issue in making our design aid method truly accessible to building professionals. We propose a way of reducing the operating period length and present it in detail. Finally, our optimization method is used to solve various design problems of the above-mentioned project, using E+. We show how the use of the MERUBBI platform will enrich this approach before concluding with development ideas to make our method more user-friendly and interactive
Edwige, Stéphie. "Modal analysis and flow control for drag reduction on a Sport Utility Vehicle." Thesis, Paris, CNAM, 2019. http://www.theses.fr/2019CNAM1233/document.
The automotive industry dedicates a lot of effort to improving the aerodynamic performance of road vehicles in order to reduce their carbon footprint. In this context, the target of the present work is to analyze the origin of aerodynamic losses on a reduced-scale generic Sport Utility Vehicle and to achieve a drag reduction using an active flow control strategy. After an experimental characterization of the flow past the POSUV, a cross-modal DMD analysis is used to identify the correlated periodical features responsible for the tailgate pressure loss. Thanks to a genetic algorithm procedure, a 20% gain on the tailgate pressure is obtained with optimal pulsed blowing jets on the rear bumper. The same cross-modal methodology allows us to improve our understanding of the actuation mechanism. After a preliminary study of the 25° inclined ramp and of the Ahmed body computations, the numerical simulation of the POSUV is corroborated with experiments using the cross-modal method. Deeper investigations of the three-dimensional flow characteristics explain more accurately the wake flow behavior. Finally, the controlled flow simulations propose additional insights into the actuation mechanisms allowing the reduction of the aerodynamic losses
Negrea, Andrei Liviu. "Optimization of energy efficiency for residential buildings by using artificial intelligence." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEI090.
Consumption, in general, represents the process of using a type of resource for which savings need to be made. Energy consumption has become one of the main issues of urbanization and the energy crisis, as fossil fuel depletion and global warming put the planet's energy use under threat. In this thesis, an automatic control of energy was developed to reduce energy consumption in residential areas and passive house buildings. A mathematical model founded on empirical measurements was developed to capture the behavior of a testing laboratory at Universitatea Politehnica din București - Université Politechnica de Bucarest - Roumanie. The experimental protocol was carried out through actions such as building a database of building parameters, collecting weather data, and accounting for auxiliary heat flows while considering the controlling factors. The control algorithm drives the system so as to maintain a comfortable temperature within the building with minimum energy consumption. Measurements and data acquisition have been set up on two different levels: weather data and building data. The collected data are gathered on a server implemented in the testing facility, which runs a complex algorithm that can control energy consumption. The thesis reports several numerical methods for estimating the energy consumption that are further used with the control algorithm. An experimental showcase based on dynamic calculation methods for building energy performance assessment was made in Granada, Spain, and this information was later used in this thesis. Estimation of the model parameters (resistances and capacities) with prediction of heat flow was made using a nodal method, based on physical elements, input data and weather information. Prediction of energy consumption using state-space modeling shows improved results, while IoT data collection was handled by a Raspberry Pi system. All these results were stable, showing impressive progress in the prediction of energy consumption and its application in the energy field
Gokpi, Kossivi. "Modélisation et Simulation des Ecoulements Compressibles par la Méthode des Eléments Finis Galerkin Discontinus." Thesis, Pau, 2013. http://www.theses.fr/2013PAUU3005/document.
The aim of this thesis is to deal with compressible Navier-Stokes flows discretized by Discontinuous Galerkin Finite Element Methods. Several aspects have been considered. One is to show the optimal convergence of the DGFEM method when using high-order polynomials. The second is to design shock-capturing methods, such as slope limiters and artificial viscosity, to suppress the numerical oscillations occurring when p > 0 schemes are used. The third aspect is to design an a posteriori error estimator for adaptive mesh refinement in order to optimize the mesh in the computational domain. And finally, we want to show the accuracy and the robustness of the implemented DG method when we reach very low Mach numbers. Usually, when simulating compressible flows at very low Mach numbers, at the limit of incompressible flows, many kinds of problems arise, concerning for instance the accuracy and convergence of the solution. To be able to run low Mach number problems, solutions such as preconditioning exist. This method usually modifies the Euler equations. Here the Euler equations are not modified, and with a robust time scheme and good boundary conditions imposed, one can obtain efficient and accurate results
Ben, Kahla Haithem. "Sur des méthodes préservant les structures d'une classe de matrices structurées." Thesis, Littoral, 2017. http://www.theses.fr/2017DUNK0463/document.
The classical linear algebra methods for calculating eigenvalues and eigenvectors of a matrix, or lower-rank approximations of a solution, etc., do not consider the structures of matrices. Such structures are usually destroyed in the numerical process. Alternative structure-preserving methods are the subject of considerable interest in the community. This thesis makes a contribution to this field. The SR decomposition is usually implemented via the symplectic Gram-Schmidt algorithm. As in the classical case, a loss of orthogonality can occur. To remedy this, we have proposed two algorithms, RSGSi and RMSGSi, where the reorthogonalization of a current set of vectors against the previously computed set is performed twice. The loss of J-orthogonality is thereby significantly reduced. A direct rounding error analysis of the symplectic Gram-Schmidt algorithm is very hard to accomplish. We managed to get around this difficulty and give error bounds on the loss of J-orthogonality and on the factorization. Another way to implement the SR decomposition is based on symplectic Householder transformations. An optimal choice of free parameters provided an optimal version of the algorithm, SROSH. However, the latter may be subject to numerical instability. We have proposed a new modified version, SRMSH, which has the advantage of being numerically more stable. A detailed study leads to two new, numerically more stable variants: SRMSH and SRMSH2. In order to build an SR algorithm of complexity O(n³), where 2n is the size of the matrix, a reduction to a condensed matrix form (upper J-Hessenberg form) via adequate similarities is crucial. This reduction may be handled via the JHESS algorithm. We have shown that it is possible to perform the reduction of a general matrix to an upper J-Hessenberg form based only on the use of symplectic Householder transformations. The new algorithm, called JHSH, is based on an adaptation of the SRSH algorithm. We are led to two new variant algorithms, JHMSH and JHMSH2, which are significantly more stable numerically. We found that these algorithms behave quite similarly to the JHESS algorithm. The main drawback of all these algorithms (JHESS, JHMSH, JHMSH2) is that they may encounter fatal breakdowns or may suffer from a severe form of near-breakdown, causing an abrupt stop of the computations or leading to serious numerical instability. This phenomenon has no equivalent in the Euclidean case. We sketch out a very efficient strategy for curing fatal breakdowns and treating near-breakdowns. The new algorithms incorporating this modification are referred to as MJHESS, MJHSH, JHM²SH and JHM²SH2. These strategies were then incorporated into the implicit version of the SR algorithm to overcome the difficulties caused by fatal breakdowns or near-breakdowns; we recall that without these strategies, the SR algorithm breaks down. Finally, and in another framework of structured matrices, we presented a robust algorithm via FFT and a Hankel matrix, based on computing approximate greatest common divisors (GCD) of polynomials, for solving the problem of blind image deconvolution. Specifically, we designed a specialized algorithm for computing the GCD of bivariate polynomials. The new algorithm is based on the fast GCD algorithm for univariate polynomials, of quadratic complexity O(n²) flops. The complexity of our algorithm is O(n² log(n)), where the size of the blurred images is n × n. Experimental results with synthetically blurred images are included to illustrate the effectiveness of our approach
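As a Euclidean analogue of the "reorthogonalize twice" idea behind RSGSi and RMSGSi (and not the symplectic algorithms themselves), the sketch below shows modified Gram-Schmidt with one re-orthogonalization pass per column; the symplectic versions replace the Euclidean inner product by the indefinite J-inner product, which is where the loss of J-orthogonality and the breakdown issues discussed above come from.

```python
import numpy as np

def mgs_reorthogonalized(A):
    """QR factorization by modified Gram-Schmidt with a second orthogonalization
    pass per column ('twice is enough')."""
    A = np.asarray(A, dtype=float)
    n, m = A.shape
    Q, R = np.zeros((n, m)), np.zeros((m, m))
    for j in range(m):
        v = A[:, j].copy()
        for _ in range(2):                 # orthogonalize, then re-orthogonalize
            for i in range(j):
                c = Q[:, i] @ v
                R[i, j] += c               # accumulate coefficients from both passes
                v -= c * Q[:, i]
        R[j, j] = np.linalg.norm(v)
        Q[:, j] = v / R[j, j]
    return Q, R
```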
Boisvert, Maryse. "Réduction de dimension pour modèles graphiques probabilistes appliqués à la désambiguïsation sémantique." Thèse, 2004. http://hdl.handle.net/1866/16639.
Chartrand-Lefebvre, Carl. "Réduction des artéfacts de tuteur coronarien au moyen d'un algorithme de reconstruction avec renforcement des bords : étude prospective transversale en tomodensitométrie 256 coupes." Thèse, 2015. http://hdl.handle.net/1866/13870.
Metallic artifacts can result in an artificial thickening of the coronary stent wall which can significantly impair computed tomography (CT) imaging in patients with coronary stents. The purpose of this study is to assess the in vivo visualization of coronary stent wall and lumen with an edge-enhancing CT reconstruction kernel, as compared to a standard kernel. This is a prospective cross-sectional study of 24 consecutive patients with 71 coronary stents, using a repeated measure design and blinded observers, approved by the Local Institutional Review Board. 256-slice CT angiography was used, as well as standard and edge-enhancing reconstruction kernels. Stent wall thickness was measured with orthogonal and circumference methods, averaging wall thickness from stent diameter and circumference measurements, respectively. Stent image quality was assessed on an ordinal scale. Statistical analysis used linear and proportional odds models. Stent wall thickness was inferior using the edge-enhancing kernel compared to the standard kernel, either with the orthogonal (0.97±0.02 versus 1.09±0.03 mm, respectively; p<0.001) or circumference method (1.13±0.02 versus 1.21±0.02 mm, respectively; p<0.001). The edge-enhancing kernel generated less overestimation from nominal thickness compared to the standard kernel, both with orthogonal (0.89±0.19 versus 1.00±0.26 mm, respectively; p<0.001) and circumference (1.06±0.26 versus 1.13±0.31 mm, respectively; p=0.005) methods. The average decrease in stent wall thickness overestimation with an edge-enhancing kernel was 6%. Image quality scores were higher with the edge-enhancing kernel (odds ratio 3.71, 95% CI 2.33–5.92; p<0.001). In conclusion, the edge-enhancing CT reconstruction kernel generated thinner stent walls, less overestimation from nominal thickness, and better image quality scores than the standard kernel.
Martel, Yannick. "Efficacité de l’algorithme EM en ligne pour des modèles statistiques complexes dans le contexte des données massives." Thesis, 2020. http://hdl.handle.net/1866/25477.
The EM algorithm (Dempster et al., 1977) yields a sequence of estimators that converges to the maximum likelihood estimator for missing data models whose maximum likelihood estimator is not directly tractable. The EM algorithm is remarkable given its numerous applications in statistical learning. However, it may suffer from its computational cost. Cappé and Moulines (2009) proposed an online version of the algorithm, for models whose likelihood belongs to the exponential family, that provides an upgrade in computational efficiency on large data sets. However, the conditional expected value of the sufficient statistic is often intractable for complex models and/or when the missing data is of high dimension. In those cases, it is replaced by an estimator. Many questions then arise naturally: do the convergence results pertaining to the initial estimator hold when the expected value is substituted by an estimator? In particular, does the asymptotic normality property remain in this case? How does the variance of the estimator of the expected value affect the asymptotic variance of the EM estimator? Are Monte Carlo and MCMC estimators suitable in this situation? Could variance reduction tools such as control variates provide variance relief? These questions will be tackled by means of examples involving latent data models. This master's thesis' main contributions are the presentation of a unified framework for stochastic approximation EM algorithms, an illustration of the impact that the estimation of the conditional expected value has on the variance, and the introduction of online EM algorithms which reduce the additional variance stemming from the estimation of the conditional expected value.
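The online EM of Cappé and Moulines replaces the batch E-step by a stochastic-approximation update of the expected sufficient statistics, followed by the usual closed-form M-step. The sketch below illustrates the recursion on a one-dimensional Gaussian mixture, a case where the conditional expectation is tractable and no Monte Carlo or MCMC estimator is needed; step size, initialization and component count are illustrative choices, not those of the thesis.

```python
import numpy as np

def online_em_gmm(xs, K=2, rate=0.6, seed=0):
    """Online EM for a 1-D Gaussian mixture with step sizes gamma_t = t**(-rate)."""
    rng = np.random.default_rng(seed)
    w, mu = np.full(K, 1.0 / K), rng.choice(xs, K)
    var = np.full(K, np.var(xs))
    s0, s1, s2 = w.copy(), w * mu, w * (var + mu**2)      # running sufficient statistics
    for t, x in enumerate(xs, start=1):
        gamma = t ** (-rate)
        # E-step on the current observation: component responsibilities.
        logp = -0.5 * ((x - mu) ** 2 / var + np.log(var)) + np.log(w)
        r = np.exp(logp - logp.max())
        r /= r.sum()
        # Stochastic approximation of the expected sufficient statistics.
        s0 = (1 - gamma) * s0 + gamma * r
        s1 = (1 - gamma) * s1 + gamma * r * x
        s2 = (1 - gamma) * s2 + gamma * r * x**2
        # M-step: closed-form parameter update from the statistics.
        w, mu = s0, s1 / s0
        var = np.maximum(s2 / s0 - mu**2, 1e-8)
    return w, mu, var
```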
Atchadé, Yves F. "Quelques contributions sur les méthodes de Monte Carlo." Thèse, 2003. http://hdl.handle.net/1866/14581.