Theses on the topic "Algorithme de réduction de la dispersion"
Consult the top 50 theses for your research on the topic "Algorithme de réduction de la dispersion".
Khoder, Jihan. "Nouvel Algorithme pour la Réduction de la Dimensionnalité en Imagerie Hyperspectrale". Phd thesis, Université de Versailles-Saint Quentin en Yvelines, 2013. http://tel.archives-ouvertes.fr/tel-00939018.
Khoder, Jihan Fawaz. "Nouvel algorithme pour la réduction de la dimensionnalité en imagerie hyperspectrale". Versailles-St Quentin en Yvelines, 2013. http://www.theses.fr/2013VERS0037.
In hyperspectral imaging, the volume of data acquired often reaches a gigabyte for a single observed scene. The analysis of these data, whose physical content is complex, must therefore be preceded by a dimensionality reduction step. This reduction has two objectives: the first is to reduce redundancy, and the second is to facilitate post-processing (extraction, classification and recognition) and therefore the interpretation of the data. Automatic classification is an important step in the process of knowledge extraction from data; it aims to discover the intrinsic structure of a set of objects by forming groups that share similar characteristics. In this thesis, we focus on dimensionality reduction in the context of unsupervised classification of spectral bands. Existing approaches, such as those based on (linear or nonlinear) projection of high-dimensional data onto representation subspaces, or on spectral band selection techniques exploiting criteria of complementarity and redundancy of information, do not preserve the wealth of information provided by this type of data.
Bélanger, Manon. "Algorithme de contrôle d'erreurs appliqué au phénomène de dispersion chromatique". Mémoire, Université de Sherbrooke, 2008. http://savoirs.usherbrooke.ca/handle/11143/1415.
Bélanger, Manon. "Algorithme de contrôle d'erreurs appliqué au phénomène de dispersion chromatique". [S.l. : s.n.], 2008.
Dessimond, Boris. "Exposition individuelle à la pollution de l’air : mesure par capteurs miniatures, modélisation et évaluation des risques sanitaires associés". Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS297.
Air pollution contributes to the degradation of the quality of life and the reduction of the life expectancy of populations. The World Health Organization estimates that air pollution is responsible for 7 million deaths per year worldwide. It contributes to the aggravation of respiratory diseases and causes lung cancer and heart attacks. Air pollution therefore has significant health consequences for human life and biodiversity. Over the last few years, considerable progress has been made in the field of microcontrollers and telecommunications modules: these are more energy efficient, powerful, affordable and accessible, and are responsible for the growth of connected objects. Meanwhile, the recent development of microelectromechanical systems and electrochemical sensors has allowed the miniaturization of technologies measuring many environmental parameters, including air quality. These technological breakthroughs have enabled the design and production, in an academic environment, of portable, connected, autonomous air quality sensors capable of performing acquisitions at a high temporal frequency. Until recently, one of the major obstacles to understanding the impact of air pollution on human health was the inability to track the real exposure of individuals during their daily lives; air pollution is complex, and varies according to the habits, activities and environments in which individuals spend their lives. Portable air quality sensors completely remove this obstacle as well as a number of other important constraints. They are designed to be used in mobility, over long periods of time, and produce immediately available granular data describing the exposure to air pollution of the person wearing them. Although the measurement modules embedded in these sensors are not currently as reliable as reference tools or remote sensing, when it comes to assessing individual exposure to air pollution they are as close as possible to the wearer; they therefore provide the most accurate information and are an indispensable tool for the future of epidemiological research. In this context, we have been involved in the development and improvement of two air quality sensors: the CANARIN II and the CANARIN nano. The CANARIN II is a connected sensor communicating via Wi-Fi, which reports the concentration of particles of 10, 2.5 and 1 micrometre diameter, as well as the environmental parameters of temperature, humidity and pressure, every minute, making them available in real time. The CANARIN nano is a smaller sensor with the same capabilities as the CANARIN II, while additionally sensing volatile organic compound levels; it is able to operate autonomously, as it communicates through the cellular network. Two types of results have been obtained with the CANARIN sensors: on the one hand, results produced from their use in real-life conditions, and on the other hand, results related to the interpretation and understanding of the measurements produced by the particle sensors. These two sensors were both used in two research projects, in which we helped deploy several heterogeneous sensor fleets and analyzed the acquired data. Firstly, in the POLLUSCOPE project funded by the French National Research Agency, 86 volunteers from the general population wore a set of air pollution sensors for a total of 101 weeks, for 35 of which the volunteers were also equipped with health sensors.
Secondly, in the POLLAR project, 43 subjects underwent polysomnography and then wore a CANARIN sensor for 10 days, allowing the link between sleep apnea and particulate matter exposure to be explored for the first time. [...]
Zapien, Durand-Viel Karina. "Algorithme de chemin de régularisation pour l'apprentissage statistique". Phd thesis, INSA de Rouen, 2009. http://tel.archives-ouvertes.fr/tel-00557888.
Driant, Thomas. "Réduction de la traînée aérodynamique et refroidissement d'un tricycle hybride par optimisation paramétrique". Thèse, Université de Sherbrooke, 2015. http://hdl.handle.net/11143/6990.
Karina, Zapien. "Algorithme de Chemin de Régularisation pour l'apprentissage Statistique". Phd thesis, INSA de Rouen, 2009. http://tel.archives-ouvertes.fr/tel-00422854.
The usual approach for determining these hyperparameters is to use a "grid": a set of candidate values is fixed and, for each of them, the generalization error of the best model is estimated. This thesis studies an alternative approach that consists in computing the whole set of possible solutions for all values of the hyperparameters, known as the regularization path. For the learning problems of interest here, parametric quadratic programs, it is shown that the regularization path associated with certain hyperparameters is piecewise linear and that its computation has a numerical complexity of the order of a small integer multiple of the cost of computing a model for a single set of hyperparameters.
The thesis is organised in three parts. The first gives the general framework of SVM-type learning problems (Support Vector Machines) together with the theoretical and algorithmic tools needed to address them. The second part deals with supervised learning for classification and ranking within the SVM framework; the regularization path of these problems is shown to be piecewise linear, a result that leads to original discrimination and ranking algorithms. The third part addresses semi-supervised and then unsupervised learning. For semi-supervised learning, a sparsity criterion is introduced together with the associated regularization-path algorithm. For unsupervised learning, a dimensionality-reduction approach is used: unlike similarity-graph methods that rely on a fixed number of neighbours, a new method is introduced that allows an adaptive and appropriate choice of the number of neighbours.
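To make the piecewise-linearity statement concrete, here is a schematic form of the parametric quadratic program that underlies this family of results (the notation is illustrative and not taken from the thesis):

```latex
% Parametric quadratic program (illustrative notation): the hyperparameter
% \lambda enters only through the linear term of the objective.
\[
  \min_{\alpha \in \mathbb{R}^n} \;
     \tfrac{1}{2}\,\alpha^{\top} Q\,\alpha + (c + \lambda\, d)^{\top}\alpha
  \qquad \text{subject to} \qquad A\alpha \le b .
\]
% On any interval of \lambda over which the set of active constraints does
% not change, the KKT conditions reduce to a linear system in (\alpha, \lambda),
% so \lambda \mapsto \alpha^{*}(\lambda) is affine there.  The full solution
% path is therefore piecewise linear, with breakpoints where the active set
% changes; following the whole path costs a small multiple of one QP solve.
```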
Nouet, Christophe. "Réduction de l'ordre des systèmes continus, linéaires, via un processus d'orthogonalisation et un algorithme de gauss newton". Brest, 1994. http://www.theses.fr/1994BRES2040.
Ndongo, Fokoua Georges. "Etude expérimentale de la réduction de traînée par injection de bulles". Thesis, Brest, 2013. http://www.theses.fr/2013BRES0095/document.
This work presents an experimental study of drag reduction by bubble injection. Injecting bubbles into the developing boundary layer along a ship hull could significantly reduce the frictional resistance, by lightening the fluid along the hull and by interacting with the near-wall turbulent structures. We investigate the interactions between the bubbles, the coherent motion and the viscous torque in a Taylor-Couette flow with the outer cylinder at rest, while bubbles are injected continuously through a needle. The Reynolds number ranges up to Re ≤ 20 × 10^4; for these values, Taylor vortices persist, leading to an axial periodicity of the flow. The bubble size varies between 0.05 and 0.12 times the gap width, depending on the needle and the liquid used. An original method for tracking bubbles in a meridian plane, coupled with measurements of the overall torque applied to the inner cylinder, highlighted two drag reduction regimes and various types of bubble arrangements, depending on their size and on the Reynolds number. Bubbles can slide and waver, or be captured by the Taylor cells or in the outflow areas near the inner cylinder. Characterization of the liquid velocity by PIV, in both single-phase and two-phase flow, made it possible to study the modifications induced by the bubbles on the liquid phase and to discuss the mechanisms behind the changes in the overall torque. The study shows that for Reynolds numbers below the capture threshold, bubbles can stabilize the flow, consistent with a reduction of the viscous torque of up to 30% for the lowest void fractions (<1%). For Reynolds numbers beyond capture, bubbles trapped by the Taylor cells reduce the axial wavelength and increase the vorticity of the cells, together with an increase of the rms fluctuations; this configuration increases the viscous torque. Bubbles trapped in the outflow areas near the inner cylinder, however, increase the axial wavelength and decrease the vorticity; this configuration yields a smaller reduction of the viscous torque than the case without capture.
Simon, Frank. "Contrôle actif appliqué à la réduction du bruit interne d'aéronefs". Toulouse, ENSAE, 1997. http://www.theses.fr/1997ESAE0002.
Chouippe, Agathe. "Étude numérique de la réduction de traînée par injection de bulles en écoulement de Taylor-Couette". Thesis, Toulouse, INPT, 2012. http://www.theses.fr/2012INPT0052/document.
The study deals with drag reduction induced by bubble injection; its application concerns naval transport. The aim is to shed more light on the mechanisms involved in this wall friction reduction. The study is based on a numerical approach and uses the JADIM code with an Euler-Lagrange method: the continuous phase is solved by Direct Numerical Simulation, and the disperse phase by tracking each bubble. Within this framework we consider the Taylor-Couette flow configuration (flow between two concentric rotating cylinders). The first part of the study deals with the modification of the numerical tool in order to take into account the influence of the disperse phase on the continuous one, through forcing terms in the mass and momentum balance equations. The aim of the second part is to study the Taylor-Couette flow in its single-phase configuration, in order to provide a reference for the undisturbed flow. The third part deals with the passive dispersion of bubbles in Taylor-Couette flow, in order to analyze the migration mechanisms involved. The aim of the last part is to study the effects of the disperse phase on the continuous one, by analyzing the influence of the bubbly phase parameters (such as void fraction and buoyancy).
Zapién, Arreola Karina. "Algorithme de chemin de régularisation pour l'apprentissage statistique". Thesis, Rouen, INSA, 2009. http://www.theses.fr/2009ISAM0001/document.
The selection of a proper model is an essential task in statistical learning. In general, for a given learning task, a set of parameters has to be chosen, each parameter corresponding to a different degree of "complexity". In this situation, the model selection procedure becomes a search for the optimal "complexity", allowing us to estimate a model that ensures good generalization. This model selection problem can be summarized as the calculation of one or more hyperparameters defining the model complexity, in contrast to the parameters that specify a model within the chosen complexity class. The usual approach to determine these hyperparameters is a "grid search": given a set of possible values, the generalization error of the best model is estimated for each of them. This thesis focuses on an alternative approach consisting in calculating the complete set of possible solutions for all hyperparameter values, which is called the regularization path. It can be shown that for the problems we are interested in, parametric quadratic programming (PQP), the corresponding regularization path is piecewise linear. Moreover, its calculation is no more complex than calculating a single PQP solution. This thesis is organized in three chapters. The first one introduces the general setting of a learning problem under the Support Vector Machines (SVM) framework, together with the theory and algorithms that allow us to find a solution. The second part deals with supervised learning problems for classification and ranking using the SVM framework. It is shown that the regularization path of these problems is piecewise linear, and alternative proofs to the one of Rosset [Ross 07b] are given via the subdifferential. These results lead to the corresponding algorithms for solving the mentioned supervised problems. The third part deals with semi-supervised learning problems followed by unsupervised learning problems. For semi-supervised learning, a sparsity constraint is introduced along with the corresponding regularization path algorithm. Graph-based dimensionality reduction methods are used for unsupervised learning problems. Our main contribution is a novel algorithm that allows choosing the number of nearest neighbors in an adaptive and appropriate way, contrary to classical approaches based on a fixed number of neighbors.
Périnet, Amandine. "Analyse distributionnelle appliquée aux textes de spécialité : réduction de la dispersion des données par abstraction des contextes". Thesis, Sorbonne Paris Cité, 2015. http://www.theses.fr/2015USPCD056/document.
In specialised domains, applications such as information retrieval or machine translation rely on terminological resources to take into account terms and semantic relations between terms or groupings of terms. To cope with the cost of building these resources, automatic methods have been proposed. Among these methods, distributional analysis uses the repeated information in the contexts of the terms to detect a relation between those terms. While this hypothesis is usually implemented with vector space models, such models suffer from a very high number of dimensions and from data sparsity in the context matrix. In specialised corpora, this contextual information is even sparser and less frequent because of the smaller size of the corpora. Likewise, complex terms are usually ignored because of their very low number of occurrences. In this thesis, we tackle the problem of data sparsity in specialised texts. We propose a method that makes the context matrix denser by performing an abstraction of the distributional contexts: semantic relations acquired from corpora are used to generalise and normalise those contexts. We evaluated the robustness of the method on four corpora of different sizes, languages and domains. The analysis of the results shows that, while taking complex terms into account in the distributional analysis, the abstraction of distributional contexts leads to semantic clusters of better quality, which are also more consistent and more homogeneous.
Oulefki, Abdelhakim. "Réduction de modèles thermiques par amalgame modal". Phd thesis, Ecole Nationale des Ponts et Chaussées, 1993. http://tel.archives-ouvertes.fr/tel-00523620.
Riondet, Philippe. "Optimisation de la résolution des équations de dispersion : application au calcul des discontinuités". Toulouse, ENSAE, 1997. http://www.theses.fr/1997ESAE0024.
Christophe, Jennifer. "Etude de l'électroactivité du cuivre pour la réduction du dioxyde de carbone et pour la réduction de l'ion nitrate". Doctoral thesis, Universite Libre de Bruxelles, 2007. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210680.
In the first part of the work, we compared, by means of voltammetry, controlled-potential electrolyses and chromatographic analyses, the activity of polycrystalline and single-crystal Cu electrodes and of AuxCuy alloys of different crystal orientations for the electroreduction of CO2.
A close correlation was established between the activity of the Cu electrode and the atomic arrangement of its surface. The activity and the selectivity of Cu towards CH4 decrease in the sequence Cu(111) > Cu(100) > Cu(poly). CO2 reduction and CH4 formation are favoured on atomically smooth, dense surfaces.
The selectivity of the AuxCuy alloys shifts markedly towards CO as the surface fraction of Au increases; the 50-50 alloy leads to the exclusive formation of CO.
The second part of the thesis is devoted to the activity of bulk polycrystalline and (111)-oriented single-crystal Cu electrodes and of Cu dispersions for the electrochemical reduction of NO3-.
We highlighted the importance of pH conditions for the processes occurring at the copper electrode. In acidic medium, NO3- is directly reduced to NH4+, whereas in neutral medium the reduction of NO3- to NO2- and of NO2- to NH4+ are observed successively as a function of potential.
The activity of Cu dispersions in a polyaniline film depends strongly on the deposition conditions: copper incorporated into the film in its reduced form is more active than copper incorporated into an initially oxidised film. We also showed that the H+ concentration inside the polymer is limited; consequently, the NO3- reduction process on copper dispersed in a polyaniline film is modified in acidic medium.
A poly(3,4-ethylenedioxythiophene) film deposited on a Pt surface, on the other hand, proved inadequate as a support for Cu incorporation in the study of NO3- reduction in acidic medium.
Song, Jié. "Réduction de la dispersion numérique par correction des flux massiques : application au problème de la récupération d'hydrocarbures par procédés chimiques". Paris 6, 1992. http://www.theses.fr/1992PA066334.
Audoux, Yohann. "Développement d’une nouvelle méthode de réduction de modèle basée sur les hypersurfaces NURBS (Non-Uniform Rational B-Splines)". Thesis, Paris, ENSAM, 2019. http://www.theses.fr/2019ENAM0016/document.
Despite undeniable progress achieved in computer science over the last decades, some problems remain intractable, either because of their numerical complexity (optimisation problems, …) or because they are subject to specific constraints such as real-time processing (virtual and augmented reality, …). In this context, metamodeling techniques can minimise the computational effort needed to carry out complex multi-field and/or multi-scale simulations. The metamodeling process consists in setting up a metamodel that needs fewer resources to be evaluated than the complex model it is extracted from, while guaranteeing a minimal accuracy. Current methods generally require either the user's expertise or arbitrary choices. Moreover, they are often tailored to a specific application and can hardly be transposed to other fields. Thus, even if it is not the best in every case, our approach aims at obtaining a metamodel that remains a good one for whatever problem is at hand. The developed strategy relies on NURBS hypersurfaces and stands out from existing ones by avoiding the use of empirical criteria to set its parameters. To do so, a metaheuristic (a genetic algorithm) able to deal with optimisation problems defined over a variable number of optimisation variables sets all the hypersurface parameters automatically, so that this complexity is not transferred to the user.
Gama, Nicolas. "Géométrie des nombres et cryptanalyse de NTRU". Paris 7, 2008. http://www.theses.fr/2008PA077199.
Public-key cryptography, invented by Diffie and Hellman in 1976, is now part of everyday life: credit cards, game consoles and electronic commerce all use public-key schemes. The security of certain cryptosystems, like NTRU, is based on problems arising from the geometry of numbers, including the shortest vector problem and the closest vector problem in Euclidean lattices. While these problems are mostly NP-hard, it is still possible to compute good approximations in practice. In this thesis, we study approximation algorithms for these lattice reduction problems, which operate either in proven polynomial time or, more generally, in reasonable time. We first analyze the behaviour of these algorithms from a theoretical point of view, which allows us, for example, to build the algorithm with the best proven complexity and quality of results. We also study the practical aspects through extensive simulations, which highlight an important difference between the complexity and quality properties that can be proved and those (much better) that are achieved in practice. These simulations also allow us to correctly predict the actual behavior of lattice reduction algorithms. We study these algorithms first in the general case, and then show how to build specialized versions for the very particular lattices drawn from the NTRU cryptosystem.
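As a minimal illustration of what lattice reduction computes (a sketch, not code from the thesis), the Lagrange-Gauss algorithm below reduces a two-dimensional lattice basis; it is the dimension-2 analogue of the LLL-type algorithms discussed above, and the example basis is made up:

```python
import numpy as np

def gauss_reduce(u, v):
    """Lagrange-Gauss reduction of a 2D lattice basis (u, v): returns a basis
    of the same lattice whose first vector is a shortest nonzero vector."""
    u, v = np.asarray(u, dtype=np.int64), np.asarray(v, dtype=np.int64)
    if u @ u > v @ v:
        u, v = v, u
    while True:
        # Size-reduce v against u with the nearest-integer Gram coefficient.
        m = int(round(float(u @ v) / float(u @ u)))
        v = v - m * u
        if v @ v >= u @ u:
            return u, v
        u, v = v, u

# A very skewed basis of Z^2 is brought back to the unit vectors.
print(gauss_reduce([1, 0], [1000, 1]))   # -> (array([1, 0]), array([0, 1]))
```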
Baili, Ghaya. "Contribution à la réduction du bruit d'intensité relatif des lasers à semiconducteurs pour des applications aux radars". Paris 11, 2008. http://www.theses.fr/2008PA112362.
The objective of this thesis is to study two original techniques aiming at reducing the Relative Intensity Noise (RIN) of semiconductor lasers used in optical links for the transmission of radar signals. In the first technique, a dispersion compensating fiber exhibiting low losses is used to study the phase-to-amplitude and amplitude-to-phase noise conversion mechanisms with a very good signal-to-noise ratio over a 20 GHz bandwidth. The second technique consists in increasing the photon lifetime well above the carrier lifetime in order to adiabatically eliminate the carrier population effects, leading to relaxation-oscillation-free class-A laser operation. Two laser architectures have been proposed, theoretically analyzed and experimentally validated. The first configuration is based on a semiconductor optical amplifier in a long fibered cavity. The second one uses a ½-VCSEL in a high-Q external cavity. For both configurations, we demonstrated that class-A laser operation leads to a shot-noise-limited RIN (at -155 dB/Hz for 1 mA detected) over a frequency bandwidth from 100 MHz to 18 GHz.
Vezard, Laurent. "Réduction de dimension en apprentissage supervisé : applications à l’étude de l’activité cérébrale". Thesis, Bordeaux 1, 2013. http://www.theses.fr/2013BOR15005/document.
The aim of this work is to develop a method able to automatically determine the alertness state of humans. Such a task is relevant to diverse domains where a person is expected or required to be in a particular state. For instance, pilots, security personnel or medical personnel are expected to be in a highly alert state, and this method could help to confirm this or to detect possible problems. In this work, electroencephalographic (EEG) data of 58 subjects in two distinct vigilance states (high and low alertness) were collected via a cap with 58 electrodes, so a binary classification problem is considered. In order to use this work in real-world applications, it is necessary to build a prediction method that requires only a small number of sensors (electrodes), in order to minimize both the cap installation time and its cost. Several approaches were developed during this thesis. A first approach involves pre-processing the EEG signals with a discrete wavelet decomposition in order to extract the energy of each frequency in the signal; a linear regression is then performed on the energies of some of these frequencies and the slope of this regression is retained as a feature. A genetic algorithm (GA) is used to optimize the selection of frequencies on which the regression is performed and to select a single electrode. A second approach is based on the Common Spatial Pattern (CSP) method, which defines linear combinations of the original variables to obtain synthetic signals that are useful for the classification task. In this work, a GA and a sequential search method are proposed to select the subset of electrodes kept in the CSP calculation. Finally, a sparse CSP algorithm, based on existing work on sparse principal component analysis, was developed. The results of the different approaches are detailed and compared. This work allowed us to obtain a reliable model for fast prediction of the alertness of a new individual.
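A minimal sketch of the kind of feature described in the first approach, with assumed details: it uses an FFT band energy in place of the discrete wavelet decomposition, and the band limits, sampling rate and log scale are illustrative choices rather than the thesis settings:

```python
import numpy as np

def alertness_feature(eeg, fs=250.0, bands=((4, 8), (8, 12), (12, 30))):
    """Energy of one EEG channel in a few frequency bands, summarised by the
    slope of a straight-line fit over those energies (illustrative stand-in
    for the wavelet-energy + regression-slope feature described above)."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    energies = np.array([spectrum[(freqs >= lo) & (freqs < hi)].sum()
                         for lo, hi in bands])
    slope, _ = np.polyfit(np.arange(len(bands)), np.log(energies), 1)
    return slope

# A genetic algorithm would then search over the electrode and the set of
# bands entering this regression; here we just evaluate one synthetic signal.
rng = np.random.default_rng(0)
print(alertness_feature(rng.standard_normal(5000)))
```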
Hadjee, Gino Eric. "Gain environnemental lié à une gestion coordonnée de la charge sur les réseaux de distribution d'énergie électrique". Paris 11, 2006. http://www.theses.fr/2006PA112044.
The electricity demand has increased with new developments around the world, which puts pressure on power utilities to meet the increasing demand of customers. In electric power distribution, one simple way to meet this demand is to reinforce the power system capacity. However, this solution not only requires expensive investments from electricity distributors but also increases the cost for customers. An innovative alternative is to act at the customer level. It is therefore interesting to study management strategies for smoothing the electricity demand curve without compromising consumer comfort. Generally, load control is located at the level of a house or a group of houses. At this level, the work analyzes how a coordinated load management, based on the typology of the main electrical appliances, can reduce the peak load at the distribution network level. At the distributor level, it consists in studying the influence of the tariff profile and proposing an optimization method to modify consumer behaviour so as to shift part of the peak load to off-peak periods.
Vezard, Laurent. "Réduction de dimension en apprentissage supervisé : applications à l'étude de l'activité cérébrale". Phd thesis, Université Sciences et Technologies - Bordeaux I, 2013. http://tel.archives-ouvertes.fr/tel-00944790.
Ndongo, Fokoua Georges. "Étude expérimentale de la réduction de traînée par injection de bulles". Phd thesis, Université de Bretagne occidentale - Brest, 2013. http://tel.archives-ouvertes.fr/tel-00931382.
Delaroque, Aurélie. "Élaboration d’un outil numérique pour la réduction et l’optimisation des mécanismes cinétiques pour les systèmes de combustion". Thesis, Sorbonne université, 2018. http://www.theses.fr/2018SORUS417.
In the modeling of a combustion process, global data such as the flame speed can, under certain circumstances, be obtained with extremely reduced mechanisms. On the contrary, the prediction of detailed data such as pollutant species requires the use of detailed kinetic mechanisms involving many chemical species. Due to their size and to the presence of many different time scales, the integration of those models into complex numerical simulations is a non-trivial task. A reduction tool based on the Directed Relation Graph (DRG) and sensitivity analysis methods is proposed to tackle this issue. Reduced mechanisms fitting user-defined tolerances for quantities of interest, both global (flame speed, ignition delay, etc.) and detailed (concentration profiles), are automatically generated. The reduction process is paired with an optimisation of the reaction rates through a genetic algorithm to make up for the error induced by the loss of information; this process can use both numerical and experimental reference data. The complete numerical tool has been tested on several canonical configurations for several fuels (methane, ethane and n-heptane), and reduction rates of up to 90% have been observed.
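A minimal sketch of the graph-search step on which a DRG-type reduction relies (the coupling coefficients, threshold and target species below are made up; the actual tool also combines this with sensitivity analysis and genetic-algorithm optimisation):

```python
import numpy as np
from collections import deque

def drg_select(r, targets, eps=0.1):
    """Directed Relation Graph species selection (illustrative skeleton):
    r[a, b] is the normalised contribution of species b to the production
    of species a.  Keep every species reachable from the targets through
    edges whose coefficient exceeds the threshold eps."""
    keep, queue = set(targets), deque(targets)
    while queue:
        a = queue.popleft()
        for b in range(r.shape[0]):
            if b not in keep and r[a, b] >= eps:
                keep.add(b)
                queue.append(b)
    return sorted(keep)

# Tiny 4-species example; species 0 plays the role of the fuel (target).
r = np.array([[0.0, 0.8, 0.05, 0.0],
              [0.3, 0.0, 0.6,  0.0],
              [0.0, 0.1, 0.0,  0.02],
              [0.0, 0.0, 0.9,  0.0]])
print(drg_select(r, targets=[0]))   # -> [0, 1, 2]: species 3 is dropped
```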
Stehlé, Damien. "Algorithmique de la réduction de réseaux et application à la recherche de pires cas pour l'arrondi de fonctions mathématiques". Phd thesis, Université Henri Poincaré - Nancy I, 2005. http://tel.archives-ouvertes.fr/tel-00011150.
Euclidean lattice reduction is used in several areas of algorithmics, for example in cryptography and in algorithmic number theory. The purpose of this dissertation is twofold: we improve lattice reduction algorithms, and we develop a new application in the field of computer arithmetic. On the algorithmic side, we consider the case of small dimensions (dimension one, where the problem is gcd computation, and dimensions 2 to 4), as well as the description of a new variant of the LLL algorithm in arbitrary dimension. On the application side, we use Coppersmith's method for finding the small roots of multivariate modular polynomials in order to compute the worst cases for the rounding of mathematical functions, when the function, the rounding mode and the precision are given. We also adapt our technique to simultaneous bad cases for two functions. Both methods are expensive pre-computations which, once carried out, make it possible to speed up implementations of elementary mathematical functions in a fixed precision, for example in double precision. Most of the algorithms described in this dissertation have been validated experimentally by implementations, which are available at http://www.loria.fr/~stehle.
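For context, here is a schematic statement of what a worst case for correct rounding is (a standard formulation of the table maker's dilemma, not a claim about the thesis's exact definitions):

```latex
% F_p : binary floating-point numbers with p-bit significands.
% B_p : rounding breakpoints (representable numbers for directed roundings,
%       midpoints between consecutive ones for rounding to nearest).
\[
  \text{find } x \in F_p \ \text{ such that } \quad
  \operatorname{dist}\bigl(f(x),\, B_p\bigr) \;<\; 2^{-k}\,\mathrm{ulp}\bigl(f(x)\bigr).
\]
% The largest k attained over all admissible x indicates how many extra bits
% of intermediate precision are enough to guarantee correct rounding of f in
% precision p; the lattice-based search mentioned above is used to find such x.
```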
Vezard, Laurent. "Réduction de dimension en apprentissage supervisé. Application à l'étude de l'activité cérébrale". Phd thesis, Université Sciences et Technologies - Bordeaux I, 2013. http://tel.archives-ouvertes.fr/tel-00926845.
Denton, Patricia. "Réduction des NOx par C3H6 en atmosphère oxydante en présence de catalyseurs Pt-SiO2 et Pt-Al2O3 à porosité et dispersion de platine variables : modifications sous mélange réactionnel et approche mécanistique". Lyon 1, 1999. http://www.theses.fr/1999LYO10329.
Korsakissok, Irène. "Changements d'échelle en modélisation de la qualité de l'air et estimation des incertitudes associées". Phd thesis, Université Paris-Est, 2009. http://tel.archives-ouvertes.fr/tel-00596384.
Salazar, Veronica. "Etude des propriétés physiques des aérosols de la moyenne et haute atmosphère à partir d'une nouvelle analyse des observations du GOMOS-ENVISAT pour la période 2002-2006". Phd thesis, Université d'Orléans, 2010. http://tel.archives-ouvertes.fr/tel-00608052.
Giacofci, Joyce. "Classification non supervisée et sélection de variables dans les modèles mixtes fonctionnels. Applications à la biologie moléculaire". Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENM025/document.
More and more scientific studies yield large amounts of data that consist of sets of curves recorded on individuals. These data can be seen as an extension of longitudinal data in high dimension and are often modeled as functional data in a mixed-effects framework. In a first part we focus on performing unsupervised clustering of these curves in the presence of inter-individual variability. To this end, we develop a new procedure based on a wavelet representation of the model, for both fixed and random effects. Our approach follows two steps: a dimension reduction step, based on wavelet thresholding techniques, is first performed, and a clustering step is then applied to the selected coefficients. An EM algorithm is used for maximum likelihood estimation of the parameters. The properties of the overall procedure are validated by an extensive simulation study. We also illustrate our method on high-throughput molecular data (omics data) such as microarray CGH or mass spectrometry data. Our procedure is available through the R package "curvclust", available on the CRAN website. In a second part, we concentrate on estimation and dimension reduction issues in the mixed-effects functional framework. Two distinct approaches are developed according to these issues. The first approach deals with parameter estimation in a nonparametric setting. We demonstrate that the functional fixed-effects estimator based on wavelet thresholding techniques achieves the expected rate of convergence toward the true function. The second approach is dedicated to the selection of both fixed and random effects. We propose a method based on a penalized likelihood criterion with SCAD penalties for the estimation and the selection of both fixed effects and random-effects variances. In the context of variable selection, we prove that the penalized estimators enjoy the oracle property when the signal size diverges with the sample size. A simulation study is carried out to assess the behaviour of the two proposed approaches.
Fahlaoui, Tarik. "Réduction de modèles et apprentissage de solutions spatio-temporelles paramétrées à partir de données : application à des couplages EDP-EDO". Thesis, Compiègne, 2020. http://www.theses.fr/2020COMP2535.
In this thesis, an algorithm for learning an accurate reduced order model from data generated by a high-fidelity solver (HF solver) is proposed. To achieve this goal, we use both Dynamic Mode Decomposition (DMD) and Proper Orthogonal Decomposition (POD). Anomaly detection during the learning process can easily be done by performing an a posteriori spectral analysis of the learnt reduced order model. Several extensions are presented to make the method as general as possible, handling for instance coupled ODE/PDE systems or second-order hyperbolic equations. The method is also extended to the case of switched control systems, where the switching rule is learnt by using an Artificial Neural Network (ANN). The learnt reduced order model allows predicting the time evolution of the POD coefficients. However, the POD coefficients have no interpretable meaning. To tackle this issue, we propose an interpretable reduction method using the Empirical Interpolation Method (EIM). This reduction method is then adapted to the case of third-order tensors and, combined with Kernel Ridge Regression (KRR), allows learning the solution manifold in the case of parametrized PDEs. In this way, we can learn a parametrized reduced order model. The case of non-linear PDEs or disturbed data is finally presented as an opening.
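A minimal sketch of the DMD building block mentioned above (generic exact DMD on a snapshot matrix; the data, rank and names are illustrative, and this is not the thesis code, which couples DMD with POD and further extensions):

```python
import numpy as np

def dmd(snapshots, rank):
    """Exact DMD of a snapshot matrix (one column per time step): fit a
    best low-rank linear map x_{k+1} ~ A x_k and return its spectrum."""
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, V = U[:, :rank], s[:rank], Vh[:rank].conj().T
    A_tilde = U.conj().T @ Y @ V / s            # reduced linear operator
    eigvals, W = np.linalg.eig(A_tilde)         # DMD eigenvalues
    modes = Y @ V / s @ W                       # DMD modes
    return eigvals, modes

# A damped oscillation is identified from its two-component snapshots.
t = np.linspace(0.0, 10.0, 200)
data = np.vstack([np.cos(2 * t) * np.exp(-0.1 * t),
                  np.sin(2 * t) * np.exp(-0.1 * t)])
eigvals, _ = dmd(data, rank=2)
print(np.log(eigvals) / (t[1] - t[0]))          # ~ -0.1 +/- 2j (decay rate, frequency)
```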
De, Barros Louis. "Sensibilité et inversion des formes d'ondes sismiques en milieu poreux stratifié". Grenoble 1, 2007. http://www.theses.fr/2007GRE10275.
Characterisation of porous media parameters, particularly the porosity, permeability and fluid properties, is very useful in many applications. The aim of this study is to evaluate the possibility to determine these properties from the full seismic wave fields. First, I present the poro-elastic parameters and the specific properties of the seismic waves in porous media. Seismic wave propagation is then computed in fluid-saturated stratified porous media with a reflectivity method. This computation is first used to study the possibility to determine the carbon dioxide concentration and localisation in the case of a deep geological storage. The sensitivity of the seismic response to the poro-elastic parameters is then generalized by the analytical computation of the Fréchet derivatives, which are expressed in terms of the Green's functions of the unperturbed medium. The sensitivity operators are then introduced in an inversion algorithm based on iterative modelling of the full waveform. The classical generalized least-squares inverse problem is solved by the quasi-Newton technique (Tarantola, 1984). The inversion of synthetic data shows that we can invert for the porosity, and that the fluid and solid parameters can be correctly reconstructed if the other parameters are well known. However, the strong seismic coupling of the porous parameters makes it difficult to invert simultaneously for several parameters. Finally, I apply this algorithm to real data recorded at the coastal site of Maguelonne (Hérault).
Barkouki, Houda. "Rational Lanczos-type methods for model order reduction". Thesis, Littoral, 2016. http://www.theses.fr/2016DUNK0440/document.
Numerical solution of dynamical systems has been a successful means for studying complex physical phenomena. However, in a large-scale setting, the system dimension makes the computations infeasible due to memory and time limitations and to ill-conditioning. The remedy for this problem is model reduction. This dissertation focuses on projection methods to efficiently construct reduced order models for large linear dynamical systems. Especially, we are interested in projection onto unions of Krylov subspaces, which leads to a class of reduced order models known as rational interpolation. Based on this theoretical framework that relates Krylov projection to rational interpolation, four rational Lanczos-type algorithms for model reduction are proposed. At first, an adaptive rational block Lanczos-type method for reducing the order of large-scale dynamical systems is introduced, based on a rational block Lanczos algorithm and an adaptive approach for choosing the interpolation points. A generalization of the first algorithm is also given, where different multiplicities are considered for each interpolation point. Next, we propose another extension of the standard Krylov subspace method for Multiple-Input Multiple-Output (MIMO) systems, namely the global Krylov subspace, and we obtain equations that describe this process. Finally, an extended block Lanczos method is introduced and new algebraic properties for this algorithm are given. The accuracy and the efficiency of all proposed algorithms applied to the model order reduction problem are tested by means of numerical experiments on a collection of well-known benchmark examples.
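A minimal single-interpolation-point sketch of the kind of Krylov projection this line of work builds on (one-sided, one shift, dense linear algebra; the block, global and adaptive rational Lanczos algorithms of the thesis are considerably more elaborate):

```python
import numpy as np

def rational_krylov_rom(A, B, C, s0, m):
    """One-sided rational Krylov reduction of x' = Ax + Bu, y = Cx around a
    single shift s0: the reduced model matches moments of the transfer
    function H(s) = C (sI - A)^{-1} B at s0."""
    n = A.shape[0]
    w = np.linalg.solve(s0 * np.eye(n) - A, B)
    V = [w / np.linalg.norm(w)]
    for _ in range(m - 1):
        w = np.linalg.solve(s0 * np.eye(n) - A, V[-1])
        for v in V:                              # Gram-Schmidt against the basis
            w = w - (v.T @ w) * v
        V.append(w / np.linalg.norm(w))
    V = np.hstack(V)
    return V.T @ A @ V, V.T @ B, C @ V           # reduced (Ar, Br, Cr)

# Reduce a random stable single-input single-output system from order 100 to 6.
rng = np.random.default_rng(1)
A = -np.eye(100) + 0.05 * rng.standard_normal((100, 100))
B, C = rng.standard_normal((100, 1)), rng.standard_normal((1, 100))
Ar, Br, Cr = rational_krylov_rom(A, B, C, s0=1.0, m=6)
print(Ar.shape, Br.shape, Cr.shape)              # (6, 6) (6, 1) (1, 6)
```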
Luu, Thi Hieu. "Amélioration du modèle de sections efficaces dans le code de cœur COCAGNE de la chaîne de calculs d'EDF". Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066120/document.
In order to optimize the operation of its nuclear power plants, the EDF R&D department is currently developing a new calculation chain to simulate the nuclear reactor cores with state-of-the-art tools. These calculations require a large amount of physical data, especially the cross-sections. In a full core simulation, the number of cross-section values is of the order of several billions. These cross-sections can be represented as multivariate functions depending on several physical parameters. The determination of cross-sections is a long and complex calculation, so they can be pre-computed for some values of the parameters (offline calculations) and then evaluated at all desired points by interpolation (online calculations). This process requires a model of cross-section reconstruction between the two steps. In order to perform a more faithful core simulation in the new EDF chain, the cross-sections need to be better represented by taking into account new parameters. Moreover, the new chain must be able to compute the reactor in more extensive situations than the current one. Multilinear interpolation is currently used to reconstruct cross-sections, but with this model the number of points in its discretization increases exponentially as a function of the number of parameters, or significantly when adding points to one of the axes. Consequently, the number and duration of the pre-calculations as well as the storage size for these data become problematic. The goal of this thesis is therefore to find a new model that meets the following requirements: (i) (offline) reduce the number of pre-calculations, (ii) (offline) reduce the stored data size needed for the reconstruction, and (iii) (online) maintain (or improve) the accuracy obtained by multilinear interpolation. From a mathematical point of view, this problem involves approximating multivariate functions from their pre-calculated values. We based our research on the Tucker format, a low-rank tensor approximation, in order to propose a new model called the Tucker decomposition. With this model, a multivariate function is approximated by a linear combination of tensor products of one-variate functions. These one-variate functions are constructed by a technique called higher-order singular value decomposition (a "matricization" combined with an extension of the Karhunen-Loève decomposition). A greedy algorithm is used to select the points involved in computing the coefficients of the Tucker decomposition. The results obtained show that our model satisfies the criteria required for the reduction of the data as well as for the accuracy. With this model, we can also eliminate coefficients of the Tucker decomposition, a priori and a posteriori, in order to further reduce the data storage without significantly reducing the accuracy.
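To fix ideas, here is a generic truncated higher-order SVD, which is one standard way to build the factors of a Tucker approximation (an illustrative sketch on a synthetic table, not EDF's cross-section model or the greedy coefficient selection described above):

```python
import numpy as np

def unfold(T, mode):
    """Mode-n matricization of a tensor."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_dot(T, M, mode):
    """Mode-n product T x_n M, where M has shape (new_dim, T.shape[mode])."""
    return np.moveaxis(np.tensordot(M, T, axes=(1, mode)), 0, mode)

def hosvd(T, ranks):
    """Truncated higher-order SVD: factors from the unfoldings of T, core by
    projection, so that T ~ core x_1 U1 x_2 U2 x_3 U3 ..."""
    factors = [np.linalg.svd(unfold(T, k), full_matrices=False)[0][:, :r]
               for k, r in enumerate(ranks)]
    core = T
    for k, U in enumerate(factors):
        core = mode_dot(core, U.T, k)
    return core, factors

# A smooth 20x20x20 "table" compresses well at multilinear rank (3, 3, 3).
x = np.linspace(0.0, 1.0, 20)
T = 1.0 / (1.0 + np.add.outer(np.add.outer(x, 2 * x), 3 * x))
core, factors = hosvd(T, ranks=(3, 3, 3))
approx = core
for k, U in enumerate(factors):
    approx = mode_dot(approx, U, k)
print(np.linalg.norm(T - approx) / np.linalg.norm(T))   # small relative error
```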
Tran, Quang Huy. "Résolution et étude numériques de quelques problèmes de propagation d'ondes acoustiques en géophysique". Paris, ENMP, 1994. http://www.theses.fr/1994ENMP0494.
Giacofci, Madison. "Classification non supervisée et sélection de variables dans les modèles mixtes fonctionnels. Applications à la biologie moléculaire". Phd thesis, Université de Grenoble, 2013. http://tel.archives-ouvertes.fr/tel-00987441.
Girardet, Brunilde. "Trafic aérien : détermination optimale et globale des trajectoires d'avion en présence de vent". Thesis, Toulouse, INSA, 2014. http://www.theses.fr/2014ISAT0027/document.
In the context of the future Air Traffic Management (ATM) system, one objective is to reduce the environmental impact of air traffic. With respect to this criterion, the "free route" concept, introduced in the mid-1990s, is well suited to improve on today's airspace-based ATM: aircraft will no longer be restricted to fly along airways and may follow fuel-optimal routes. The objective of this thesis is to introduce a novel pre-tactical trajectory planning methodology which aims at minimizing airspace congestion while taking into account weather conditions, so as to also minimize fuel consumption. The development of the method was divided into two steps. The first step is dedicated to computing a time-optimal route for one aircraft taking wind conditions into account; this optimization is based on an adaptation of the Ordered Upwind Method on the sphere. The second step introduces a hybrid algorithm, based on simulated annealing and on the deterministic algorithm developed in the first step, in order to minimize congestion. The algorithm thus combines the ability to reach a globally optimal solution with a local-search procedure that speeds up the convergence.
Joubert, Christophe. "Vérification distribuée à la volée de grands espaces d'états". Phd thesis, Grenoble INPG, 2005. http://tel.archives-ouvertes.fr/tel-00011939.
Do, Thanh Ha. "Sparse representations over learned dictionary for document analysis". Thesis, Université de Lorraine, 2014. http://www.theses.fr/2014LORR0021/document.
In this thesis, we focus on how sparse representations can help to increase the performance of noise removal, text region extraction, pattern recognition and symbol spotting in graphical documents. To do that, first of all, we give a survey of sparse representations and their applications in image processing. Then, we present the motivation for building a learned dictionary and efficient algorithms for constructing it. After describing the general idea of sparse representations and learned dictionaries, we bring some contributions in the field of symbol recognition and document processing that achieve better performances compared to the state of the art. These contributions begin by finding the answers to the following questions. The first question is how we can remove the noise of a document when we have no assumptions about the noise model found in these images. The second question is how sparse representations over a learned dictionary can separate the text and graphic parts in a graphical document. The third question is how sparse representations can be applied to symbol recognition. We complete this thesis by proposing an approach for symbol spotting that uses sparse representations for the coding of a visual vocabulary.
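A minimal sketch of the sparse-coding step that such approaches rely on, here with Orthogonal Matching Pursuit over a random dictionary (a generic illustration; the thesis uses learned dictionaries and its own algorithmic choices):

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedy k-sparse coding of y over the
    dictionary D (columns = atoms)."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))       # most correlated atom
        support.append(j)
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs            # re-fit on the support
    x = np.zeros(D.shape[1])
    x[support] = coeffs
    return x

# Recover a 3-sparse code from a random overcomplete dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)
x_true = np.zeros(256); x_true[[5, 100, 200]] = [1.0, -2.0, 1.5]
x_hat = omp(D, D @ x_true, k=3)
print(np.nonzero(x_hat)[0])     # should contain 5, 100, 200
```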
Zniyed, Yassine. "Breaking the curse of dimensionality based on tensor train : models and algorithms". Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS330.
Massive and heterogeneous data processing and analysis have been clearly identified by the scientific community as key problems in several application areas, popularized under the generic terms of "data science" or "big data". Processing large volumes of data, extracting their hidden patterns and performing prediction and inference tasks has become crucial in economy, industry and science. Treating each set of measured data independently is clearly a reductive approach: by doing so, "hidden relationships" or inter-correlations between the datasets may be totally missed. Tensor decompositions have received particular attention recently due to their capability to handle a variety of mining tasks applied to massive datasets, being a pertinent framework that takes into account the heterogeneity and multi-modality of the data. In this case, data can be arranged as a D-dimensional array, also referred to as a D-order tensor. In this context, the purpose of this work is to guarantee the following properties: (i) stable factorization algorithms (not suffering from convergence problems), (ii) a low storage cost (i.e., the number of free parameters must be linear in D), and (iii) a graph formalism allowing a simple but rigorous mental visualization of decompositions of high-order tensors, i.e., for D > 3. We therefore rely on the tensor train (TT) decomposition to develop new TT factorization algorithms and new equivalences in terms of tensor modeling, allowing a new dimensionality reduction strategy and a coupled least-squares criterion optimization for parameter estimation, named JIRAFE. This methodological work has found applications in the context of multidimensional spectral analysis and relay telecommunication systems.
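A minimal TT-SVD sketch, showing how a D-order tensor is factored into a train of 3-way cores by successive truncated SVDs (illustrative only; the JIRAFE strategy described above goes further than this basic construction):

```python
import numpy as np

def tt_svd(T, eps=1e-10):
    """TT-SVD: factor a D-order tensor into a tensor train (a list of 3-way
    cores) by successive truncated SVDs of reshaped matrices."""
    dims = T.shape
    cores, r_prev = [], 1
    M = T.reshape(dims[0], -1)
    for d in range(len(dims) - 1):
        U, s, Vh = np.linalg.svd(M, full_matrices=False)
        r = max(1, int(np.sum(s > eps * s[0])))          # truncation rank
        cores.append(U[:, :r].reshape(r_prev, dims[d], r))
        M = (s[:r, None] * Vh[:r]).reshape(r * dims[d + 1], -1)
        r_prev = r
    cores.append(M.reshape(r_prev, dims[-1], 1))
    return cores

# A 4-order tensor built with TT-ranks (2, 2, 2) is recovered with small cores.
rng = np.random.default_rng(2)
G = [rng.standard_normal(s) for s in [(1, 5, 2), (2, 6, 2), (2, 7, 2), (2, 8, 1)]]
T = np.einsum('aib,bjc,ckd,dle->ijkl', *G)
print([c.shape for c in tt_svd(T)])   # [(1, 5, 2), (2, 6, 2), (2, 7, 2), (2, 8, 1)]
```

For bounded TT-ranks, the total number of core entries grows only linearly with the order D, which is the storage property highlighted in the abstract.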
Hijazi, Samah. "Semi-supervised Margin-based Feature Selection for Classification". Thesis, Littoral, 2019. http://www.theses.fr/2019DUNK0546.
Feature selection is a preprocessing step crucial to the performance of machine learning algorithms. It allows reducing computational costs, improving classification performance and building simple and understandable models. Recently, the use of pairwise constraints, a cheaper kind of supervision information that does not need to reveal the class labels of data points, has received a great deal of interest in the domain of feature selection. Accordingly, we first proposed a semi-supervised margin-based constrained feature selection algorithm called Relief-Sc. It is a modification of the well-known Relief algorithm from its optimization perspective: it uses cannot-link constraints only and solves a simple convex problem in closed form, giving a unique solution. However, we noticed that in the literature these pairwise constraints are generally provided passively and generated randomly over multiple algorithmic runs over which the results are averaged. This leads to the need for a large number of constraints that might be redundant, unnecessary, and under some circumstances even inimical to the algorithm's performance. It also masks the individual effect of each constraint set and introduces a human labor cost. Therefore, we suggested a framework for actively selecting and then propagating constraints for feature selection, making use of the similarity matrix based on the graph Laplacian. We assumed that when a small perturbation of the similarity value between a data couple leads to a better-separated cluster indicator based on the second eigenvector of the graph Laplacian, this couple is expected to be a pairwise query of higher and more significant impact. Constraint propagation, on the other hand, increases the supervision information while decreasing the cost of human labor. Besides, to handle feature redundancy, we proposed extending Relief-Sc to a feature selection approach that combines feature clustering and hypothesis-margin maximization. This approach is able to deal with the two core aspects of feature selection, i.e., maximizing relevancy while minimizing redundancy (maximizing diversity) among features. Eventually, we experimentally validated the proposed algorithms against other known feature selection methods on multiple well-known UCI benchmark datasets, with prominent results: with only little supervision information, the proposed algorithms proved to be comparable to supervised feature selection algorithms and superior to unsupervised ones.
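To make the hypothesis-margin idea concrete, here is plain, fully supervised Relief feature weighting (an illustrative baseline only: Relief-Sc replaces the class labels by cannot-link constraints and solves a closed-form convex problem instead):

```python
import numpy as np

def relief_scores(X, y, n_iter=200, rng=None):
    """Classic Relief weighting: for random samples, reward features that are
    close to the nearest same-class point and far from the nearest
    other-class point (the hypothesis margin)."""
    rng = rng or np.random.default_rng(0)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        i = rng.integers(n)
        dist = np.abs(X - X[i]).sum(axis=1)
        dist[i] = np.inf
        same, diff = (y == y[i]), (y != y[i])
        hit = np.argmin(np.where(same, dist, np.inf))     # nearest same-class
        miss = np.argmin(np.where(diff, dist, np.inf))    # nearest other-class
        w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return w / n_iter

# Feature 0 carries the class signal; its weight comes out largest.
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 300)
X = np.c_[y + 0.1 * rng.standard_normal(300), rng.standard_normal((300, 4))]
print(np.argmax(relief_scores(X, y)))   # -> 0
```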
Ribault, Clément. "Méthode d'optimisation multicritère pour l'aide à la conception des projets de densification urbaine". Thesis, Lyon, 2017. http://www.theses.fr/2017LYSEI084/document.
The world's population is facing expanding urbanization. This urban sprawl, which is often not well managed, endangers the environment as well as human health, quality of life and food security. It can be controlled by favouring urban densification. Nonetheless, the complexity of the phenomena involved in such a context leads us to think that supervisors of urban densification operations need tools to help them make the most relevant choices. This thesis begins with a literature review that shows the ideal tool does not exist, and explains why multi-objective optimization using a genetic algorithm is a suitable technique for building design aid. Then we clarify the desirable features of an assistance method for designers of urban densification projects. We recommend basing this method on the coupling of a genetic algorithm with a district-scale dynamic thermal simulation (DTS) tool. We compare the capabilities of the EnergyPlus (E+) and Pleiades+COMFIE (P+C) DTS software against these requirements, then present a first urban densification project optimization test associating EnergyPlus with a genetic algorithm. The platform under development in the ANR MERUBBI project can offset certain shortcomings of this method. Hence, in a second phase we analyze the results of a comparative study of P+C, E+ and the MERUBBI tool, carried out using a high-density district densification project as a test case. It shows that the latter is reliable and particularly relevant to precisely assess interactions between buildings. In a third phase we address the problem of reducing the computing time, a major issue in making our design aid method truly accessible to building professionals. We propose a way of reducing the operating period length and present it in detail. Finally, our optimization method is used to solve various design problems of the above-mentioned project, using E+. We show how the use of the MERUBBI platform will enrich this approach, before concluding with development ideas to make our method more user-friendly and interactive.
Edwige, Stéphie. "Modal analysis and flow control for drag reduction on a Sport Utility Vehicle". Thesis, Paris, CNAM, 2019. http://www.theses.fr/2019CNAM1233/document.
The automotive industry dedicates a lot of effort to improving the aerodynamic performance of road vehicles in order to reduce their carbon footprint. In this context, the target of the present work is to analyze the origin of aerodynamic losses on a reduced-scale generic Sport Utility Vehicle and to achieve a drag reduction using an active flow control strategy. After an experimental characterization of the flow past the POSUV, a cross-modal DMD analysis is used to identify the correlated periodic features responsible for the tailgate pressure loss. Thanks to a genetic algorithm procedure, a 20% gain on the tailgate pressure is obtained with optimal pulsed blowing jets on the rear bumper. The same cross-modal methodology improves our understanding of the actuation mechanism. After a preliminary study of the 25° inclined ramp and of the Ahmed body computations, the numerical simulation of the POSUV is corroborated with experiments using the cross-modal method. Deeper investigations of the three-dimensional flow characteristics explain more accurately the wake flow behavior. Finally, the controlled flow simulations offer additional insights into the actuation mechanisms that allow reducing the aerodynamic losses.
Negrea, Andrei Liviu. "Optimization of energy efficiency for residential buildings by using artificial intelligence". Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEI090.
Consumption, in general, is the process of using a type of resource for which savings need to be made. Energy consumption has become one of the main issues of urbanization and the energy crisis, as fossil fuel depletion and global warming threaten the planet's energy use. In this thesis, an automatic control of energy was developed to reduce energy consumption in residential areas and passive-house buildings. A mathematical model founded on empirical measurements was developed to describe the behavior of a testing laboratory of Universitatea Politehnica din București (Politehnica University of Bucharest, Romania). The experimental protocol involved actions such as building a database of the building parameters, collecting weather data, and taking auxiliary flows into account while considering the controlling factors. The control algorithm drives the system so as to maintain a comfortable temperature within the building with minimum energy consumption. Measurements and data acquisition were set up on two different levels: weather data and building data. The collected data are gathered on a server installed in the testing facility, which runs a complex algorithm that controls energy consumption. The thesis reports several numerical methods for estimating the energy consumption that are further used with the control algorithm. An experimental showcase based on dynamic calculation methods for building energy performance assessment was carried out in Granada, Spain, and this information was later used in this thesis. Estimation of the model parameters (resistances and capacities) with prediction of the heat flow was performed using a nodal method, based on physical elements, input data and weather information. Prediction of energy consumption using state-space modeling shows improved results, while IoT data collection was handled by a Raspberry Pi system. All these results were stable, showing clear progress in the prediction of energy consumption and its application in the energy field.
Delestre, Barbara. "Reconstruction 3D de particules dans un écoulement par imagerie interférométrique". Electronic Thesis or Diss., Normandie, 2022. http://www.theses.fr/2022NORMR116.
Texto completoFor many industrial or environmental applications, it is important to measure the size and volume of irregularly shaped particles. This is for example the case in the context of aircraft icing during flights, where it is necessary to measure in situ the water content and the ice content in the troposphere in order to detect and avoid risk areas. Our interest has been in interferometric out-of-focus imaging, an optical technique offering many advantages (wide measurement field, extended range of studied sizes [50 μm to a few millimeters], particle-to-instrument distance of several tens of centimeters, ...). During this thesis, we showed that the 3D reconstruction of a particle can be performed from a set of three interferometric images of this particle (under three perpendicular viewing angles). This can be done using the error-reduction (ER) algorithm, which recovers the function f(x,y) from measurements of the modulus of its 2D Fourier transform |F(u,v)| by reconstructing the phase of that transform. The implementation of this algorithm allowed us to reconstruct the shape of irregular particles from their interferometric images. Experimental demonstrations were carried out using a specific setup based on a micro-mirror array (DMD), which generates the interferometric images of programmable rough particles. The results obtained are very encouraging. The volumes obtained remain quite close to the real volume of the particle, and the reconstructed 3D shapes give a good idea of the general shape of the studied particle, even in the most extreme cases where the orientation of the particle is arbitrary. Finally, we showed that an accurate 3D reconstruction of a "programmed" rough particle can be performed from a set of 120 interferometric images.
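To make the phase-retrieval step more concrete, here is a minimal sketch of the classic error-reduction (Gerchberg–Saxton/Fienup-type) iteration, which recovers a non-negative object confined to a known support from the modulus of its 2D Fourier transform. The support mask and the iteration count are assumptions for the example; the thesis's actual three-view reconstruction pipeline is more elaborate.

```python
import numpy as np

def error_reduction(fourier_modulus, support, n_iter=200, seed=0):
    """Error-reduction phase retrieval: estimate f(x, y) >= 0 restricted to
    `support` from measurements of |F(u, v)|. Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    phase = np.exp(2j * np.pi * rng.random(fourier_modulus.shape))
    G = fourier_modulus * phase                          # initial Fourier-domain estimate
    for _ in range(n_iter):
        g = np.fft.ifft2(G).real                         # back to object domain
        g[~support] = 0.0                                # support constraint
        g[g < 0] = 0.0                                   # non-negativity constraint
        G = np.fft.fft2(g)
        G = fourier_modulus * np.exp(1j * np.angle(G))   # re-impose measured modulus
    return g
```

At each iteration the object-domain constraints (support, positivity) and the Fourier-domain modulus constraint are enforced in turn, which is exactly the alternating-projection structure of the ER algorithm named in the abstract.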
Ben, Kahla Haithem. "Sur des méthodes préservant les structures d'une classe de matrices structurées". Thesis, Littoral, 2017. http://www.theses.fr/2017DUNK0463/document.
Texto completoThe classical linear algebra methods for computing eigenvalues and eigenvectors of a matrix, lower-rank approximations of a solution, etc., do not take the structure of the matrices into account. Such structures are usually destroyed in the numerical process. Alternative structure-preserving methods are therefore of considerable interest to the community. This thesis contributes to this field. The SR decomposition is usually implemented via the symplectic Gram-Schmidt algorithm. As in the classical case, a loss of orthogonality can occur. To remedy this, we have proposed two algorithms, RSGSi and RMSGSi, in which the reorthogonalization of the current set of vectors against the previously computed set is performed twice. The loss of J-orthogonality is significantly reduced. A direct rounding-error analysis of the symplectic Gram-Schmidt algorithm is very hard to carry out. We managed to get around this difficulty and give error bounds on the loss of J-orthogonality and on the factorization. Another way to implement the SR decomposition is based on symplectic Householder transformations. An optimal choice of the free parameters provided an optimal version of the algorithm, SROSH. However, the latter may be subject to numerical instability. We have proposed a new modified version, SRMSH, which has the advantage of being numerically more stable. A detailed study leads to two numerically more stable variants: SRMSH and SRMSH2. In order to build an SR algorithm of complexity O(n³), where 2n is the size of the matrix, a reduction to a condensed matrix form (upper J-Hessenberg form) via adequate similarity transformations is crucial. This reduction may be handled via the JHESS algorithm. We have shown that it is possible to reduce a general matrix to upper J-Hessenberg form using only symplectic Householder transformations. The new algorithm, called JHSH, is based on an adaptation of the SRSH algorithm. We are led to two new variants, JHMSH and JHMSH2, which are significantly more stable numerically. We found that these algorithms behave quite similarly to the JHESS algorithm. The main drawback of all these algorithms (JHESS, JHMSH, JHMSH2) is that they may encounter fatal breakdowns or suffer from a severe form of near-breakdown, causing an abrupt stop of the computations or leading to serious numerical instability. This phenomenon has no equivalent in the Euclidean case. We sketch out a very efficient strategy for curing fatal breakdowns and treating near-breakdowns. The new algorithms incorporating this modification are referred to as MJHESS, MJHSH, JHM²SH and JHM²SH2. These strategies were then incorporated into the implicit version of the SR algorithm to overcome the difficulties caused by fatal breakdowns or near-breakdowns. We recall that without these strategies, the SR algorithm breaks down. Finally, and in another framework of structured matrices, we presented a robust algorithm, via FFT and a Hankel matrix, based on computing approximate greatest common divisors (GCD) of polynomials, for solving the problem of blind image deconvolution. Specifically, we designed a specialized algorithm for computing the GCD of bivariate polynomials. The new algorithm is based on the fast GCD algorithm for univariate polynomials, of quadratic complexity O(n²) flops. The complexity of our algorithm is O(n² log(n)), where the size of the blurred images is n × n.
Experimental results with synthetically blurred images are included to illustrate the effectiveness of our approach.
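The reorthogonalization idea behind the RSGSi/RMSGSi algorithms described above can be illustrated, in the ordinary Euclidean setting rather than the symplectic (J-orthogonal) one treated in the thesis, by the classical "orthogonalize twice" Gram–Schmidt variant sketched below; the input matrix A is a placeholder for the example.

```python
import numpy as np

def cgs2(A):
    """Classical Gram-Schmidt with one full re-orthogonalization pass
    ("twice is enough"): each new column is orthogonalized twice against
    the previously computed columns of Q. Euclidean illustration of the
    principle only; the thesis works with the indefinite J-inner product."""
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j].copy()
        for _ in range(2):                  # orthogonalize, then re-orthogonalize
            c = Q[:, :j].T @ v
            R[:j, j] += c
            v -= Q[:, :j] @ c
        R[j, j] = np.linalg.norm(v)
        Q[:, j] = v / R[j, j]
    return Q, R
```

The second pass is what keeps the computed Q close to orthogonal in finite precision, which is the same mechanism the abstract credits for the reduced loss of J-orthogonality.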
Boisvert, Maryse. "Réduction de dimension pour modèles graphiques probabilistes appliqués à la désambiguïsation sémantique". Thèse, 2004. http://hdl.handle.net/1866/16639.
Texto completoChartrand-Lefebvre, Carl. "Réduction des artéfacts de tuteur coronarien au moyen d’un algorithme de reconstruction avec renforcement des bords : étude prospective transversale en tomodensitométrie 256 coupes". Thèse, 2015. http://hdl.handle.net/1866/13870.
Texto completoMetallic artifacts can result in an artificial thickening of the coronary stent wall, which can significantly impair computed tomography (CT) imaging in patients with coronary stents. The purpose of this study is to assess the in vivo visualization of the coronary stent wall and lumen with an edge-enhancing CT reconstruction kernel, as compared to a standard kernel. This is a prospective cross-sectional study of 24 consecutive patients with 71 coronary stents, using a repeated-measures design and blinded observers, approved by the Local Institutional Review Board. 256-slice CT angiography was used, with standard and edge-enhancing reconstruction kernels. Stent wall thickness was measured with orthogonal and circumference methods, averaging wall thickness from stent diameter and circumference measurements, respectively. Stent image quality was assessed on an ordinal scale. Statistical analysis used linear and proportional-odds models. Stent wall thickness was lower with the edge-enhancing kernel than with the standard kernel, with both the orthogonal (0.97±0.02 versus 1.09±0.03 mm, respectively; p<0.001) and the circumference method (1.13±0.02 versus 1.21±0.02 mm, respectively; p<0.001). The edge-enhancing kernel generated less overestimation of the nominal thickness than the standard kernel, with both the orthogonal (0.89±0.19 versus 1.00±0.26 mm, respectively; p<0.001) and the circumference (1.06±0.26 versus 1.13±0.31 mm, respectively; p=0.005) methods. The average decrease in stent wall thickness overestimation with the edge-enhancing kernel was 6%. Image quality scores were higher with the edge-enhancing kernel (odds ratio 3.71, 95% CI 2.33–5.92; p<0.001). In conclusion, the edge-enhancing CT reconstruction kernel generated thinner stent walls, less overestimation of nominal thickness, and better image quality scores than the standard kernel.
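For readers unfamiliar with the two measurement conventions named in this abstract, the toy calculation below shows how a wall thickness can be derived either from diameter measurements (orthogonal method) or from circumference measurements; the numerical values are invented for illustration and are not data from the study.

```python
import math

def thickness_from_diameters(outer_d, lumen_d):
    """Orthogonal method: half the difference between outer and lumen diameters (mm)."""
    return (outer_d - lumen_d) / 2.0

def thickness_from_circumferences(outer_c, lumen_c):
    """Circumference method: convert measured circumferences to equivalent
    diameters (d = C / pi) before taking half the difference (mm)."""
    return (outer_c / math.pi - lumen_c / math.pi) / 2.0

# Hypothetical example values (mm), not study data:
print(thickness_from_diameters(4.0, 2.0))            # 1.0 mm
print(thickness_from_circumferences(12.57, 6.28))    # ~1.0 mm
```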