Dissertations on the topic „Accélération méthodes à noyaux“
Browse the top 50 dissertations for research on the topic „Accélération méthodes à noyaux“.
Cherfaoui, Farah. „Echantillonnage pour l'accélération des méthodes à noyaux et sélection gloutonne pour les représentations parcimonieuses“. Electronic Thesis or Diss., Aix-Marseille, 2022. http://www.theses.fr/2022AIXM0256.
The contributions of this thesis are divided into two parts. The first part is dedicated to the acceleration of kernel methods and the second to optimization under sparsity constraints. Kernel methods are widely known and used in machine learning. However, their computational complexity is high and they become unusable when the number of data points is large. We first propose an approximation of ridge leverage scores. We then use these scores to define a probability distribution for the sampling process of the Nyström method in order to speed up kernel methods. We then propose a new kernel-based framework for representing and comparing discrete probability distributions, and exploit the link between this framework and the maximum mean discrepancy to propose an accurate and fast approximation of the latter. The second part of this thesis is devoted to optimization with sparsity constraints for signal optimization and random forest pruning. First, we prove, under certain conditions on the coherence of the dictionary, reconstruction and convergence properties of the Frank-Wolfe algorithm. Then, we use the OMP algorithm to reduce the size of random forests and thus the storage they require. The pruned forest consists of a subset of trees from the initial forest, selected and weighted by OMP in order to minimize its empirical prediction error.
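The Nyström route to acceleration described in this abstract can be illustrated with a small numerical sketch. Everything below (the RBF kernel, the regularization level, the landmark count) is an illustrative assumption; in particular, the thesis proposes an *approximation* of the ridge leverage scores, whereas this sketch computes the exact ones.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of X and Y
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def ridge_leverage_scores(K, lam=1.0):
    # Exact ridge leverage scores l_i = [K (K + n*lam*I)^-1]_ii
    n = K.shape[0]
    return np.diag(K @ np.linalg.inv(K + n * lam * np.eye(n)))

def nystrom(X, m, lam=1.0, gamma=1.0, rng=None):
    # Sample m landmarks with probability proportional to the leverage
    # scores, then build the rank-m Nystrom approximation K ~ C W^+ C^T
    rng = np.random.default_rng(rng)
    K = rbf_kernel(X, X, gamma)
    p = ridge_leverage_scores(K, lam)
    p = p / p.sum()
    idx = rng.choice(len(X), size=m, replace=False, p=p)
    C = K[:, idx]
    W = K[np.ix_(idx, idx)]
    return C @ np.linalg.pinv(W) @ C.T

X = np.random.default_rng(0).normal(size=(60, 3))
K = rbf_kernel(X, X, gamma=0.1)
K_hat = nystrom(X, m=30, lam=1.0, gamma=0.1, rng=0)
err = np.linalg.norm(K - K_hat) / np.linalg.norm(K)
```

The point of the construction is that downstream solvers only ever touch the n×m matrix C and the small m×m block W, instead of the full kernel matrix.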
Loosli, Gaëlle. „Méthodes à noyaux pour la détection de contexte : vers un fonctionnement autonome des méthodes à noyaux“. Rouen, INSA, 2006. http://www.theses.fr/2006ISAM0009.
Loustau, Sébastien. „Performances statistiques de méthodes à noyaux“. Phd thesis, Université de Provence - Aix-Marseille I, 2008. http://tel.archives-ouvertes.fr/tel-00343377.
Regularization methods have proven their value for solving classification problems. The Support Vector Machine (SVM) algorithm is today their most popular representative. This thesis first studies the statistical performance of this algorithm and considers the problem of adaptivity to the margin and to the complexity. These results are then extended to a new penalized empirical risk minimization procedure over Besov spaces. The last part focuses on a new model selection procedure: risk hull minimization (RHM). Introduced by L. Cavalier and Y. Golubev in the context of inverse problems, we seek to apply it to the classification setting.
Belley, Philippe. „Noyaux discontinus et méthodes sans maillage en hydrodynamique“. Mémoire, Université de Sherbrooke, 2007. http://savoirs.usherbrooke.ca/handle/11143/4815.
Bietti, Alberto. „Méthodes à noyaux pour les réseaux convolutionnels profonds“. Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAM051.
The increased availability of large amounts of data, from images in social networks, speech waveforms from mobile devices, and large text corpora, to genomic and medical data, has led to a surge of machine learning techniques. Such methods exploit statistical patterns in these large datasets for making accurate predictions on new data. In recent years, deep learning systems have emerged as a remarkably successful class of machine learning algorithms, which rely on gradient-based methods for training multi-layer models that process data in a hierarchical manner. These methods have been particularly successful in tasks where the data consists of natural signals such as images or audio; this includes visual recognition, object detection or segmentation, and speech recognition. For such tasks, deep learning methods often yield the best known empirical performance; yet, the high dimensionality of the data and large number of parameters of these models make them challenging to understand theoretically. Their success is often attributed in part to their ability to exploit useful structure in natural signals, such as local stationarity or invariance, for instance through choices of network architectures with convolution and pooling operations. However, such properties are still poorly understood from a theoretical standpoint, leading to a growing gap between the theory and practice of machine learning. This thesis aims to bridge this gap by studying spaces of functions which arise from given network architectures, with a focus on the convolutional case. Our study relies on kernel methods, by considering reproducing kernel Hilbert spaces (RKHSs) associated to certain kernels that are constructed hierarchically based on a given architecture. This allows us to precisely study smoothness, invariance, stability to deformations, and approximation properties of functions in the RKHS.
These representation properties are also linked with optimization questions when training deep networks with gradient methods in some over-parameterized regimes where such kernels arise. They also suggest new practical regularization strategies for obtaining better generalization performance on small datasets, and state-of-the-art performance for adversarial robustness on image tasks.
Suard, Frédéric. „Méthodes à noyaux pour la détection de piétons“. Phd thesis, INSA de Rouen, 2006. http://tel.archives-ouvertes.fr/tel-00375617.
Suard, Frédéric. „Méthodes à noyaux pour la détection de piétons“. Phd thesis, Rouen, INSA, 2006. http://www.theses.fr/2006ISAM0024.
Sadok, Hassane. „Accélération de la convergence de suites vectorielles et méthodes de point fixe“. Lille 1, 1988. http://www.theses.fr/1988LIL10146.
Pothin, Jean-Baptiste. „Décision par méthodes à noyaux en traitement du signal : techniques de sélection et d'élaboration de noyaux adaptés“. Troyes, 2007. http://www.theses.fr/2007TROY0016.
Among the large family of kernel methods, one should admit that, up to now, research has been guided by applications, neglecting the study of the kernels themselves. This observation is particularly surprising since the latter determine the performance of the machine by their ability to reveal similarities between data samples. The main objective of this thesis is to provide a methodology for the design of data-dependent kernels. The first part of this manuscript is about kernel learning. We study the problem of optimizing the free parameters of several well-known kernel families. We propose a greedy algorithm for learning a linear combination of kernels without training any kernel machine at each step. The improved kernel is then used to train a standard SVM classifier. Applications in regression are also presented. In the second part, we develop methods for learning data representations. We propose an algorithm for maximizing the alignment over linear transforms of the input space, which supposes a vectorial representation of the data. To deal with the so-called curse of dimensionality, we suggest learning data representations by distance metric learning. This approach can be used to optimize efficiently any reproducing kernel Hilbert space. We show its application in a text classification context. The last part concerns the use of prior information in the form of ellipsoidal knowledge sets. By considering bounding ellipsoids instead of the usual sample vectors, one can incorporate invariance properties into the SVM.
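The greedy combination-of-kernels idea mentioned in this abstract can be sketched with the classical kernel-target alignment criterion. The base kernels, target labels, and greedy step count below are toy assumptions for illustration, not the thesis's actual procedure.

```python
import numpy as np

def alignment(K1, K2):
    # Kernel alignment: <K1, K2>_F / (||K1||_F * ||K2||_F)
    return np.sum(K1 * K2) / (np.linalg.norm(K1) * np.linalg.norm(K2))

def greedy_kernel_combination(kernels, y, n_steps=5):
    # Greedily build a nonnegative linear combination of base kernels by
    # maximizing alignment with the ideal target matrix y y^T, without
    # training any kernel machine along the way.
    target = np.outer(y, y)
    combo = np.zeros_like(target, dtype=float)
    weights = np.zeros(len(kernels))
    for _ in range(n_steps):
        scores = [alignment(combo + K, target) for K in kernels]
        best = int(np.argmax(scores))
        combo += kernels[best]
        weights[best] += 1.0
    return combo, weights / weights.sum()

# one kernel perfectly aligned with the labels, one uninformative kernel
y = np.array([1.0] * 10 + [-1.0] * 10)
K_good = np.outer(y, y)
K_noise = np.eye(20)
combo, w = greedy_kernel_combination([K_good, K_noise], y)
```

On this toy example the greedy loop keeps selecting the label-aligned kernel, so the learned weight vector concentrates entirely on it.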
Tawk, Melhem. „Accélération de la simulation par échantillonnage dans les architectures multiprocesseurs embarquées“. Valenciennes, 2009. http://ged.univ-valenciennes.fr/nuxeo/site/esupversions/860a8e09-e347-4f85-83bd-d94ca890483d.
Embedded system design relies heavily on simulation to evaluate and validate new platforms before implementation. Nevertheless, as technological advances allow the realization of more complex circuits, the simulation time of these systems is increasing considerably. This problem arises mostly in the case of embedded multiprocessor architectures (MPSoC), which offer high performance (in terms of instructions/Joule) but require powerful simulators. For such systems, simulation should be accelerated in order to speed up the design flow, thus reducing the time-to-market. In this thesis, we propose a series of solutions aiming at accelerating the simulation of MPSoC. The proposed methods are based on application sampling. Thus, the parallel applications are first analyzed in order to detect the different phases which compose them. Thereafter, during the simulation, the phases executed in parallel are combined together in order to generate clusters of phases. We developed techniques that facilitate generating clusters, detecting repeated ones, and recording their statistics in an efficient way. Each cluster represents a sample of similar execution intervals of the application. The detection of these similar intervals saves us from simulating the same sample several times. To reduce the number of clusters in the applications and to increase the occurrence count of simulated clusters, an optimization of the method was proposed to dynamically adapt the phase size of the applications. This makes it possible to easily detect the scenarios of the executed clusters when a repetition in the behavior of the applications takes place. Finally, to make our methodology viable in an MPSoC design environment, we propose efficient techniques to construct the real system state at the simulation starting point (checkpoint) of each cluster.
Touag, Athmane. „Accélération de la génération des tests de protocoles par agrégation de méthodes hétérogènes“. Paris 6, 2000. http://www.theses.fr/2000PA066458.
Villain, Jonathan. „Estimation de l'écotoxicité de substances chimiques par des méthodes à noyaux“. Thesis, Lorient, 2016. http://www.theses.fr/2016LORIS404/document.
In chemistry, and more particularly in chemoinformatics, QSAR (Quantitative Structure-Activity Relationship) models are increasingly studied. They provide an in silico estimation of the properties of chemical compounds, including ecotoxicological properties. These models are theoretically valid only for a class of compounds (validity domain) and are sensitive to the presence of outliers. This PhD thesis focuses on the construction of robust global models (including a maximum of compounds) to predict the ecotoxicity of chemical compounds on the alga P. subcapitata, and to determine a validity domain in order to deduce the capacity of a model to predict the toxicity of a compound. These robust statistical models are based on a quantile approach in linear regression and Support Vector Machine regression.
Barbillon, Pierre. „Méthodes d'interpolation à noyaux pour l'approximation de fonctions type boîte noire coûteuses“. Phd thesis, Université Paris Sud - Paris XI, 2010. http://tel.archives-ouvertes.fr/tel-00559502.
Giffon, Luc. „Approximations parcimonieuses et méthodes à noyaux pour la compression de modèles d'apprentissage“. Electronic Thesis or Diss., Aix-Marseille, 2020. http://www.theses.fr/2020AIXM0354.
This thesis aims at studying and experimentally validating the benefits, in terms of amount of computation and data needed, that kernel methods and sparse approximation methods can bring to existing machine learning algorithms. In the first part of this thesis, we propose a new type of neural architecture that uses a kernel function to reduce the number of learnable parameters, thus making it robust to overfitting in a regime where few labeled observations are available. In the second part, we seek to reduce the complexity of existing machine learning models by including sparse approximations. First, we propose an alternative to the K-means algorithm which speeds up the inference phase by expressing the centroids as a product of sparse matrices. In addition to convergence guarantees for the proposed algorithm, we provide an experimental validation of both the quality of the centroids thus expressed and their benefit in terms of computational cost. Then, we explore the compression of neural networks by replacing the matrices that constitute their layers with sparse matrix products. Finally, we hijack the Orthogonal Matching Pursuit (OMP) sparse approximation algorithm to make a weighted selection of decision trees from a random forest; we analyze the effect of the weights obtained and propose a non-negative variant of the method that outperforms all other tree selection techniques considered on a large panel of data sets.
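The OMP-based tree selection described in this abstract (and in the Cherfaoui entry above) can be mimicked on synthetic data. Here the columns of `P` stand in for per-tree predictions; the data, the target, and the sparsity level are hypothetical stand-ins, not the theses' actual forests.

```python
import numpy as np

def omp_select(P, y, k):
    # Orthogonal Matching Pursuit over tree prediction vectors (columns
    # of P): greedily pick the tree most correlated with the current
    # residual, then refit the weights of all selected trees jointly by
    # least squares before updating the residual.
    n, t = P.shape
    residual = y.copy()
    selected = []
    w = np.zeros(t)
    for _ in range(k):
        corr = np.abs(P.T @ residual)
        corr[selected] = -np.inf          # never pick a tree twice
        j = int(np.argmax(corr))
        selected.append(j)
        w_sel, *_ = np.linalg.lstsq(P[:, selected], y, rcond=None)
        residual = y - P[:, selected] @ w_sel
    w[selected] = w_sel
    return w

rng = np.random.default_rng(0)
P = rng.normal(size=(100, 12))            # stand-in for per-tree predictions
y = 2.0 * P[:, 3] - 1.5 * P[:, 7]         # target driven by two "trees" only
w = omp_select(P, y, k=2)
```

Because the target lies exactly in the span of two columns, OMP recovers that support and the final least-squares refit returns the generating weights; the pruned "forest" keeps only the trees with nonzero weight.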
Jbilou, Khalid. „Méthodes d'extrapolation et de projection : applications aux suites de vecteurs“. Lille 1, 1988. http://www.theses.fr/1988LIL10150.
Le Calvez, Caroline. „Accélération de méthodes de Krylov pour la résolution de systèmes linéaires creux sur machines parallèles“. Lille 1, 1998. https://pepite-depot.univ-lille.fr/LIBRE/Th_Num/1998/50376-1998-225.pdf.
Vazquez, Emmanuel. „Modélisation comportementale de systèmes non-linéaires multivariables par méthodes à noyaux et applications“. Phd thesis, Université Paris Sud - Paris XI, 2005. http://tel.archives-ouvertes.fr/tel-00010199.
Guigue, Vincent. „Méthodes à noyaux pour la représentation et la discrimination de signaux non-stationnaires“. INSA de Rouen, 2005. http://www.theses.fr/2005ISAM0014.
Kallas, Maya. „Méthodes à noyaux en reconnaissance de formes, prédiction et classification : applications aux biosignaux“. Troyes, 2012. http://www.theses.fr/2012TROY0026.
The proliferation of kernel methods rests essentially on the kernel trick, which induces an implicit nonlinear transformation at reduced computational cost. Still, the inverse transformation is often necessary. The resolution of this so-called pre-image problem opens new fields of application for these methods. The main purpose of this thesis is to show that recent advances in statistical learning theory provide relevant solutions to several issues raised in signal and image processing. The first part focuses on the pre-image problem, and on solutions with constraints imposed by physiology. Non-negativity is probably the most commonly stated constraint when dealing with natural signals and images. Non-negativity constraints on the result, as well as on the additivity of the contributions, are studied. The second part focuses on time series analysis following a predictive approach. Autoregressive models are developed in the transformed space, while the prediction requires solving the pre-image problem. Two kernel-based predictive models are considered: the first one is derived by solving a least-squares problem, and the second one by providing the adequate Yule-Walker equations. The last part deals with the classification of electrocardiograms in order to detect anomalies. Detection and multi-class classification are explored in the light of support vector machines and self-organizing maps.
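The pre-image problem mentioned here can be illustrated with the classical unconstrained fixed-point scheme for the Gaussian kernel (in the spirit of Mika et al.), which iterates z ← Σᵢ αᵢ k(xᵢ, z) xᵢ / Σᵢ αᵢ k(xᵢ, z). This sketch ignores the thesis's non-negativity and additivity constraints; data, weights, and bandwidth are illustrative assumptions.

```python
import numpy as np

def gaussian_preimage(X, alpha, gamma=1.0, n_iter=200):
    # Fixed-point iteration for the pre-image z of the feature-space
    # expansion Psi = sum_i alpha_i phi(x_i) under a Gaussian kernel:
    #   z <- sum_i alpha_i k(x_i, z) x_i / sum_i alpha_i k(x_i, z)
    z = X.mean(axis=0)                     # start from the data centroid
    for _ in range(n_iter):
        k = alpha * np.exp(-gamma * ((X - z) ** 2).sum(axis=1))
        z = (k[:, None] * X).sum(axis=0) / k.sum()
    return z

X = np.random.default_rng(0).normal(size=(30, 2))
alpha = np.full(30, 1 / 30)                # pre-image of the empirical mean embedding
z = gaussian_preimage(X, alpha)
```

With uniform positive weights the iterate stays a convex combination of the data points, and at convergence z satisfies the fixed-point equation to numerical precision.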
Ziani, Mohammed. „Accélération de la convergence des méthodes de type Newton pour la résolution des systèmes non-linéaires“. Rennes 1, 2008. ftp://ftp.irisa.fr/techreports/theses/2008/ziani.pdf.
In this thesis, we propose, on one hand, a new Broyden-like method, called the autoadaptive limited-memory Broyden method. The key point of this method is that only the Broyden directions necessary for convergence are stored. The method starts with a minimal memory, but when a lack of convergence is detected, the size of the approximation subspace is automatically increased. Unlike classical limited-memory methods, its advantage is that it does not require a parameter for the dimension of the approximating subspace. The autoadaptive method efficiently reduces computational time and storage cost. Moreover, under classical assumptions, we prove superlinear convergence. On the other hand, we solve two nonlinear partial differential equations which arise in two contexts. The first problem consists in solving nonlinear models in image processing. In that application, the autoadaptive method converges better than the other variants of Newton's method. When nonuniform noise is introduced, Newton-type methods fail to converge, because the nonlinearities of the system are unbalanced. We therefore apply a nonlinear preconditioner to the problem, in particular the preconditioner based on the nonlinear additive Schwarz algorithm. The second application concerns the solution of a nonlinear problem modelling the displacement of a pile inserted in the ground.
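For context, a minimal dense-Jacobian sketch of Broyden's "good" method is given below: the full-memory textbook version, not the autoadaptive limited-memory variant proposed in the thesis. The test system and starting point are illustrative assumptions.

```python
import numpy as np

def broyden(F, x0, n_iter=50, tol=1e-10):
    # Broyden's "good" method: rank-one secant updates of an approximate
    # Jacobian B, so no derivative evaluations are needed.
    x = np.asarray(x0, dtype=float)
    B = np.eye(len(x))                      # initial Jacobian approximation
    Fx = F(x)
    for _ in range(n_iter):
        s = np.linalg.solve(B, -Fx)         # quasi-Newton step
        x_new = x + s
        F_new = F(x_new)
        yv = F_new - Fx
        B += np.outer(yv - B @ s, s) / (s @ s)   # secant (rank-one) update
        x, Fx = x_new, F_new
        if np.linalg.norm(Fx) < tol:
            break
    return x

# a small, mildly nonlinear system with a nearby root
F = lambda x: np.array([x[0] + 0.1 * x[1] ** 2 - 1.0,
                        x[1] + 0.1 * x[0] ** 2 - 2.0])
root = broyden(F, [0.0, 0.0])
```

A limited-memory variant would store only a few update pairs instead of the dense matrix B; the thesis's contribution is to grow that memory automatically when stagnation is detected.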
Gaüzère, Benoit. „Application des méthodes à noyaux sur graphes pour la prédiction des propriétés des molécules“. Phd thesis, Université de Caen, 2013. http://tel.archives-ouvertes.fr/tel-00933187.
Der volle Inhalt der QuelleGaüzère, Benoît. „Application des méthodes à noyaux sur graphes pour la prédiction des propriétés des molécules“. Caen, 2013. http://www.theses.fr/2013CAEN2043.
This work deals with the application of graph kernel methods to the prediction of molecular properties. In this document, we first present a state of the art of graph kernels used in chemoinformatics, particularly those based on bags of patterns. Within this framework, we introduce the treelet kernel, based on a set of trees which encodes most of the structural information contained in molecular graphs. We also propose a combination of this kernel with multiple kernel learning methods in order to extract a subset of relevant patterns. This kernel is then extended by including cyclic information, using two molecular representations defined by the relevant cycle graph and the relevant cycle hypergraph. The relevant cycle graph encodes the cyclic system of a molecule.
Laouar, Abdelhamid. „Aspect de l'analyse numérique de méthodes itératives de point fixe : erreurs d'arrondi, accélération de convergence, sous-domaines“. Besançon, 1988. http://www.theses.fr/1988BESA2039.
Rousselle, François. „Amélioration des méthodes de résolution utilisées en radiosité“. Littoral, 2000. http://www.theses.fr/2000DUNK0047.
The radiosity method simulates the illumination of virtual geometric scenes. Based on a discrete formulation of the luminance equation restricted to the purely diffuse case, it amounts to the resolution of a system of linear equations. The different existing approaches used to compute radiosity have in common that they use an iterative method to solve this system. The objective of this thesis is to improve this iterative resolution without modifying the system considered by each of these approaches. The first part of this thesis discusses methods that use the whole matrix of the system. To begin with, we present the different methods used in the radiosity case. Then we introduce the hybridization technique, which combines two sequences of vectors in order to accelerate convergence to the solution. The second part of this thesis discusses the progressive methods. In order to improve the resolution of the system while keeping the progressive aspect of these methods, we incorporate group iterations into them. The use of groups implies the resolution of subsystems, which is carried out quickly thanks to the results obtained in the first part. We prove the convergence of our method applied to progressive radiosity and show that it can be applied to overshooting methods. In the third part we broach the hierarchical radiosity method. We point out that the Southwell method decreases the total number of iterations in comparison with the Jacobi and Gauss-Seidel methods. Although to date the overhead of this method does not yet allow an acceleration in terms of computation time, the perspective of its application to dynamic hierarchical radiosity seems interesting.
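The iterative solvers this abstract compares can be sketched on a toy radiosity-style system (I − ρF)B = E. The form-factor matrix, reflectivity, and emission vector below are made-up illustrative data; real radiosity matrices come from scene geometry.

```python
import numpy as np

def jacobi(A, b, n_iter=200):
    # Jacobi iteration for A x = b:  x <- D^-1 (b - (A - D) x)
    D = np.diag(A)
    R = A - np.diag(D)
    x = np.zeros_like(b)
    for _ in range(n_iter):
        x = (b - R @ x) / D
    return x

def gauss_seidel(A, b, n_iter=200):
    # Gauss-Seidel: like Jacobi, but each component update immediately
    # uses the components already updated in the current sweep.
    n = len(b)
    x = np.zeros_like(b)
    for _ in range(n_iter):
        for i in range(n):
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

# toy radiosity system (I - rho*F) B = E, diagonally dominant by construction
rng = np.random.default_rng(0)
F = rng.uniform(0.0, 0.1, size=(5, 5))
np.fill_diagonal(F, 0.0)                  # a patch does not illuminate itself
A = np.eye(5) - 0.8 * F                   # rho = 0.8 (diffuse reflectivity)
E = rng.uniform(0.0, 1.0, size=5)         # emission term
B_jac, B_gs = jacobi(A, E), gauss_seidel(A, E)
```

Both iterations converge here because the radiosity matrix is strictly diagonally dominant; Gauss-Seidel typically needs fewer sweeps than Jacobi, and Southwell-style relaxation (selecting the largest residual, as in progressive radiosity) fewer still.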
Cuturi, Marco. „Etude de noyaux de semigroupe pour objets structurés dans le cadre de l'apprentissage statistique“. Phd thesis, École Nationale Supérieure des Mines de Paris, 2005. http://pastel.archives-ouvertes.fr/pastel-00001823.
Spagnol, Adrien. „Indices de sensibilité via des méthodes à noyaux pour des problèmes d'optimisation en grande dimension“. Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEM012.
This thesis treats the optimization under constraints of high-dimensional black-box problems. Common in industrial applications, these problems frequently have an expensive associated cost which makes most off-the-shelf techniques impractical. In order to come back to a tractable setup, the dimension of the problem is often reduced using techniques such as sensitivity analysis. A novel sensitivity index is proposed in this work to distinguish influential and negligible subsets of inputs, in order to obtain a more tractable problem by working solely with the former. Our index, relying on the Hilbert-Schmidt independence criterion, provides insight into the impact of a variable on the performance of the output or on constraint satisfaction, key information in our study setting. Besides assessing which inputs are influential, several strategies are proposed to deal with negligible parameters. Furthermore, expensive industrial applications are often replaced by cheap surrogate models and optimized in a sequential manner. In order to circumvent the limitations due to the high number of parameters, also known as the curse of dimensionality, we introduce in this thesis an extension of surrogate-based optimization. Thanks to the aforementioned new sensitivity indices, influential parameters are detected at each iteration and the optimization is conducted in a reduced space.
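The Hilbert-Schmidt independence criterion underlying the proposed sensitivity index can be sketched in a few lines. This is the plain biased empirical HSIC estimator with Gaussian kernels, not the thesis's specific index; the bandwidth and test data are illustrative assumptions.

```python
import numpy as np

def hsic(X, Y, gamma=1.0):
    # Biased empirical HSIC with Gaussian kernels on both variables:
    #   HSIC = trace(K H L H) / n^2,  H = I - 11^T/n  (centering matrix)
    # Larger values indicate stronger statistical dependence.
    n = len(X)
    def gram(Z):
        d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    H = np.eye(n) - np.ones((n, n)) / n
    K, L = gram(X), gram(Y)
    return np.trace(K @ H @ L @ H) / n ** 2

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 1))
y_dep = x + 0.1 * rng.normal(size=(200, 1))   # strongly dependent on x
y_ind = rng.normal(size=(200, 1))             # independent of x
```

Used as a sensitivity index, an input whose HSIC with the output (or with constraint satisfaction) is near zero is a candidate for the "negligible" subset.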
Lin, Hongzhou. „Algorithmes d'accélération générique pour les méthodes d'optimisation en apprentissage statistique“. Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAM069/document.
Optimization problems arise naturally in machine learning for supervised problems. A typical example is the empirical risk minimization (ERM) formulation, which aims to find the best a posteriori estimator minimizing the regularized risk on a given dataset. The current challenge is to design efficient optimization algorithms that are able to handle large amounts of data in high-dimensional feature spaces. Classical optimization methods such as the gradient descent algorithm and its accelerated variants are computationally expensive in this setting, because they require a pass through the entire dataset at each evaluation of the gradient. This was the motivation for the recent development of incremental algorithms. By loading a single data point (or a minibatch) for each update, incremental algorithms reduce the computational cost per iteration, yielding a significant improvement compared to classical methods, both in theory and in practice. A natural question arises: is it possible to further accelerate these incremental methods? We provide a positive answer by introducing several generic acceleration schemes for first-order optimization methods, which is the main contribution of this manuscript. In chapter 2, we develop a proximal variant of the Finito/MISO algorithm, which is an incremental method originally designed for smooth strongly convex problems. In order to deal with the non-smooth regularization penalty, we modify the update by introducing an additional proximal step. The resulting algorithm enjoys a similar linear convergence rate as the original algorithm when the problem is strongly convex. In chapter 3, we introduce a generic acceleration scheme, called Catalyst, for accelerating gradient-based optimization methods in the sense of Nesterov.
Our approach applies to a large class of algorithms, including gradient descent, block coordinate descent, incremental algorithms such as SAG, SAGA, SDCA, SVRG, Finito/MISO, and their proximal variants. For all of these methods, we provide acceleration and explicit support for non-strongly convex objectives. The Catalyst algorithm can be viewed as an inexact accelerated proximal point algorithm, applying a given optimization method to approximately compute the proximal operator at each iteration. The key for achieving acceleration is to appropriately choose an inexactness criteria and control the required computational effort. We provide a global complexity analysis and show that acceleration is useful in practice. In chapter 4, we present another generic approach called QNing, which applies Quasi-Newton principles to accelerate gradient-based optimization methods. The algorithm is a combination of inexact L-BFGS algorithm and the Moreau-Yosida regularization, which applies to the same class of functions as Catalyst. To the best of our knowledge, QNing is the first Quasi-Newton type algorithm compatible with both composite objectives and the finite sum setting. We provide extensive experiments showing that QNing gives significant improvement over competing methods in large-scale machine learning problems. We conclude the thesis by extending the Catalyst algorithm into the nonconvex setting. This is a joint work with Courtney Paquette and Dmitriy Drusvyatskiy, from University of Washington, and my PhD advisors. The strength of the approach lies in the ability of the automatic adaptation to convexity, meaning that no information about the convexity of the objective function is required before running the algorithm. When the objective is convex, the proposed approach enjoys the same convergence result as the convex Catalyst algorithm, leading to acceleration. 
When the objective is nonconvex, it achieves the best known convergence rate to stationary points for first-order methods. Promising experimental results have been observed when applying the approach to sparse matrix factorization problems and neural network models.
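The Catalyst template described in this abstract can be rendered as a toy sketch on a strongly convex quadratic: an outer loop that approximately solves the proximal subproblem min_x f(x) + κ/2 ‖x − y‖² with a generic inner method (plain gradient descent here), followed by Nesterov-style extrapolation. The choices of κ, step size, and inner budget are illustrative assumptions, not tuned as in the thesis.

```python
import numpy as np

def gd(grad, x0, lr, n_steps):
    # plain gradient descent, playing the role of the inner method M
    x = x0.copy()
    for _ in range(n_steps):
        x = x - lr * grad(x)
    return x

def catalyst(grad, x0, mu, kappa, lr, outer=40, inner=60):
    # Simplified Catalyst outer loop (strongly convex case): inexactly
    # minimize the proximal subproblem around y, then extrapolate with the
    # momentum beta = (1 - sqrt(q)) / (1 + sqrt(q)), q = mu / (mu + kappa).
    q = mu / (mu + kappa)
    beta = (1 - np.sqrt(q)) / (1 + np.sqrt(q))
    x_prev = x = np.asarray(x0, dtype=float)
    y = x.copy()
    for _ in range(outer):
        prox_grad = lambda z, y=y: grad(z) + kappa * (z - y)
        x_prev, x = x, gd(prox_grad, x, lr, inner)   # warm-started inner solve
        y = x + beta * (x - x_prev)
    return x

# ill-conditioned quadratic f(x) = 0.5 x^T A x - b^T x
A = np.diag([2.0, 50.0])
b = np.array([1.0, 1.0])
grad = lambda x: A @ x - b
x_star = np.linalg.solve(A, b)
x_cat = catalyst(grad, np.zeros(2), mu=2.0, kappa=5.0, lr=0.02)
```

The design point of Catalyst is visible even in this sketch: the inner solver only ever sees the better-conditioned function f + κ/2‖·−y‖², and the acceleration comes entirely from the outer extrapolation.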
Mignon, Alexis. „Apprentissage de métriques et méthodes à noyaux appliqués à la reconnaissance de personnes dans les images“. Caen, 2012. http://www.theses.fr/2012CAEN2048.
Our work is devoted to person recognition in video images and focuses mainly on faces. We are interested in the registration and recognition steps, assuming that the locations of faces in the images are known. The registration step aims at compensating for the location and pose variations of the faces, making them easier to compare. We present a method to predict the location of key-points based on sparse regression: it predicts the offset between the average and real positions of a key-point from the appearance of the image around the average positions. Our contributions to face recognition rely on the idea that two different representations of faces of the same person should be closer, with respect to a given distance measure, than those of two different persons. We propose a metric learning method that verifies these properties. Besides, the approach is general enough to learn a distance between different modalities. The models we use in our approaches are linear. To alleviate this limitation, they are extended to the non-linear case through the kernel trick. A part of this thesis deals precisely with the properties of additive homogeneous kernels, well adapted to histogram comparisons. In particular, we present some original theoretical results on the feature map of the power mean kernel.
Berenguer, Laurent. „Accélération de la convergence de méthodes numériques parallèles pour résoudre des systèmes d’équations différentielles linéaires et transitoires non linéaires“. Thesis, Lyon 1, 2014. http://www.theses.fr/2014LYO10194/document.
Solving differential equations (PDEs/ODEs/DAEs) is central to the simulation of physical phenomena. The increase in size and complexity of the models requires the design of methods that are robust and efficient in terms of computational time. The aim of this thesis is to design methods that accelerate the solution of differential equations by domain decomposition methods. We first consider Schwarz domain decomposition methods to solve large-scale linear systems arising from the discretization of PDEs. In order to accelerate the convergence of the Schwarz method, we propose an approximation of the error propagation operator which preserves the structure of the exact operator. A significant reduction of computational time is obtained for the groundwater flow problem in highly heterogeneous media. The second contribution concerns solving the sequence of linear systems arising from the time integration of nonlinear problems. We propose two approaches, taking advantage of the fact that the Jacobian matrix does not change dramatically from one system to another. First, we apply Broyden's update to the Restricted Additive Schwarz (RAS) preconditioner instead of recomputing the local LU factorizations. The second approach consists of dedicating processors to the asynchronous and partial update of the RAS preconditioner. Numerical results for the lid-driven cavity problem and for a reaction-diffusion problem show that a super-linear speedup may be achieved. The last contribution concerns the simultaneous solution of nonlinear problems associated with consecutive time steps. We study the case where the Broyden method is used to solve these nonlinear problems; Broyden's update of the Jacobian matrix may then also be propagated from one time step to another. The parallelization through the time steps is also applied to the problem of finding a consistent initial guess for differential-algebraic equations.
Mahé, Pierre. „Fonctions noyaux pour molécules et leur application au criblage virtuel par machines à vecteurs de support“. Phd thesis, École Nationale Supérieure des Mines de Paris, 2006. http://pastel.archives-ouvertes.fr/pastel-00002191.
Mosbeux, Cyrille. „Quantification des processus responsables de l’accélération des glaciers émissaires par méthodes inverses“. Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAI085/document.
The current global warming has direct consequences on ice-sheet mass loss. Reproducing the responsible mechanisms and forecasting the potential contribution of ice sheets to 21st-century sea level rise is one of the major challenges in ice-sheet and ice-flow modelling. Ice-flow models are now routinely used to forecast this contribution. Such short-term simulations are very sensitive to the model initial state, usually built from field observations. However, some parameters, such as the basal friction between ice sheet and bedrock as well as the basal topography, are still poorly known because of a lack of direct observations or large uncertainties on measurements. Improving the knowledge of these two parameters for Greenland and Antarctica is therefore a prerequisite for making reliable projections. Data assimilation and inverse methods have been developed in order to overcome this problem. This thesis presents two different assimilation algorithms to better constrain simultaneously the basal friction and bedrock elevation parameters using surface observations. The first algorithm is entirely based on the adjoint method, while the second uses a cycling method coupling the inversion of basal friction by the adjoint method with the inversion of bedrock topography by a nudging method. Both algorithms have been implemented in the finite element ice sheet and ice flow model Elmer/Ice and tested in a twin experiment showing a clear improvement in the knowledge of both parameters. The application of both algorithms to regions such as Wilkes Land in Antarctica reduces the uncertainty on basal conditions, for instance providing more detail on the bedrock geometry than the usual DEMs. Moreover, the reconstruction of both bedrock elevation and basal friction significantly decreases ice flux divergence anomalies when compared to classical methods where only friction is inverted.
We finaly sudy the impact of such inversion on pronostic simulation in order to compare the efficiency of the two algorithms to better constrain future ice-sheet contribution to sea level rise
Bannwart, Flavio de Campos. „Méthodes d'évaluation de la matrice de transfert des noyaux thermoacoustiques avec application à la conception de moteurs thermoacoustiques“. Thesis, Le Mans, 2014. http://www.theses.fr/2014LEMA1027/document.
The design of a thermoacoustic (TA) engine is improved towards the reliability of its performance prediction. Such a prediction can be attempted from knowledge of the TA core (TAC) transfer matrix, which can be exploited in analytical models of the given engine. The transfer (T) matrix itself may be obtained either by analytical modeling or by acoustic measurements. The latter are an attractive option to avoid thermo-physical or geometrical considerations of complex structures, as the TAC is treated as a black box. However, before proceeding with the experimental approach, an analytical solution is presented for comparison purposes, but it covers only materials of simple geometry. Concerning the experimental approach, a classical two-load method is applied in two different configurations, and an alternative method based on impedance measurements is developed and applied here. These approaches are compared by means of a sensitivity analysis. Different materials are tested, each serving as the porous element placed inside the TAC, which is in turn subjected to several different regimes of steady-state temperature gradient. The alternative method is the only one successful for all materials. The measured transfer matrices are then fed into an appropriate model devoted to predicting both the operating frequency and the intrinsic TA amplification gain. A comparative analysis shows under which conditions the TA threshold is expected for each material; it also reveals the limitations of the experimental apparatus regarding the dimensions appropriate for the performance investigations.
Zhang, Hanyu. „Méthodes itératives à retard pour architecture massivement parallèles“. Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLC068.
With the rise of multi-core architectures, many algorithms need to be revisited and modified to exploit the power of these new architectures. These algorithms divide the original problem into "small pieces" and distribute them to the available processors, so communication among processors is indispensable to ensure convergence. My thesis mainly focuses on solving large sparse systems of linear equations in parallel with new methods based on gradient methods. Two key parameters of gradient methods are the descent direction and the step length at each iteration. Our methods compute the directions locally, which requires less synchronization and computation, leading to faster iterations and making asynchronous execution straightforward. Convergence can be proved in both the synchronous and asynchronous cases. Numerical tests demonstrate the efficiency of these methods. The other part of my thesis deals with the acceleration of vector sequences generated by classical iterative algorithms. Though general chaotic sequences may not be accelerated, it is possible to prove that, for any fixed delay pattern, the generated sequence can be accelerated. Different numerical tests demonstrate its efficiency.
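The abstract above identifies the two key parameters of gradient methods: the descent direction and the step length at each iteration. As background only, here is a minimal sequential steepest-descent sketch for a symmetric positive-definite system; the thesis's parallel, locally computed directions and delayed variants are not reproduced here.

```python
def matvec(A, v):
    """Dense matrix-vector product (stand-in for a sparse one)."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def steepest_descent(A, b, iters=100):
    """Solve A x = b for symmetric positive-definite A.

    Each iteration chooses the residual as descent direction and an
    exact line-search step length -- the two parameters the abstract
    highlights."""
    x = [0.0] * len(b)
    for _ in range(iters):
        r = [bi - axi for bi, axi in zip(b, matvec(A, x))]  # residual = direction
        rr = dot(r, r)
        if rr < 1e-30:
            break
        alpha = rr / dot(r, matvec(A, r))  # step minimizing the quadratic along r
        x = [xi + alpha * ri for xi, ri in zip(x, r)]
    return x
```

In the parallel methods the abstract summarizes, each processor computes such a direction on its local block, so fewer synchronizations are needed per iteration.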
Mahe, Pierre. „Fonctions noyaux pour molécules et leur application au criblage virtuel par machines à vecteurs de support“. Paris, ENMP, 2006. http://www.theses.fr/2006ENMP1381.
Gauthier, Bertrand. „Approche spectrale pour l'interpolation à noyaux et positivité conditionnelle“. Phd thesis, École Nationale Supérieure des Mines de Saint-Étienne, 2011. http://tel.archives-ouvertes.fr/tel-00631252.
Fouchet, Arnaud. „Kernel methods for gene regulatory network inference“. Thesis, Evry-Val d'Essonne, 2014. http://www.theses.fr/2014EVRY0058/document.
New technologies in molecular biology, in particular DNA microarrays, have greatly increased the quantity of available data. In this context, methods from mathematics and computer science have been actively developed to extract information from large datasets. In particular, the problem of gene regulatory network inference has been tackled using many different mathematical and statistical models, from the most basic (correlation, Boolean or linear models) to the most elaborate (regression trees, Bayesian models with latent variables). Despite their qualities when applied to similar problems, kernel methods have scarcely been used for gene network inference because of their lack of interpretability. In this thesis, two approaches are developed to obtain interpretable kernel methods. Firstly, from a theoretical point of view, some kernel methods are shown to consistently estimate a transition function and its partial derivatives from a learning dataset. These estimates of partial derivatives allow the gene regulatory network to be inferred better than with previous methods on realistic gene regulatory networks. Secondly, an interpretable kernel method based on multiple kernel learning is presented. This method, called LocKNI, provides state-of-the-art results on real and realistically simulated datasets.
Shen, Ming. „Nouvelles méthodes de RMN des solides pour les corrélations homo- et hétéro-nucléaires et l’observation des noyaux de spin 1“. Thesis, Lille 1, 2015. http://www.theses.fr/2015LIL10066/document.
My PhD work has focused on the development of advanced solid-state NMR methods. We have notably developed homo-nuclear correlation methods compatible with high MAS frequencies and high magnetic fields. First, we have shown that the robustness of the finite-pulse RadioFrequency Driven Recoupling (fp-RFDR) technique can be improved by the use of nested (XY8)4¹ super-cycling. This method has been employed to probe 13C-13C and 31P-31P proximities in solids. Second, we have also introduced a second-order proton-assisted 13C-13C correlation experiment, denoted "Second-order Hamiltonian among Analogous nuclei plus" (SHA+), to observe long-range 13C-13C proximities in solids at fast MAS and high magnetic field. During my PhD, we have also improved heteronuclear correlation methods for the indirect observation of 14N nuclei via protons. We have shown that the spectral resolution along the indirect dimension of proton-detected Heteronuclear Multiple Quantum Correlation (HMQC) spectra can be enhanced by applying homonuclear dipolar decoupling schemes during the t1 period. We have also proposed the use of centerband-selective radio-frequency (rf) pulses for the excitation of 14N nuclei in the 1H{14N} HMQC experiment. The efficiency of these centerband-selective pulses is comparable to that of broadband excitation, given the rf field delivered by common solid-state NMR probes. The last part of my PhD focuses on the improvement of the quadrupolar echo sequence for the acquisition of 2H spectra of solids. The distortions of such spectra were reduced by the introduction of novel composite pulses.
Trébosc, Julien Mathieu. „Méthodes d'analyse structurale par RMN haute résolution des noyaux quadripolaires et mesures des couplages à travers les liaisons et l'espace“. Lille 1, 2003. https://pepite-depot.univ-lille.fr/LIBRE/Th_Num/2003/50376-2003-317-318.pdf.
Fournier, Émilien. „Accélération matérielle de la vérification de sûreté et vivacité sur des architectures reconfigurables“. Electronic Thesis or Diss., Brest, École nationale supérieure de techniques avancées Bretagne, 2022. http://www.theses.fr/2022ENTA0006.
Model checking is an automated technique used in industry for verification, a major issue in the design of reliable systems, where performance and scalability are critical. Swarm verification improves scalability through a partial approach based on the concurrent execution of randomized analyses. Reconfigurable architectures promise significant performance gains. However, existing work suffers from a monolithic design that hinders the exploration of reconfigurable-architecture opportunities; moreover, these studies are limited to safety verification. To adapt the verification strategy to the problem, this thesis first proposes a hardware verification framework which, through a modular architecture, provides semantic and algorithmic genericity, illustrated by the integration of 3 specification languages and 6 algorithms. This framework enables efficiency studies of swarm algorithms in order to obtain a scalable safety verification core. The results, on a high-end FPGA, show gains of an order of magnitude compared to the state of the art. Finally, we propose the first hardware accelerator for both safety and liveness verification. The results show an average speed-up of 4875x compared to software.
Castellanos Lopez, Clara. „Accélération et régularisation de la méthode d'inversion des formes d'ondes complètes en exploration sismique“. Phd thesis, Université Nice Sophia Antipolis, 2014. http://tel.archives-ouvertes.fr/tel-01064412.
El-Moallem, Rola. „Extrapolation vectorielle et applications aux méthodes itératives pour résoudre des équations algébriques de Riccati“. Thesis, Lille 1, 2013. http://www.theses.fr/2013LIL10180/document.
In this thesis, we are interested in the study of polynomial extrapolation methods and their application as convergence accelerators for iterative methods solving algebraic Riccati equations arising in transport theory. In such applications, polynomial extrapolation methods succeed in accelerating the convergence of these iterative methods, even when the convergence turns out to be extremely slow. The advantage of these extrapolation methods is that they use a sequence of vectors which is not necessarily convergent, or which converges very slowly, to create a new sequence which can admit quadratic convergence. Furthermore, the development of restarted (or cyclic) methods allows the cost of computation and storage to be limited. An interpretation of the critical case, where the Jacobian matrix at the required solution is singular and quadratic convergence degrades to linear, is given. This problem can be overcome by applying a suitable shift technique: the original equation is transformed into an equivalent Riccati equation where the singularity is removed while the matrix coefficients keep the same structure as in the original equation. The nice feature of this transformation is that the new equation has the same solution as the original one, although the new Jacobian matrix at the solution is nonsingular. Numerical experiments and comparisons confirming the effectiveness of the new approaches are reported.
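The abstract describes extrapolation methods that turn a slowly convergent vector sequence into a faster one. As a scalar illustration of the same principle, here is Aitken's Δ² process, a simple relative of the polynomial vector extrapolation methods mentioned; the cosine fixed-point example is illustrative, not from the thesis.

```python
import math

def aitken(seq):
    """Aitken delta-squared: accelerate a linearly convergent sequence."""
    out = []
    for s0, s1, s2 in zip(seq, seq[1:], seq[2:]):
        denom = s2 - 2.0 * s1 + s0
        out.append(s2 if denom == 0.0 else s2 - (s2 - s1) ** 2 / denom)
    return out

# Slowly convergent fixed-point iteration x <- cos(x)
xs = [1.0]
for _ in range(6):
    xs.append(math.cos(xs[-1]))
acc = aitken(xs)  # noticeably closer to the fixed point than xs itself
```

Restarted (cyclic) variants, as noted in the abstract, reapply such a transformation periodically to bound storage and cost.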
Orlando, Roberto. „Exploration de nouveaux noyaux d'échange-corrélation dans l'équation de Bethe-Salpeter“. Electronic Thesis or Diss., Toulouse 3, 2023. http://www.theses.fr/2023TOU30275.
The thesis focuses on new approximations studied in a formalism based on perturbation theory, which allows the electronic properties of many-body systems to be described approximately. We excite a system with a small perturbation, for example by shining light on it or applying a weak electric field, and the system "responds" to the perturbation. In the framework of linear response, the response of the system is proportional to the perturbation. The goal is to determine what we call the neutral excitations, or bound states, of the system, and more particularly the single excitations. These correspond to transitions from the ground state to an excited state. To do this, we describe the interactions of the particles of a many-body system in a simplified way, using an effective interaction averaged over the whole system. The objective of such an approach is to be able to study a system without having to use the exact formalism, which consists in diagonalizing the N-body Hamiltonian and is not possible for systems with more than two particles.
Gauthier, Bertrand. „Approche spectrale pour l’interpolation à noyaux et positivité conditionnelle“. Thesis, Saint-Etienne, EMSE, 2011. http://www.theses.fr/2011EMSE0615/document.
We propose a spectral approach for the resolution of kernel-based interpolation problems whose numerical solution cannot be computed directly. Such a situation occurs in particular when the number of data points is infinite. We first consider optimal interpolation in Hilbert subspaces. For a given problem, an integral operator is defined from the underlying kernel and a parameterization of the dataset based on a measurable space. The spectral decomposition of the operator is used to obtain a representation formula for the optimal interpolator, and spectral truncation allows its approximation. The choice of the measure on the parameter space introduces a hierarchy on the dataset which allows a tunable precision of the approximation. As an example, we show how this methodology can be used to enforce boundary conditions in kernel-based interpolation models. The Gaussian process conditioning problem is also studied in this context. The last part of this thesis is devoted to the notion of conditionally positive kernels. We propose a general definition of symmetric conditionally positive kernels relative to a given space and expose the associated theory of semi-Hilbert subspaces. We finally study the optimal interpolation problem in such spaces.
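The abstract treats kernel-based interpolation in the regime where the solution cannot be computed directly (e.g. infinitely many data). For orientation, the finite-data baseline that the spectral approach generalizes solves the Gram system K c = y; a self-contained sketch follows (the Gaussian kernel and the sample points are arbitrary illustrations, not taken from the thesis).

```python
import math

def gauss_kernel(x, y, ell=1.0):
    """Gaussian (squared-exponential) kernel on the real line."""
    return math.exp(-((x - y) ** 2) / (2.0 * ell ** 2))

def solve(A, b):
    """Gaussian elimination with partial pivoting, for small dense systems."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def kernel_interpolator(xs, ys, ell=1.0):
    """Interpolator f(t) = sum_i c_i k(t, x_i) with K c = y on the data."""
    K = [[gauss_kernel(a, b, ell) for b in xs] for a in xs]
    c = solve(K, ys)
    return lambda t: sum(ci * gauss_kernel(t, xi, ell) for ci, xi in zip(c, xs))
```

Here the n×n Gram system is solved exactly; the spectral approach described above instead works with the eigendecomposition of the underlying integral operator, truncated to the desired precision.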
Di Fabio, Alice. „Chute libre : étude de mouvement et des méthodes de résolution, proposition didactique“. Thesis, Sorbonne Paris Cité, 2018. http://www.theses.fr/2018USPCC313.
This research targets the teaching and learning of the notion of free fall. It aims at developing a learning sequence intended for high-school seniors whose goal is to rebuild the notion of acceleration from the notion of speed variation. The chosen methodology falls within second-generation didactic engineering. Three exploratory studies contribute to the preliminary work. The first one focuses on usual practices of free-fall teaching at the beginning of the 20th century through the analysis of physics textbooks. It shows that the study of falling bodies appears as content at the crossroads of kinematics and dynamics. It also allows the added value of using vectors to be questioned at the epistemological, methodological and educational levels. The second study explores the ability of first-year students to draw vectors in kinematics. It highlights that the use of vectors raises difficulties and is a kinematics skill in itself. The third study is a content analysis of the notion of acceleration and its characteristics in the case of free fall. It leads to the presentation of different semiotic representation registers of acceleration. These preliminary analyses lead to the design of a sequence which puts the vector representation at the centre of the learning system and whose hypothesis is that the representation of several successive velocity vectors is a learning tool. The results show positive effects on student learning, in particular by enabling students to deepen their knowledge of free fall and improve their skills in using vectors. These results also help to identify and describe possible improvements to the learning sequence.
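The sequence described above rebuilds acceleration from the variation of successive velocities. A minimal one-dimensional numeric sketch of that idea (the time step and the use of finite differences are illustrative assumptions, not the sequence's actual classroom materials):

```python
def finite_diff(values, dt):
    """Successive differences divided by the time step."""
    return [(b - a) / dt for a, b in zip(values, values[1:])]

g, dt = 9.81, 0.1
times = [i * dt for i in range(6)]
fallen = [0.5 * g * t ** 2 for t in times]   # distance fallen from rest
velocities = finite_diff(fallen, dt)         # average speed on each interval
accels = finite_diff(velocities, dt)         # variation of speed -> acceleration
# every entry of accels recovers g = 9.81 (second differences of a
# quadratic are exact)
```

The successive `velocities` entries play the role of the successive velocity vectors whose differences make the (constant) acceleration visible.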
Linel, Patrice. „Méthodes de décomposition de domaines en temps et en espace pour la résolution de systèmes d'EDOs non-linéaires“. Phd thesis, Université Claude Bernard - Lyon I, 2011. http://tel.archives-ouvertes.fr/tel-00721037.
Malzac, Julien. „Modélisation de l'émission X et Gamma des objets compacts par les méthodes Monte-Carlo“. Phd thesis, Université Paul Sabatier - Toulouse III, 1999. http://tel.archives-ouvertes.fr/tel-00010420.
Durrande, Nicolas. „Étude de classes de noyaux adaptées à la simplification et à l'interprétation des modèles d'approximation. Une approche fonctionnelle et probabiliste“. Phd thesis, Ecole Nationale Supérieure des Mines de Saint-Etienne, 2001. http://tel.archives-ouvertes.fr/tel-00770625.
Durrande, Nicolas. „Étude de classes de noyaux adaptées à la simplification et à l'interprétation des modèles d'approximation. Une approche fonctionnelle et probabiliste“. Phd thesis, Ecole Nationale Supérieure des Mines de Saint-Etienne, 2011. http://tel.archives-ouvertes.fr/tel-00844747.
Chahine, Elie. „Etude mathématique et numérique de méthodes d'éléments finis étendues pour le calcul en domaines fissurés“. Toulouse, INSA, 2008. http://eprint.insa-toulouse.fr/archive/00000223/.
In the first part of this thesis, we introduce two XFEM variants that achieve optimal convergence results for XFEM at a reduced computational cost. The first one, the XFEM with a cutoff function, introduces a globalized singular enrichment via a localization function around the crack tip. In the second variant, the singular enrichment is defined globally over a subdomain containing the crack tip; this subdomain is then bonded to the rest of the cracked domain using a weak integral matching condition. This approach improves the approximation with respect to the first one. The second part is dedicated to two other XFEM methods that extend the application field of XFEM while retaining the advantages of the former variants. In the first one, the Spider XFEM, the dependence in theta of the exact singular enrichment is replaced by an approximation computed over an adapted circular mesh, while in the second approach, the reduced-basis XFEM, an approximation of the whole singularity, computed on a very refined mesh of a cracked domain, is used as singular enrichment. These two variants allow XFEM to be used in cases where the singularity is partially or completely unknown, or where its exact expansion is complicated. We prove optimal mathematical convergence results for these approaches and perform various numerical experiments that validate the theoretical study.
De Vitis, Alba Chiara. „Méthodes du noyau pour l'analyse des données de grande dimension“. Thesis, Université Côte d'Azur (ComUE), 2019. http://www.theses.fr/2019AZUR4034.
Since data are being collected using an increasing number of features, datasets are of increasingly high dimension. Computational problems, related to the apparent dimension, i.e. the dimension of the vectors used to collect data, and theoretical problems, which depend notably on the effective dimension of the dataset, the so-called intrinsic dimension, have hindered high-dimensional data analysis. In order to provide a suitable approach to data analysis in high dimensions, we introduce a more comprehensive scenario in the framework of metric measure spaces. The aim of this thesis is to show how to take advantage of high-dimensionality phenomena in the purely high-dimensional regime. In particular, we aim at introducing a new point of view on the use of distances and probability measures defined on the dataset. More specifically, we want to show that kernel methods, already used in the intrinsically low-dimensional scenario in order to reduce dimensionality, can be investigated under purely high-dimensional hypotheses, and further applied to cases not covered by the literature.