Dissertations on the topic "Apprentissage statistique sur les graphes"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the top 50 dissertations for research on the topic "Apprentissage statistique sur les graphes".
Next to each work in the bibliography, an "Add to bibliography" option is available. Use it and the bibliographic entry for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read an online abstract of the work, if the relevant parameters are available in its metadata.
Browse dissertations from a wide range of disciplines and compile your bibliography correctly.
Rosar, Kós Lassance Carlos Eduardo. „Graphs for deep learning representations“. Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2020. http://www.theses.fr/2020IMTA0204.
In recent years, Deep Learning methods have achieved state-of-the-art performance in a vast range of machine learning tasks, including image classification and multilingual automatic text translation. These architectures are trained to solve machine learning tasks in an end-to-end fashion. In order to reach top-tier performance, they often require a very large number of trainable parameters, which has multiple undesirable consequences; to tackle these issues, it is desirable to be able to open the black boxes of deep learning architectures. Doing so is problematic due to the high dimensionality of the representations and the stochasticity of the training process. In this thesis, we investigate these architectures by introducing a graph formalism based on recent advances in Graph Signal Processing (GSP). Namely, we use graphs to represent the latent spaces of deep neural networks. We show that this graph formalism allows us to answer various questions, including: ensuring generalization abilities, reducing the amount of arbitrary choices in the design of the learning process, improving robustness to small perturbations added to the inputs, and reducing computational complexity.
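The GSP vocabulary this abstract builds on can be illustrated with its most basic smoothness measure, the Dirichlet energy x^T L x of a signal x on a graph with Laplacian L. The sketch below is a generic textbook computation (the path graph and signals are illustrative, not taken from the thesis):

```python
import numpy as np

def dirichlet_energy(L, x):
    """Smoothness of signal x on a graph with Laplacian L: x^T L x.
    For L = D - A this equals the sum of (x_i - x_j)^2 over edges."""
    return float(x @ L @ x)

# Toy example: path graph 0-1-2, combinatorial Laplacian L = D - A.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
L = np.diag(A.sum(axis=1)) - A

smooth = dirichlet_energy(L, np.array([1.0, 1.0, 1.0]))   # constant signal
rough = dirichlet_energy(L, np.array([1.0, -1.0, 1.0]))   # oscillating signal
```

A constant signal has zero energy, while the oscillating one pays (x_i - x_j)^2 = 4 on each of the two edges.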
Dhifli, Wajdi. „Fouille de Sous-graphes Basée sur la Topologie et la Connaissance du Domaine: Application sur les Structures 3D de Protéines“. Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2013. http://tel.archives-ouvertes.fr/tel-00922209.
Richard, Émile. „Regularization methods for prediction in dynamic graphs and e-marketing applications“. Phd thesis, École normale supérieure de Cachan - ENS Cachan, 2012. http://tel.archives-ouvertes.fr/tel-00906066.
Anakok, Emre. „Prise en compte des effets d'échantillonnage pour la détection de structure des réseaux écologiques“. Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASM049.
In this thesis, we focus on the biases that sampling can cause in the estimation of statistical models and metrics describing ecological interaction networks. First, we propose to combine an observation model that accounts for sampling with a stochastic block model representing the structure of possible interactions. The identifiability of the model is demonstrated and an algorithm is proposed to estimate its parameters. Its relevance and practical interest are demonstrated on a large dataset of plant-pollinator networks, as we observe structural change in most of the networks. We then examine a large dataset sampled by a citizen-science program. Using recent advances in artificial intelligence, we propose a method to reconstruct the ecological network free from the sampling effects caused by the varying levels of experience among observers. Finally, we present methods to highlight variables of ecological interest that influence the network's connectivity, and show that accounting for sampling effects partially alters the estimation of these effects. Our methods, implemented in either R or Python, are freely accessible.
Vialatte, Jean-Charles. „Convolution et apprentissage profond sur graphes“. Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2018. http://www.theses.fr/2018IMTA0118/document.
Convolutional neural networks have proven to be the deep learning model that performs best on regularly structured datasets such as images or sounds. However, they cannot be applied to datasets with an irregular structure (e.g. sensor networks, citation networks, MRIs). In this thesis, we develop an algebraic theory of convolutions on irregular domains. We construct a family of convolutions based on group actions (or, more generally, groupoid actions) that act on the vertex domain and whose properties depend on the edges. With the help of these convolutions, we propose extensions of convolutional neural networks to graph domains. Our research leads us to propose a generic formulation of the propagation between layers, which we call the neural contraction. From this formulation, we derive many novel neural network models that can be applied on irregular domains. Through benchmarks and experiments, we show that they attain state-of-the-art performance, and surpass it in some cases.
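For reference, the simplest widely used graph convolution is the symmetric-normalized propagation rule of Kipf and Welling; the sketch below shows that generic layer, not the group-action (neural contraction) construction developed in the thesis:

```python
import numpy as np

def gcn_layer(A, X, W):
    """One generic graph-convolution layer: add self-loops, normalize the
    adjacency symmetrically, then apply a linear map and a ReLU."""
    A_hat = A + np.eye(A.shape[0])                 # self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    A_norm = d_inv_sqrt @ A_hat @ d_inv_sqrt       # D^{-1/2} (A+I) D^{-1/2}
    return np.maximum(A_norm @ X @ W, 0.0)         # ReLU

A = np.array([[0., 1.], [1., 0.]])   # a single edge between two nodes
X = np.eye(2)                        # one-hot node features
H = gcn_layer(A, X, np.eye(2))       # each node averages itself and its neighbour
```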
Kassel, Adrien. „Laplaciens des graphes sur les surfaces et applications à la physique statistique“. Thesis, Paris 11, 2013. http://www.theses.fr/2013PA112101.
We study the determinant of the Laplacian on vector bundles on graphs and use it, combined with discrete complex analysis, to study models of statistical physics. We compute exact lattice constants, construct scaling limits for excursions of the loop-erased random walk on surfaces, and study some Gaussian fields and determinantal processes.
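A concrete handle on "the determinant of the Laplacian on graphs" is Kirchhoff's matrix-tree theorem: any cofactor of the combinatorial Laplacian counts spanning trees. A minimal sketch (the K4 example is illustrative):

```python
import numpy as np

def count_spanning_trees(A):
    """Matrix-tree theorem: the number of spanning trees of a connected
    graph equals any cofactor of its Laplacian L = D - A."""
    L = np.diag(A.sum(axis=1)) - A
    return round(np.linalg.det(L[1:, 1:]))  # delete row 0 and column 0

# The complete graph K4 has 4^(4-2) = 16 spanning trees (Cayley's formula).
K4 = np.ones((4, 4)) - np.eye(4)
n_trees = count_spanning_trees(K4)
```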
Belilovsky, Eugene. „Apprentissage de graphes structuré et parcimonieux dans des données de haute dimension avec applications à l’imagerie cérébrale“. Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLC027.
This dissertation presents novel structured sparse learning methods on graphs that address commonly found problems in the analysis of neuroimaging data as well as other high-dimensional data with few samples. The first part of the thesis proposes convex relaxations of discrete and combinatorial penalties involving sparsity and bounded total variation on a graph, as well as a bounded ℓ2 norm. These are developed with the aim of learning an interpretable predictive linear model, and we demonstrate their effectiveness on neuroimaging data as well as on a sparse image recovery problem. The subsequent parts of the thesis consider structure discovery of undirected graphical models from few observational data. In particular, we focus on invoking sparsity and other structured assumptions in Gaussian Graphical Models (GGMs). To this end we make two contributions. First, we show an approach to identify differences between GGMs known to have similar structure. We derive the distribution of parameter differences under a joint penalty when the parameters are known to be sparse in the difference, and show how this approach can be used to obtain confidence intervals on edge differences in GGMs. Second, we introduce a novel learning-based approach to the problem of structure discovery of undirected graphical models from observational data, and demonstrate how neural networks can be used to learn effective estimators for this problem. This is empirically shown to be a flexible and efficient alternative to existing techniques.
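To make the GGM structure-discovery setting concrete: edges of a Gaussian graphical model correspond to nonzero off-diagonal entries of the precision (inverse covariance) matrix. The sketch below is a deliberately naive estimator (invert and threshold the empirical covariance); the thesis's penalized and learning-based estimators are far more refined, and the data and threshold here are illustrative assumptions:

```python
import numpy as np

def precision_edges(X, threshold=0.5):
    """Naive GGM structure sketch: read edges off the thresholded
    off-diagonal entries of the inverted empirical covariance."""
    prec = np.linalg.inv(np.cov(X, rowvar=False))
    p = prec.shape[0]
    return {(i, j) for i in range(p) for j in range(i + 1, p)
            if abs(prec[i, j]) > threshold}

rng = np.random.default_rng(0)
n = 5000
x0 = rng.normal(size=n)
x1 = 0.8 * x0 + 0.6 * rng.normal(size=n)  # x1 depends on x0
x2 = rng.normal(size=n)                   # x2 is independent of both
edges = precision_edges(np.column_stack([x0, x1, x2]))
```

Only the (0, 1) edge survives: conditional independence zeroes out the other precision entries.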
Brissac, Olivier. „Contributions à l'étude des mécanismes d'apprentissage opérant sur des descriptions à base de graphes“. La Réunion, 1996. http://elgebar.univ-reunion.fr/login?url=http://thesesenligne.univ.run/96_S003_Brissac.pdf.
Allard, Antoine. „Percolation sur graphes aléatoires - modélisation et description analytique -“. Thesis, Université Laval, 2014. http://www.theses.ulaval.ca/2014/30822/30822.pdf.
Graphs are abstract mathematical objects used to model the interactions between the elements of complex systems. Their use is motivated by the fact that there exists a fundamental relationship between the structure of these interactions and the macroscopic properties of these systems. The structure of these graphs is analyzed within the paradigm of percolation theory, whose tools and concepts contribute to a better understanding of the conditions under which these emergent properties appear. The underlying interactions of a wide variety of complex systems share many universal structural properties, and including these properties in a unified theoretical framework is one of the main challenges of the science of complex systems. Capitalizing on a multitype approach, a simple yet powerful idea, we have unified the models of percolation on random graphs published to this day in a single framework, hence yielding the most general and realistic framework to date. More than a mere compilation, this framework significantly increases the structural complexity of the graphs that can now be handled mathematically, and, as such, opens the way to many new research opportunities. We illustrate this assertion by using our framework to validate hypotheses hinted at by empirical results. First, we investigate how the network structure of some complex systems (e.g., power grids, social networks) enhances our ability to monitor them, and ultimately to control them. Second, we test the hypothesis that the "k-core" decomposition can act as an effective structure of graphs extracted from real complex systems. Third, we use our framework to identify the conditions under which a new immunization strategy against infectious diseases is optimal.
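The basic percolation experiment behind this line of work is easy to sketch: keep each edge of a graph independently with probability p and track the size of the largest connected component. The sketch below is a generic simulation with hypothetical parameters (not the multitype formalism of the thesis), using union-find:

```python
import random

def largest_component(n, edges):
    """Size of the largest connected component, via union-find."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for u, v in edges:
        parent[find(u)] = find(v)
    sizes = {}
    for x in range(n):
        r = find(x)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values())

def bond_percolation(n, edges, p, rng):
    """Keep each edge independently with probability p, then measure
    the largest component (bond percolation)."""
    return largest_component(n, [e for e in edges if rng.random() < p])

rng = random.Random(42)
n = 2000
edges = [(rng.randrange(n), rng.randrange(n)) for _ in range(2 * n)]  # mean degree ~4
giant = bond_percolation(n, edges, p=0.9, rng=rng)  # supercritical: a giant component
tiny = bond_percolation(n, edges, p=0.0, rng=rng)   # no edges kept: isolated nodes
```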
Durand, Jean-Sébastien. „Apprentissage et rétention des gestes de réanimation cardiorespiratoire : étude statistique sur 36 élèves“. Bordeaux 2, 1992. http://www.theses.fr/1992BOR2M158.
Laloë, Thomas. „Sur quelques problèmes d'apprentissage supervisé et non supervisé“. Phd thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2009. http://tel.archives-ouvertes.fr/tel-00455528.
Maignant, Elodie. „Plongements barycentriques pour l'apprentissage géométrique de variétés : application aux formes et graphes“. Electronic Thesis or Diss., Université Côte d'Azur, 2023. http://www.theses.fr/2023COAZ4096.
An MRI image has over 60,000 pixels. The largest known human protein consists of around 30,000 amino acids. We call such data high-dimensional. In practice, most high-dimensional data is high-dimensional only artificially. For example, of all the images that could be randomly generated by coloring 256 x 256 pixels, only a very small subset would resemble an MRI image of a human brain. This gives rise to the notion of the intrinsic dimension of such data. Therefore, learning high-dimensional data is often synonymous with dimensionality reduction. There are numerous methods for reducing the dimension of a dataset, the most recent of which can be classified according to two approaches. A first approach, known as manifold learning or non-linear dimensionality reduction, is based on the observation that some of the physical laws behind the data we observe are non-linear. In this case, trying to explain the intrinsic dimension of a dataset with a linear model is sometimes unrealistic. Instead, manifold learning methods assume a locally linear model. Moreover, with the emergence of statistical shape analysis, there has been a growing awareness that many types of data are naturally invariant to certain symmetries (rotations, reparametrizations, permutations...). Such properties are directly mirrored in the intrinsic dimension of such data. These invariances cannot be faithfully transcribed by Euclidean geometry. There is therefore a growing interest in modeling such data using finer structures such as Riemannian manifolds. A second recent approach to dimension reduction then consists in generalizing existing methods to non-Euclidean data. This is known as geometric learning. In order to combine both geometric learning and manifold learning, we investigated the method called locally linear embedding, which has the specificity of being based on the notion of barycenter, a notion a priori defined in Euclidean spaces but which generalizes to Riemannian manifolds.
In fact, the method called barycentric subspace analysis, which is one of those generalizing principal component analysis to Riemannian manifolds, is based on this notion as well. Here we rephrase both methods under the new notion of barycentric embeddings. Essentially, barycentric embeddings inherit the structure of most linear and non-linear dimension reduction methods, but rely on a (locally) barycentric, i.e. affine, model rather than a linear one. The core of our work lies in the analysis of these methods, both on a theoretical and practical level. In particular, we address the application of barycentric embeddings to two important examples in geometric learning: shapes and graphs. In addition to practical implementation issues, each of these examples raises its own theoretical questions, mostly related to the geometry of quotient spaces. In particular, we highlight that, compared to standard dimension reduction methods in graph analysis, barycentric embeddings stand out for their better interpretability. In parallel with these examples, we characterize the geometry of locally barycentric embeddings, which generalize the projection computed by locally linear embedding. Finally, algorithms for geometric manifold learning, novel in their approach, complete this work.
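The barycentric weights at the heart of locally linear embedding (and hence of the barycentric embeddings discussed above) solve a small constrained least-squares problem per point. A minimal Euclidean sketch, with an illustrative point and neighborhood (the Riemannian generalization studied in the thesis replaces these steps with manifold operations):

```python
import numpy as np

def barycentric_weights(x, neighbors, reg=1e-3):
    """Weights w (summing to 1) that best reconstruct x as a barycenter of
    its neighbors, as in locally linear embedding: minimize
    ||x - sum_j w_j n_j||^2 subject to sum_j w_j = 1."""
    Z = neighbors - x                          # center the neighbors on x
    G = Z @ Z.T                                # local Gram matrix
    G += reg * np.trace(G) * np.eye(len(G))    # regularize for stability
    w = np.linalg.solve(G, np.ones(len(G)))
    return w / w.sum()

x = np.array([0.0, 0.0])
neighbors = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
w = barycentric_weights(x, neighbors)  # symmetric neighborhood: equal weights
```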
Sokol, Marina. „Méthodes d'apprentissage semi-supervisé basé sur les graphes et détection rapide des nœuds centraux“. Phd thesis, Université Nice Sophia Antipolis, 2014. http://tel.archives-ouvertes.fr/tel-00998394.
Ayadi, Hèla. „Opérateur de Gauss-Bonnet semi-Fredholm et propriétés spectrales sur les graphes infinis“. Nantes, 2015. http://archive.bu.univ-nantes.fr/pollux/show.action?id=4e93e5ba-424b-4597-b472-15f4526b70c2.
In the context of an infinite locally finite weighted graph, we are interested in the study of the discrete Gauss-Bonnet operator, which is a Dirac-type operator (its square is the Laplacian operator). In particular, we focus on the conditions under which this operator is semi-Fredholm, a property needed to approach the Hodge decomposition theorem, which is important for solving problems such as Kirchhoff's problem. We present a discrete version of the work of Gilles Carron, which defines a new concept, non-parabolicity at infinity, ensuring that the Gauss-Bonnet operator has closed range. Another part of this thesis consists in studying the spectral properties of the Laplacian operator. We define two Laplacians: one acting on functions on vertices and the other acting on functions on edges. It is then a natural question to characterize the relation between their spectra in terms of geometric properties of the graph and properties of the operators. We show that the nonzero spectra of the two Laplacians coincide, using the Weyl criterion. In addition, we give an extension of the work of John Lott: under suitable weight conditions, we prove that the spectral value 0 lies in the spectrum of one of these two Laplacians.
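On a finite graph, the coincidence of the nonzero spectra is easy to check numerically, since the vertex Laplacian dᵀd and the edge Laplacian ddᵀ always share their nonzero eigenvalues. A toy verification on a 3-vertex path (illustrative only; the thesis concerns infinite graphs):

```python
import numpy as np

# Incidence matrix d of a path graph on 3 vertices with oriented edges
# 0->1 and 1->2: rows index edges, columns index vertices.
d = np.array([[-1., 1., 0.],
              [0., -1., 1.]])

L_vertices = d.T @ d   # Laplacian acting on functions on vertices
L_edges = d @ d.T      # Laplacian acting on functions on edges

ev_v = np.sort(np.linalg.eigvalsh(L_vertices))
ev_e = np.sort(np.linalg.eigvalsh(L_edges))

# Keep only the nonzero parts of the two spectra: they coincide.
nonzero_v = ev_v[np.abs(ev_v) > 1e-9]
nonzero_e = ev_e[np.abs(ev_e) > 1e-9]
```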
Sokol, Marina. „Méthodes d’apprentissage semi-supervisé basé sur les graphes et détection rapide des nœuds centraux“. Thesis, Nice, 2014. http://www.theses.fr/2014NICE4018/document.
Semi-supervised learning methods constitute a category of machine learning methods which use labelled points together with unlabelled data to tune the classifier. The main idea of semi-supervised methods is based on the assumption that the classification function should change smoothly over a similarity graph. In the first part of the thesis, we propose a generalized optimization approach for graph-based semi-supervised learning which includes as particular cases the Standard Laplacian, Normalized Laplacian, and PageRank based methods. Using random walk theory, we provide insights about the differences among the graph-based semi-supervised learning methods and give recommendations for the choice of the kernel parameters and labelled points. We illustrate all theoretical results with the help of synthetic and real data. As one example of real data, we consider classification of content and users in P2P systems. This application demonstrates that the proposed family of methods scales very well with the volume of data. The second part of the thesis is devoted to quick detection of network central nodes. The algorithms developed in the second part can be applied for the selection of high-quality labelled data, but also have other applications in information retrieval. Specifically, we propose random walk based algorithms for quick detection of large degree nodes and nodes with large values of Personalized PageRank. Finally, at the end of the thesis we suggest a new centrality measure which generalizes both current-flow betweenness centrality and PageRank. This new measure is particularly well suited for detection of network vulnerability.
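A minimal sketch of the PageRank member of this family: diffuse each class's labelled nodes with personalized PageRank over the similarity graph and classify unlabelled nodes by the larger score. The graph, seeds, and damping factor below are illustrative assumptions, not the thesis's experiments:

```python
import numpy as np

def pagerank_ssl(A, labels, alpha=0.85, n_iter=200):
    """Graph-based semi-supervised classification: run personalized
    PageRank from each class's labelled nodes and take the argmax class.
    labels: dict node -> class, for the labelled nodes only."""
    n = A.shape[0]
    P = A / A.sum(axis=1, keepdims=True)        # random-walk transition matrix
    classes = sorted(set(labels.values()))
    scores = np.zeros((n, len(classes)))
    for k, c in enumerate(classes):
        v = np.zeros(n)                         # restart distribution of class c
        seeds = [i for i, y in labels.items() if y == c]
        v[seeds] = 1.0 / len(seeds)
        f = v.copy()
        for _ in range(n_iter):                 # power iteration for PPR
            f = alpha * P.T @ f + (1 - alpha) * v
        scores[:, k] = f
    return np.array(classes)[scores.argmax(axis=1)]

# Two triangles joined by one edge (2-3); one labelled node per triangle.
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
pred = pagerank_ssl(A, {0: 0, 5: 1})  # each triangle inherits its seed's class
```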
Hollocou, Alexandre. „Nouvelles approches pour le partitionnement de grands graphes“. Thesis, Paris Sciences et Lettres (ComUE), 2018. http://www.theses.fr/2018PSLEE063.
Graphs are ubiquitous in many fields of research ranging from sociology to biology. A graph is a very simple mathematical structure consisting of a set of elements, called nodes, connected to each other by edges. It is nevertheless able to represent complex systems such as protein-protein interactions or scientific collaborations. Graph clustering is a central problem in the analysis of graphs, whose objective is to identify dense groups of nodes that are sparsely connected to the rest of the graph. These groups of nodes, called clusters, are fundamental to an in-depth understanding of graph structures. There is no universal definition of what a good cluster is, and different approaches might be best suited for different applications. Whereas most classic methods focus on finding node partitions, i.e. on coloring graph nodes so that each node has one and only one color, more elaborate approaches are often necessary to model the complex structure of real-life graphs and to address sophisticated applications. In particular, in many cases, we must consider that a given node can belong to more than one cluster. Besides, many real-world systems exhibit multi-scale structures, and one must seek hierarchies of clusters rather than flat clusterings. Furthermore, graphs often evolve over time and are too massive to be handled in one batch, so that one must be able to process streams of edges. Finally, in many applications, processing entire graphs is irrelevant or expensive, and it can be more appropriate to recover local clusters in the neighborhood of nodes of interest rather than color all graph nodes. In this work, we study alternative approaches and design novel algorithms to tackle these different problems.
The novel methods that we propose to address these different problems are mostly inspired by variants of modularity, a classic measure that assesses the quality of a node partition, and by random walks, stochastic processes whose properties are closely related to the graph structure. We provide analyses that give theoretical guarantees for the different proposed techniques, and endeavour to evaluate these algorithms on real-world datasets and use cases.
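Modularity, the partition-quality measure these methods build on, compares the fraction of intra-community edges with its expectation under a degree-preserving random model. A minimal sketch of the classic Newman definition on a toy graph (not the authors' variants):

```python
def modularity(adj, communities):
    """Newman modularity of a partition: fraction of edge endpoints inside
    communities minus its expectation under a degree-preserving null model.
    adj: dict node -> set of neighbours (undirected, no self-loops)."""
    m2 = sum(len(nbrs) for nbrs in adj.values())  # 2 * number of edges
    label = {v: c for c, nodes in enumerate(communities) for v in nodes}
    q = 0.0
    for u, nbrs in adj.items():
        for v in nbrs:
            if label[u] == label[v]:
                q += 1.0 / m2
    for nodes in communities:
        deg = sum(len(adj[v]) for v in nodes)
        q -= (deg / m2) ** 2
    return q

# Two triangles joined by a single bridge edge (2-3).
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
q_good = modularity(adj, [{0, 1, 2}, {3, 4, 5}])  # the natural split scores high
q_bad = modularity(adj, [{0, 1, 2, 3, 4, 5}])     # a single community scores 0
```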
Aucouturier, Jean-Julien. „Dix expériences sur la modélisation du timbre polyphonique“. Paris 6, 2006. https://hal.archives-ouvertes.fr/tel-01970963.
The majority of systems extracting high-level music descriptions from audio signals rely on a common, implicit model of the global sound or polyphonic timbre of a musical signal. This model represents the timbre of a texture as the long-term distribution of its local spectral features. The underlying assumption is rarely made explicit: the perception of the timbre of a texture is assumed to result from the most statistically significant feature windows. This thesis questions the validity of this assumption. To do so, we construct an explicit measure of the timbre similarity between polyphonic music textures, and variants thereof inspired by previous work in Music Information Retrieval. We show that the precision of such measures is bounded, and that the remaining error rate is not incidental. Notably, this class of algorithms tends to create false positives, which we call hubs, that are almost always the same songs regardless of the query. Their study shows that the perceptual saliency of feature observations is not necessarily correlated with their statistical significance with respect to the global distribution. In other words, music listeners routinely "hear" things that are not statistically significant in musical signals, but rather are the result of high-level cognitive reasoning, which depends on cultural expectations, a priori knowledge, and context. Much of the music we hear as being "piano music" is really music that we expect to be piano music. Such statistical/perceptual paradoxes are instrumental in the observed discrepancy between human perception of timbre and the models studied here.
Tilière, Béatrice de. „Dimères sur les graphes isoradiaux et modèle d'interfaces aléatoires en dimension 2+2“. Paris 11, 2004. http://www.theses.fr/2004PA112268.
The dimer model represents diatomic molecules adsorbed on the surface of a crystal. We suppose that the lattice satisfies a geometric condition called isoradiality, and moreover that the critical weight function is assigned to the edges of the lattice. The model then has a "critical" behavior, i.e. it can be in 2 different phases, solid or liquid, instead of 3 in general. Our three main results on the isoradial dimer model are the following. We prove an explicit formula for the growth rate of the partition function of the natural exhaustion of the infinite lattice, and for the maximal entropy Gibbs measure. The interesting feature of these two formulas lies in the fact that they only depend on the local structure of the graph. We believe this locality property to be specific to the isoradial case. Geometrically, dimer configurations can be interpreted as discrete surfaces described by a height function. We show that when the surfaces are chosen with respect to the maximal entropy Gibbs measure, the height function converges to a Gaussian free field. We introduce the triangular quadri-tile dimer model, where quadri-tilings are tilings by quadrilaterals made of adjacent right triangles. We show that this model is the superposition of two dimer models, and interpret it geometrically as surfaces of dimension 2 in a space of dimension 4. We study this model in the "critical" phase. We prove an explicit formula for the growth rate of the total partition function, and for a measure on the space of all quadri-tilings. It is the first random interface model in dimension 2+2 for which this kind of result can be obtained.
Gaüzère, Benoît. „Application des méthodes à noyaux sur graphes pour la prédiction des propriétés des molécules“. Caen, 2013. http://www.theses.fr/2013CAEN2043.
This work deals with the application of graph kernel methods to the prediction of molecular properties. In this document, we first present a state of the art of graph kernels used in chemoinformatics, particularly those based on bags of patterns. Within this framework, we introduce the treelet kernel, based on a set of trees which allows most of the structural information encoded in molecular graphs to be captured. We also propose a combination of this kernel with multiple kernel learning methods in order to extract a subset of relevant patterns. This kernel is then extended by including cyclic information, using two molecular representations defined by the relevant cycle graph and the relevant cycle hypergraph. The relevant cycle graph allows the cyclic system of a molecule to be encoded.
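Bag-of-patterns kernels such as the treelet kernel reduce, at their core, to comparing pattern-count vectors of two molecules. The sketch below shows that generic counting scheme with hypothetical pattern labels; the actual treelet enumeration in the thesis is considerably more involved:

```python
def pattern_kernel(bag1, bag2):
    """Generic bag-of-patterns graph kernel: dot product of pattern counts
    over the patterns the two molecules have in common."""
    common = set(bag1) & set(bag2)
    return sum(bag1[p] * bag2[p] for p in common)

# Hypothetical pattern counts for two small molecules (labels illustrative).
mol_a = {"C-C": 3, "C-O": 1, "C-C-C": 2}
mol_b = {"C-C": 2, "C-N": 1, "C-C-C": 1}
k_ab = pattern_kernel(mol_a, mol_b)  # shared patterns: C-C and C-C-C
k_aa = pattern_kernel(mol_a, mol_a)  # self-similarity (squared norm)
```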
Allart, Thibault. „Apprentissage statistique sur données longitudinales de grande taille et applications au design des jeux vidéo“. Thesis, Paris, CNAM, 2017. http://www.theses.fr/2017CNAM1136/document.
This thesis focuses on longitudinal time-to-event data, possibly large along the following three axes: number of individuals, observation frequency, and number of covariates. We introduce a penalised estimator based on the Cox complete likelihood with data-driven weights, together with proximal optimization algorithms to efficiently fit the model coefficients. We have implemented these methods in C++ and in the R package coxtv to allow everyone to analyse data sets bigger than RAM, using data streaming and online learning algorithms such as proximal stochastic gradient descent with adaptive learning rates. We illustrate performance on simulations and benchmark against existing models. Finally, we investigate the issue of video game design. We show that using our model on the large datasets available in the video game industry allows us to bring to light ways of improving the design of the studied games. First, we look at low-level covariates, such as equipment choices through time, and show that the model allows us to quantify the effect of each game element, giving designers ways to improve the game design. Finally, we show that the model can be used to extract more general design recommendations, such as the influence of difficulty on player motivation.
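The proximal optimization mentioned in the abstract can be illustrated on a simpler objective. The sketch below runs proximal gradient descent (ISTA) with the L1 soft-thresholding operator on a lasso least-squares problem; this is a stand-in for the thesis's penalised Cox likelihood, and all data and parameters are illustrative:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1 (the lasso penalty)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(X, y, lam=0.01, n_iter=500):
    """Proximal gradient descent for (1/2n)||y - Xb||^2 + lam * ||b||_1:
    a gradient step on the smooth part, then the proximal step."""
    n, p = X.shape
    step = n / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant of the gradient
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n
        beta = soft_threshold(beta - step * grad, step * lam)
    return beta

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
beta_true = np.array([2.0, 0.0, 0.0, -1.5, 0.0])  # sparse ground truth
y = X @ beta_true + 0.01 * rng.normal(size=200)
beta_hat = ista(X, y)  # recovers the two nonzero coefficients
```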
Pouthier, Baptiste. „Apprentissage profond et statistique sur données audiovisuelles dédié aux systèmes embarqués pour l'interface homme-machine“. Electronic Thesis or Diss., Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ4019.
In the rapidly evolving landscape of human-machine interfaces, deep learning has been nothing short of revolutionary. It has ushered in a new era of audio-visual algorithms which, in turn, have expanded the horizons of potential applications and strengthened the performance of traditional systems. However, these remarkable advancements come with a caveat: many of these algorithms are computationally demanding, rendering their integration onto embedded devices a formidable task. The primary focus of this thesis is to surmount this limitation through a comprehensive optimization effort, addressing the critical factors of latency and accuracy in audio-visual algorithms. Our approach entails a meticulous examination and enhancement of key components in the audio-visual human-machine interaction pipeline; we investigate and make contributions to fundamental aspects of audio-visual technology in Active Speaker Detection and Audio-Visual Speech Recognition tasks. By tackling these critical building blocks, we aim to bridge the gap between the vast potential of audio-visual algorithms and their practical application in embedded systems. Our research introduces efficient models for Active Speaker Detection. On the one hand, our novel audio-visual fusion strategy yields significant improvements over other state-of-the-art systems, while featuring a relatively simpler model. On the other hand, we explore neural architecture search, resulting in the development of a compact yet efficient architecture for the Active Speaker Detection problem. Furthermore, we present our work on audio-visual speech recognition, with a specific emphasis on keyword spotting. Our main contribution targets the visual aspect of speech recognition, with a graph-based approach designed to streamline the visual processing pipeline, promising simpler audio-visual recognition systems.
Sevi, Harry. „Analyse harmonique sur graphes dirigés et applications : de l'analyse de Fourier aux ondelettes“. Thesis, Lyon, 2018. http://www.theses.fr/2018LYSEN068/document.
The research conducted in this thesis aims to develop a harmonic analysis for functions defined on the vertices of an oriented graph. In the era of the data deluge, much data comes in the form of graphs and of data attached to these graphs. In order to analyze and exploit such graph data, we need to develop mathematically sound and numerically efficient methods. This development has led to the emergence of a new theoretical framework called signal processing on graphs, which aims to extend the fundamental concepts of conventional signal processing to graphs. Inspired by the multi-scale aspect of graphs and graph data, many multi-scale constructions have been proposed. However, they apply only to the undirected framework. The extension of harmonic analysis to an oriented graph, although natural, is complex. We therefore propose a harmonic analysis that uses the random walk operator as its starting point. First, we propose Fourier-type bases formed by the eigenvectors of the random walk operator. From these Fourier bases, we determine a notion of frequency by analyzing the variation of the eigenvectors. This frequency analysis derived from the random walk operator leads us to multi-scale constructions on oriented graphs. More specifically, we propose a wavelet frame construction as well as a decimated wavelet construction on directed graphs. We illustrate our harmonic analysis with various examples to show its efficiency and relevance.
Gabillon, Victor. „Algorithmes budgétisés d'itérations sur les politiques obtenues par classification“. Thesis, Lille 1, 2014. http://www.theses.fr/2014LIL10032/document.
This dissertation is motivated by the study of a class of reinforcement learning (RL) algorithms called classification-based policy iteration (CBPI). Contrary to standard RL methods, CBPI algorithms do not use an explicit representation of the value function. Instead, they use rollouts and estimate the action-value function of the current policy at a collection of states. Using a training set built from these rollout estimates, the greedy policy is learned as the output of a classifier. Thus, the policy generated at each iteration of the algorithm is no longer defined by an (approximated) value function, but instead by a classifier. In this thesis, we propose new algorithms that improve the performance of the existing CBPI methods, especially when they have a fixed budget of interaction with the environment. Our improvements address the following two shortcomings of the existing CBPI algorithms: 1) the rollouts that are used to estimate the action-value functions must be truncated and their number is limited, so we have to deal with a bias-variance tradeoff when estimating the rollouts; and 2) the rollouts are allocated uniformly over the states in the rollout set and the available actions, whereas a smarter allocation strategy could guarantee a more accurate training set for the classifier. We propose CBPI algorithms that address these issues, respectively, by: 1) using a value function approximation to improve the accuracy (balancing the bias and variance) of the rollout estimates, and 2) adaptively sampling the rollouts over the state-action pairs.
Laloë, Thomas. „Sur quelques problèmes d'apprentissage supervisé et non supervisé“. Phd thesis, Montpellier 2, 2009. http://www.theses.fr/2009MON20145.
The goal of this thesis is to contribute to the domain of statistical learning, and includes the development of methods that can deal with functional data. In the first part, we develop a nearest-neighbor approach for functional regression. In the second, we study the properties of a quantization method in infinite-dimensional spaces, and apply this approach to a behavioral study of schools of anchovies. The last part is dedicated to the problem of estimating level sets of the regression function in a multivariate context.
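A minimal sketch of nearest-neighbor regression with functional covariates (our toy setting, not the thesis's estimator or its consistency framework): curves are discretized on a grid and compared with the discretized L2 distance.

```python
import numpy as np

rng = np.random.default_rng(1)

# Functional covariates: n noisy curves X_i(t) on a grid, and a scalar
# response Y_i depending on the curve (here, on its mean level).
t = np.linspace(0.0, 1.0, 50)
n = 200
levels = rng.uniform(-1.0, 1.0, n)
X = levels[:, None] + 0.05 * rng.standard_normal((n, t.size))
Y = levels + 0.05 * rng.standard_normal(n)

def knn_functional_regression(x_new, X, Y, k=5):
    """k-NN regression with the (discretized) L2 distance between curves."""
    d = np.linalg.norm(X - x_new, axis=1)
    nearest = np.argsort(d)[:k]
    return Y[nearest].mean()

x_new = 0.5 + 0.05 * rng.standard_normal(t.size)  # a curve with level 0.5
y_hat = knn_functional_regression(x_new, X, Y)
```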
Ben-Hamou, Anna. „Concentration et compression sur alphabets infinis, temps de mélange de marches aléatoires sur des graphes aléatoires“. Thesis, Sorbonne Paris Cité, 2016. http://www.theses.fr/2016USPCC197/document.
This document presents the problems I have been interested in during my PhD thesis. I begin with a concise presentation of the main results, followed by three relatively independent parts. In the first part, I consider statistical inference problems on an i.i.d. sample from an unknown distribution over a countable alphabet. The first chapter is devoted to the concentration properties of the sample's profile and of the missing mass. This is a joint work with Stéphane Boucheron and Mesrob Ohannessian. After obtaining bounds on variances, we establish Bernstein-type concentration inequalities and exhibit a vast domain of sampling distributions for which the variance factor in these inequalities is tight. The second chapter presents a work in progress with Stéphane Boucheron and Elisabeth Gassiat, on the problem of universal adaptive compression over countable alphabets. We give bounds on the minimax redundancy of envelope classes, and construct a quasi-adaptive code on the collection of classes defined by a regularly varying envelope. In the second part, I consider random walks on random graphs with prescribed degrees. I first present a result obtained with Justin Salez, establishing the cutoff phenomenon for non-backtracking random walks. Under certain degree assumptions, we precisely determine the mixing time, the cutoff window, and show that the profile of the distance to equilibrium converges to the Gaussian tail function. Then I consider the problem of comparing the mixing times of the simple and non-backtracking random walks. The third part is devoted to the concentration properties of weighted sampling without replacement and corresponds to a joint work with Yuval Peres and Justin Salez.
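To make the missing-mass object concrete, here is a small sketch using the classical Good-Turing estimator on a geometric (countable-alphabet) source; the distribution and sample size are illustrative assumptions, and the thesis's concentration results are not reproduced here:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(2)

# i.i.d. sample from a geometric distribution over a countable alphabet.
n = 5000
p = 0.01
sample = rng.geometric(p=p, size=n)   # symbols 1, 2, 3, ...

counts = Counter(sample)

# True missing mass: total probability of the symbols never observed.
# For Geometric(p), P(X = k) = p * (1 - p)**(k - 1).
seen = np.array(list(counts))
true_missing = 1.0 - np.sum(p * (1.0 - p) ** (seen - 1))

# Good-Turing estimator of the missing mass: the fraction of the sample
# made of symbols observed exactly once.
n1 = sum(1 for c in counts.values() if c == 1)
gt_missing = n1 / n
```

Concentration results of the kind studied in the first chapter quantify how tightly such quantities fluctuate around their means.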
Cuturi, Marco. „Etude de noyaux de semigroupe sur objets structurés dans le cadre de l’apprentissage statistique“. Paris, ENMP, 2005. http://www.theses.fr/2005ENMP1329.
Kernel methods refer to a family of data analysis tools which may be used in standard learning settings such as classification or regression. Such tools are grounded on an a priori similarity measure between the objects to be handled, known as a kernel in the functional analysis literature. The problem of selecting the right kernel for a task is known to be tricky, notably when the objects have complex structures. We propose in this work various families of generic kernels for composite objects such as strings, graphs, or images, based on a theoretical framework that blends elements of reproducing kernel Hilbert space theory, information geometry, and harmonic analysis on semigroups. These kernels are also tested on datasets from the fields of bioinformatics and image analysis.
Trouillon, Théo. „Modèles d'embeddings à valeurs complexes pour les graphes de connaissances“. Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAM048/document.
The explosion of widely available relational data in the form of knowledge graphs enabled many applications, including automated personal agents, recommender systems, and enhanced web search results. The very large size and notorious incompleteness of these databases call for automatic knowledge graph completion methods to make these applications viable. Knowledge graph completion, also known as link prediction, deals with automatically understanding the structure of large knowledge graphs (labeled directed graphs) to predict missing entries (labeled edges). An increasingly popular approach consists in representing knowledge graphs as third-order tensors and using tensor factorization methods to predict their missing entries. State-of-the-art factorization models propose different trade-offs between modeling expressiveness and time and space complexity. We introduce a new model, ComplEx (for Complex Embeddings), to reconcile expressiveness and complexity through the use of complex-valued factorization, and explore its link with unitary diagonalization. We corroborate our approach theoretically and show that all possible knowledge graphs can be exactly decomposed by the proposed model. Our approach based on complex embeddings is arguably simple, as it only involves a complex-valued trilinear product, whereas other methods resort to more and more complicated composition functions to increase their expressiveness. The proposed ComplEx model is scalable to large datasets as it remains linear in both space and time, while consistently outperforming alternative approaches on standard link-prediction benchmarks.
We also demonstrate its ability to learn useful vectorial representations for other tasks, by enhancing word embeddings that improve performance on the natural language problem of entailment recognition between pairs of sentences. In the last part of this thesis, we explore the ability of factorization models to learn relational patterns from observed data. By their vectorial nature, it is not only hard to interpret why this class of models works so well, but also to understand where they fail and how they might be improved. We conduct an experimental survey of state-of-the-art models, not towards a purely comparative end, but as a means to gain insight into their inductive abilities. To assess the strengths and weaknesses of each model, we create simple tasks that exhibit, first, atomic properties of knowledge graph relations and, then, common inter-relational inference through synthetic genealogies. Based on these experimental results, we propose new research directions to improve on existing models, including ComplEx.
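The complex-valued trilinear product at the heart of ComplEx can be sketched as follows; the embedding dimension and random vectors are illustrative, but the scoring function (real part of the trilinear product with a conjugated object embedding) is the one named in the abstract. The conjugation is what lets a single relation embedding model asymmetric relations, while a real-valued relation embedding yields a symmetric score:

```python
import numpy as np

rng = np.random.default_rng(3)
K = 10  # embedding dimension (illustrative)

# Complex-valued embeddings for a subject entity s, relation r, object o.
e_s = rng.standard_normal(K) + 1j * rng.standard_normal(K)
w_r = rng.standard_normal(K) + 1j * rng.standard_normal(K)
e_o = rng.standard_normal(K) + 1j * rng.standard_normal(K)

def complex_score(e_s, w_r, e_o):
    """ComplEx score: Re(<e_s, w_r, conj(e_o)>). The fact (s, r, o) is
    predicted true when the score is high (e.g. through a sigmoid link)."""
    return np.sum(e_s * w_r * np.conj(e_o)).real

# With a generic complex relation embedding, the score is asymmetric
# in (s, o), so antisymmetric relations can be modeled:
s1 = complex_score(e_s, w_r, e_o)
s2 = complex_score(e_o, w_r, e_s)

# With a real relation embedding, the score becomes symmetric:
s_sym1 = complex_score(e_s, w_r.real + 0j, e_o)
s_sym2 = complex_score(e_o, w_r.real + 0j, e_s)
```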
Bouveyron, Charles. „Contributions à l'apprentissage statistique en grande dimension, adaptatif et sur données atypiques“. Habilitation à diriger des recherches, Université Panthéon-Sorbonne - Paris I, 2012. http://tel.archives-ouvertes.fr/tel-00761130.
Kalunga, Emmanuel. „Vers des interfaces cérébrales adaptées aux utilisateurs : interaction robuste et apprentissage statistique basé sur la géométrie riemannienne“. Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLV041/document.
In the last two decades, interest in brain-computer interfaces (BCI) has grown tremendously, with a number of research laboratories working on the topic. Since Vidal's Brain-Computer Interface Project of 1973, where BCI was introduced for rehabilitative and assistive purposes, the use of BCI has been extended to further applications such as neurofeedback and entertainment. Credit for this progress should be granted to an improved understanding of electroencephalography (EEG), improvements in its measurement techniques, and increased computational power. Despite the opportunities and potential of brain-computer interfaces, the technology has yet to reach maturity and be used outside laboratories. Several challenges need to be addressed before BCI systems can be used to their full potential. This work examines in depth some of these challenges, namely the specificity of BCI systems to users' physical abilities, the robustness of EEG representation and machine learning, and the adequacy of training data. The aim is to provide a BCI system that can adapt to individual users in terms of their physical abilities/disabilities and the variability of the recorded brain signals. To this end, two main avenues are explored: the first, which can be regarded as a high-level adjustment, is a change of BCI paradigms. It is about creating new paradigms that increase performance, ease the discomfort of using BCI systems, and adapt to the user's needs. The second avenue, regarded as a low-level solution, is the refinement of signal processing and machine learning techniques to enhance the EEG signal quality, pattern recognition, and classification. On the one hand, a new methodology in the context of assistive robotics is defined: a hybrid approach where a physical interface is complemented by a brain-computer interface (BCI) for human-machine interaction.
This hybrid system makes use of users' residual motor abilities and offers BCI as an optional choice: the user can choose when to rely on BCI and can alternate between the muscular- and brain-mediated interfaces at the appropriate time. On the other hand, for the refinement of signal processing and machine learning techniques, this work uses a Riemannian framework. A major limitation in this field is the poor spatial resolution of EEG. This limitation is due to the volume conduction effect, as the skull bones act as a non-linear low-pass filter, mixing the brain source signals and thus reducing the signal-to-noise ratio. Consequently, spatial filtering methods have been developed or adapted. Most of them (i.e. Common Spatial Patterns, xDAWN, and Canonical Correlation Analysis) are based on covariance matrix estimation. The covariance matrices are key in representing the information contained in the EEG signal and constitute an important feature for classification. In most existing machine learning algorithms, covariance matrices are treated as elements of a Euclidean space. However, being symmetric and positive-definite (SPD), covariance matrices lie on a curved space identified as a Riemannian manifold. Using covariance matrices as features for the classification of EEG signals, and handling them with the tools provided by Riemannian geometry, provides a robust framework for EEG representation and learning.
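A minimal sketch of the Riemannian machinery mentioned above, assuming the standard affine-invariant metric on SPD matrices (the toy "EEG" covariances are ours):

```python
import numpy as np
from scipy.linalg import fractional_matrix_power, logm

def airm_distance(A, B):
    """Affine-invariant Riemannian distance between SPD matrices:
    d(A, B) = || logm(A^{-1/2} B A^{-1/2}) ||_F."""
    A_inv_sqrt = fractional_matrix_power(A, -0.5)
    M = A_inv_sqrt @ B @ A_inv_sqrt
    return np.linalg.norm(logm(M), 'fro')

# Toy SPD matrices standing in for EEG trial covariances (3 channels).
rng = np.random.default_rng(4)
X = rng.standard_normal((3, 50))
A = X @ X.T / 50
Y = rng.standard_normal((3, 50))
B = Y @ Y.T / 50

d = airm_distance(A, B)

# The metric is invariant under congruence by any invertible matrix W,
# which is what makes it robust to linear mixing of the sources.
W = np.array([[2.0, 0.0, 0.0],
              [0.3, 1.0, 0.0],
              [0.0, 0.5, 1.0]])
d_congruent = airm_distance(W @ A @ W.T, W @ B @ W.T)
```

A minimum-distance-to-mean classifier, for instance, would assign a trial covariance to the class whose Riemannian mean is closest under this distance.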
Sourty, Raphael. „Apprentissage de représentation de graphes de connaissances et enrichissement de modèles de langue pré-entraînés par les graphes de connaissances : approches basées sur les modèles de distillation“. Electronic Thesis or Diss., Toulouse 3, 2023. http://www.theses.fr/2023TOU30337.
Natural language processing (NLP) is a rapidly growing field focused on developing algorithms and systems to understand and manipulate natural language data. The ability to effectively process and analyze natural language data has become increasingly important in recent years, as the volume of textual data generated by individuals, organizations, and society as a whole continues to grow significantly. One of the main challenges in NLP is the ability to represent and process knowledge about the world. Knowledge graphs are structures that encode information about entities and the relationships between them. They are a powerful tool for representing knowledge in a structured and formalized way, providing a holistic understanding of the underlying concepts and their relationships. The ability to learn knowledge graph representations has the potential to transform NLP and other domains that rely on large amounts of structured data. The work conducted in this thesis explores the concept of knowledge distillation and, more specifically, mutual learning for learning distinct and complementary space representations. Our first contribution is a new framework, KD-MKB, for learning entities and relations on multiple knowledge bases. The key objective of multi-graph representation learning is to empower the entity and relation models with different graph contexts that potentially bridge distinct semantic contexts. Our approach is based on the theoretical framework of knowledge distillation and mutual learning. It allows efficient knowledge transfer between KBs while preserving the relational structure of each knowledge graph. We formalize entity and relation inference between KBs as a distillation loss over posterior probability distributions on aligned knowledge.
Grounded on this formulation, we propose a cooperative distillation framework in which a set of KB models are jointly learned, using hard labels from their own context and soft labels provided by their peers. Our second contribution is a method for incorporating rich entity information from knowledge bases into pre-trained language models (PLMs). We propose an original cooperative knowledge distillation framework to align the masked language modeling pre-training task of language models with the link prediction objective of KB embedding models. By leveraging the information encoded in knowledge bases, our approach provides a new direction for improving the ability of PLM-based slot-filling systems to handle entities.
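A generic distillation objective in the spirit of the hard-label/soft-label scheme described above can be sketched as follows; the temperature, mixing weight, and toy logits are assumptions, and the actual KD-MKB losses over aligned knowledge differ:

```python
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()                      # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, hard_label,
                      T=2.0, alpha=0.5):
    """Hard cross-entropy on the true label, mixed with a temperature-
    softened KL term pulling the student's distribution towards the
    teacher's (the 'soft labels' provided by a peer model)."""
    p_student = softmax(student_logits)
    ce_hard = -np.log(p_student[hard_label])

    ps_T = softmax(student_logits, T)
    pt_T = softmax(teacher_logits, T)
    kl_soft = np.sum(pt_T * (np.log(pt_T) - np.log(ps_T)))

    # T**2 compensates the 1/T**2 scaling of the softened gradients.
    return alpha * ce_hard + (1 - alpha) * (T ** 2) * kl_soft

loss = distillation_loss([2.0, 0.5, -1.0], [1.5, 1.0, -0.5], hard_label=0)
# When the teacher agrees exactly with the student, the soft term vanishes:
loss_match = distillation_loss([2.0, 0.5, -1.0], [2.0, 0.5, -1.0],
                               hard_label=0)
```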
Gaüzère, Benoit. „Application des méthodes à noyaux sur graphes pour la prédiction des propriétés des molécules“. Phd thesis, Université de Caen, 2013. http://tel.archives-ouvertes.fr/tel-00933187.
Franco, Ana. „Impact de l'expertise linguistique sur le traitement statistique de la parole“. Doctoral thesis, Universite Libre de Bruxelles, 2012. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209565.
First, the question of the availability of the acquired knowledge to consciousness was addressed (Studies 1 and 2). Study 1 presents an adaptation of a method widely used in the field of implicit learning to account for the conscious or unconscious nature of knowledge acquired during learning, the Process Dissociation Procedure (Jacoby, 1991). We adapted this method to a situation involving the processing of transitional probabilities between syllables, in order to determine whether the representations acquired after exposure to an artificial language are available to consciousness. We then asked how the conscious character of the acquired knowledge can be modulated by linguistic expertise. The results suggest that although the participants learn in a similar way, the acquired knowledge appears to be less available to consciousness in bilingual participants.
Second, we examined the time course of statistical learning (Studies 3 and 4). Study 3 presents an adaptation of the Click Location Task (Fodor & Bever, 1965) as an online measure of the processing of transitional probabilities during speech segmentation. We then asked how the processing of language regularities can be modulated by linguistic expertise (Study 4); the results suggest that the two groups do not differ in terms of the time course of statistical processing.
Third, we asked what is learned in a statistical learning situation. Does the product of this learning correspond to chunks of information, "word candidates"? Or does learning instead result in a sensitivity to the transitional probabilities between elements? Study 5 proposes a method for determining the nature of the representations formed during statistical learning. The aim of this study was to contrast two models of statistical regularity learning in order to determine which better accounts for the results observed in a statistical learning situation. In Study 6, we examined the influence of linguistic expertise on the nature of the representations formed. The results suggest that bilingual participants form representations that are more faithful to the reality of the material, compared with monolinguals.
Finally, Study 7 explored a more complex statistical learning situation, namely artificial grammar learning. The comparison between monolingual and bilingual participants suggests that the groups do not differ in terms of the time course of learning. However, bilingual participants appear to form better representations of the presented material and to possess knowledge that is not available to consciousness, whereas monolinguals rely on conscious knowledge to perform the task.
Thus, the studies presented in this work suggest that linguistic expertise does not modulate the speed of processing of statistical information. However, in certain situations, being bilingual may constitute an advantage in terms of knowledge acquisition based on statistical processing, and also appears to have an impact on the availability of knowledge to consciousness. / The aim of this thesis was to determine whether linguistic expertise can modulate learning abilities, and more specifically statistical learning abilities. The regular use of two languages by bilingual individuals has been shown to have a broad impact on language and cognitive functioning. However, little is known about the effect of bilingualism on learning abilities. Language acquisition is a complex process that depends substantially on the processing of statistical regularities contained in speech. Because statistical information is language-specific, this information must be learned from scratch when one learns a new language. Unlike monolinguals, individuals who know more than one language, such as bilinguals or multilinguals, therefore face the challenge of having to master more than one set of statistical contingencies. Do bilingualism and increased experience with the statistical processing of speech confer an advantage in terms of learning abilities? In this thesis, we address these questions at three different levels. We compared monolinguals and bilinguals in terms of (1) the nature of the representations formed during learning, (2) the time course of statistical processing, and (3) the availability of statistical knowledge to consciousness. Exploring how linguistic expertise modulates statistical learning will contribute to a better understanding of the cognitive consequences of bilingualism, but could also provide clues regarding the link between statistical learning and language.
First, the present work aimed to determine whether knowledge acquired based on statistical regularities is amenable to conscious control (Study 1 and 2). Study 1 presents an adaptation of the Process Dissociation Procedure (PDP, Jacoby, 1991), a widely used method in the field of implicit learning to account for the conscious nature of knowledge acquired during a learning situation. We adapted this method to a statistical learning paradigm in which participants had to extract artificial words from a continuous speech stream. In Study 2, we used the PDP to explore the extent to which conscious access to the acquired knowledge is modulated by linguistic expertise. Our results suggest that although monolinguals and bilinguals learned the words similarly, knowledge seems to be less available to consciousness for bilingual participants.
Second, in Studies 3 and 4, we investigated the time course of statistical learning. Study 3 introduces a novel online measure of transitional-probability processing during speech segmentation: an adaptation of the Click Location Task (Fodor & Bever, 1965). In Study 4, we explored whether the processing of statistical regularities of speech could be modulated by linguistic expertise. The results suggest that the two groups did not differ in terms of the time course of statistical processing.
Third, we aimed at exploring what is learned in a statistical learning situation. Two different kinds of mechanisms may account for performance. Participants may either parse the material into smaller chunks that correspond to the words of the artificial language, or they may become progressively sensitive to the actual values of the transitional probabilities between syllables. Study 5 proposes a method to determine the nature of the representations formed during learning. The purpose of this study was to compare two models of statistical learning (PARSER vs. SRN) in order to determine which better reflects the representations formed as a result of statistical learning. In Study 6, we investigated the influence of linguistic expertise on the nature of the representations formed. The results suggest that bilinguals tend to form representations of the learned sequences that are more faithful to the reality of the material, compared to monolinguals.
Finally, Study 7 investigates how linguistic expertise influences a more complex statistical learning situation, namely artificial grammar learning. Comparison between monolingual and bilingual subjects suggests that subjects did not differ in terms of the time course of learning. However, bilinguals outperformed monolinguals in learning the grammar and seem to possess both conscious and unconscious knowledge, whereas monolinguals’ performance was only based on conscious knowledge.
To sum up, the studies presented in this work suggest that linguistic expertise does not modulate the speed of processing of statistical information. However, bilinguals seem to make better use of the learned regularities and outperformed monolinguals in some specific situations. Moreover, linguistic expertise also seems to have an impact on the availability of knowledge to consciousness.
Doctorat en Sciences Psychologiques et de l'éducation
Li, Huihua. „Généralisation de l'ordre et des paramètres de macro-actions par apprentissage basé sur l'explication. Extension de l'apprentissage par explications sur l'ordre partiel“. Paris 6, 1992. http://www.theses.fr/1992PA066233.
Ghrissi, Amina. „Ablation par catheter de fibrillation atriale persistante guidée par dispersion spatiotemporelle d’électrogrammes : Identification automatique basée sur l’apprentissage statistique“. Thesis, Université Côte d'Azur, 2021. http://www.theses.fr/2021COAZ4026.
Catheter ablation is increasingly used to treat atrial fibrillation (AF), the most common sustained cardiac arrhythmia encountered in clinical practice. A recent patient-tailored AF ablation therapy, with a 95% procedural success rate, is based on the use of a multipolar mapping catheter called PentaRay. It targets areas of spatiotemporal dispersion (STD) in the atria as potential AF drivers. STD stands for a delay of the cardiac activation observed in intracardiac electrograms (EGMs) across contiguous leads. In practice, interventional cardiologists localize STD sites visually using the PentaRay multipolar mapping catheter. This thesis aims to automatically characterize and identify ablation sites in STD-based ablation of persistent AF using machine learning (ML), including deep learning (DL) techniques. In the first part, EGM recordings are classified into STD vs. non-STD groups. However, the highly imbalanced class ratio of the dataset hampers classification performance. We tackle this issue with adapted data augmentation techniques that help achieve good classification. The overall performance is high, with accuracy and AUC around 90%. First, two approaches are benchmarked: feature engineering and automatic feature extraction from a time series called maximal voltage absolute values at any of the bipoles (VAVp). Statistical features are extracted and fed to ML classifiers, but no important dissimilarity is obtained between the STD and non-STD categories. Results show that the supervised classification of raw VAVp time series into the same categories is promising, with accuracy, AUC, sensitivity, and specificity around 90%. Second, the classification of raw multichannel EGM recordings is performed. Shallow convolutional arithmetic circuits are investigated for their promising theoretical interest, but experimental results on synthetic data are unsuccessful. We then move on to more conventional supervised ML tools.
We design a selection of data representations adapted to different ML and DL models, and benchmark their performance in terms of classification and computational cost. Transfer learning is also assessed. The best performance is achieved with a convolutional neural network (CNN) model for classifying raw EGM matrices: the average performance over cross-validation reaches 94% accuracy and AUC, with an F1-score of 60%. In the second part, EGM recordings acquired during mapping are labeled as ablated vs. non-ablated according to their proximity to the ablation sites, then classified into these categories. STD labels, previously defined by interventional cardiologists during the ablation procedure, are also aggregated as a prior probability in the classification task. Classification results on the test set show that a shallow CNN gives the best performance, with an F1-score of 76%. Aggregating the STD labels does not improve the model's performance. Overall, this work is among the first attempts to apply statistical analysis and ML tools to automatically identify successful ablation areas in STD-based ablation. By providing interventional cardiologists with a real-time objective measure of STD, the proposed solution has the potential to improve the efficiency and effectiveness of this fully patient-tailored catheter ablation approach for treating persistent AF.
Mazari, Ahmed. „Apprentissage profond pour la reconnaissance d’actions en vidéos“. Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS171.
Nowadays, video content is ubiquitous due to the popular use of the internet and smartphones, as well as social media. Many daily life applications, such as video surveillance and video captioning, as well as scene understanding, require sophisticated technologies to process video data. It has become crucially important to develop automatic means to analyze and interpret the large amount of available video data. In this thesis, we are interested in video action recognition, i.e. the problem of assigning action categories to sequences of videos. This can be seen as a key ingredient in building the next generation of vision systems. It is tackled with AI frameworks, mainly with ML and deep ConvNets. Current ConvNets are increasingly deep and data-hungry, which makes their success dependent on the abundance of labeled training data. ConvNets also rely on (max or average) pooling, which reduces the dimensionality of output layers (and hence attenuates their sensitivity to the availability of labeled data); however, this process may dilute the information of upstream convolutional layers and thereby affect the discrimination power of the trained video representations, especially when the learned action categories are fine-grained.
Pineau, Edouard. „Contributions to representation learning of multivariate time series and graphs“. Electronic Thesis or Diss., Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAT037.
Machine learning (ML) algorithms are designed to learn models that can make decisions or predictions from data, across a large panel of tasks. In general, the learned models are statistical approximations of the true/optimal unknown decision models. The efficiency of a learning algorithm depends on an equilibrium between model richness, the complexity of the data distribution, and the complexity of the task to solve from the data. Nevertheless, for computational convenience, statistical decision models often adopt simplifying assumptions about the data (e.g. linear separability, independence of the observed variables, etc.). However, when the data distribution is complex (e.g. high-dimensional, with nonlinear interactions between observed variables), these simplifying assumptions can be counterproductive. In this situation, a solution is to feed the model with an alternative representation of the data. The objective of data representation is to separate the information relevant to the task from the noise, in particular when the relevant information is hidden (latent), in order to help the statistical model. Until recently, before the rise of modern ML, many standard representations consisted of expert-based handcrafted preprocessing of the data. Recently, a branch of ML called deep learning (DL) completely shifted this paradigm. DL uses neural networks (NNs), a family of powerful parametric functions, as learned data representation pipelines. These recent advances have outperformed most handcrafted representations in many domains. In this thesis, we are interested in learning representations of multivariate time series (MTS) and graphs. MTS and graphs are particular objects that do not directly meet the standard requirements of ML algorithms: they can have variable size and non-trivial alignment, so that comparing two MTS or two graphs with standard metrics is generally not relevant.
Hence, particular representations are required for their analysis with ML approaches. The contributions of this thesis consist of practical and theoretical results presenting new MTS and graph representation learning frameworks. Two MTS representation learning frameworks are dedicated to the ageing detection of mechanical systems. First, we propose a model-based MTS representation learning framework called Sequence-to-graph (Seq2Graph). Seq2Graph assumes that the observed data has been generated by a model whose graphical representation is a causality graph. It then represents, using an appropriate neural network, the sample on this graph. From this representation, when appropriate, we can extract interesting information about the state of the studied mechanical system. Second, we propose a generic trend detection method called Contrastive Trend Estimation (CTE). CTE learns to classify pairs of samples with respect to the monotony of the trend between them. We show that with this method, under few assumptions, we identify the true state underlying the studied mechanical system, up to a monotone scalar transform. Two graph representation learning frameworks are dedicated to the classification of graphs. First, we propose to view graphs as sequences of nodes and build a framework based on recurrent neural networks to represent and classify them. Second, we analyze a simple baseline feature for graph classification: the Laplacian spectrum. We show that this feature meets minimal requirements for classifying graphs when all the meaningful information is contained in the structure of the graphs.
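The Laplacian-spectrum baseline can be sketched in a few lines; the padding length and the toy graphs are our illustrative choices. The spectrum is invariant to node relabeling, which is what makes it usable as a graph-level feature:

```python
import numpy as np

def laplacian_spectrum(A, k=4):
    """Sorted eigenvalues of the combinatorial Laplacian L = D - A,
    truncated/zero-padded to length k, used as a simple
    permutation-invariant graph feature."""
    L = np.diag(A.sum(axis=1)) - A
    eig = np.sort(np.linalg.eigvalsh(L))
    out = np.zeros(k)
    out[:min(k, eig.size)] = eig[:k]
    return out

# Two isomorphic triangles (relabeled nodes) share the same spectrum...
tri = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
perm = np.array([2, 0, 1])
tri_relab = tri[np.ix_(perm, perm)]

# ...while a path on 3 nodes has a different one.
path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
```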
Papa, Guillaume. „Méthode d'échantillonnage appliqué à la minimisation du risque empirique“. Electronic Thesis or Diss., Paris, ENST, 2018. http://www.theses.fr/2018ENST0005.
In this manuscript, we present and study sampling strategies applied to problems in statistical learning. The goal is to deal with the problems that usually arise in a large-data context, when the number of observations and their dimensionality constrain the learning process. We therefore propose to address this problem using two sampling strategies: accelerating the learning process by sampling the most helpful observations, and simplifying the problem by discarding some observations to reduce its complexity and size. We first consider the context of binary classification, when the observations used to train a classifier come from a sampling/survey scheme and present a complex dependency structure, for which we establish generalization bounds. We then study the implementation of stochastic gradient descent when observations are drawn non-uniformly. We conclude this thesis by studying the problem of graph reconstruction, for which we establish new theoretical results.
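Non-uniform sampling for SGD can be sketched on a toy least-squares problem; the proposal (sampling proportional to row norms) and the importance weight 1/(n·p_i), which keeps the stochastic gradient unbiased, are a standard illustration rather than the manuscript's exact scheme:

```python
import numpy as np

rng = np.random.default_rng(5)

# Least-squares toy problem: minimize (1/n) * sum_i (w @ x_i - y_i)^2 / 2.
n, d = 500, 3
Xd = rng.standard_normal((n, d))
w_true = np.array([1.0, -2.0, 0.5])
y = Xd @ w_true + 0.01 * rng.standard_normal(n)

# Non-uniform sampling probabilities proportional to row norms;
# the weight 1 / (n * p_i) keeps the stochastic gradient unbiased.
p = np.linalg.norm(Xd, axis=1)
p /= p.sum()

w = np.zeros(d)
lr = 0.05
for _ in range(5000):
    i = rng.choice(n, p=p)                    # draw index i with prob p_i
    grad_i = (Xd[i] @ w - y[i]) * Xd[i]       # gradient of sample i's loss
    w -= lr * grad_i / (n * p[i])             # importance-weighted step
```

Sampling large-norm rows more often while down-weighting their gradients reduces the variance of the stochastic gradient compared to uniform sampling.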
Simonovsky, Martin. „Deep learning on attributed graphs“. Thesis, Paris Est, 2018. http://www.theses.fr/2018PESC1133/document.
Graphs are a powerful concept for representing relations between pairs of entities. Data with an underlying graph structure can be found across many disciplines, describing chemical compounds, surfaces of three-dimensional models, social interactions, or knowledge bases, to name only a few. There is a natural desire to understand such data better. Deep learning (DL) has achieved significant breakthroughs in a variety of machine learning tasks in recent years, especially where the data is structured on a grid, such as in text, speech, or image understanding. However, surprisingly little has been done to explore the applicability of DL to graph-structured data directly. The goal of this thesis is to investigate architectures for DL on graphs and to study how to transfer, adapt, or generalize to this domain concepts that work well on sequential and image data. We concentrate on two important primitives: embedding graphs or their nodes into a continuous vector-space representation (encoding) and, conversely, generating graphs from such vectors (decoding). To that end, we make the following contributions. First, we introduce Edge-Conditioned Convolutions (ECC), a convolution-like operation on graphs performed in the spatial domain, where filters are dynamically generated based on edge attributes. The method is used to encode graphs with arbitrary and varying structure. Second, we propose SuperPoint Graph, an intermediate point-cloud representation with rich edge attributes encoding the contextual relationships between object parts. Based on this representation, ECC is employed to segment large-scale point clouds without a major sacrifice in fine details. Third, we present GraphVAE, a graph generator that can decode graphs with a variable but upper-bounded number of nodes, making use of approximate graph matching to align the predictions of an autoencoder with its inputs. The method is applied to the task of molecule generation.
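The core mechanism of an edge-conditioned convolution, filters generated from edge attributes, can be sketched in a few lines. The `toy_weight_net` below is a hypothetical stand-in for the small filter-generating network that a real ECC learns; the message-averaging structure is the part that mirrors the abstract's description.

```python
import numpy as np

def ecc_layer(H, edges, edge_attrs, weight_net):
    """One simplified edge-conditioned convolution step: each node averages
    messages W(e_ij) @ h_j over its incoming edges, where the filter matrix
    W is generated dynamically from the edge attribute e_ij."""
    out = np.zeros_like(H)
    deg = np.zeros(len(H))
    for (i, j), attr in zip(edges, edge_attrs):
        W = weight_net(attr)          # dynamically generated d x d filter
        out[i] += W @ H[j]
        deg[i] += 1
    deg[deg == 0] = 1                 # isolated nodes keep a zero message
    return out / deg[:, None]

# Hypothetical filter generator: a fixed linear map from a scalar edge
# attribute to a 2x2 filter (a real ECC learns this mapping end-to-end).
def toy_weight_net(attr):
    return attr * np.eye(2)

H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
edges = [(0, 1), (0, 2), (1, 2)]
attrs = [2.0, 4.0, 1.0]
out = ecc_layer(H, edges, attrs, toy_weight_net)
```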
Colin, Igor. „Adaptation des méthodes d’apprentissage aux U-statistiques“. Electronic Thesis or Diss., Paris, ENST, 2016. http://www.theses.fr/2016ENST0070.
With the increasing availability of large amounts of data, computational complexity has become a keystone of many machine learning algorithms. Stochastic optimization algorithms and distributed/decentralized methods have been widely studied over the last decade and provide increased scalability for optimizing an empirical risk that is separable in the data sample. Yet, in a wide range of statistical learning problems, the risk is accurately estimated by U-statistics, i.e., functionals of the training data with low variance that take the form of averages over d-tuples. We first tackle the problem of sampling for empirical risk minimization. We show that empirical risks can be replaced by drastically simpler Monte-Carlo estimates based on O(n) terms only, usually referred to as incomplete U-statistics, without damaging the learning rate. We establish uniform deviation results, and numerical examples show that such an approach surpasses more naive subsampling techniques. We then focus on decentralized estimation, where the data sample is distributed over a connected network. We introduce new synchronous and asynchronous randomized gossip algorithms which simultaneously propagate data across the network and maintain local estimates of the U-statistic of interest. We establish convergence rate bounds with explicit data- and network-dependent terms. Finally, we deal with the decentralized optimization of functions that depend on pairs of observations. As in the estimation case, we introduce a method based on concurrent local updates and data propagation. Our theoretical analysis reveals that the proposed algorithms preserve the convergence rate of centralized dual averaging up to an additive bias term. Our simulations illustrate the practical interest of our approach.
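The complete-versus-incomplete U-statistic trade-off described above can be illustrated directly. The sketch below uses the Gini mean difference as an example pairwise kernel; with a budget of O(n) randomly drawn pairs, the incomplete estimate stays close to the full O(n^2) average.

```python
import numpy as np

rng = np.random.default_rng(42)

def complete_u_stat(x, kernel):
    """Complete pairwise U-statistic: average of kernel(x_i, x_j) over all
    O(n^2) distinct pairs."""
    n = len(x)
    vals = [kernel(x[i], x[j]) for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(vals))

def incomplete_u_stat(x, kernel, budget):
    """Incomplete U-statistic: average over `budget` pairs drawn at random,
    a cheap Monte-Carlo surrogate for the complete statistic."""
    n = len(x)
    idx = rng.integers(0, n, size=(budget, 2))
    idx = idx[idx[:, 0] != idx[:, 1]]      # drop degenerate pairs i == j
    return float(np.mean([kernel(x[i], x[j]) for i, j in idx]))

# Example kernel: the Gini mean difference h(a, b) = |a - b|.
x = rng.normal(size=500)
gini = lambda a, b: abs(a - b)
full = complete_u_stat(x, gini)              # averages 124,750 pairs
approx = incomplete_u_stat(x, gini, budget=2000)
```

For standard normal data the target value is 2/sqrt(pi), roughly 1.128, and both estimates land near it.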
Maes, Francis. „Learning in Markov decision processes for structured prediction : applications to sequence labeling, tree transformation and learning for search“. Paris 6, 2009. http://www.theses.fr/2009PA066500.
Pierrefeu, Amicie de. „Apprentissage automatique avec parcimonie structurée : application au phénotypage basé sur la neuroimagerie pour la schizophrénie“. Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS329/document.
Schizophrenia is a disabling chronic mental disorder characterized by various symptoms such as hallucinations and delusions, as well as impairments in high-order cognitive functions. Over the years, Magnetic Resonance Imaging (MRI) has been increasingly used to gain insights into the structural and functional abnormalities inherent to the disorder. Recent progress in machine learning, together with the availability of large datasets, now paves the way to capture complex relationships and to make inferences at an individual level, in the perspective of computer-aided diagnosis/prognosis or biomarker discovery. Given the limitations of state-of-the-art sparse algorithms in producing stable and interpretable predictive signatures, we have pushed regularization approaches forward, extending classical algorithms with structural constraints derived from the known biological structure (the spatial structure of the brain) in order to force the solution to adhere to biological priors, producing more plausible and interpretable solutions. Such structured sparsity constraints have been leveraged to identify, first, a neuroanatomical signature of schizophrenia and, second, a functional neuroimaging signature of hallucinations in patients with schizophrenia. Additionally, we extended the popular PCA (Principal Component Analysis) with spatial regularization to identify interpretable patterns of the neuroimaging variability in either functional or anatomical meshes of the cortical surface.
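A minimal sketch of combining sparsity with a spatial-structure penalty, in the spirit of the structured-sparsity constraints described above (a GraphNet-style objective solved by ISTA; the chain Laplacian and all parameter values are illustrative assumptions, not the thesis's actual model).

```python
import numpy as np

rng = np.random.default_rng(0)

def graphnet_ista(X, y, L, l1=0.1, smooth=1.0, steps=500):
    """Sparse, spatially smooth regression (GraphNet-style sketch):
    minimize 0.5*||Xw - y||^2 + 0.5*smooth*w'Lw + l1*||w||_1 via ISTA,
    where the Laplacian L encodes the spatial structure of the features."""
    w = np.zeros(X.shape[1])
    # Step size from a Lipschitz bound on the smooth part of the objective.
    lip = np.linalg.norm(X, 2) ** 2 + smooth * np.linalg.norm(L, 2)
    for _ in range(steps):
        grad = X.T @ (X @ w - y) + smooth * (L @ w)
        z = w - grad / lip
        w = np.sign(z) * np.maximum(np.abs(z) - l1 / lip, 0.0)  # soft-threshold
    return w

# Toy 1-D "brain": 20 features on a chain, a contiguous block of them active.
d = 20
path = np.diag(np.ones(d - 1), 1) + np.diag(np.ones(d - 1), -1)
L = np.diag(path.sum(axis=1)) - path
w_true = np.zeros(d)
w_true[5:10] = 1.0
X = rng.normal(size=(100, d))
y = X @ w_true + 0.01 * rng.normal(size=100)
w_hat = graphnet_ista(X, y, L)
```

The Laplacian term rewards solutions that vary smoothly along the chain, which is the mechanism that makes the recovered weight map spatially coherent rather than scattered.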
Mahé, Pierre. „Fonctions noyaux pour molécules et leur application au criblage virtuel par machines à vecteurs de support“. Phd thesis, École Nationale Supérieure des Mines de Paris, 2006. http://pastel.archives-ouvertes.fr/pastel-00002191.
Hubert, Nicolas. „Mesure et enrichissement sémantiques des modèles à base d'embeddings pour la prédiction de liens dans les graphes de connaissances“. Electronic Thesis or Diss., Université de Lorraine, 2024. http://www.theses.fr/2024LORR0059.
Knowledge graph embedding models (KGEMs) have gained considerable traction in recent years. These models learn a vector representation of knowledge graph entities and relations, a.k.a. knowledge graph embeddings (KGEs). This thesis specifically explores the advancement of KGEMs for the link prediction (LP) task, which is of utmost importance as it underpins several downstream applications such as recommender systems. In this thesis, various challenges around the use of KGEMs for LP are identified: the scarcity of semantically rich resources, the unidimensional nature of evaluation frameworks, and the lack of semantic considerations in prevailing machine learning-based approaches. Central to this thesis is the proposition of novel solutions to these challenges. Firstly, the thesis contributes to the development of semantically rich resources: mainstream datasets for link prediction are enriched using schema-based information, EducOnto and EduKG are proposed to overcome the paucity of resources in the educational domain, and PyGraft is introduced as an innovative open-source tool for generating synthetic ontologies and knowledge graphs. Secondly, the thesis proposes a new semantic-oriented evaluation metric, Sem@K, offering a multi-dimensional perspective on model performance. Importantly, popular models are reassessed using Sem@K, which reveals essential insights into their respective capabilities and highlights the need for multi-faceted evaluation frameworks. Thirdly, the thesis delves into the development of neuro-symbolic approaches, transcending traditional machine learning paradigms. These approaches not only demonstrate improved semantic awareness but also extend their utility to diverse applications such as recommender systems.
In summary, the present work not only redefines the evaluation and functionality of knowledge graph embedding models but also sets the stage for more versatile, interpretable AI systems, underpinning future explorations at the intersection of machine learning and symbolic reasoning.
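A toy illustration of the idea behind a semantic-oriented metric such as Sem@K: score the top-k predictions by whether their type is valid for the relation, independently of exact-match accuracy. The entity names, types, and exact scoring details here are hypothetical; the thesis's metric may differ.

```python
def sem_at_k(ranked_entities, entity_types, valid_types, k):
    """Sem@K-style score (sketch): the share of the top-k predicted entities
    whose type is semantically valid for the relation, independently of
    whether the exact gold entity was retrieved."""
    top = ranked_entities[:k]
    return sum(1 for e in top if entity_types.get(e) in valid_types) / k

# Hypothetical ranking for the query (Paris, capital_of, ?):
# the relation's range calls for entities of type Country.
ranking = ["France", "Lyon", "Spain", "Einstein", "Italy"]
types = {"France": "Country", "Spain": "Country", "Italy": "Country",
         "Lyon": "City", "Einstein": "Person"}
score = sem_at_k(ranking, types, {"Country"}, k=5)  # 3 of 5 valid -> 0.6
```

A model can thus rank the gold answer first (high Hits@1) while still polluting the rest of its top-k with semantically impossible candidates, which is exactly the blind spot such a metric exposes.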
Bordes, Antoine. „Nouveaux Algorithmes pour l'Apprentissage de Machines à Vecteurs Supports sur de Grandes Masses de Données“. Phd thesis, Université Pierre et Marie Curie - Paris VI, 2010. http://tel.archives-ouvertes.fr/tel-00464007.
Khaleghi, Azadeh. „Sur quelques problèmes non-supervisés impliquant des séries temporelles hautement dépendantes“. Phd thesis, Université des Sciences et Technologie de Lille - Lille I, 2013. http://tel.archives-ouvertes.fr/tel-00920333.
Dumora, Christophe. „Estimation de paramètres clés liés à la gestion d'un réseau de distribution d'eau potable : Méthode d'inférence sur les noeuds d'un graphe“. Thesis, Bordeaux, 2020. http://www.theses.fr/2020BORD0325.
The rise of data generated by sensors and operational tools for water distribution network (WDN) management makes these systems more and more complex and, in general, events harder to predict. The history of data related to the quality of distributed water, crossed with knowledge of network assets, contextual data, and temporal parameters, leads to the study of a complex system, due both to its volume and to the interactions between these various types of data, which may vary in time and space. This wide variety of data is brought together through the use of mathematical graphs, which make it possible to represent the WDN as a whole, along with all the events that may arise in it or influence its proper functioning. Graph theory then allows a structural and spectral analysis of WDNs to answer specific needs and enhance existing processes. These graphs are then used to address the problem of inference on the nodes of a large graph from data observed on a small number of nodes. An optimization-based approach is used to construct a flow variable on every node of the graph (and therefore at any point of the physical network), using flow algorithms and data measured in real time by flowmeters. Then, a kernel prediction approach based on a Ridge estimator, which raises spectral analysis problems for a large sparse matrix, allows a signal measured on specific nodes of a graph to be inferred at any point of a WDN.
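The node-inference step can be sketched with a small kernel ridge estimator on a graph. The kernel choice (L + reg*I)^-1 and the dense solve are simplifying assumptions for illustration; the thesis targets large sparse matrices where this inverse is never formed explicitly.

```python
import numpy as np

def infer_graph_signal(L, observed_idx, observed_vals, reg=1e-2):
    """Infer a smooth signal on every node of a graph from values observed
    on a few nodes, via kernel ridge regression with the Laplacian-based
    kernel K = (L + reg*I)^-1 (dense sketch of the approach)."""
    n = L.shape[0]
    K = np.linalg.inv(L + reg * np.eye(n))
    Ko = K[np.ix_(observed_idx, observed_idx)]
    alpha = np.linalg.solve(Ko + reg * np.eye(len(observed_idx)), observed_vals)
    return K[:, observed_idx] @ alpha

# Path graph 0-1-2-3-4 (a tiny "pipe"), signal observed only at the endpoints.
A = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
L = np.diag(A.sum(axis=1)) - A
estimate = infer_graph_signal(L, [0, 4], [0.0, 4.0])  # smooth fill-in between
```

Because the kernel penalizes signals that vary sharply across edges, the unobserved interior nodes receive values interpolating between the two measurements.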
Owusu, Patrick Asante. „Modélisation de dépendances dans des séries temporelles co-évolutives“. Electronic Thesis or Diss., Université de Lorraine, 2024. http://www.theses.fr/2024LORR0104.
Current research in time series analysis shows that there are insufficient formal approaches for modelling the dependencies of multiple or co-evolving time series as they change over time. In this dissertation, we develop a formal approach for analysing the temporality and evolution of dependencies via the definition of sub-time series, where a sub-time series is a segment of the original time series data. In general, we design an approach based on the principle of sliding windows to analyse the temporal nature of, and changes in, the dependencies between evolving time series. More precisely, each sub-time series is analysed independently to understand the local dependencies and how these dependencies shift as the window moves forward in time. This allows us to model the temporal evolution of dependencies at a finer granularity. Our contributions relating to the modelling of dependencies highlight the significance of understanding the dynamic interconnections between multiple time series that evolve together over time. The primary objective is to develop robust techniques to effectively capture these evolving dependencies, thereby improving the analysis and prediction of complex systems such as financial markets, climate systems, and other domains generating voluminous time series data. The dissertation explores the use of autoregressive models and proposes novel methods for identifying and modelling these dependencies, addressing the limitations of traditional methods that often overlook the temporal dynamics and scalability required for handling large datasets. A core aspect of the research is the development of a two-step approach to detect and model evolving effects in multiple time series. The first step involves identifying patterns to recreate series variations over various time intervals using finite linear models. This step is crucial for capturing the temporal dependencies within the data.
By leveraging a sequence of bipartite graphs, the study models change across multiple time series, linking repetitive and new dependencies at varying time durations in sub-series. This approach not only simplifies the process of identifying dependencies but also provides a scalable solution for analysing large datasets, as demonstrated through experiments with, for example, real-world financial market data. The dissertation further emphasises the importance of interpretability in modelling co-evolving time series. By integrating large language models (LLMs) and context-aware techniques, the research enhances the understanding of the underlying factors driving changes in time series data. This interpretability is achieved through the construction of temporal graphs and the serialisation of these graphs into natural language, providing clear and comprehensive insights into the dependencies and interactions within the data. The combination of autoregressive models and LLMs enables the generation of plausible and interpretable predictions, making the approach suitable for real-world applications where trust and clarity in model outputs are paramount
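The sliding-window analysis of dependencies between co-evolving series can be sketched as follows. The windowing scheme and the use of a Pearson-correlation threshold as the dependency test are illustrative assumptions; the dissertation's models are richer (autoregressive, bipartite-graph based).

```python
import numpy as np

def sliding_dependencies(series_a, series_b, window, threshold=0.8):
    """Flag, for each non-overlapping window, whether the two series are
    linearly dependent there (|Pearson correlation| >= threshold) -- a
    minimal sketch of per-window dependency detection."""
    flags = []
    for start in range(0, len(series_a) - window + 1, window):
        a = series_a[start:start + window]
        b = series_b[start:start + window]
        r = np.corrcoef(a, b)[0, 1]
        flags.append(bool(abs(r) >= threshold))
    return flags

# Two series coupled during the first half and decoupled afterwards.
rng = np.random.default_rng(1)
t = np.arange(100)
a = np.sin(0.3 * t) + 0.05 * rng.normal(size=100)
b = np.concatenate([a[:50] + 0.05 * rng.normal(size=50),   # coupled half
                    rng.normal(size=50)])                   # independent half
flags = sliding_dependencies(a, b, window=25)  # coupled, coupled, then not
```

Tracking how such per-window flags (or richer per-window models) change over time is precisely what reveals a dependency appearing, persisting, or vanishing.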
Cortijo, Aragon Santiago José. „Sécurité pour des infrastructures critiques SCADA fondée sur des modèles graphiques probabilistes“. Electronic Thesis or Diss., Sorbonne université, 2018. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2018SORUS502.pdf.
In this thesis, two new Bayesian-network-based models are proposed: conditional truncated densities Bayesian networks (ctdBN) and conditional densities Bayesian networks (cdBN). They model joint probability distributions of systems combining discrete and continuous random variables. We analyze the complexity of exact inference for the proposed models, concluding that it is of the same order as that of the classical Bayesian network model. We also analyze the challenge of learning cdBNs, proposing a score function based on the BD score as well as a complete learning algorithm based on the structural EM algorithm, assuming the existence of a discrete latent variable corresponding to each continuous variable. In addition, we prove theoretically that the cdBN and ctdBN models can approximate well any Lipschitz joint probability distribution, which shows the expressiveness of these models. Within the framework of the European project SCISSOR, whose goal is cyber-security, we use the cdBN model to describe the dynamics of a SCADA system and to diagnose anomalies in observations taken in real time, interpreting an anomaly as a potential threat to the integrity of the system.
Usunier, Nicolas. „Apprentissage de fonctions d'ordonnancement : une étude théorique de la réduction à la classification et deux applications à la recherche d'information“. Paris 6, 2006. http://www.theses.fr/2006PA066425.