Dissertations / Theses on the topic 'Graphes généralisés'
Consult the top 36 dissertations / theses for your research on the topic 'Graphes généralisés.'
Mouhri, Abderrahim. "Etude structurelle des systèmes généralisés par l'approche bond graph." Lille 1, 2000. http://www.theses.fr/2000LIL10208.
We also give a graphical interpretation of the algebraic criteria associated with the fast subsystem in the Rosenbrock form of the initial generalized system. The main characteristic of a generalized system is its impulsive modes, when they exist. Bond-graph modelling of generalized systems does not reveal these modes from a structural point of view, but it does from a formal point of view. A graphical procedure is proposed that formally determines the number of impulsive modes a generalized system can exhibit. Bond-graph modelling of switching systems makes it possible to characterize their impulsive modes. We interpret the number of these modes using the causal paths that appear after switching, between the switching element and a statically dependent I or C element. We also propose a graphical interpretation, based on the notions of causal paths and bond-graph rank, for studying the I-controllability and I-observability of switching systems modelled by bond graphs.
Scapellato, Raffaele. "Contributions à la théorie des groupes et à la théorie des graphes : groupes finis matroidaux et graphes géodétiques généralisés." Toulouse 3, 1990. http://www.theses.fr/1990TOU30213.
Hujsa, Thomas. "Contribution à l'étude des réseaux de Petri généralisés." Thesis, Paris 6, 2014. http://www.theses.fr/2014PA066342/document.
Many real systems and applications, including flexible manufacturing systems and embedded systems, are composed of communicating tasks and may be modeled by weighted Petri nets. The behavior of these systems can be checked on their model early in the design phase, thus avoiding costly simulations on the designed systems. Usually, the models should exhibit three basic properties: liveness, boundedness and reversibility. Liveness preserves the possibility of executing every task, while boundedness ensures that the operations can be performed with a bounded amount of resources. Reversibility avoids a costly initialization phase and allows resets of the system. Most existing methods to analyse these properties have exponential time complexity. By focusing on several expressive subclasses of weighted Petri nets, namely Fork-Attribution, Choice-Free, Join-Free and Equal-Conflict nets, the first polynomial algorithms that ensure liveness, boundedness and reversibility for these classes have been developed in this thesis. First, we provide several polynomial-time transformations that preserve structural and behavioral properties of weighted Petri nets, while simplifying the study of their behavior. Second, we use these transformations to obtain several polynomial sufficient conditions of liveness for the subclasses considered. Finally, the transformations also prove useful for the study of the reversibility property under the liveness assumption. We provide several characterizations and polynomial sufficient conditions of reversibility for the same subclasses. All our conditions are scalable and can be easily implemented in real systems.
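The basic objects of this abstract can be made concrete with a minimal sketch of a weighted Petri net and its firing rule: a transition is enabled when each input place holds at least the arc weight in tokens, and firing moves tokens accordingly. The class and names below are hypothetical illustrations, not code from the thesis.

```python
class WeightedPetriNet:
    def __init__(self, marking):
        # marking: dict mapping each place to its current token count
        self.marking = dict(marking)
        self.inputs = {}   # transition -> {place: weight consumed}
        self.outputs = {}  # transition -> {place: weight produced}

    def add_transition(self, name, inputs, outputs):
        self.inputs[name] = inputs
        self.outputs[name] = outputs

    def enabled(self, t):
        # Enabled iff every input place holds at least the arc weight.
        return all(self.marking.get(p, 0) >= w for p, w in self.inputs[t].items())

    def fire(self, t):
        if not self.enabled(t):
            raise ValueError(f"transition {t} is not enabled")
        for p, w in self.inputs[t].items():
            self.marking[p] -= w
        for p, w in self.outputs[t].items():
            self.marking[p] = self.marking.get(p, 0) + w

net = WeightedPetriNet({"buffer": 3, "done": 0})
net.add_transition("consume", inputs={"buffer": 2}, outputs={"done": 1})
net.fire("consume")
print(net.marking)  # {'buffer': 1, 'done': 1}
```

Liveness, in these terms, asks whether from every reachable marking each transition can eventually fire again; the polynomial conditions of the thesis avoid enumerating the (potentially exponential) reachable marking set.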
Blazere, Melanie. "Inférence statistique en grande dimension pour des modèles structurels. Modèles linéaires généralisés parcimonieux, méthode PLS et polynômes orthogonaux et détection de communautés dans des graphes." Thesis, Toulouse, INSA, 2015. http://www.theses.fr/2015ISAT0018/document.
This thesis falls within the context of high-dimensional data analysis. Nowadays we have access to an increasing amount of information. The major challenge lies in our ability to explore a huge amount of data and to infer their dependency structures. The purpose of this thesis is to study and provide theoretical guarantees for some specific methods that aim at estimating dependency structures for high-dimensional data. The first part of the thesis is devoted to the study of sparse models through Lasso-type methods. In Chapter 1, we present the main results on this topic and then we generalize the Gaussian case to any distribution from the exponential family. The major contribution to this field is presented in Chapter 2 and consists in oracle inequalities for a Group Lasso procedure applied to generalized linear models. These results show that this estimator achieves good performance under some specific conditions on the model. We illustrate this part by considering the case of the Poisson model. The second part concerns linear regression in high dimension, but the sparsity assumption is replaced by a low-dimensional structure underlying the data. We focus in particular on the PLS method, which attempts to find an optimal decomposition of the predictors given a response. We recall the main idea in Chapter 3. The major contribution to this part consists in a new explicit analytical expression of the dependency structure that links the predictors to the response. The next two chapters illustrate the power of this formula by establishing new theoretical results for PLS. The third and last part is dedicated to graph modelling and especially to community detection. After presenting the main trends on this topic, we turn our attention to Spectral Clustering, which clusters the nodes of a graph with respect to a similarity matrix. In this thesis, we suggest an alternative to this method by considering an $l_1$ penalty. We illustrate this method through simulations.
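The group-sparsity idea behind the Group Lasso can be illustrated by its proximal (group soft-thresholding) step, which shrinks whole blocks of coefficients and zeroes out weak groups. This is a generic sketch of that standard operator, not the estimator studied in the thesis; all names are illustrative.

```python
import numpy as np

def group_soft_threshold(beta, groups, lam):
    """Shrink each group of coefficients by lam in Euclidean norm,
    setting groups with norm below lam entirely to zero."""
    out = np.zeros_like(beta, dtype=float)
    for idx in groups:
        g = beta[idx]
        norm = np.linalg.norm(g)
        if norm > lam:
            out[idx] = (1 - lam / norm) * g  # proportional shrinkage
    return out

beta = np.array([3.0, 4.0, 0.1, 0.2])
groups = [np.array([0, 1]), np.array([2, 3])]
print(group_soft_threshold(beta, groups, lam=1.0))  # [2.4 3.2 0.  0. ]
```

The first group (norm 5) survives with proportional shrinkage, while the second (norm about 0.22) is dropped entirely, which is exactly the variable-group selection effect the oracle inequalities quantify.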
Jiang, Yiting. "Many aspects of graph coloring." Electronic Thesis or Diss., Université Paris Cité, 2022. https://wo.app.u-paris.fr/cgi-bin/WebObjects/TheseWeb.woa/wa/show?t=5613&f=42533.
Graph coloring is a central topic in graph theory, and various coloring concepts have been studied in the literature. This thesis studies some of the coloring concepts and related problems. These include coloring of generalized signed graphs, strong fractional choice number of graphs, generalized coloring number of graphs, twin-width of graphs, (combinatorial) discrepancy of definable set systems and chi_p-bounded classes of graphs. A signed graph is a pair (G, sigma), where G is a graph and sigma: E(G) to {+,-} is a signature. A generalized signed graph is also a pair (G, sigma), where the signature sigma assigns to each edge e a permutation sigma(e) in a set S as its sign. In a coloring of a signed graph or a generalized signed graph (G, sigma), the sign sigma(e) determines the pairs of colors that need to be avoided as the colors of the end vertices of e. Let S_k be the set of all permutations of [k]. A natural question motivated by the four color theorem is for which subsets S of S_4, every planar graph is S-4-colorable. This question is now completely answered: only S={id} has this property, which means that the four color theorem is tight in the sense of generalized signed graph coloring. This answer is obtained in a sequence of six papers, by different groups of authors. The contribution of this thesis is the results in one of the papers, which shows that many sets S do not have the desired property. The thesis also considers similar questions for triangle-free planar graphs, which can be viewed as exploring the tightness of Grötzsch Theorem. Our result shows that for any subset S of S_3, if S is not conjugate to a subset of {id, (12)}, then there exists a triangle-free planar graph which is not S-3-colorable. Another attempt to strengthen Grötzsch Theorem is to consider multiple list coloring of triangle-free planar graphs. It was proved by Voigt that there are triangle-free planar graphs that are not 3-choosable. 
This thesis strengthens Voigt's result by considering the strong fractional choice number of graphs and proves that the supremum of the strong fractional choice number of triangle-free planar graphs is at least 3 + 1/17. One important subject in structural graph theory is the study of the structural complexity of graphs or classes of graphs, and a few concepts and graph invariants are studied extensively in the literature. These include the treewidth of graphs, generalized coloring numbers, etc. Recently, the concept of twin-width was introduced by Bonnet, Kim, Thomassé and Watrigant. In this thesis, we study the relation between twin-width and generalized coloring numbers. We prove that a graph G with no K_{s,s}-subgraph and twin-width d has strong (resp. weak) r-coloring numbers bounded by an exponential function of r, and that we can construct graphs achieving such a dependency in r. One of the two central notions in the structural theory of classes of sparse graphs is that of classes with bounded expansion. These classes have strong algorithmic and structural properties and enjoy numerous characterizations and applications. Combinatorial discrepancy is a significant subject in its own right. It measures the inevitable irregularities of set systems and the inherent difficulty of approximating them. In this thesis, we give a new characterization of bounded expansion classes in terms of the discrepancy of definable set systems. The notion of chi-boundedness is a central topic in chromatic graph theory. This thesis studies chi-bounded classes in the context of star colorings and, more generally, chi_p-colorings, namely chi_s-bounded and (strongly and weakly) chi_p-bounded classes. This fits into a general scheme of sparsity and leads to natural extensions of the notion of a bounded expansion class. Here we solve two conjectures related to star coloring (i.e. chi_2) boundedness and give structural characterizations of strongly chi_p-bounded classes and weakly chi_p-bounded classes.
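The coloring rule for generalized signed graphs in this entry is concrete enough to check by brute force on tiny instances: an edge (u, v) carries a permutation sigma of the colors, and a coloring c is proper when sigma[c[u]] != c[v]. The following sketch (hypothetical names, exhaustive search only, nothing from the thesis's proofs) tests S-k-colorability for a small edge list.

```python
from itertools import product

def is_S_k_colorable(n, signed_edges, k):
    """signed_edges: list of (u, v, sigma), where sigma is a tuple
    permutation of range(k) attached to the ordered edge (u, v).
    Returns True iff some coloring c: range(n) -> range(k) satisfies
    sigma[c[u]] != c[v] on every edge."""
    for c in product(range(k), repeat=n):
        if all(sigma[c[u]] != c[v] for u, v, sigma in signed_edges):
            return True
    return False

# A triangle whose edges all carry the identity permutation: this is
# ordinary proper coloring, so 3 colors suffice but 2 do not.
ident = (0, 1, 2)
triangle = [(0, 1, ident), (1, 2, ident), (2, 0, ident)]
print(is_S_k_colorable(3, triangle, 3))  # True
```

With the identity sign the rule collapses to c(u) != c(v); non-identity permutations forbid other color pairs, which is what makes the S-4-colorability question a genuine strengthening of the four color theorem.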
Aïder, Méziane. "Réseaux d'interconnexion bipartis : colorations généralisées dans les graphes." Phd thesis, Grenoble 1, 1987. http://tel.archives-ouvertes.fr/tel-00325779.
Begeot, Jocelyn. "Autour de la stabilité de différents modèles d'appariements aléatoires." Electronic Thesis or Diss., Université de Lorraine, 2022. http://www.theses.fr/2022LORR0201.
Stochastic matching models represent many concrete stochastic systems in which elements of different classes are matched according to specified compatibility rules. For example, we can cite systems dedicated to organ allocation, job search sites, housing allocation programs, etc. Such models are typically associated with a triplet of elements: a connected graph, called the compatibility graph, whose vertices represent the classes of elements that can enter the system and whose edges connect two compatible classes; a matching policy, which decides the matches to be concretely executed in case of multiple choices; and an arrival rate according to which the elements enter the system. In this thesis, we consider generalized graphs, meaning that we allow the matching of two elements of the same class, and we therefore extend to this framework some results already known in the case of simple graphs. The stability of a system governed by a matching model is a very important property. It ensures that the admissions within the system under study are regulated, so that the elements do not accumulate in the system in the long run. It is therefore essential that the arrival rate of the elements allows the system to be stable. In this manuscript, we characterize, algebraically, this stability region for several matching models (general, general with reneging, bipartite, extended bipartite) and for skill-based queueing systems. Moreover, we demonstrate that the matching policy called First Come, First Matched (FCFM) has the property of being (generalized) maximal, meaning that the stability region of the general matching model associated with a compatibility graph and any policy is always included in the one associated with this same graph ruled by FCFM. Note that the latter then coincides with a set of measures defined by purely algebraic conditions.
In this case, the study of the stability of the matching model at hand boils down to the more elementary question of characterizing a deterministic set of measures. We then give a (simple) way to construct the measures belonging to this set. This turns out to be very useful for admission control, as checking the algebraic conditions requires a number of operations which is polynomial in the number of vertices of the considered compatibility graph, and therefore becomes very expensive as the number of vertices grows large. We also give, in product form, the expression of the stationary distribution of the number-in-system process of a stable system governed by a general matching model under the FCFM policy, allowing, in particular, the explicit calculation of equilibrium characteristics of concrete systems and the estimation of their long-time performance. We can thus, for example, calculate the average size at equilibrium of a waiting list in the case of kidney cross-donation, or estimate the average waiting time on a peer-to-peer interface or on a dating website. Finally, the matching rates associated with a stable matching model (general or extended bipartite) are studied. They are defined as the asymptotic frequencies of the respective executed matchings and provide an insightful performance criterion for the corresponding matching systems; the policy-insensitivity and fairness properties of the matching rates are also discussed.
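The FCFM policy described in this entry has a simple operational reading: an arriving item is matched with the oldest compatible item in the queue, or joins the queue if none exists. The sketch below (hypothetical names; a deterministic toy trace, not the stochastic model of the thesis) shows one pass of that discipline on a small compatibility graph.

```python
def fcfm(arrivals, compatible):
    """arrivals: sequence of class labels in arrival order.
    compatible: set of frozensets {i, j}; loops {i} are allowed,
    as in the generalized graphs considered in the thesis."""
    queue, matches = [], []
    for item in arrivals:
        for k, waiting in enumerate(queue):
            if frozenset((item, waiting)) in compatible:
                matches.append((waiting, item))  # oldest compatible wins
                queue.pop(k)
                break
        else:
            queue.append(item)  # no compatible item is waiting
    return matches, queue

# Path compatibility graph a - b - c: class b matches both a and c.
compat = {frozenset("ab"), frozenset("bc")}
matches, left = fcfm("acb", compat)
print(matches, left)  # [('a', 'b')] ['c']
```

Stability then asks whether, under random arrivals, the residual queue stays stochastically bounded; the thesis characterizes the arrival rates for which this holds.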
Gallais, Cécilia. "Formalisation et analyse algébrique et combinatoire de scénarios d'attaques généralisées." Thesis, Paris, ENSAM, 2017. http://www.theses.fr/2017ENAM0064/document.
The current definitions of a critical infrastructure are not adapted to the attacks actually observed these days. The problem is the same for the definition of an attack, and therefore the term "cyber attack" tends to narrow the conceptual and operational field of the person in charge of security. Most approaches are reduced to identifying the technical and IT domain only, forgetting the other domains specific to intelligence. Moreover, the main methodologies to identify and manage risk (EBIOS or similar methodologies) take into account a definition of a critical infrastructure which is restrictive, static and local. The model of attacker and attack is also extremely narrow, as the technical approaches and the angles of attack of an attacker tend to be restricted to the IT domain only, even though the "cyber" angles may not exist or may only be a small part of an attack scenario. Therefore, a new definition of a critical infrastructure is necessary, more complete and built from the attacker's point of view. Indeed, critical infrastructures can be protected by assessing threats and vulnerabilities. This thesis aims to develop accurate new models of infrastructures and attacks, based on graph theory, with or without the cyber part. Graph-based representations are already widely used to describe infrastructures; this one will be enriched in order to obtain a more exhaustive view of an infrastructure's environment. The dependencies on other entities (people, other critical infrastructures, etc.) have to be taken into account in order to obtain pertinent attack scenarios. This enriched representation must lead to new models of attackers, more realistic and incorporating components external to the infrastructure which belong to its immediate environment. The main objective is the search for optimal paths or other mathematical structures which can be translated into attack scenarios.
This global approach provides a finer (and therefore more realistic) definition of security as the lowest cost of an attack path. The research program is structured in five stages. The first two steps aim at defining the models and objects representing security infrastructures as well as the attackers they are confronted with. The major difficulty encountered in developing a relevant infrastructure model is its descriptive power: the richer the model, the better it can describe the infrastructure and the adversaries that attack it. The counterpart of a relevant model is its exponential character. In these security models, we therefore expect the problem of finding the vulnerabilities of a security infrastructure to be equivalent to hard problems, i.e. NP-hard or even NP-complete. The locks to be lifted will therefore consist in the design of heuristics to answer these problems in finite time with an "acceptable" response. The third step is to define a generic methodology for assessing the safety of a security infrastructure. In order to validate the proposed models and methodology, the thesis program provides for the development of a research demonstrator in the form of an evaluation platform. Finally, the last step is to evaluate an existing system on the platform by applying the proposed methodology. The objective of this last step is to validate the models and the methodology and to propose improvements if necessary.
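The "security as the lowest cost of an attack path" reading above can be sketched with a standard shortest-path computation on a weighted attack graph: nodes are attacker positions, weighted edges are attack steps, and the security level is the cheapest path cost from the outside to the target asset. The graph and names below are hypothetical; this is plain Dijkstra, not the thesis's richer model.

```python
import heapq

def cheapest_attack_path_cost(graph, start, target):
    """graph: dict node -> list of (neighbour, step_cost).
    Returns the minimal total cost of a path from start to target."""
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")  # target unreachable

attack_graph = {
    "outside": [("phishing", 2), ("lobby", 5)],
    "phishing": [("workstation", 1)],
    "lobby": [("server_room", 4)],
    "workstation": [("server_room", 3)],
}
print(cheapest_attack_path_cost(attack_graph, "outside", "server_room"))  # 6
```

On richer models (with dependencies, non-cyber steps, attacker capabilities) the corresponding optimization becomes the hard problem the thesis addresses with heuristics.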
Vandomme, Elise. "Contributions to combinatorics on words in an abelian context and covering problems in graphs." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GRENM010/document.
This dissertation is divided into two (distinct but connected) parts that reflect the joint PhD. We study and solve several questions regarding, on the one hand, combinatorics on words in an abelian context and, on the other hand, covering problems in graphs. Each particular problem is the topic of a chapter. In combinatorics on words, the first problem considered focuses on the 2-regularity of sequences in the sense of Allouche and Shallit. We prove that a sequence satisfying a certain symmetry property is 2-regular. We then apply this theorem to show that the 2-abelian complexity functions of the Thue-Morse word and the period-doubling word are 2-regular. The computation and arguments leading to these results fit into a quite general scheme that we hope can be used again to prove additional regularity results. The second question concerns the notion of return words up to abelian equivalence, introduced by Puzynina and Zamboni. We obtain a characterization of Sturmian words with non-zero intercept in terms of the finiteness of the set of abelian return words to all prefixes. We describe this set of abelian returns for the Fibonacci word but also for the Thue-Morse word (which is not Sturmian). We investigate the relationship between the abelian complexity and the finiteness of this set. In graph theory, the first problem considered deals with identifying codes in graphs. These codes were introduced by Karpovsky, Chakrabarty and Levitin to model fault diagnosis in multiprocessor systems. The ratio between the optimal size of an identifying code and the optimal size of a fractional relaxation of an identifying code is between 1 and 2 ln(|V|)+1, where V is the vertex set of the graph. We focus on vertex-transitive graphs, since we can compute the exact fractional solution for them. We exhibit infinite families, called generalized quadrangles, of vertex-transitive graphs with integer and fractional identifying codes of order |V|^k with k in {1/4, 1/3, 2/5}.
The second problem concerns (r,a,b)-covering codes of the infinite grid, already studied by Axenovich and Puzynina. We introduce the notion of constant 2-labellings of weighted graphs and study them in four particular weighted cycles. We present a method to link these labellings with covering codes. Finally, we determine the precise values of the constants a and b of any (r,a,b)-covering code of the infinite grid with |a-b|>4. This is an extension of a theorem of Axenovich.
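The identifying codes from the Vandomme entry have a crisp definition that is easy to check on small graphs: a vertex subset C is an identifying code when every closed neighbourhood N[v] intersects C in a nonempty set that is distinct for each vertex. The brute-force checker below uses hypothetical names and a toy 4-cycle; it illustrates the definition only, not the fractional or vertex-transitive results of the thesis.

```python
def is_identifying_code(adj, code):
    """adj: dict vertex -> set of neighbours (undirected graph).
    code: candidate set of vertices.
    C identifies the graph iff the sets N[v] & C are nonempty and
    pairwise distinct over all vertices v."""
    signatures = {}
    for v, nbrs in adj.items():
        sig = frozenset((nbrs | {v}) & code)  # closed neighbourhood meets C
        if not sig or sig in signatures.values():
            return False  # some vertex is uncovered or two are confounded
        signatures[v] = sig
    return True

# 4-cycle 0-1-2-3: {0, 1, 2} identifies every vertex, {0, 2} does not
# (vertices 1 and 3 would get the same signature {0, 2}).
cycle4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}
print(is_identifying_code(cycle4, {0, 1, 2}))  # True
print(is_identifying_code(cycle4, {0, 2}))     # False
```

The fractional relaxation mentioned in the abstract replaces the 0/1 membership of each vertex in C by a weight in [0, 1], with the covering and separation constraints imposed on the weighted sums.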
Chouria, Ali. "Algèbres de Hopf combinatoires sur les partitions d'ensembles et leurs généralisations : applications à l'énumération et à la physique théorique." Rouen, 2016. http://www.theses.fr/2016ROUES007.
This thesis fits into the field of algebraic and enumerative combinatorics. It is devoted to the study of enumeration problems using combinatorial Hopf algebras, particularly the algebra of word symmetric functions WSym. We give noncommutative versions of the Redfield-Pólya theorem in WSym and other combinatorial Hopf algebras, such as the algebra of biword symmetric functions BWSym. In the latter algebra, we give a lifting (a version without multiplicities) of this theorem. We construct other Hopf algebras on set partitions and on other objects which generalize them. We use these algebras to study a noncommutative version of Bell polynomials. These polynomials involve combinatorial objects like set partitions, so it seems natural to investigate analogous formulæ in some combinatorial Hopf algebras with bases indexed by objects related to set partitions (set partitions into lists, colored set partitions, etc.). We then give analogous identities for partial Bell polynomials, binomial functions, Lagrange inversion and the Faà di Bruno formula. Finally, we are interested in the combinatorial structures arising in the boson normal ordering problem. We define new combinatorial objects (called B-diagrams). After studying the combinatorics and the enumeration of B-diagrams, we propose two constructions of algebras: the Fusion algebra, defined using formal variables, and another algebra constructed directly from B-diagrams. We show the connection between these two algebras and that the algebra B of B-diagrams can be endowed with a Hopf structure. We recognize two known combinatorial Hopf subalgebras of B: WSym, the algebra of word symmetric functions indexed by set partitions, and BWSym, the algebra of biword symmetric functions indexed by set partitions into lists.
Burgunder, Emily. "Bigèbres généralisées : de la conjecture de Kashiwara-Vergne aux complexes de graphes de Kontsevich." Montpellier 2, 2008. http://www.theses.fr/2008MON20248.
This thesis contains four articles developed around three themes: the Kashiwara-Vergne conjecture, Kontsevich's graph complex and magmatic bialgebras. The results obtained are linked by the notion of generalized bialgebras and their idempotents: in the first case we use the properties of classical bialgebras, and in the second, a structure theorem for Zinbiel-associative bialgebras. The main result of the first article is the explicit construction of all the solutions of the first equation of the Kashiwara-Vergne conjecture, using the interplay between the Eulerian idempotent and the Dynkin idempotent. In the second chapter we generalize Kontsevich's theorem computing the Lie homology of vector fields on a formal manifold. Indeed, we prove that the Leibniz homology of these symplectic vector fields on a formal manifold can be reconstructed thanks to the homology associated with a new type of graphs: the symmetric graphs. The third part contains two articles on magmatic bialgebras. In the first one, we prove a structure theorem which permits the reconstruction of any infinite magmatic bialgebra from its primitives. In collaboration with Ralf Holtkamp, we extend this result to partial magmatic bialgebras and construct a new type of operad that encodes the algebraic structure satisfied by the primitives.
Pereira, Mike. "Champs aléatoires généralisés définis sur des variétés riemanniennes : théorie et pratique." Thesis, Paris Sciences et Lettres (ComUE), 2019. http://www.theses.fr/2019PSLEM055.
Geostatistics is the branch of statistics devoted to modelling spatial phenomena through probabilistic models. In particular, the spatial phenomenon is described by a (generally Gaussian) random field, and the observed data are considered as resulting from a particular realization of this random field. To facilitate the modeling and the subsequent geostatistical operations applied to the data, the random field is usually assumed to be stationary, meaning that the spatial structure of the data replicates across the domain of study. However, when dealing with complex spatial datasets, this assumption becomes ill-adapted. Indeed, how can the notion of stationarity be defined (and applied) when the data lie on non-Euclidean domains (such as spheres or other smooth surfaces)? Also, what about the case where the data clearly display a spatial structure that varies across the domain? Besides, using more complex models (when possible) generally comes at the price of a drastic increase in operational costs (computational and storage-wise), rendering them impossible to apply to large datasets. In this work, we propose a solution to both problems, relying on the definition of generalized random fields on Riemannian manifolds. On one hand, working with generalized random fields naturally extends ongoing work that leverages a characterization of the random fields used in geostatistics as solutions of stochastic partial differential equations. On the other hand, working on Riemannian manifolds allows such fields to be defined both on domains that are only locally Euclidean and on locally deformed spaces (thus yielding a framework to account for non-stationary cases). The discretization of these generalized random fields is undertaken using a finite element approach, and we provide an explicit formula for a large class of fields comprising those generally used in applications.
Finally, to solve the scalability problem, we propose algorithms inspired by graph signal processing to tackle the simulation, the estimation and the inference of these fields using matrix-free approaches.
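The SPDE/precision view underlying this entry can be illustrated on the simplest possible discretization: on a 1-D grid, (kappa^2 I + L), with L the graph Laplacian, plays the role of the discretized operator, and a sample of the field is obtained by solving a triangular system against white noise. This numpy-only sketch uses hypothetical names and a dense Cholesky factor, whereas the thesis works on manifolds with finite elements and matrix-free methods.

```python
import numpy as np

def sample_gmrf(n, kappa, rng):
    # Graph Laplacian of the path graph on n nodes.
    L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    L[0, 0] = L[-1, -1] = 1.0          # degree-1 endpoints
    Q = kappa**2 * np.eye(n) + L       # precision matrix of the field
    R = np.linalg.cholesky(Q)          # Q = R R^T, R lower triangular
    z = rng.standard_normal(n)         # white noise
    # x = R^{-T} z has covariance R^{-T} R^{-1} = Q^{-1}, as desired.
    return np.linalg.solve(R.T, z)

rng = np.random.default_rng(0)
x = sample_gmrf(200, kappa=0.5, rng=rng)
print(x.shape)  # (200,)
```

The matrix-free algorithms mentioned in the abstract replace the Cholesky factorization, whose cost explodes on large grids, with iterative operations that only require products by Q.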
Combier, Camille. "Mesures de similarité pour cartes généralisées." Phd thesis, Université Claude Bernard - Lyon I, 2012. http://tel.archives-ouvertes.fr/tel-00995382.
Ntaryamira, Evariste. "Une méthode asynchrone généralisée préservant la qualité des données des systèmes temps réel embarqués : cas de l’autopilote PX4-RT." Electronic Thesis or Diss., Sorbonne université, 2021. https://theses.hal.science/tel-03789654.
Real-time embedded systems, despite their limited resources, are evolving very quickly. For such systems, it is not enough to ensure that all jobs meet their deadlines; it is also mandatory to ensure the good quality of the data transmitted from task to task. Data quality constraints are expressed by the maintenance of a set of properties that a data sample must exhibit to be considered relevant. Trade-offs must be found between the system's scheduling constraints and those applied to the data. To ensure such properties, we consider the wait-free mechanism. The size of each communication buffer is based on the lifetime-bound method. Access to the shared resources follows the single-writer, many-readers model. To capture all the communication particularities brought by the uORB communication mechanism, we modeled the interactions between the tasks by a bipartite graph, called the communication graph, comprised of sets of so-called domain messages. To enhance the predictability of inter-task communication, we extend the Liu and Layland model with a communication-state parameter used to control writing/reading points. We considered two types of data constraints: local and global. To verify the local data constraints, we rely on a sub-sampling mechanism. Regarding the global data constraints, we introduced two new mechanisms: the last-reader-tags mechanism and the scroll-or-overwrite mechanism. These two mechanisms are to some extent complementary: the first one operates at the beginning of the spindle while the second one operates at the end of the spindle.
Poudret, Mathieu. "Transformations de graphes pour les opérations topologiques en modélisation géométrique - Application à l'étude de la dynamique de l'appareil de Golgi." Phd thesis, Université d'Evry-Val d'Essonne, 2009. http://tel.archives-ouvertes.fr/tel-00503818.
Nouvel, Bertrand. "Rotations discrètes et automates cellulaires." Lyon, École normale supérieure (sciences), 2006. http://www.theses.fr/2006ENSL0370.
Cardot, Anaïs. "Rejeu basé sur des règles de transformation de graphes." Thesis, Poitiers, 2019. http://www.theses.fr/2019POIT2254.
In many modelling fields, such as architecture, archaeology or CAD, producing many variations of the same model is a growing need. But building all those variations manually takes time. It is therefore necessary to use automatic techniques to re-evaluate some parts of a model, or even an entire model, after the user specifies the modifications. Most of the existing approaches dedicated to re-evaluating models are based on a system called parametric modelling. It is made of two parts: a geometric model and a parametric specification, which records the series of operations that created the model and the different parameters of those operations. This way, the user can change some parameters or edit the list of operations to modify the model. To do so, we use a system called persistent naming, introduced in the 1990s, which allows us to identify and match the entities of an initial specification with those of a re-evaluated specification. In this thesis, our goal is to propose a persistent naming system that is general and homogeneous, and that allows the user to edit a parametric specification (that is, to move, add or delete operations). We base our system on the Jerboa library, which uses graph transformation rules. This way, we are able to use those rules to create our naming system, while also linking the notions of graph transformation rules and parametric specification. We then describe how to use our naming method to re-evaluate or edit parametric specifications. Finally, we compare our method with the other ones from the literature.
Vatant, Gautier. "Modélisation d'objets 3D à l'aide de cônes généralisés profilés et ramifiés et problèmes de raccord de surfaces soulevés par ces cônes." Phd thesis, Ecole Nationale Supérieure des Mines de Saint-Etienne, 1997. http://tel.archives-ouvertes.fr/tel-00940235.
Marchetti, Olivier. "Dimensionnement des mémoires pour systèmes embarqués." Paris 6, 2006. http://www.theses.fr/2006PA066383.
Yirci, Murat. "Arrangements 2D pour la Cartographie de l’Espace Public et des Transports." Thesis, Paris Est, 2016. http://www.theses.fr/2016PESC1075/document.
This thesis addresses the easy and effective development of mapping and transportation applications, with a particular focus on the generation of pedestrian networks for applications such as navigation, itinerary calculation, accessibility analysis and urban planning. In order to achieve this goal, we proposed a two-layered data model which encodes the public space into a hierarchy of semantic geospatial objects. At the lower level, the 2D geometry of the geospatial objects is captured using a planar partition, represented as a topological 2D arrangement. This representation of a planar partition allows efficient and effective geometry processing, and easy maintenance and validation throughout the edits when the geometry or topology of an object is modified. At the upper layer, the semantic and thematic aspects of geospatial objects are modelled and managed. The hierarchy between these objects is maintained using a directed acyclic graph (DAG) in which the leaf nodes correspond to the geometric primitives of the 2D arrangement and the higher-level nodes represent aggregated semantic geospatial objects at different levels. We integrated the proposed data model into our GIS framework, StreetMaker, together with a set of generic algorithms and basic GIS capabilities. This framework is then rich enough to generate pedestrian network graphs automatically. In fact, within an accessibility analysis project, the full proposed pipeline was successfully used on two sites to produce pedestrian network graphs from various types of input data: existing GIS vector maps, semi-automatically created vector data, and vector objects extracted from Mobile Mapping lidar point clouds. While modelling 2D ground surfaces may be sufficient for 2D GIS applications, 3D GIS applications require 3D models of the environment.
3D modelling is a very broad topic, but as a first step toward such 3D models, we focused on the semi-automatic modelling, from single images, of objects which can be modelled or approximated by generalized cylinders (such as poles, lampposts, tree trunks, etc.). The developed methods and techniques are presented and discussed
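The two-layer hierarchy described in the abstract above can be pictured with a minimal sketch (our illustration only; the class and names are hypothetical, not StreetMaker's actual API): leaf nodes of the DAG reference geometric primitives of the 2D arrangement, while higher-level nodes aggregate them into semantic objects.

```python
# Hypothetical sketch of a semantic hierarchy over a planar partition:
# leaves hold 2D primitives (faces/edges of the arrangement), internal
# nodes aggregate their children into semantic geospatial objects.
class Node:
    def __init__(self, name, primitives=None):
        self.name = name
        self.children = []                   # directed edges of the DAG
        self.primitives = primitives or []   # non-empty only for leaves

    def geometry(self):
        """Collect the 2D primitives covered by this semantic object."""
        if not self.children:
            return list(self.primitives)
        out = []
        for c in self.children:
            out.extend(c.geometry())
        return out

# Leaves: two faces of the planar partition; upper node: a "sidewalk" object.
f1, f2 = Node("face_12", ["F12"]), Node("face_13", ["F13"])
sidewalk = Node("sidewalk_A")
sidewalk.children = [f1, f2]
```

With this shape, editing the geometry of a leaf automatically propagates to every semantic object that aggregates it, which is the maintenance property the abstract emphasizes.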
Decoster, Jean. "Programmation logique inductive pour la classification et la transformation de documents semi-structurés." Thesis, Lille 1, 2014. http://www.theses.fr/2014LIL10046/document.
The recent proliferation of XML documents in databases and web applications raises issues due to the volume and diversity of the data exchanged. To ease their use, smart tools have been developed, such as automatic classification and transformation. This thesis has two goals: • To propose a framework for the XML document classification task. • To study the learning of XML document transformations. We have chosen to use Inductive Logic Programming (ILP). The expressiveness of logic programs grants flexibility in specifying the learning task and understandability of the induced theories. This flexibility implies a high computational cost, constraining the applicability of ILP systems. However, since XML documents are trees, a good compromise can be found. As a first contribution, we define clause languages that allow encoding XML trees. The definition of our classification framework follows from their study. It rests on a rewriting of the standard ILP operations such as theta-subsumption and least general generalization [Plotkin1971]. Our algorithms run in time polynomial in the input size, whereas the standard ones are exponential. They grant identification in the limit [Gold1967] of our languages. Our second contribution is the design of methods to learn XML document transformations. It begins with the definition of a class of clauses in the style of functional programs [Paulson91]. They are an ILP adaptation of edit scripts and allow for context. Their learning is made possible by two algorithms: an A*-like search and a common ILP approach (HOC-Learner [Santos2009])
Bouazza, Syrine. "Contrôle des processus de désassemblage à l'aide des formalismes des systèmes à évènements discrets." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPAST215.
Disassembly process control involves the methods and techniques used to safely and efficiently disassemble mechanical components or complex assemblies. To do this, control approaches are developed to satisfy the constraints imposed on these systems. More specifically, in this thesis we are interested in three types of specifications: marking constraints, Generalized Marking Constraints (GMCs), and Mutual Exclusion Constraints (MECs). To this end, we have proposed three analytical methods. The first contribution is a new technique for designing control laws for disassembly systems that ensure the satisfaction of marking constraints in Timed Event Graphs (TEGs) with some uncontrollable input transitions. The second technique focuses on controller synthesis ensuring GMCs, specified by weighted inequalities in the Min-Plus algebra, imposed on TEGs. The final method aims to control disassembly processes modelled by Timed Event Graph Networks (NGETs) subject to MECs. It is worth noting that these approaches are based on the conceptual structures of Discrete Event Systems (DES) and the Min-Plus algebra. These tools make it possible to represent manufacturing systems accurately and methodically. Consequently, the problem is formulated using linear control models based on Min-Plus algebra: the behaviour of these graphs is described by linear Min-Plus equations, and constraints are expressed by inequalities or weighted inequalities in the Min-Plus algebra. Sufficient conditions for the existence of causal control laws are established. The developed controllers are state feedbacks that can be represented by monitor places preventing the system from violating any constraint, while keeping the graph live and deadlock-free
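To illustrate the Min-Plus linear models mentioned in this abstract (a generic sketch under our own assumptions, not the controllers developed in the thesis): in the Min-Plus algebra, ⊕ is the minimum and ⊗ is ordinary addition, so the counter equations of a timed event graph take the linear form x(t) = A ⊗ x(t−1) ⊕ B ⊗ u(t).

```python
import math

INF = math.inf  # ε, the neutral element of ⊕ = min ("no direct influence")

def mp_add(a, b):
    """⊕ in the Min-Plus algebra: a ⊕ b = min(a, b)."""
    return min(a, b)

def mp_matvec(A, x):
    """Min-plus matrix-vector product: (A ⊗ x)_i = min_j (A[i][j] + x[j])."""
    return [min(A[i][j] + x[j] for j in range(len(x))) for i in range(len(A))]

# Toy counter equations x(t) = A ⊗ x(t-1) ⊕ B ⊗ u(t) for a 2-transition graph.
# A[i][j] plays the role of the initial marking on the path from j to i.
A = [[0, INF],
     [2, 0]]
B = [[0], [INF]]   # the input u feeds the first transition only

x, u = [3, 1], [2]
x_next = [mp_add(a, b) for a, b in zip(mp_matvec(A, x), mp_matvec(B, u))]
```

Constraints of the form "x must not exceed some weighted bound" then become inequalities in the same algebra, which is what makes state-feedback synthesis tractable in this framework.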
Miolane, Léo. "Fundamental limits of inference : a statistical physics approach." Thesis, Paris Sciences et Lettres (ComUE), 2019. http://www.theses.fr/2019PSLEE043.
We study classical statistical problems, such as community detection on graphs, Principal Component Analysis (PCA), sparse PCA, Gaussian mixture clustering, and linear and generalized linear models, in a Bayesian framework. We compute the best estimation performance (often called the "Bayes risk") achievable by any statistical method in the high-dimensional regime. This allows us to observe surprising phenomena: for many problems, there exists a critical noise level above which it is impossible to estimate better than random guessing. Below this threshold, we compare the performance of existing polynomial-time algorithms to the optimal one and observe a gap in many situations: even when non-trivial estimation is theoretically possible, computationally efficient methods fail to achieve optimality. From the statistical physics point of view adopted throughout this manuscript, these phenomena are explained by phase transitions. The tools and methods of this thesis are therefore mainly drawn from statistical physics, more precisely from the mathematical study of spin glasses
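As a concrete classical instance of the critical noise level this abstract mentions (a standard textbook example, not reproduced from the thesis itself): in the rank-one spiked Wigner model with, e.g., a Rademacher prior on the signal entries, estimation better than random guessing is information-theoretically possible if and only if the signal-to-noise ratio exceeds one.

```latex
% Spiked Wigner model: a rank-one signal observed through Gaussian noise Z.
% Below \lambda = 1 no estimator correlates with x; above, estimation is possible.
Y = \sqrt{\frac{\lambda}{n}}\, x x^{\top} + Z,
\qquad
\text{non-trivial estimation of } x \iff \lambda > 1 .
```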
Mpe, A. Guilikeng Albert. "Un système de prédiction/vérification pour la localisation d'objets tridimensionnels." Compiègne, 1990. http://www.theses.fr/1990COMPD286.
Baboin, Anne-Céline. "Calcul quantique : algèbre et géométrie projective." Phd thesis, Université de Franche-Comté, 2011. http://tel.archives-ouvertes.fr/tel-00600387.
Oliva, Jean-Michel. "Reconstruction tridimensionnelle d'objets complexes a l'aide de diagrammes de Voronoi simplifiés : application a l'interpolation 3D de sections géologiques." Phd thesis, Ecole Nationale Supérieure des Mines de Saint-Etienne, 1995. http://tel.archives-ouvertes.fr/tel-00838782.
Salamat, Majid. "Coalescent, recombinaisons et mutations." Thesis, Aix-Marseille 1, 2011. http://www.theses.fr/2011AIX10059.
This thesis concentrates on some subjects in population genetics. In the first part, we give formulae for the expectation and variance of the height and the length of the ancestral recombination graph (ARG), and for the expectation and variance of the number of recombination events, and we show that the expectation of the length of the ARG is a linear combination of the expectation of the length of Kingman's coalescent and the expectation of the height of the ARG. We also give a relation between the expectation of the length of the ARG and the expectation of the number of recombination events. At the end of this part, we show that the ARG comes down from infinity, in the sense that we can define it with X_0 = ∞ while X_t < ∞ for all t > 0, and we find the speed at which the ARG comes down from infinity. In the second part, we find a generalization of the Ewens sampling formula (GESF) in the presence of recombination for samples of size n = 2 and n = 3. In the third part of the thesis, we study the ARG along the genome and find the distribution of the number of mutations when there is one recombination event in the genealogy of the sample
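For context, the corresponding classical quantities for Kingman's coalescent without recombination (sample size n) are standard; the ARG formulae themselves are the thesis's contribution and are not reproduced here.

```latex
% Height H_n and total branch length L_n of Kingman's n-coalescent:
% while k lineages remain, the next coalescence takes an Exp(k(k-1)/2) time.
\mathbb{E}[H_n] = \sum_{k=2}^{n} \frac{2}{k(k-1)} = 2\Bigl(1 - \frac{1}{n}\Bigr),
\qquad
\mathbb{E}[L_n] = \sum_{k=2}^{n} k \cdot \frac{2}{k(k-1)} = 2 \sum_{k=1}^{n-1} \frac{1}{k}.
```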
Baboin, Anne-Céline. "Calcul quantique : algèbre et géométrie projective." Electronic Thesis or Diss., Besançon, 2011. http://www.theses.fr/2011BESA2028.
The first purpose of this thesis is a state of the art of the field of quantum computation, if not exhaustive, at least easily accessible (chapters 1, 2 and 3). The original part of this treatise consists of two mathematical approaches to quantum computation applied to certain quantum systems: the first is algebraic in nature and uses particular structures, Galois fields and rings (chapter 4); the second appeals to a particular geometry, projective geometry (chapter 5). These two approaches were motivated by the Kochen-Specker theorem and by the work of Peres and Mermin which arose from it
Allanic, Marianne. "Gestion et visualisation de données hétérogènes multidimensionnelles : application PLM à la neuroimagerie." Thesis, Compiègne, 2015. http://www.theses.fr/2015COMP2248/document.
The neuroimaging domain is confronted with issues in analyzing and reusing the growing amount of heterogeneous data it produces. Data provenance is complex (multi-subject, multi-method, multi-temporality) and the data are only partially stored, restricting multimodal and longitudinal studies. In particular, functional brain connectivity is studied to understand how areas of the brain work together. Raw and derived imaging data must be properly managed along several dimensions, such as acquisition time, time between two acquisitions, or subjects and their characteristics. The objective of the thesis is to allow the exploration of complex relationships between heterogeneous data, which is addressed in two parts: (1) how to manage data and provenance, and (2) how to visualize structures of multidimensional data. The contributions follow a logical sequence of three propositions, presented after a survey of research in heterogeneous data management and graph visualization. The BMI-LM (Bio-Medical Imaging – Lifecycle Management) data model organizes the management of neuroimaging data according to the phases of a study and takes the scalability of research into account thanks to specific classes associated with generic objects. The application of this model in a PLM (Product Lifecycle Management) system shows that concepts developed twenty years ago for the manufacturing industry can be reused to manage neuroimaging data. GMDs (Dynamic Multidimensional Graphs) are introduced to represent complex dynamic relationships in the data, together with the JGEX (Json Graph EXchange) format, created to store and exchange GMDs between software applications. The OCL (Overview Constraint Layout) method allows interactive and visual exploration of GMDs. It is based on preserving the user's mental map and alternating between complete and reduced views of the data.
The OCL method is applied to the study of the resting-state functional brain connectivity of 231 subjects, represented by a GMD (the areas of the brain are the nodes and connectivity measures the edges) according to age, gender and laterality: GMDs are computed through a processing workflow on MRI acquisitions within the PLM system. Results show two main benefits of the OCL method: (1) identification of global trends on one or many dimensions, and (2) highlighting of local changes between GMD states
Pedron, Antoine. "Développement d’algorithmes d’imagerie et de reconstruction sur architectures à unités de traitements parallèles pour des applications en contrôle non destructif." Thesis, Paris 11, 2013. http://www.theses.fr/2013PA112071/document.
This thesis work lies between the scientific domain of ultrasound non-destructive testing and algorithm-architecture adequation. Ultrasound non-destructive testing comprises a group of analysis techniques used in science and industry to evaluate the properties of a material, component, or system without causing damage. In order to characterize possible defects, determining their position, size and shape, imaging and reconstruction tools have been developed at CEA-LIST, within the CIVA software platform. The evolution of acquisition sensors implies a continuous growth of datasets, and consequently more and more computing power is needed to maintain interactive reconstructions. General purpose processors (GPP) evolving towards parallelism and emerging architectures such as GPUs offer large acceleration possibilities that can be applied to these algorithms. The main goal of the thesis is to evaluate the acceleration that can be obtained for two reconstruction algorithms on these architectures. The two algorithms differ in their parallelization scheme. The first can be properly parallelized on GPP, whereas on GPU an intensive use of atomic instructions is required. Within the second algorithm, parallelism is easier to express, but loop ordering on GPP, as well as thread scheduling and a good use of shared memory on GPU, are necessary in order to obtain efficient results. Different APIs and libraries, such as OpenMP, CUDA and OpenCL, are evaluated through chosen benchmarks. An integration of both algorithms into the CIVA software platform is proposed, and different issues related to code maintenance and durability are discussed
Lambert, Jason. "Parallélisation de simulations interactives de champs ultrasonores pour le contrôle non destructif." Thesis, Paris 11, 2015. http://www.theses.fr/2015PA112125/document.
The Non-Destructive Testing field increasingly uses simulation. It is used at every step of the whole control process of an industrial part, from speeding up control development to helping experts understand results. During this thesis, a simulation tool dedicated to the fast computation of an ultrasonic field radiated by a phased array probe in an isotropic specimen has been developed. Its performance enables interactive usage. To benefit from commonly available parallel architectures, a regular model (aimed at removing divergent branching) derived from the generic CIVA model has been developed. First, a reference implementation was developed to validate this model against CIVA results and to analyze its performance behaviour before optimization. The resulting code has been optimized for three kinds of parallel architectures commonly available in workstations: general purpose processors (GPP), manycore coprocessors (Intel MIC) and graphics processing units (nVidia GPU). On the GPP and the MIC, the algorithm was reorganized and implemented to benefit from both parallelism levels, multithreading and vector instructions. On the GPU, the multiple steps of field computation have been divided into multiple successive CUDA kernels. Moreover, libraries dedicated to each architecture were used to speed up Fast Fourier Transforms: Intel MKL on GPP and MIC, and nVidia cuFFT on GPU. The performance and hardware adequation of the produced algorithms were thoroughly studied for each architecture. On multiple realistic control configurations, interactive performance was reached. Perspectives to address more complex configurations were drawn. Finally, the integration and industrialization of this code in the commercial NDT platform CIVA are discussed
Anton, François. "Voronoi diagrams of semi-algebraic sets." Phd thesis, 2003. http://tel.archives-ouvertes.fr/tel-00005932.
The Voronoi diagram of a set of objects is a decomposition of space into proximity zones. The proximity zone of an object is the set of points closer to that object than to any other object. Voronoi diagrams answer proximity queries once the proximity zone to which the query point belongs has been identified. The dual graph of the Voronoi diagram is called the Delaunay graph. Only approximations by conics can guarantee an appropriate order of continuity at contact points, which is necessary to guarantee the exactness of the Delaunay graph.
The theoretical objective of this thesis is to exhibit the elementary algebraic and geometric properties of the offset curve of an algebraic curve and to reduce the semi-algebraic computation of the Delaunay graph to eigenvalue computations. The practical objective of this thesis is the certified computation of the Delaunay graph for low-degree semi-algebraic sets in the Euclidean plane.
The methodology combines interval analysis and computational algebraic geometry. The central idea of this thesis is that a single symbolic preprocessing step can accelerate the certified numerical evaluation of the conflict detector in the Delaunay graph. The symbolic preprocessing is the computation of the implicit equation of the generalized offset curve of a conic. Reducing the semi-algebraic problem of conflict detection in the Delaunay graph to a linear algebra problem was made possible by considering the generalized Voronoi vertex (a concept introduced in this thesis).
The certified numerical computation of the Delaunay graph was carried out with a library for solving zero-dimensional systems of algebraic equations and inequalities based on interval analysis (ALIAS). The certified computation of the Delaunay graph relies on theorems on the uniqueness of roots in given intervals (Kantorovich and Moore-Krawczyk). For conics, the computations are accelerated when only the implicit equations of the offset curves are considered.
Pelard, Emmanuelle. "La poésie graphique : Christian Dotremont, Roland Giguère, Henri Michaux et Jérôme Peignot." Thèse, 2012. http://hdl.handle.net/1866/9866.
The purpose of this thesis is to define a type of modern visual poetry (20th–21st centuries) that we call graphic poetry. Graphic poetry focuses on a plastic and visual experimentation with the graphic sign, demonstrates a strong awareness of the visual potential of the written form, and tries to produce poetry in the materiality of the shapes of writing. This artistic practice of poetry follows and renews the poetic and plastic avant-gardes of the 20th century, more particularly surrealism. Graphic poetry refers to a practice of the poem which is specifically graphic, encompassing the painting of the sign as well as the typographic work on the letter in order to produce the poem. The main characteristic of graphic poets is their double vocation: they are both writers and visual artists. They experiment with writing in its linguistic dimension and in its graphic dimension, because they believe that the act of drawing a line or composing a Linotype setting (namely the painting or drawing of the word and of the graphic sign, typography and publishing) is already literature. In other words, graphic poets think that the creation and the materialization of the poem proceed from the same gesture. Christian Dotremont's logograms, Roland Giguère's artists' books (Éditions Erta) and prints-poems, Henri Michaux's anthologies of invented painted signs and Jérôme Peignot's typoems are some forms of graphic poetry. Our study focuses on francophone works from the Belgian, French and Quebec fields, published between 1950 and 2004.
Three characteristics mainly define graphic poetry: the ambiguity and nomadism of the sign with respect to semiotic systems (graphic, iconic and plastic); graphic rhythm and lyricism, as modalities of the expression of the subject in the graphic material; and a questioning of the distinction between autographic and allographic arts, requiring new ways of perceiving and reading the poem and the book, which we call visual-reading and touch-reading.
Completed under joint supervision (cotutelle) with the Université de la Sorbonne - Paris IV