Dissertations / Theses on the topic 'Algorithmes de complexité paramétrés'
Consult the top 50 dissertations / theses for your research on the topic 'Algorithmes de complexité paramétrés.'
Guillemot, Sylvain. "Approches combinatoires pour le consensus d'arbres et de séquences." PhD thesis, Montpellier 2, 2008. http://www.theses.fr/2008MON20234.
Daligault, Jean. "Techniques combinatoires pour les algorithmes paramétrés et les noyaux, avec applications aux problèmes de multicoupe." PhD thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2011. http://tel.archives-ouvertes.fr/tel-00804206.
Bulteau, Laurent. "Ordre et désordre dans l’algorithmique du génome." Nantes, 2013. http://archive.bu.univ-nantes.fr/pollux/show.action?id=34821dc1-842c-4bdc-a541-2e1752281cf7.
In this thesis, we explore the combinatorial complexity of several problems stemming from comparative genomics, and we provide solutions for some of these problems in the form of parameterized or approximation algorithms. In the problems we consider, we bring together the genomic information of several species and aim at drawing relevant conclusions for the study of these species. Sorting a genome, either by transpositions or by prefix reversals, leads to the reconstruction of the evolutionary history of the genomes of two species. Exemplar distances and common partition aim at comparing genomes in the algorithmically complex case where duplicated genes are present. Finally, the strip recovery and linearization problems aim at correcting or completing genomic information so that it can be used for further analysis. The main results we present are the NP-hardness of the sorting problems (both by transpositions and by prefix reversals), whose computational complexity had been a long-standing open question; an exhaustive study of the computational complexity of exemplar distances; a parameterized algorithm for the minimum common string partition problem; a deep and wide study of strip recovery problems; and finally a new graph structure allowing for a more efficient solving of the linearization problem.
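The prefix-reversal sorting distance mentioned in this abstract, whose NP-hardness is one of the thesis's main results, is easy to state concretely: it is the "pancake flipping" number of a permutation. The following sketch (not from the thesis; the function name and the brute-force BFS are illustrative assumptions) computes the exact distance for small permutations by exhaustive search:

```python
from collections import deque

def prefix_reversal_distance(perm):
    """Minimum number of prefix reversals (pancake flips) needed to sort
    `perm`, found by breadth-first search over permutations. Exhaustive
    search is only feasible for small n; computing this distance in
    general is NP-hard (a result of this thesis line of work)."""
    target = tuple(sorted(perm))
    start = tuple(perm)
    if start == target:
        return 0
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        state, d = queue.popleft()
        for i in range(2, len(state) + 1):
            nxt = state[:i][::-1] + state[i:]  # flip the first i elements
            if nxt == target:
                return d + 1
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))

print(prefix_reversal_distance([3, 1, 4, 2]))
```

The search space is all n! permutations, which is exactly why the hardness result matters: no algorithm polynomial in n is known for the exact distance.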
Guillemot, Sylvain. "Approches Combinatoires pour le Consensus d'Arbres et de Séquences." PhD thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2008. http://tel.archives-ouvertes.fr/tel-00401456.
Fradin, Julien. "Graphes complexes en biologie : problèmes, algorithmes et évaluations." Thesis, Nantes, 2018. http://www.theses.fr/2018NANT4093/document.
In order to better understand how a biological system works, it is necessary to study the interactions between the different entities that compose it. To this aim, these biological interactions can be modelled in the form of graphs. In some of these graphs, the vertices are colored in order to provide additional information on the entity associated with them. In this context, a common subproblem consists in searching for a subgraph of interest, called a motif, in these graphs. In the first part of this manuscript, we present a state of the art, from an algorithmic point of view, of the GRAPH MOTIF problem, which consists in searching for so-called functional motifs in vertex-colored graphs. The modeling of biological systems as graphs can also be applied in mass spectrometry. Thus, we introduce the MAXIMUM COLORFUL ARBORESCENCE problem (MCA) in order to determine de novo the molecular formula of unknown metabolites. In the second part of this manuscript, we carry out an algorithmic study of the MCA problem. While MCA is algorithmically difficult to solve even in very constrained graph classes, our modeling allows us to obtain new approximation algorithms in these same classes, as well as to determine a new graph class in which MCA is solved in polynomial time. Parameterized complexity results for this problem are also shown, and compared to those in the literature on instances from biological data.
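For concreteness, GRAPH MOTIF asks whether a vertex-colored graph contains a connected set of vertices whose colors match a given multiset. A minimal brute-force sketch (illustrative only; the function names and adjacency-dict format are assumptions, and realistic instances need the parameterized algorithms surveyed in the thesis):

```python
from itertools import combinations
from collections import Counter

def graph_motif(adj, colour, motif):
    """Brute-force GRAPH MOTIF: does some connected set of vertices carry
    exactly the multiset of colours `motif`? Solved here by trying all
    vertex subsets of size |motif|; the problem is NP-hard in general."""
    k = len(motif)
    if k == 0:
        return True
    target = Counter(motif)
    for S in combinations(adj, k):
        if Counter(colour[v] for v in S) != target:
            continue
        # connectivity check by DFS restricted to S
        S_set, seen, stack = set(S), set(), [S[0]]
        while stack:
            u = stack.pop()
            if u in seen:
                continue
            seen.add(u)
            stack.extend(v for v in adj[u] if v in S_set)
        if seen == S_set:
            return True
    return False
```

On a path 0-1-2 plus an isolated vertex 3, with colours r, g, b, b, the motif {r, g} is found (vertices 0, 1 are adjacent) while {r, b} is not (no red vertex is adjacent to a blue one).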
Bonnet, Edouard. "Résultats Positifs et Négatifs en Approximation et Complexité Paramétrée." Thesis, Paris 9, 2014. http://www.theses.fr/2014PA090040/document.
Several real-life problems are NP-hard and cannot be solved in polynomial time. The two main options to overcome this issue are approximation and parameterized complexity. In this thesis, we present a new technique called greediness-for-parameterization and we use it to improve the parameterized complexity of many problems. We also use this notion to obtain parameterized algorithms for some problems in bipartite graphs. Aiming at establishing negative results on approximability in subexponential time and in parameterized time, we introduce new methods of sparsification that preserve approximation. We combine those "sparsifiers" with known or new reductions to achieve our goal. Finally, we present some hardness results for games such as Bridge and Havannah.
Chopin, Morgan. "Problèmes d'optimisation avec propagation dans les graphes : complexité paramétrée et approximation." PhD thesis, Université Paris Dauphine - Paris IX, 2013. http://tel.archives-ouvertes.fr/tel-00933769.
Bergé, Pierre. "Algorithmes pour voyager sur un graphe contenant des blocages." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS480.
We study NP-hard problems on graphs with blockages, seen as models of networks exposed to the risk of failures. We treat cut problems via the parameterized complexity framework. The cutset size p is taken as a parameter. Given a set of sources {s1,...,sk} and a target t, we propose an algorithm which builds a small edge cut of size p separating at least r sources from t. This NP-complete problem is called Partial One-Target Cut. It belongs to the family of multiterminal cut problems. Our algorithm is fixed-parameter tractable (FPT) as its execution takes time 2^{O(p^2)}n^{O(1)}. We prove that the vertex version of this problem, which requires cuts to contain vertices instead of edges, is W[1]-hard. Then, we design an FPT algorithm which counts the minimum vertex (S,T)-cuts of an undirected graph in time 2^{O(p log p)}n^{O(1)}. We provide numerous results on the competitive ratio of both deterministic and randomized strategies for the Canadian Traveller Problem. The optimal ratio obtained for deterministic strategies on general graphs is 2k+1, where k is a given upper bound on the number of blockages. We show that randomized strategies which do not use memory cannot improve the bound 2k+1. In addition, we discuss the tightness of lower bounds on the competitiveness of randomized strategies. The distance competitive ratio for a group of travellers, possibly equipped with telecommunication devices, is also studied. Finally, a strategy dedicated to equal-weight chordal graphs is proposed, while another one is built for graphs with small maximum (s,t)-cuts. Both strategies outperform the ratio 2k+1.
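The deterministic 2k+1 ratio for the Canadian Traveller Problem is achieved by a simple "comeback" strategy: follow a shortest path computed on the edges not yet known to be blocked, and return to the source whenever a blockage is discovered. A simulation sketch (illustrative, not the thesis's code; the graph format and function names are assumptions, and for simplicity a traveller only learns about a blocked edge when about to cross it):

```python
import heapq

def dijkstra(adj, src):
    # standard Dijkstra on a dict-of-dicts weighted graph
    dist, prev, pq = {src: 0}, {}, [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue
        for v, w in adj[u].items():
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return dist, prev

def comeback_strategy(adj, s, t, blocked):
    """Walk a shortest s-t path on the edges not yet known to be blocked;
    on discovering a blockage, return to s and recompute. Returns the
    total distance walked, or None if t becomes unreachable. This kind
    of strategy is (2k+1)-competitive for k blockages."""
    known_blocked = set()
    travelled = 0
    while True:
        live = {u: {v: w for v, w in nbrs.items()
                    if frozenset((u, v)) not in known_blocked}
                for u, nbrs in adj.items()}
        dist, prev = dijkstra(live, s)
        if t not in dist:
            return None
        path = [t]
        while path[-1] != s:
            path.append(prev[path[-1]])
        path.reverse()
        pos, blocked_ahead = 0, False
        for i in range(len(path) - 1):
            u, v = path[i], path[i + 1]
            if frozenset((u, v)) in blocked:   # discovered on arrival at u
                known_blocked.add(frozenset((u, v)))
                pos, blocked_ahead = i, True
                break
            travelled += adj[u][v]
        if not blocked_ahead:
            return travelled
        for i in range(pos, 0, -1):            # walk back to s
            travelled += adj[path[i]][path[i - 1]]
```

On a 4-vertex example where the cheap path 0-1-3 (cost 2) is blocked at edge (1,3), the traveller walks 0→1, back to 0, then takes 0-2-3 (cost 4), for a total of 6 against an offline optimum of 4, well within the 2k+1 = 3 ratio for k = 1.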
Watrigant, Rémi. "Approximation et complexité paramétrée de problèmes d’optimisation dans les graphes : partitions et sous-graphes." Thesis, Montpellier 2, 2014. http://www.theses.fr/2014MON20100/document.
The theory of NP-completeness tells us that for many optimization problems, there is no hope of finding an efficient algorithm computing an optimal solution. Based on this, two classical approaches have been developed to deal with these problems. The first one, called polynomial-time approximation, consists in designing efficient algorithms computing a solution that is close to an optimal one. The second one, called parameterized complexity, consists in designing exact algorithms whose combinatorial explosion is captured by a carefully chosen parameter of the instance. The goal of this thesis is twofold. First, we study and apply classical methods from these two domains in order to obtain positive and negative results for two optimization problems in graphs: a partitioning problem called Sparsest k-Compaction, and a cardinality-constrained subgraph problem called Sparsest k-Subgraph. Then, we present how the different methods from these two domains have been combined in recent years into a concept called parameterized approximation. In particular, we study the links between approximation and kernelization algorithms.
Khosravian, Ghadikolaei Mehdi. "Extension of NP Optimization Problems." Thesis, Paris Sciences et Lettres (ComUE), 2019. http://www.theses.fr/2019PSLED064.
The problem of determining the quality of a partial solution occurs in almost every algorithmic approach that gradually computes a global solution. Pruning search trees, proving approximation guarantees, and the efficiency of enumeration strategies usually require a suitable way to decide whether a given partial solution is a reasonable candidate for extension to a global one of assured quality. In this thesis, we study a special type of optimization problem, called extension problems, for a large number of optimization problems on graphs. Perhaps contradicting intuition, these problems tend to be NP-hard, even when the underlying optimization problem can be solved in polynomial time. We present many positive and negative hardness and approximation results for different input scenarios. Moreover, the parameterized complexity of extension problems with respect to the size of the partial solution, as well as the optimality of some exact algorithms under the Exponential-Time Hypothesis (ETH), are studied.
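As a concrete instance of an extension problem: given a graph and a partial solution U, does U extend to a *minimal* vertex cover? Even this innocuous-looking question is NP-hard in general, although plain minimum vertex cover is well understood. A brute-force checker for toy instances (an illustrative sketch; the function names are assumptions):

```python
from itertools import combinations

def is_vertex_cover(edges, S):
    # every edge must have at least one endpoint in S
    return all(u in S or v in S for u, v in edges)

def is_minimal_cover(edges, S):
    # a cover is minimal if no single vertex can be dropped
    return is_vertex_cover(edges, S) and \
           not any(is_vertex_cover(edges, S - {v}) for v in S)

def extends_to_minimal_cover(n, edges, partial):
    """Brute-force check whether `partial` is contained in some minimal
    vertex cover of the graph on vertices 0..n-1. Exponential in n;
    the decision problem is NP-hard in general."""
    partial = set(partial)
    rest = set(range(n)) - partial
    for r in range(len(rest) + 1):
        for extra in combinations(rest, r):
            if is_minimal_cover(edges, partial | set(extra)):
                return True
    return False
```

On the path 0-1-2, the partial solution {0} extends (to the minimal cover {0, 2}), but {0, 1} does not: every cover containing both 0 and 1 stays a cover after removing 0.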
Perez, Anthony. "Algorithmes de noyau pour des problèmes d'édition de graphes et autres structures." PhD thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2011. http://tel.archives-ouvertes.fr/tel-00660089.
Watel, Dimitri. "Approximation de l'arborescence de Steiner." Thesis, Versailles-St Quentin en Yvelines, 2014. http://www.theses.fr/2014VERS0025/document.
The directed Steiner tree problem (DST) asks, given a directed weighted graph, a node r called the root and a set of nodes X called terminals, for a minimum-cost directed tree rooted at r spanning X. DST is an NP-complete problem. We are interested in the search for polynomial approximations for DST. Unless P = NP, DST cannot be approximated within a constant ratio, or even a logarithmic ratio with respect to k, where k is the number of terminals. The smallest known approximation ratio is O(k^ε), where ε is a positive real. In the first part, we provide three new approximation algorithms: a practical greedy algorithm merging two of the main approximation techniques for DST; an algorithm for the case where the graph is layered and the distance between the terminals and the root is high; and an exponential approximation algorithm combining an approximation algorithm and an exact algorithm, parameterized by the number of terminals the exact algorithm must cover. In the last part, we study DST with two branching constraints. With this technique, we are able to reduce the number of feasible solutions, and possibly facilitate the search for an optimal solution of the constrained problem. We study how such a solution can be built in polynomial time and whether it is a good approximation of an optimal solution of the unconstrained problem.
Garnero, Valentin. "(Méta)-noyaux constructifs et linéaires dans les graphes peu denses." Thesis, Montpellier, 2016. http://www.theses.fr/2016MONTT328/document.
In the fields of Algorithmics and Complexity, a large area of research is based on the assumption that P ≠ NP (Polynomial time and Non-deterministic Polynomial time), which means that there are problems for which a solution can be verified but not constructed in polynomial time. Many natural problems are not in P, meaning that they have no efficient algorithm. In order to tackle such problems, many different branches of Algorithmics have been developed. One of them is called Parameterized Complexity. It consists in developing exact algorithms whose complexity is measured as a function of the size of the instance and of a parameter. Such a parameter allows a more precise analysis of the complexity. In this context, an algorithm is considered efficient if it is fixed-parameter tractable (fpt), that is, if its complexity is exponential in the parameter but polynomial in the size of the instance. Problems that can be solved by such an algorithm form the FPT class. Kernelisation is a technique that produces fpt algorithms, among others. It can be viewed as a preprocessing of the instance, with a guarantee on the compression of the data. More formally, a kernelisation is a polynomial reduction from a problem to itself, with the additional constraint that the size of the kernel, the reduced instance, is bounded by a function of the parameter. In order to obtain an fpt algorithm, it is sufficient to solve the problem on the reduced instance, by brute force for example (which has complexity exponential in the parameter). Hence, the existence of a kernelisation implies the existence of an fpt algorithm, and the converse is also true. Nevertheless, the existence of an efficient fpt algorithm does not imply a small kernel, meaning a kernel of linear or polynomial size.
Under certain hypotheses, it can be proved that some problems cannot have a kernel (that is, are not in FPT) and that some problems in FPT do not have a polynomial kernel. One of the main results in the field of kernelisation is the construction of a linear kernel for the Dominating Set problem on planar graphs, by Alber, Fellows and Niedermeier. To begin with, the region decomposition method proposed by Alber, Fellows and Niedermeier has been reused many times to develop kernels for variants of Dominating Set on planar graphs. Nevertheless, this method had quite a few inaccuracies, which invalidated the proofs. In the first part of our thesis, we present a more thorough version of this method and illustrate it with two examples: Red Blue Dominating Set and Total Dominating Set. Since then, the method has been generalised to larger classes of graphs (bounded genus, minor-free, topological-minor-free) and to larger families of problems. These meta-results prove the existence of a linear or polynomial kernel for all problems verifying some generic conditions on a class of sparse graphs. As the price of generality, the proofs do not provide constructive algorithms and the bound on the size of the kernel is not explicit. In the second part of our thesis, we make a first step towards constructive meta-results. We propose a framework to build linear kernels based on principles of dynamic programming and a meta-result of Bodlaender, Fomin, Lokshtanov, Penninkx, Saurabh and Thilikos.
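A classical concrete example of kernelisation, predating the region-decomposition method and included here only as an illustration (it is not from the thesis), is the Buss kernel for k-Vertex Cover: any vertex of degree greater than k must belong to every cover of size at most k, and once no such vertex remains, a yes-instance can have at most k^2 edges. A sketch:

```python
def buss_kernel(edges, k):
    """Buss kernelisation for k-Vertex Cover. Returns a reduced instance
    (remaining_edges, remaining_k, forced_vertices), or None when the
    instance provably has no vertex cover of size <= k."""
    edges = {frozenset(e) for e in edges}
    forced = set()
    changed = True
    while changed:
        changed = False
        deg = {}
        for e in edges:
            for v in e:
                deg[v] = deg.get(v, 0) + 1
        for v, d in deg.items():
            if d > k:
                # a vertex of degree > k must be in any cover of size <= k
                forced.add(v)
                edges = {e for e in edges if v not in e}
                k -= 1
                changed = True
                break
    if k < 0 or len(edges) > k * k:
        return None  # max degree <= k forces at most k^2 covered edges
    return edges, k, forced
```

On a star K_{1,5} with k = 1, the center is forced and the kernel is empty; on a triangle with k = 1, the rule cascades to a provable no-instance. The reduced instance has at most k^2 edges, so solving it by brute force yields an fpt algorithm, exactly as described in the abstract.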
Cochefert, Manfred. "Algorithmes exacts et exponentiels pour les problèmes NP-difficiles sur les graphes et hypergraphes." Electronic Thesis or Diss., Université de Lorraine, 2014. http://www.theses.fr/2014LORR0336.
In this thesis, we are interested in the exact computation of NP-hard problems on graphs and hypergraphs. Firstly, we study several variants of colorings. Those variants appear harder than the famous chromatic number problem, by adding difficulty in recognizing the color classes, or more often by introducing various relationships between them. We then turn to transversal problems in hypergraphs. More precisely, we are interested in enumerating minimal transversals in hypergraphs of bounded rank. Besides exact computation, we are also interested in fixed-parameter tractability. In this area, we study two optimization versions of the famous square root of graphs problem. Finally, we are interested in solving other graph problems related to colorings, and in comparing the efficiency of algorithms depending on the memory space available. In this thesis, we apply most of the major techniques for designing exact exponential algorithms. The main techniques we use are dynamic programming, inclusion-exclusion, branching, and measure and conquer.
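One of the techniques named above, inclusion-exclusion, underlies a well-known exact exponential algorithm for the chromatic number, due to Björklund and Husfeldt: G is k-colourable iff the sum over all X ⊆ V of (-1)^{n-|X|} a(X)^k is positive, where a(X) counts the independent sets contained in X (the empty set included). A toy-scale sketch (function names are assumptions; a serious implementation computes all a(X) at once via a fast zeta transform rather than by recursion per subset):

```python
from itertools import combinations

def chromatic_number(n, edges):
    """Chromatic number of the graph on vertices 0..n-1 via the
    inclusion-exclusion formula of Björklund and Husfeldt. The sum at k
    counts ordered covers of V by k independent sets, so it is positive
    exactly when the graph is k-colourable. O(3^n)-ish: toy graphs only."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def independent_subset_count(X):
        # count subsets of X that induce no edge (the empty set counts)
        X = list(X)
        def rec(i, chosen):
            if i == len(X):
                return 1
            total = rec(i + 1, chosen)           # skip X[i]
            if not (adj[X[i]] & chosen):         # take X[i] if still independent
                total += rec(i + 1, chosen | {X[i]})
            return total
        return rec(0, set())

    terms = [((-1) ** (n - r), independent_subset_count(X))
             for r in range(n + 1)
             for X in combinations(range(n), r)]
    for k in range(1, n + 1):
        if sum(sign * a ** k for sign, a in terms) > 0:
            return k
```

For the triangle, the sum is 0 at k = 1 and k = 2 and becomes 6 at k = 3, correctly giving chromatic number 3.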
Sikora, Florian. "Aspects algorithmiques de la comparaison d'éléments biologiques." PhD thesis, Université Paris-Est, 2011. http://pastel.archives-ouvertes.fr/pastel-00667797.
Yacoub, Taher. "Développement et implémentation d'une approche par fragments pour le design d'ARNs modifiés simple brin avec évaluation sur des protéines de liaison à l'ARN et un modèle d'étude la Bêta-Sécrétase 1." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASL002.
New therapeutic strategies have emerged using aptamers, high-affinity ligands generated by a procedure called SELEX. Nevertheless, their use is limited by the lack of selectivity and the off-target effects frequently observed, as in all RNA-based therapeutic strategies. In order to increase their specificity and selectivity, a class of chemically modified aptamers has been designed in situ by SELEX ("SOMAmers"). But various limitations remain due to technical constraints in SELEX, in particular the number and type of chemical modifications that can be used. We propose a new fragment-based strategy to design modified aptamers in silico that overcomes some of these limitations, based on knowledge of the 3D structure and on the color-coding technique introduced by Alon, Yuster and Zwick. This approach models pose connectivity in the form of a graph, enabling the implementation of an efficient combinatorial algorithm using dynamic programming. It is used for docking from a given fragment distribution, for design, and for equilibrium statistics for learning features between fragments and the 3D protein, and has been the subject of a proof of concept on 7 ssRNA/protein complexes. To test this new approach on a real therapeutic case study, an analysis has been carried out on Beta-Secretase 1 (BACE1), an enzyme involved in Alzheimer's disease, and on Beta-Secretase 2 (a protein homologous to BACE1), to determine the key features of specificity and to select the modified nucleotides that bind sites and subsites of BACE1. These results can provide a basis for the design of selective ssRNA and for the evaluation of the color-coding-based methodology.
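The color-coding technique of Alon, Yuster and Zwick that this approach builds on can be illustrated on its original application, detecting a simple k-vertex path: randomly colour the vertices with k colours, then use dynamic programming over colour subsets to find a "colourful" path, which is necessarily simple. A sketch (illustrative only and unrelated to the aptamer pipeline itself; the names and trial count are assumptions):

```python
import random

def has_k_path(adj, k, trials=300):
    """Randomized color-coding test for a simple path on k vertices.
    Each trial colours the vertices with k colours and searches for a
    path using each colour once, via DP over colour subsets; an existing
    k-path survives a given trial with probability at least k!/k^k, so
    repeated trials make false negatives unlikely."""
    vertices = list(adj)
    for _ in range(trials):
        colour = {v: random.randrange(k) for v in vertices}
        # reachable[v] = colour-sets of colourful paths ending at v
        reachable = {v: {frozenset([colour[v]])} for v in vertices}
        for _ in range(k - 1):
            nxt = {v: set() for v in vertices}
            for u in vertices:
                for s in reachable[u]:
                    for v in adj[u]:
                        if colour[v] not in s:
                            nxt[v].add(s | {colour[v]})
            reachable = nxt
        if any(reachable[v] for v in vertices):
            return True
    return False
```

The dynamic program runs in 2^k * poly(n) per trial instead of enumerating all simple paths, which is the point of the technique: the exponential dependence is confined to k.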
Ayad, Ali. "Complexité de résolution de systèmes algébriques paramétrés." Rennes 1, 2006. https://tel.archives-ouvertes.fr/tel-00127383.
Bulteau, Laurent. "Ordres et désordres dans l'algorithmique du génome." PhD thesis, Université de Nantes, 2013. http://tel.archives-ouvertes.fr/tel-00906929.
Mhalla, Mehdi. "Informatique quantique, algorithmes et complexité." Grenoble INPG, 2004. http://www.theses.fr/2004INPG0113.
This work consists of several results in different domains of quantum computing. First, we propose an introduction to quantum computing theory. Then we give efficient characterizations of entanglement for pure states. We define full separability and p-q separability, and give optimal algorithms that improve the detection of entanglement by a quadratic factor. The third part is dedicated to quantum game theory. We analyse some classical combinatorial games, and find an optimal strategy for the 0.07 octal game. Then we propose a quantisation of the family of octal games, and of some other combinatorial games, thereby defining a formalism for studying such games. We also provide some new ideas for the study of the well-known coin flip game. In the last part, we study optimisation problems, and give an optimal minimum-finding algorithm based on quantum search. We then apply this tool to design algorithms for some graph problems (connectivity, strong connectivity, minimum spanning tree and single-source shortest paths). We prove the optimality of our algorithms by using the quantum adversary lower bound method, thereby giving a characterisation of the speed-up provided by quantum computing for these problems.
Cheikh-Alili, Fahima. "Composition de services: algorithmes et complexité." PhD thesis, Université Paul Sabatier - Toulouse III, 2009. http://tel.archives-ouvertes.fr/tel-00459114.
Duflot, Marie. "Algorithmes distribués sur des anneaux paramétrés - Preuves de convergence probabiliste et déterministe." PhD thesis, École normale supérieure de Cachan - ENS Cachan, 2003. http://tel.archives-ouvertes.fr/tel-00091429.
Derhy, Nicolas. "Multicoupes et sous-graphes induits : complexité et algorithmes." PhD thesis, Conservatoire national des arts et metiers - CNAM, 2008. http://tel.archives-ouvertes.fr/tel-00367626.
Tapp, Alain. "Informatique quantique, algorithmes et complexité de la communication." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0018/NQ51978.pdf.
Derhy, Nicolas. "Multicoupes et sous-graphes induits : complexité et algorithmes." Paris, CNAM, 2008. http://www.theses.fr/2008CNAM0598.
In this thesis, we are interested in several graph-theory problems. We first study various cut and multicut problems, and then focus on the search for induced subgraphs. Both parts follow the same guiding line: to give an overview of the complexity of the problems, by establishing their NP-completeness or by providing a polynomial-time algorithm of the lowest possible complexity. In the first part of the thesis, we address cut and multicut problems. First, we study the consequence of adding a cardinality constraint to these two types of problems, and prove their NP-completeness in the general case. We then determine their complexity in several special graph classes, such as directed stars and paths, designing, for the polynomial cases, algorithms based mainly on dynamic programming and Lagrangian relaxation. We then generalize this approach by considering multicriteria versions of the cut and multicut problems, and prove that these are NP-complete even in very simple topologies such as paths and cycles. In the second part of the thesis, we address induced-subgraph problems, focusing mainly on the search for induced trees, paths and cycles covering a given set T of vertices. After proving the NP-completeness of the general cases, we focus on the cases where the cardinality of T is fixed. We also give several structural results for graphs of sufficiently large girth.
Markey, Nicolas. "Logiques temporelles pour la vérification : expressivité, complexité, algorithmes." Orléans, 2003. http://www.theses.fr/2003ORLE2004.
Montealegre, Barba Pedro. "Algorithmes de graphes séquentiels et distribués : algorithmes paramétrés via des cliques maximales potentielles : modèle de diffusion dans une clique congestionnée." Thesis, Orléans, 2017. http://www.theses.fr/2017ORLE2001/document.
This thesis is about structural and algorithmic aspects of graphs. It is divided into two parts, corresponding to two different studies: one on centralized sequential algorithms, and the other on distributed algorithms. In the first part of the thesis we study algorithmic applications of two graph structures called minimal separators and potential maximal cliques. These two objects are at the core of a meta-theorem due to Fomin, Todinca and Villanger (SIAM J. Comput. 2015), which states that a large family of graph optimization problems can be solved in polynomial time when the input is restricted to the family of graphs with polynomially many minimal separators. The contribution of this part of the thesis is to extend the meta-theorem of Fomin et al. in two ways. On the one hand, we adapt it to be valid for a larger family of problems. On the other hand, we extend it into a parameterized version, for several graph parameters. In the second part of this thesis we study the broadcast congested clique model. In this model, the nodes of a graph communicate in synchronous rounds, broadcasting a message of small size visible to every other node. The goal is to design protocols that recognize graph classes while minimizing the number of rounds and the message sizes. The contribution of this part is to explore the role of randomness in this model, and to provide protocols for the recognition and reconstruction of some graph classes.
Romero-Meléndez, Cutberto. "Complexité métrique sous-riemannienne." Dijon, 2004. http://www.theses.fr/2004DIJOS028.
The subject of this thesis is the metric complexity, in the sense of Kolmogorov-Jean, of horizontal curves of generic sub-Riemannian metrics of co-rank one, defined on a manifold of dimension N. We set the problem in the context of the motion planning problem. First, for N greater than 3, by making use of the normal forms for the contact and quasi-contact sub-Riemannian structures, we provide explicit expressions for the metric complexity in terms of elementary invariants of the problem. The case N equal to 3 turns out to be more complicated, and in this case, in the presence of Martinet points, we provide bounds for the metric complexity. Secondly, we construct the asymptotic optimal syntheses for the motion planning problem. Finally, we present some algorithms that solve the motion planning problem, together with their implementation for the model of the unicycle.
Nicolas, François. "Alignement, séquence, consensus, recherche de similarités : complexité et approximabilité." Montpellier 2, 2005. http://www.theses.fr/2005MON20179.
Hadjiat, Malika. "Problèmes de tension sur un graphe : algorithmes et complexité." Aix-Marseille 2, 1996. http://www.theses.fr/1996AIX22061.
Covanov, Svyatoslav. "Algorithmes de multiplication : complexité bilinéaire et méthodes asymptotiquement rapides." Thesis, Université de Lorraine, 2018. http://www.theses.fr/2018LORR0057/document.
Since 1960 and the result of Karatsuba, we know that the complexity of multiplication (of integers or polynomials) is sub-quadratic: given a ring R, the product in R[X] of the polynomials a_0 + a_1 X and b_0 + b_1 X, for any a_0, a_1, b_0 and b_1 in R, can be computed with three rather than four multiplications over R: (a_0 + a_1 X)(b_0 + b_1 X) = m_0 + (m_2 - m_0 - m_1)X + m_1 X^2, with the three multiplications m_0 = a_0 b_0, m_1 = a_1 b_1 and m_2 = (a_0 + a_1)(b_0 + b_1). In the same manner, Strassen's algorithm allows one to multiply two 2n x 2n matrices with only seven products of n x n matrices. The two previous examples fall into the category of bilinear maps: these are functions of the form Phi : K^m x K^n -> K^l, for a field K, linear in each variable. Among the most classical bilinear maps are the multiplication of polynomials, of matrices, and of elements of algebraic extensions of finite fields. Given a bilinear map Phi, computing the minimal number of multiplications necessary to evaluate this map is an NP-hard problem. The purpose of this thesis is to propose algorithms minimizing this number of multiplications. Two angles of attack have been studied. The first aspect of this thesis is to study the problem of computing the bilinear complexity through its reformulation in terms of the search for matrix subspaces of a given rank. This work led to an algorithm taking into account intrinsic properties of the considered products, such as matrix or polynomial products over finite fields. This algorithm allows one to find all the possible decompositions, over F_2, for the product of polynomials modulo X^5 and the product of 3x2 by 2x3 matrices. Another aspect of this thesis was the development of asymptotically fast methods for integer multiplication. A particular family of algorithms has been proposed following an article by Fürer published in 2007. This article proposed a first algorithm, relying on the fast Fourier transform (FFT), allowing one to multiply n-bit integers in O(n log n 2^{O(log^* n)}), where log^* is the iterated logarithm function. In this thesis, an algorithm relying on a number-theoretic conjecture has been proposed, involving the use of the FFT and generalized Fermat primes. With a careful complexity analysis of this algorithm, we obtain a complexity of O(n log n 4^{log^* n}).
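The three-multiplication identity quoted in the abstract turns directly into the classical recursive algorithm for integers, splitting each operand at X = 2^h. A minimal sketch (illustrative; the base-case threshold and names are choices, not from the thesis):

```python
def karatsuba(a, b):
    """Karatsuba's sub-quadratic multiplication of non-negative integers,
    mirroring the identity
    (a0 + a1*X)(b0 + b1*X) = m0 + (m2 - m0 - m1)*X + m1*X^2
    with m0 = a0*b0, m1 = a1*b1, m2 = (a0 + a1)(b0 + b1) and X = 2^h."""
    if a < 16 or b < 16:
        return a * b  # small operands: multiply directly
    h = max(a.bit_length(), b.bit_length()) // 2
    a0, a1 = a & ((1 << h) - 1), a >> h
    b0, b1 = b & ((1 << h) - 1), b >> h
    m0 = karatsuba(a0, b0)
    m1 = karatsuba(a1, b1)
    m2 = karatsuba(a0 + a1, b0 + b1)
    return m0 + ((m2 - m0 - m1) << h) + (m1 << (2 * h))

print(karatsuba(123456789, 987654321))  # same value as 123456789 * 987654321
```

Three recursive calls on half-size operands give the T(n) = 3T(n/2) + O(n) recurrence, hence the O(n^{log2 3}) ≈ O(n^{1.585}) bound, the first step on the road to the quasi-linear FFT-based methods the abstract goes on to describe.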
Covanov, Svyatoslav. "Algorithmes de multiplication : complexité bilinéaire et méthodes asymptotiquement rapides." Electronic Thesis or Diss., Université de Lorraine, 2018. http://www.theses.fr/2018LORR0057.
Since 1960 and the result of Karatsuba, we know that the complexity of the multiplication (of integers or polynomials) is sub-quadratic: given a ring R, the product in R[X] of polynomials a_0 + a_1 X and b_0 + b_1 X, for any a_0, a_1, b_0 and b_1 in R, can be computed with three and not four multiplications over R: (a_0 + a_1X)(b_0 + b_1X) = m_0 + (m_2 - m_0 - m_1)X + m_1X^2, with the three multiplications m_0 = a_0b_0, m_1 = a_1b_1 et m_2 = (a_0 + a_1)(b_0 + b_1). In the same manner, Strassen's algorithm allows one to multiply two matrices 2nx2n with only seven products of matrices nxn. The two previous examples fall in the category of bilinear maps: these are functions of the form Phi : K^m x K^n -> K^l, given a field K, linear in each variable. Among the most classical bilinear maps, we have the multiplication of polynomials, matrices, or even elements of algebraic extension of finite fields. Given a bilinear map Phi, computing the minimal number of multiplications necessary to the evaluation of this map is a NP-hard problem. The purpose of this thesis is to propose algorithms minimizing this number of multiplications. Two angles of attack have been studied. The first aspect of this thesis is to study the problem of the computation of the bilinear complexity under the angle of the reformulation of this problem in terms of research of matrix subspaces of a given rank. This work led to an algorithm taking into account intrinsic properties of the considered products such as matrix or polynomial products over finite fields. This algorithm allows one to find all the possible decompositions, over F_2, for the product of polynomials modulo X^5 and the product of matrices 3x2 by 2x3. Another aspect of this thesis was the development of fast asymptotic methods for the integer multiplication. There is a particular family of algorithms that has been proposed after an article by Fürer published in 2007. 
That article proposed a first algorithm, relying on the fast Fourier transform (FFT), that multiplies n-bit integers in O(n log n · 2^{O(log* n)}), where log* is the iterated logarithm function. In this thesis, an algorithm relying on a number-theoretic conjecture has been proposed, involving the use of the FFT and of generalized Fermat primes. With a careful complexity analysis of this algorithm, we obtain a complexity of O(n log n · 4^{log* n}).
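The three-multiplication identity quoted in the abstract above can be checked directly. The sketch below (function name is hypothetical, chosen for illustration) returns the coefficients of the product of two degree-1 polynomials using only the three products m_0, m_1, m_2:

```python
def karatsuba_deg1(a0, a1, b0, b1):
    """Coefficients of (a0 + a1*X)(b0 + b1*X) via three ring multiplications."""
    m0 = a0 * b0
    m1 = a1 * b1
    m2 = (a0 + a1) * (b0 + b1)
    # (a0 + a1 X)(b0 + b1 X) = m0 + (m2 - m0 - m1) X + m1 X^2
    return (m0, m2 - m0 - m1, m1)
```

Applied recursively to the two halves of n-bit integers, this identity is what yields Karatsuba's sub-quadratic O(n^{log_2 3}) bound.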
Ayad, Ali. "Complexité de la résolution des systèmes algébriques paramétriques." Phd thesis, Université Rennes 1, 2006. http://tel.archives-ouvertes.fr/tel-00127383.
Revol, Nathalie. "Complexité de l'évaluation parallèle de circuits arithmétiques." Grenoble INPG, 1994. http://tel.archives-ouvertes.fr/tel-00005109.
Spaenlehauer, Pierre-Jean. "Résolution de systèmes multi-homogènes et déterminantiels algorithmes - complexité - applications." Paris 6, 2012. http://www.theses.fr/2012PA066467.
Multivariate polynomial systems arising in engineering science often carry algebraic structures related to the problems they stem from. In particular, multi-homogeneous structures, determinantal structures and boolean systems can be met in a wide range of applications. A classical method for solving polynomial systems is to compute a Gröbner basis of the ideal associated to the system. This thesis provides new tools for solving such structured systems in the context of Gröbner basis algorithms. On the one hand, these tools bring forth new bounds on the complexity of the computation of Gröbner bases of several families of structured systems (bilinear systems, determinantal systems, critical point systems, boolean systems). In particular, they allow the identification of families of systems for which the complexity of the computation is polynomial in the number of solutions. On the other hand, this thesis provides new algorithms that take advantage of these algebraic structures to improve the efficiency of the Gröbner basis computation and of the whole solving process (multi-homogeneous systems, boolean systems). These results are illustrated by applications in cryptology (cryptanalysis of MinRank), in optimization and in effective real geometry (critical point systems).
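As a toy illustration of the classical method the abstract mentions (computing a Gröbner basis of the ideal associated to the system), here is a minimal, unoptimized Buchberger algorithm for two variables over the rationals with lex order x > y. All names are hypothetical; this is a didactic sketch, not the structure-aware algorithms the thesis studies:

```python
from fractions import Fraction
from itertools import combinations

# Polynomials in Q[x, y] stored as {(i, j): coeff} for the monomial x^i y^j;
# lex order with x > y is plain tuple comparison on the exponent pairs.

def lt(p):
    """Leading monomial and coefficient under lex order."""
    m = max(p)
    return m, p[m]

def sub_mult(p, q, coef, shift):
    """Return p - coef * x^shift[0] * y^shift[1] * q."""
    r = dict(p)
    for (i, j), c in q.items():
        m = (i + shift[0], j + shift[1])
        r[m] = r.get(m, Fraction(0)) - coef * c
        if r[m] == 0:
            del r[m]
    return r

def head_reduce(p, G):
    """Cancel the leading term of p by elements of G while some lt divides it."""
    changed = True
    while p and changed:
        changed = False
        (a, b), c = lt(p)
        for g in G:
            (i, j), d = lt(g)
            if i <= a and j <= b:
                p = sub_mult(p, g, c / d, (a - i, b - j))
                changed = True
                break
    return p

def s_poly(f, g):
    """S-polynomial: (lcm/lt(f)) * f - (lcm/lt(g)) * g."""
    (a, b), c = lt(f)
    (i, j), d = lt(g)
    L = (max(a, i), max(b, j))          # lcm of the leading monomials
    t = sub_mult({}, f, Fraction(-1) / c, (L[0] - a, L[1] - b))
    return sub_mult(t, g, Fraction(1) / d, (L[0] - i, L[1] - j))

def buchberger(F):
    """Naive Buchberger loop: append every non-zero reduced S-polynomial."""
    G = [{m: Fraction(c) for m, c in p.items()} for p in F]
    pairs = list(combinations(range(len(G)), 2))
    while pairs:
        i, j = pairs.pop()
        r = head_reduce(s_poly(G[i], G[j]), G)
        if r:
            pairs += [(k, len(G)) for k in range(len(G))]
            G.append(r)
    return G
```

On the bilinear-flavored input {x + y - 3, xy - 1}, the loop adjoins the eliminant y^2 - 3y + 1, from which the solutions of the system can be read off back-substituting into x + y - 3.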
Atig, Mohamed Faouzi. "Vérification de Programmes Concurrents : Décidabilité et Complexité." Paris 7, 2010. http://www.theses.fr/2010PA077066.
This thesis addresses verification problems in concurrent and recursive systems, as well as in concurrent systems with store buffers. We establish the theoretical basis required for automated analyses: decidability and complexity results for reachability problems. First, we are interested in verifying concurrent programs where each process corresponds to a sequential program with (recursive) procedure calls. The difficulty in analyzing such programs comes from the interaction between recursion and concurrency, which makes reachability problems undecidable in general. However, in practice programs obey additional constraints that can be exploited to make the reachability problem decidable, and their study is the subject of this thesis. These conditions may be seen as constraints imposed on the order between the actions of the analyzed programs. Moreover, these decidability results can be used to perform an under-approximate analysis that effectively detects bad behaviors of the analyzed programs. Second, we study concurrent programs running under weak memory models. In such programs, the order between actions of the same process is relaxed (for performance reasons) by allowing permutations between certain types of memory operations. This makes reasoning about the behaviors of concurrent programs much more difficult, and it is not clear how to apply standard reasoning techniques. Our work shows that, depending on the type of relaxation, the reachability problem is decidable (but with very high complexity), while in other cases it even turns out to be undecidable.
Bagan, Guillaume. "Algorithmes et complexité des problèmes d'énumération pour l'évaluation de requêtes logiques." Phd thesis, Université de Caen, 2009. http://tel.archives-ouvertes.fr/tel-00424232.
Faik, Taoufik. "La b-continuite des b-colorations : complexité, propriétés structurelles et algorithmes." Paris 11, 2005. http://www.theses.fr/2005PA112047.
A b-coloring of a graph G is a coloring of G such that for each color c there exists at least one vertex v (called a colorful vertex) colored c having every other color in its neighborhood. An f-coloring is a b-coloring in which all the vertices are colorful. It is clear that any coloring with the chromatic number of colors is a b-coloring. However, f-colorings do not exist in every graph. One peculiar characteristic of b-colorings (resp. f-colorings) is that, for some graphs, there exist both a b-coloring (resp. f-coloring) with p colors and one with q colors, with p
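The definition above can be made concrete with a small checker. This is an illustrative sketch with hypothetical names: it tests whether every color class contains a colorful vertex, i.e. one whose neighborhood sees all the other colors.

```python
def is_b_coloring(adj, color):
    """adj: vertex -> set of neighbours; color: vertex -> colour class."""
    colors = set(color.values())
    for c in colors:
        # A colorful vertex of colour c sees every other colour among its
        # neighbours; each colour class must contain at least one such vertex.
        if not any(color[v] == c and
                   colors - {c} <= {color[u] for u in adj[v]}
                   for v in adj):
            return False
    return True
```

For instance, a triangle colored with three distinct colors is a b-coloring (every vertex is colorful), while a 3-vertex path colored 1, 0, 2 is not: no vertex of color 1 sees both other colors.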
Abouelaoualim, Abdelfattah. "Exploration des graphes arêtes-colorées : topologie, algorithmes, complexité et (non)-approximabilité." Paris 11, 2007. https://tel.archives-ouvertes.fr/tel-00281533.
Graphs whose edges are colored with c > 1 colors, where c is a given integer, in other words c-edge-colored graphs, have a growing number of fields of application, particularly in molecular biology and VLSI. Their theoretical motivation is obvious, since they are a generalization of digraphs. In the present work, we explore these graphs to extract and study structures (i.e. subgraphs) called properly edge-colored, in which every pair of adjacent edges differs in color. We start this work with a part introducing the most notable results in the literature, covering the majority of the questions treated in this topic since the sixties. In the second part, we first give characterizations of certain properly edge-colored structures such as paths and cycles. We then turn to the construction of polynomial algorithms and to the study of the complexity and approximability of a variety of structures.
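The defining condition on properly edge-colored structures can be verified in time linear in the length of the walk. A minimal sketch (names are hypothetical) for paths and walks:

```python
def is_properly_colored_walk(walk, edge_color):
    """walk: vertex sequence; edge_color: frozenset({u, v}) -> colour."""
    cols = [edge_color[frozenset(e)] for e in zip(walk, walk[1:])]
    # Properly edge-colored: consecutive (adjacent) edges differ in colour.
    return all(c1 != c2 for c1, c2 in zip(cols, cols[1:]))
```

The same pairwise test, applied cyclically, characterizes properly edge-colored cycles.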
Ouertani, Rym. "Algorithmes de décodage pour les systèmes multi-antennes à complexité réduite." Phd thesis, Paris, Télécom ParisTech, 2009. https://pastel.hal.science/pastel-00718214.
MIMO systems offer large capacity. Several decoders for such systems exist in the literature; unfortunately, their complexity increases drastically with the lattice dimension and the constellation size. We therefore propose a sequential algorithm (SB-Stack) based on the stack decoder's search strategy and the sphere decoder's search region. The proposed decoder outperforms the existing ones in terms of complexity while achieving ML performance. Furthermore, by introducing a bias parameter, the SB-Stack gives a range of performances from ML to ZF-DFE with proportional complexities, so different performance/complexity trade-offs can be obtained. When channel coding is used, soft decoding becomes necessary, and the SB-Stack is extended to support soft-output detection. A straightforward idea is to exploit the nodes stored in the stack at the end of the hard decoding process to calculate LLRs; the gain of this method over classical soft decoders is rather large. The large variation of complexity between low and high SNR is an additional problem, because of the variable decoding time. We propose an adaptive decoder based on the SB-Stack that switches between several decoders according to the channel realization and the system specifications; it has an almost constant complexity while keeping good performance. Lattice reduction is used to accelerate the decoding of infinite lattices. Using the MMSE-GDFE, it becomes possible to apply lattice reduction when finite constellations are considered. Interesting results are thus obtained for MIMO schemes combining lattice reduction, MMSE-GDFE processing and the sequential decoders given previously.
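For context, the ML performance that the SB-Stack attains is defined by an exhaustive minimization of ||y - Hx||^2 over the constellation. The brute-force reference below (hypothetical names, illustrative only) is exponential in the lattice dimension, which is precisely the cost that sphere- and stack-type decoders avoid:

```python
import itertools

def ml_detect(H, y, constellation=(-1, 1)):
    """Exhaustive ML detection: argmin over symbol vectors x of ||y - H x||^2."""
    n = len(H[0])
    return min(
        itertools.product(constellation, repeat=n),
        key=lambda x: sum((yi - sum(h * xj for h, xj in zip(row, x))) ** 2
                          for row, yi in zip(H, y)),
    )
```

With a BPSK constellation and n transmit antennas, this enumerates 2^n candidates per received vector.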
Moroz, Guillaume. "Sur la décomposition réelle et algébrique des systèmes dépendant de paramètre." Paris 6, 2008. https://tel.archives-ouvertes.fr/tel-00812436.
Trystram, Denis. "Quelques résultats de complexité en algorithmique parallèle et systolique." Grenoble INPG, 1988. http://tel.archives-ouvertes.fr/tel-00009202.
Bhouri, Mounir. "Algorithmes adaptatifs parallèles à complexité réduite, application au filtrage adaptatif multi-canal." Paris 5, 1999. http://www.theses.fr/1999PA05S003.
Moataz, Fatima Zahra. "Vers des réseaux optiques efficaces et tolérants aux pannes : complexité et algorithmes." Thesis, Nice, 2015. http://www.theses.fr/2015NICE4077/document.
We study in this thesis optimization problems with applications in optical networks. The problems we consider are related to fault tolerance and efficient resource allocation, and the results we obtain mainly concern the computational complexity of these problems. The first part of this thesis is devoted to finding paths and disjoint paths. Finding a path is crucial in all types of networks in order to set up connections, and finding disjoint paths is a common approach used to provide some degree of protection against failures in networks. We study these problems under different settings. We first focus on finding paths and node- or link-disjoint paths in networks with asymmetric nodes, which are nodes with restrictions on their internal connectivity. Afterwards, we consider networks with star Shared Risk Link Groups (SRLGs), which are groups of links that might fail simultaneously due to a localized event. In these networks, we investigate the problem of finding SRLG-disjoint paths. The second part of this thesis focuses on the problem of Routing and Spectrum Assignment (RSA) in Elastic Optical Networks (EONs). EONs are proposed as the new generation of optical networks, aiming at an efficient and flexible use of optical resources. RSA is the key problem in EONs; it deals with allocating resources to requests under multiple constraints. We first study the static version of RSA in tree networks. Afterwards, we examine a dynamic version of RSA in which a non-disruptive spectrum defragmentation technique is used. Finally, we present in the appendix another problem that has been studied during this thesis.
Abelard, Simon. "Comptage de points de courbes hyperelliptiques en grande caractéristique : algorithmes et complexité." Thesis, Université de Lorraine, 2018. http://www.theses.fr/2018LORR0104/document.
Counting points on algebraic curves has drawn a lot of attention due to its many applications, from number theory and arithmetic geometry to cryptography and coding theory. In this thesis, we focus on counting points on hyperelliptic curves over finite fields of large characteristic p. In this setting, the most suitable algorithms are currently those of Schoof and Pila, because their complexities are polynomial in log q. However, their dependency on the genus g of the curve is exponential, and this is already painful even in genus 3. Our contributions mainly consist of establishing new complexity bounds with a smaller dependency on g in the exponent of log p. For hyperelliptic curves, previous work showed that this exponent was quasi-quadratic, and we reduced it to a linear dependency. Restricting to more special families of hyperelliptic curves with explicit real multiplication (RM), we obtained a constant bound for this exponent. In genus 3, we proposed an algorithm based on those of Schoof and Gaudry-Harley-Schost whose complexity is prohibitive in general, but turns out to be reasonable when the input curves have explicit RM. In this more favorable case, we were able to count points on a hyperelliptic curve defined over a 64-bit prime field.
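For contrast with the Schoof–Pila-type algorithms discussed above, whose cost is polynomial in log p, the naive baseline enumerates every x in F_p and counts the square roots of f(x), costing O(p) field operations. A sketch with hypothetical names:

```python
def count_affine_points(f_coeffs, p):
    """Naively count affine points of y^2 = f(x) over F_p, p an odd prime.

    f_coeffs lists the coefficients of f from highest degree down to constant.
    """
    def f(x):
        r = 0
        for c in f_coeffs:          # Horner evaluation mod p
            r = (r * x + c) % p
        return r
    squares = {(y * y) % p for y in range(p)}
    total = 0
    for x in range(p):
        v = f(x)
        if v == 0:
            total += 1              # single point with y = 0
        elif v in squares:
            total += 2              # two square roots, +y and -y
    return total
```

For example, y^2 = x^3 + x over F_5 (coefficients [1, 0, 1, 0]) has 3 affine points; the linear-in-p cost is exactly what makes polynomial-in-log-p algorithms necessary over 64-bit prime fields.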
Chapelle, Mathieu. "Décompositions de graphes : quelques limites et obstructions." Phd thesis, Université d'Orléans, 2011. http://tel.archives-ouvertes.fr/tel-00659666.
Lavault, Christian. "Algorithmique et complexité distribuées : applications à quelques problèmes fondamentaux de complexité, protocoles distribués à consensus, information globale, problèmes distribués d'élection et de routage." Paris 11, 1987. http://www.theses.fr/1987PA112392.
Cornet, Alexis. "Algorithmes et résultats de complexité pour des problèmes de graphes avec contraintes additionnelles." Thesis, Université Clermont Auvergne (2017-2020), 2018. http://www.theses.fr/2018CLFAC034/document.
Domination problems (dominating set, independent dominating set, ...) as well as covering problems (vertex cover, Steiner tree, ...) are NP-complete. However, for most of these problems it is always possible to construct a (possibly bad) solution in polynomial time, or at least to determine whether a solution exists. These problems originally came from industry, but they are simplified models of real-life problems. We add constraints modeling plausible practical requirements: conflicts, which are pairs of elements that cannot appear simultaneously in a solution (modeling various incompatibilities); connectivity in a second graph (elements of the solution must be able to communicate, and the communication links form a second graph); and obligations, which are subsets of interdependent vertices that must be added to a solution simultaneously. We do not aim to model a specific real-world problem, but to study how these plausible constraints affect the complexity of the studied problems. We will see that, in many cases, even determining the existence of a solution (regardless of its size) becomes hard. The firefighter problem models firefighters aiming to contain a fire spreading turn by turn in a (possibly infinite) graph. We study this problem with the addition of movement constraints for the firefighters (a limited moving speed between turns). We will see that, most of the time, this constraint increases the number of firefighters necessary to contain the fire, but does not trigger changes as major as the constraints studied in the other problems.
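The conflict constraint described above can already make mere existence non-trivial: a graph may admit many dominating sets yet no conflict-free one. A brute-force sketch (hypothetical names, exponential time, for illustration only) that searches for a smallest dominating set avoiding every conflicting pair:

```python
from itertools import combinations

def min_conflict_free_dominating_set(adj, conflicts):
    """Smallest dominating set of adj containing no conflicting pair.

    adj: vertex -> set of neighbours; conflicts: iterable of vertex pairs.
    Returns None when no conflict-free dominating set exists at all.
    """
    vertices = list(adj)
    for k in range(len(vertices) + 1):   # increasing size: first hit is minimum
        for S in combinations(vertices, k):
            s = set(S)
            if any({u, v} <= s for u, v in conflicts):
                continue                 # a forbidden pair appears in S
            if all(v in s or s & adj[v] for v in vertices):
                return s
    return None
```

On two disjoint edges ab and cd with every cross pair in conflict, any dominating set must pick one endpoint on each side, so every candidate contains a conflicting pair and the search correctly reports that no solution exists.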