Doctoral dissertations on the topic "Réduction de la complexité de calcul"

Listed below are the top 50 doctoral dissertations on the topic "Réduction de la complexité de calcul".
Rouat, Valérie. "Validité de l'approche classification dans la réduction statistique de la complexité de #SAT". Rennes 1, 1999. http://www.theses.fr/1999REN10021.
Barbanchon, Régis. "Réductions fines entre problèmes NP-complets : linéarité, planarité, parcimonie, et minimalité logique". Caen, 2003. http://www.theses.fr/2003CAEN2066.
Berthomieu, Jérémy. "Contributions à la résolution des systèmes algébriques : réduction, localisation, traitement des singularités ; implantations". Phd thesis, Palaiseau, Ecole polytechnique, 2011. https://theses.hal.science/docs/00/67/19/68/PDF/Main.pdf.
This PhD thesis deals with some particular aspects of the resolution of algebraic systems. Firstly, we introduce a way of minimizing the number of additive variables appearing in an algebraic system. For this, we make use of two invariants of varieties introduced by Hironaka: the ridge and the directrix. Then, we propose fast arithmetic routines, the so-called relaxed routines, for p-adic integers. These routines allow us to solve an algebraic system with rational coefficients efficiently and locally, i.e. over the p-adic integers. In a fourth part, we are interested in the factorization of a bivariate polynomial, which is at the core of the decomposition of hypersurfaces into irreducible components. We propose an algorithm reducing the factorization of the input polynomial to that of a polynomial whose dense size is essentially equivalent to the convex-dense size of the input polynomial. In the last part, we consider the average-case solving of real algebraic systems. We design a probabilistic algorithm computing an approximate complex zero of the real algebraic system given as input.
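The thesis develops fast *relaxed* arithmetic for p-adic integers; as a much simpler illustration of what "solving locally over the p-adic integers" means, here is the classical Newton-Hensel lifting iteration, a textbook sketch rather than the relaxed algorithms of the thesis:

```python
# Classical Newton-Hensel lifting: from a simple root of f mod p,
# compute a root of f modulo p^(2^k) by doubling the precision at
# each step.  (Textbook iteration, for illustration only.)

def hensel_lift(f, df, root, p, iterations):
    """Lift `root` with f(root) = 0 (mod p) and df(root) != 0 (mod p)."""
    modulus = p
    x = root % p
    for _ in range(iterations):
        modulus *= modulus                       # precision doubles: p^m -> p^(2m)
        # Newton step x <- x - f(x)/f'(x), computed modulo the new precision
        x = (x - f(x) * pow(df(x), -1, modulus)) % modulus
    return x, modulus

# Example: a 7-adic square root of 2, starting from 3^2 = 9 = 2 (mod 7).
f = lambda x: x * x - 2
df = lambda x: 2 * x
x, m = hensel_lift(f, df, 3, 7, 4)
print(x * x % m)  # -> 2
```

After four iterations the approximation is correct modulo 7^16; each step is one modular inversion and a few multiplications.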
Berthomieu, Jérémy. "Contributions à la résolution des systèmes algébriques : réduction, localisation, traitement des singularités ; implantations". Phd thesis, Ecole Polytechnique X, 2011. http://pastel.archives-ouvertes.fr/pastel-00670436.
Stehlé, Damien. "Algorithmique de la réduction de réseaux et application à la recherche de pires cas pour l'arrondi de fonctions mathématiques". Phd thesis, Université Henri Poincaré - Nancy I, 2005. http://tel.archives-ouvertes.fr/tel-00011150.
Pełny tekst źródłaplusieurs domaines de l'algorithmique, en cryptographie et en théorie
algorithmique des nombres par exemple. L'objet du présent mémoire est dual : nous améliorons les algorithmes de réduction des réseaux,
et nous développons une nouvelle application dans le domaine
de l'arithmétique des ordinateurs. En ce qui concerne l'aspect algorithmique, nous nous intéressons aux cas des petites dimensions (en dimension un, où il s'agit du calcul de pgcd, et aussi en dimensions 2 à 4), ainsi qu'à la description d'une nouvelle variante de l'algorithme LLL, en dimension quelconque. Du point de vue de l'application, nous utilisons la méthode
de Coppersmith permettant de trouver les petites racines de polynômes modulaires multivariés, pour calculer les pires cas pour l'arrondi des fonctions mathématiques, quand la fonction, le mode d'arrondi, et la précision sont donnés. Nous adaptons aussi notre technique aux mauvais cas simultanés pour deux fonctions. Ces deux méthodes sont des pré-calculs coûteux, qui une fois
effectués permettent d'accélérer les implantations des fonctions mathématiques élémentaires en précision fixée, par exemple en double précision.
La plupart des algorithmes décrits dans ce mémoire ont été validés
expérimentalement par des implantations, qui sont
disponibles à l'url http://www.loria.fr/~stehle.
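As an illustration of lattice reduction in the small dimensions the abstract mentions, here is the Lagrange-Gauss algorithm for rank-2 integer lattices, the two-dimensional analogue of the gcd computation (a textbook sketch, unrelated to the thesis code):

```python
# Lagrange-Gauss reduction of a rank-2 integer lattice basis.
# Returns a basis whose first vector is a shortest nonzero lattice
# vector: the 2-dimensional analogue of the Euclidean gcd algorithm.

def gauss_reduce(u, v):
    dot = lambda a, b: a[0] * b[0] + a[1] * b[1]
    if dot(u, u) > dot(v, v):
        u, v = v, u
    while True:
        # subtract the best integer multiple of the shorter vector
        m = round(dot(u, v) / dot(u, u))
        v = (v[0] - m * u[0], v[1] - m * u[1])
        if dot(v, v) >= dot(u, u):
            return u, v
        u, v = v, u

u, v = gauss_reduce((1, 0), (1000, 1))
print(u, v)  # -> (1, 0) (0, 1)
```

On this skewed basis a single size-reduction step already recovers the orthogonal basis of Z^2.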
Shahkarami, Abtin. "Complexity reduction over bi-RNN-based Kerr nonlinearity equalization in dual-polarization fiber-optic communications via a CRNN-based approach". Electronic Thesis or Diss., Institut polytechnique de Paris, 2022. http://www.theses.fr/2022IPPAT034.
The impairments arising from the Kerr nonlinearity in optical fibers limit the achievable information rates in fiber-optic communication. Unlike linear effects, such as chromatic dispersion and polarization-mode dispersion, which can be compensated via relatively simple linear equalization at the receiver, the computational complexity of conventional nonlinearity mitigation techniques, such as digital backpropagation, can be substantial. Neural networks have recently attracted attention in this context for low-complexity nonlinearity mitigation in fiber-optic communications. This Ph.D. dissertation investigates recurrent neural networks for efficiently compensating the nonlinear channel impairments in dual-polarization long-haul fiber-optic transmission. We present a hybrid convolutional recurrent neural network (CRNN) architecture, comprising a convolutional neural network (CNN)-based encoder followed by a recurrent layer working in tandem. The CNN-based encoder efficiently represents the short-term channel memory arising from chromatic dispersion, while transitioning the signal to a latent space with fewer relevant features. The subsequent recurrent layer is implemented as a unidirectional vanilla RNN, responsible for capturing the long-range interactions neglected by the CNN encoder. We demonstrate that the proposed CRNN achieves the performance of state-of-the-art equalizers in optical fiber communication, with significantly lower computational complexity depending on the system model. Finally, the performance-complexity trade-off is established for a number of models, including multi-layer fully-connected neural networks, CNNs, bidirectional recurrent neural networks, bidirectional long short-term memory (bi-LSTM), bidirectional gated recurrent units, convolutional bi-LSTM models, and the suggested hybrid model.
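The CNN-encoder-plus-unidirectional-RNN layout described in the abstract can be sketched at the level of data flow. The sizes below are hypothetical and the weights random; this only shows how a convolutional encoder compresses local (short-memory) structure before a vanilla RNN scans for long-range interactions, not the trained equalizer from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)
T, d_in, d_enc, d_hid, k = 64, 4, 8, 16, 5    # hypothetical sizes

x = rng.standard_normal((T, d_in))            # received symbol features
W_conv = rng.standard_normal((k, d_in, d_enc)) * 0.1

# CNN encoder: one 'same'-padded 1D convolution + ReLU,
# mapping d_in input features to d_enc latent features per time step
pad = k // 2
xp = np.pad(x, ((pad, pad), (0, 0)))
enc = np.stack([np.maximum(
          sum(xp[t + j] @ W_conv[j] for j in range(k)), 0.0)
      for t in range(T)])                      # shape (T, d_enc)

# Unidirectional vanilla RNN: h_t = tanh(W_h h_{t-1} + W_x e_t)
W_h = rng.standard_normal((d_hid, d_hid)) * 0.1
W_x = rng.standard_normal((d_enc, d_hid)) * 0.1
h = np.zeros(d_hid)
states = []
for e in enc:
    h = np.tanh(h @ W_h + e @ W_x)
    states.append(h)
states = np.stack(states)
print(states.shape)  # -> (64, 16)
```

A bidirectional layer would double the recurrent cost; the hybrid keeps the recurrence one-directional because the encoder has already absorbed the short channel memory.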
Guiraud, Maël. "Ordonnancement periodiques de messages pour minimiser la latence dans les réseaux dans un contexte 5G et au delà". Electronic Thesis or Diss., université Paris-Saclay, 2021. http://www.theses.fr/2021UPASG034.
This thesis is the result of a collaboration between the DAVID Laboratory and Nokia Bell Labs France. The original idea is to find algorithmic solutions to deterministically manage periodic flows in networks, in order to control and minimize the transmission time, called latency. One of the objectives of 5G (C-RAN, for Cloud Radio Access Network) is to centralize the computation units of the radio antennas of telecommunications networks (called the Radio Access Network) in the same computer center (the Cloud). The network between the computing center and the antennas must be able to satisfy the latency constraints imposed by the protocols. We define the problem of finding a periodic scheduling for messages so that they never compete for the same resource, and prove that the different variants of the problem studied are NP-complete. We first study the problem for a particular topology in which all the flows share the same link, proposing polynomial algorithms of increasing sophistication, as well as FPT algorithms that find a solution when the number of routes is reasonable, which is the case for C-RAN networks. Since the algorithms developed in this first part are not directly adaptable to more general topologies, we then propose a canonical form of the problem which allows us to define an efficient neighborhood notion for local search heuristics (hill climbing, tabu search, simulated annealing). We use this canonical form to define an efficient branch-and-bound algorithm when the number of routes is moderate. We also propose a performance evaluation of the proposed solutions compared to current flow management solutions, and show that our model is feasible in practice thanks to new equipment under development.
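A toy version of the single-link case makes the contention-free periodic scheduling problem concrete. In this hypothetical instance all flows share one link with a common period P, flow i occupies the link during [o_i, o_i + length_i) modulo P, and a brute-force search over offsets (exponential in the number of flows, which is why the thesis needs FPT and heuristic methods) finds a collision-free schedule when one exists:

```python
from itertools import product

def find_schedule(lengths, period):
    """Offsets making the periodic transmissions pairwise disjoint, or None."""
    def busy(offset, length):
        return {(offset + t) % period for t in range(length)}
    for offsets in product(range(period), repeat=len(lengths)):
        slots = set()
        ok = True
        for o, l in zip(offsets, lengths):
            b = busy(o, l)
            if slots & b:          # two flows would use the link at once
                ok = False
                break
            slots |= b
        if ok:
            return offsets
    return None

sched = find_schedule([3, 3, 2], 8)
print(sched)  # -> (0, 3, 6)
```

Here the three flows exactly tile the period of 8 slots; with lengths summing to more than the period, the function correctly returns None.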
Cagniart, Nicolas. "Quelques approches non linéaires en réduction de complexité". Thesis, Sorbonne université, 2018. http://www.theses.fr/2018SORUS194/document.
Model reduction methods provide a general framework for substantially reducing the computational cost of numerical simulations. In this thesis, we propose to extend the scope of these methods. The common point of the topics discussed here is the attempt to go beyond the standard linear "reduced basis" framework, which only deals with cases where the solution manifold has a small Kolmogorov width. We shall see how truncating, translating, rotating, stretching, compressing, etc., and then recombining the solutions can sometimes help to overcome the problem when this Kolmogorov width is not small. We also discuss the need for tailor-made stabilisation methods for the reduced framework.
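The standard linear "reduced basis" setting the abstract starts from can be sketched in a few lines: collect solution snapshots, extract a low-rank basis by SVD, and check how well the basis captures an unseen solution. The parametrized family below is a toy example chosen so that the Kolmogorov width vanishes at dimension 2; it is not taken from the thesis:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 200)

# Snapshots u(x; mu) = sin(pi*(x+mu)) lie in the 2-dimensional space
# spanned by sin(pi x) and cos(pi x), so a 2-vector basis is exact.
snapshots = np.stack([np.sin(np.pi * (x + mu))
                      for mu in rng.uniform(0, 1, 30)], axis=1)

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :2]                       # reduced basis of size 2

u_new = np.sin(np.pi * (x + 0.123))    # solution for an unseen parameter
err = np.linalg.norm(u_new - basis @ (basis.T @ u_new))
print(err < 1e-8)  # -> True
```

When the solution manifold is a translated or stretched family instead, no small linear basis exists, which is exactly the regime the thesis addresses with nonlinear recombinations.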
Madet, Antoine. "Complexité implicite dans des Lambda -calculs concurrents". Paris 7, 2012. http://www.theses.fr/2012PA077222.
Controlling the resource consumption of programs is crucial: besides performance reasons, it has many applications in the field of computer security, where e.g. mobile or embedded systems dispose of limited amounts of resources. In this thesis, we develop static criteria to control the resource consumption of higher-order concurrent programs. Our starting point is the framework of Light Logics, which has been extensively studied to control the complexity of higher-order functional programs through the proofs-as-programs correspondence. The contribution of this thesis is to extend this framework to higher-order concurrent programs. More generally, this thesis fits in the research field of Implicit Computational Complexity, which aims at characterizing complexity classes by logical principles or language restrictions. The criteria that we propose are purely syntactic and are developed gradually to control the computational time of programs in a finer and finer way: first, we show how to guarantee the termination of programs (finite time); then, we show how to guarantee the termination of programs in elementary time; and last, we show how to guarantee the termination of programs in polynomial time. We also introduce type systems so that well-typed programs are guaranteed to terminate in bounded time and to return values. Finally, we show that the type systems capture some interesting concurrent programs that iterate functions producing side effects over inductive data structures. In the last part, we study an alternative semantic method to control the resource consumption of higher-order imperative programs. The method is based on Dal Lago and Hofmann's quantitative realizability framework and allows various complexity bounds to be obtained in a uniform way. This last part is joint work with Aloïs Brunel.
Ayad, Ali. "Complexité de résolution de systèmes algébriques paramétrés". Rennes 1, 2006. https://tel.archives-ouvertes.fr/tel-00127383.
Senot, Maxime. "Modèle géométrique de calcul : fractales et barrières de complexité". Phd thesis, Université d'Orléans, 2013. http://tel.archives-ouvertes.fr/tel-00870600.
Lafitte, Grégory. "Calculs et infinis". Lyon, École normale supérieure (sciences), 2002. http://www.theses.fr/2002ENSL0239.
We introduce a hierarchy of notions of generalized computation. The idea is to amalgamate in one concept all that we could qualify as "computability", to study those notions, and ultimately to have transfer theorems between them. In some cases, those notions also correspond to computation models obtained by means of concrete machines. We obtain in this way a new computation model, "infinite time cellular automata", which is more homogeneous than Turing machines (there is no head). The notion of computational complexity (according to a certain notion of computation) is also generalized and studied. Finally, we obtain notions of random reals that are finer than the classical notion of Martin-Löf (or Kolmogorov) and yet more and more refinable. All of this leads to a notion of generalized Kolmogorov complexity which opens up interesting prospects.
Pégny, Maël. "Sur les limites empiriques du calcul : calculabilité, complexité et physique". Thesis, Paris 1, 2013. http://www.theses.fr/2013PA010673/document.
Recent years have seen a surge of interest in non-standard computational models, inspired by physical, biological or chemical phenomena. The exact properties of some of these models have been a topic of somewhat heated discussion: what do they compute? And how fast do they compute? The stakes of these questions were heightened by the claim that these models would violate the accepted limits of computation, by violating the Church-Turing Thesis or the Extended Church-Turing Thesis. To answer these questions, the physical realizability of some of those models, or lack thereof, has often been put at the center of the argument. It thus seems that empirical considerations have been introduced into the very foundations of computability and computational complexity theory, both subjects previously considered purely a priori parts of logic and computer science. Consequently, this dissertation is dedicated to the following question: do computability and computational complexity theory rest on empirical foundations? If yes, what are these foundations? We first examine the precise meaning of those limits of computation, and articulate a philosophical conception of computation able to make sense of this variety of models. We then answer the first question in the affirmative, through a careful examination of current debates around non-standard models. We show the various difficulties surrounding the second question, and study how they stem from the complex translation of computational concepts into physical limitations.
LACROSE, Véronique. "Réduction de la complexité des contrôleurs flous : applications à la commande multivariable". Phd thesis, INSA de Toulouse, 1997. http://tel.archives-ouvertes.fr/tel-00010030.
Lanco, Antoine. "Stratégies pour la réduction forte". Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG097.
The semantics of a programming language, especially a functional one, usually leaves underspecified the order in which the various operations are performed. Some usual strategies, such as call by value or lazy evaluation, already benefit from a large theoretical corpus as well as numerous efficient implementations. This corpus, however, mainly focuses on program evaluation, that is, producing a value. This is the framework of weak evaluation, in which no evaluation is performed inside a function that is not fully applied. Indeed, in a functional language, the closure that represents such a function is already considered a value. Yet, several situations necessitate going further and looking at normalization, where reduction is performed as far as possible, including in objects that are already values. This is called strong reduction. Among other things, strong evaluation can be used as an optimization pass during compilation, in order to partially evaluate the body of a function depending on the already known values of some of its arguments. Strong evaluation is also used inside proof assistants, for example in Coq's conversion machine (lazy strategy) and reduction machine (call by value). Despite the importance of these applications, the theoretical corpus that extends the usual strategies from weak evaluation to strong reduction is far less comprehensive than the corpus for weak evaluation. It either focuses on some specific extensions or on intermediate frameworks such as open evaluation. These works, very recent for the most part, do not yet bring any satisfying explanation of the way strong reduction behaves, for example the size explosion of some terms. As a consequence, existing implementations often rely on ad hoc strategies to alleviate the impact of these issues. This thesis defines a call-by-need calculus with strong reduction, that is, normalization strategies that extend the usual call-by-need evaluation strategies. The calculus uses explicit substitutions to eliminate certain term size explosion effects. Furthermore, the strategies offer early detection of certain necessary redexes, thereby reducing the number of duplicated computations. Our calculus also enjoys correctness (the results obtained correspond to normal forms of the lambda calculus) and completeness (for any normalizable term, our strategies reach the normal form) properties. We have formalized our calculus and proved its correctness with the Abella proof assistant. Within this framework, we have isolated a strategy that consistently produces the shortest reduction sequences, and defined and implemented an abstract machine that executes this strategy. The implementation allowed us to conduct numerous tests, which confirm the expected gains over conventional evaluation strategies.
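To make "reducing under lambdas" concrete, here is a naive strong normalizer for lambda terms in de Bruijn notation. It reaches beta-normal forms, the goal the abstract describes, but deliberately has none of the sharing of the call-by-need calculus the thesis builds (it may duplicate work, which is precisely the size-explosion issue the thesis addresses):

```python
# Terms: ('var', n) | ('lam', body) | ('app', f, a), de Bruijn indices.

def shift(t, d, c=0):
    """Add d to every free variable index >= c."""
    tag = t[0]
    if tag == 'var':
        return ('var', t[1] + d) if t[1] >= c else t
    if tag == 'lam':
        return ('lam', shift(t[1], d, c + 1))
    return ('app', shift(t[1], d, c), shift(t[2], d, c))

def subst(t, s, n=0):
    """Capture-avoiding substitution of s for variable n in t."""
    tag = t[0]
    if tag == 'var':
        return shift(s, n) if t[1] == n else ('var', t[1] - (t[1] > n))
    if tag == 'lam':
        return ('lam', subst(t[1], s, n + 1))
    return ('app', subst(t[1], s, n), subst(t[2], s, n))

def normalize(t):
    """Full (strong) normal-order normalization to beta-normal form."""
    tag = t[0]
    if tag == 'var':
        return t
    if tag == 'lam':
        return ('lam', normalize(t[1]))   # strong: reduce under the binder
    f, a = normalize(t[1]), t[2]
    if f[0] == 'lam':                     # beta-redex: contract it
        return normalize(subst(f[1], a))
    return ('app', f, normalize(a))

# Church numeral 2 applied to itself normalizes to Church numeral 4 (2^2).
two = ('lam', ('lam', ('app', ('var', 1), ('app', ('var', 1), ('var', 0)))))
four = ('lam', ('lam', ('app', ('var', 1), ('app', ('var', 1),
        ('app', ('var', 1), ('app', ('var', 1), ('var', 0)))))))
print(normalize(('app', two, two)) == four)  # -> True
```

Note that a weak evaluator would stop at the outermost closure; only strong reduction exposes the fully applied Church numeral.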
Ziegler, Yvan. "Calcul effectif sur les courbes hyperelliptiques à réduction semi-stable". Thesis, Rennes 1, 2019. http://www.theses.fr/2019REN1S023/document.
In this thesis we study the weight filtration on the De Rham cohomology of a hyperelliptic curve C defined over a finite extension of Qp and with semi-stable reduction. The goal is to provide algorithms computing explicitly, given an equation of C, the bases of the weight filtration's spaces as well as the matrix of the Poincaré pairing. In the first chapter we introduce tools related to the algebraic De Rham cohomology of the hyperelliptic curve. We build a suitable basis of the De Rham cohomology of C, we establish explicit formulae for the cup-product and the trace, and we give an algorithm computing the matrix of the Poincaré pairing. The second chapter is dedicated to the explicit description of the morphism induced on the cohomology spaces by the inclusion of the tube of a double point. It is the main ingredient that allows us to describe the weight filtration on the De Rham cohomology of C. To achieve that, we use the framework of Berkovich analytic geometry. We introduce and then develop the notion of standard residually singular points and the notion of apparent form of the curve's equation. In the third and last chapter, we synthesize all the results and complete the description of the weight filtration. Finally, we give the algorithms that compute the bases of Fil0 and Fil1. For each of our algorithms, we propose a Sage implementation and concrete examples on curves of genus one and two.
Nesme, Vincent. "Complexité en requêtes et symétries". Phd thesis, Ecole normale supérieure de lyon - ENS LYON, 2007. http://tel.archives-ouvertes.fr/tel-00156762.
Pełny tekst źródłaproblèmes symétriques, dans les cadres du calcul probabiliste classique
et du calcul quantique.
Il est montré, dans le cas quantique, une application de la méthode de
bornes inférieures dite "polynomiale" au calcul de la complexité en
requêtes des problèmes de sous-groupes cachés abéliens, via la technique de "symétrisation".
Dans le cas du calcul probabiliste, sous une hypothèse de "symétrie
transitive" des problèmes, il est donné une formule combinatoire
permettant de calculer la complexité en requêtes exacte du meilleur
algorithme non-adaptatif. De plus, il est mis en évidence que sous
certaines hypothèses de symétrie, ce meilleur algorithme non-adaptatif
est optimal même parmi les algorithmes probabilistes plus généraux, ce qui donne pour la classe de problèmes correspondante une expression exacte de la complexité en requêtes.
Henrio, Ludovic. "Asynchronous object calculus : confluence and determinacy". Nice, 2003. http://www.theses.fr/2003NICE4047.
The objective of this thesis is to design an object calculus that allows one to write parallel and distributed applications, particularly over wide-area networks, while ensuring good properties. This calculus is named ASP: Asynchronous Sequential Processes. The main characteristics of ASP are asynchronous communications, futures, and a sequential execution within each process. ASP presents strong confluence and determinism properties, proved in a context as general as possible within this thesis. A first design decision is the absence of sharing: objects live in disjoint activities. An activity is a set of objects managed by a unique process and a unique active object. Active objects are accessible through global/distant references. They communicate through asynchronous method calls with futures. A future is a global reference representing a result not yet computed. This thesis models those aspects, their main properties, and the consequences of these mechanisms on the deterministic behavior of programs. The main result consists in a confluence property and its application to the identification of a set of programs behaving deterministically. From a practical point of view, ASP can also be considered as a model of the ProActive library. This library provides tools for developing parallel and distributed applications in Java.
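The "future" mechanism described in the abstract can be shown in miniature with plain Python futures: an asynchronous call returns immediately with a handle to a not-yet-computed result, and the caller only blocks when it actually touches the value. This illustrates the concept only, not the ASP calculus or the ProActive library itself:

```python
from concurrent.futures import ThreadPoolExecutor

def slow_square(n):
    # stands in for a remote method of an active object
    return n * n

with ThreadPoolExecutor() as pool:
    fut = pool.submit(slow_square, 12)   # asynchronous call: returns a future
    # ... the caller is free to do other work here ...
    print(fut.result())                  # wait-by-necessity: blocks, then 144
```

In ASP the determinism results say that, under the calculus's conditions, the order in which such futures are resolved cannot change the observable outcome.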
Trystram, Denis. "Quelques résultats de complexité en algorithmique parallèle et systolique". Grenoble INPG, 1988. http://tel.archives-ouvertes.fr/tel-00009202.
Spaenlehauer, Pierre-Jean. "Résolution de systèmes multi-homogènes et déterminantiels algorithmes - complexité - applications". Paris 6, 2012. http://www.theses.fr/2012PA066467.
Multivariate polynomial systems arising in engineering science often carry algebraic structures related to the problems they stem from. In particular, multi-homogeneous and determinantal structures, as well as boolean systems, can be met in a wide range of applications. A classical method to solve polynomial systems is to compute a Gröbner basis of the ideal associated to the system. This thesis provides new tools for solving such structured systems in the context of Gröbner basis algorithms. On the one hand, these tools bring forth new bounds on the complexity of the computation of Gröbner bases of several families of structured systems (bilinear systems, determinantal systems, critical point systems, boolean systems). In particular, they allow the identification of families of systems for which the complexity of the computation is polynomial in the number of solutions. On the other hand, this thesis provides new algorithms which take advantage of these algebraic structures to improve the efficiency of the Gröbner basis computation and of the whole solving process (multi-homogeneous systems, boolean systems). These results are illustrated by applications in cryptology (cryptanalysis of MinRank), in optimization, and in effective real geometry (critical point systems).
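The classical solving route the abstract refers to, computing a Gröbner basis of the ideal of the system, can be demonstrated with SymPy on a tiny two-variable system with a bilinear equation (a toy example, not an instance from the thesis):

```python
from sympy import groebner, symbols

x, y = symbols('x y')

# A lex Groebner basis triangularizes the system: it eliminates x,
# leaving a univariate polynomial in y whose roots can be back-substituted.
gb = groebner([x*y - 1, x + y - 3], x, y, order='lex')
print(gb.exprs)   # basis contains x + y - 3 and y**2 - 3*y + 1
```

The structured-system results of the thesis concern how large such computations get for families like bilinear or determinantal systems; here the elimination is immediate.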
Markey, Nicolas. "Logiques temporelles pour la vérification : expressivité, complexité, algorithmes". Orléans, 2003. http://www.theses.fr/2003ORLE2004.
Madet, Antoine. "Complexité Implicite de Lambda-Calculs Concurrents". Phd thesis, Université Paris-Diderot - Paris VII, 2012. http://tel.archives-ouvertes.fr/tel-00794977.
Bonfante, Guillaume. "Constructions d'ordres, analyse de la complexité". Vandoeuvre-les-Nancy, INPL, 2000. http://docnum.univ-lorraine.fr/public/INPL_T_2000_BONFANTE_G.pdf.
Our concern in this thesis is the study of orders, through the analysis of complexity within rewriting, but also through the construction of monotone λ-calculi in the framework of ordinal terms and subrecursive hierarchies. We show how to extract knowledge in terms of complexity from termination proofs in rewriting theory. This allows us to establish hierarchies characterizing complexity classes in terms of rewriting, so that one has an a priori complexity measure of functions with respect to their termination proofs. We study more specifically the case of the Knuth-Bendix ordering and of polynomial interpretations. Besides this, we engaged in another, more fundamental challenge, concerning the notion of ordering within typed λ-calculi. The interest comes from the fact that such a representation is necessary when one has to represent a structure with an ω-ary operation, that is, an operation that only applies to monotone sequences (which is the case of ordinal terms). We develop semantical tools, in particular a monoidal closed category on graphs, but also syntactical constructions, in particular a calculus where types are graphs, in which the traditional approach would be in terms of sets.
Vidal, Didier. "Nouvelles notions de réduction en lambda-calcul : Application à la réalisation d'un langage fonctionnel fondé sur la réduction forte". Nancy 1, 1989. http://www.theses.fr/1989NAN10488.
Brault-Baron, Johann. "De la pertinence de l’énumération : complexité en logiques". Caen, 2013. http://www.theses.fr/2013CAEN2056.
Beyond the decision of satisfiability problems, we investigate the problem of enumerating all their solutions. In a first part, we consider the enumeration problem in the framework of the propositional satisfiability problem. Creignou and Hébrard proved that the polynomial classes for the non-trivial SAT problem are exactly those for the enumeration problem. We give optimal enumeration algorithms for each of these classes, generalizing any non-trivial decision algorithm for the class. This suggests that enumeration, rather than decision, is the relevant problem in this case. In a second part, we simplify and complete some results of Bagan et al. that establish a strong connection between the tractability of a conjunctive query and a notion of hypergraph acyclicity. We establish similar results for the dual class of the class of conjunctive queries, thanks to a new algorithm. Finally, we generalize all these results through a single dichotomy for the enumeration problem of conjunctive signed queries, by generalizing a classical combinatorial result by Brouwer and Kolen. This dichotomy establishes a close connection between strong tractability for enumeration and strong tractability for decision.
Chapdelaine, Philippe. "Contribution à la théorie de la complexité algorithmique : problèmes de contraintes, complétude et résultats de classification, complexité structurelle". Caen, 2004. http://www.theses.fr/2004CAEN2042.
Taveneaux, Antoine. "Puissance logique et calculatoire de l'aléa algorithmique". Paris 7, 2013. http://www.theses.fr/2013PA077217.
The theory of algorithmic randomness studies the lack of structure that characterizes random objects. Kolmogorov complexity is a fundamental tool of this theory, and we study the characteristic properties of this function. In a second step, we investigate the possibility of extending the study to biased random bit sequences, asking whether precise knowledge of the bias changes the quality of the randomness we describe. We then focus on the logical power of random objects: what can be inferred from the (unprovable) fact that a sequence has no structure? Finally, we look at the possibility of computing a completion of arithmetic with a randomized algorithm.
Revol, Nathalie. "Complexité de l'évaluation parallèle de circuits arithmétiques". Grenoble INPG, 1994. http://tel.archives-ouvertes.fr/tel-00005109.
Nguyen, Hai-Nam. "Optimisation de la précision de calcul pour la réduction d'énergie des systèmes embarqués". Phd thesis, Université Rennes 1, 2011. http://tel.archives-ouvertes.fr/tel-00705141.
Otaño Aramendi, Nerea. "Réduction du coût de calcul pour la simulation du comportement mécanique de câbles". Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLC061.
The work presented in this dissertation is focused on the simulation of the mechanical behaviour of lift wire ropes. The aim of the work is to elaborate a method to simulate the mechanical behaviour of such wire ropes with low computational cost and sufficient accuracy. First of all, several methods to model or simulate wire ropes have been compared and their weak and strong points highlighted. Analytical and finite element methods have been compared with experimental tests. It was concluded that the analytical methods considered in this work have a lower computational cost than finite element methods, but the results obtained using them are not accurate enough to simulate lift wire ropes. Therefore, finite element methods have been considered the most appropriate to simulate these wire ropes. However, their computational cost is high, so methods to reduce it must be applied. In order to reduce the computational time, three types of methods have been considered: homogenization, metamodeling and model order reduction. The model order reduction technique was chosen as the most adequate method and was implemented in the wire rope finite element simulation program Multifil. Accurate results have been obtained; however, the computational cost needed by the initial simulations to get the snapshots used to define a reduced basis was too high for long wire ropes. To solve this problem, a sectionwise reduction method was proposed and implemented. This formulation takes advantage of the periodic structure of wire ropes: the reduced basis is identified only on a reference elementary section and used for all repetitive sections of a multi-section wire rope.
The computational cost induced by the matrix multiplications needed to transform the linear system of the initial problem into the linear system of the reduced problem was shown to remain too high, particularly in the context of solving a non-linear problem, for the global computational time to be significantly decreased using the proposed techniques. To overcome this difficulty, an additional technique, the so-called Discrete Empirical Interpolation Method (DEIM), was successfully implemented and tested, allowing a reduction of computation time by a factor of 4.
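The projection step underlying this whole approach can be sketched on a toy linear problem: precompute a reduced basis V from snapshots, then solve the small system (V^T A V) u_r = V^T f online instead of the full one. The diagonal operator below is a hypothetical stand-in, not the wire-rope model; it only shows where the offline/online cost split lies (and why, for non-linear problems, the V^T A(u) V products must themselves be tamed, e.g. by DEIM):

```python
import numpy as np

n = 400
A = np.diag(np.arange(1.0, n + 1))             # full-order operator (toy)

# Offline: snapshots for a few parameter values, then an SVD basis.
params = [1.0, 2.0, 5.0]
snapshots = np.stack([np.linalg.solve(A, mu * np.ones(n)) for mu in params],
                     axis=1)
V, _, _ = np.linalg.svd(snapshots, full_matrices=False)
V = V[:, :1]                 # solutions here are all collinear: rank 1
A_r = V.T @ A @ V            # small reduced operator, built once

# Online: solve the 1x1 reduced system for an unseen parameter.
mu_new = 3.7
f_new = mu_new * np.ones(n)
u_r = np.linalg.solve(A_r, V.T @ f_new)
u_full = np.linalg.solve(A, f_new)
rel_err = np.linalg.norm(V @ u_r - u_full) / np.linalg.norm(u_full)
print(rel_err < 1e-8)  # -> True
```

In the sectionwise method of the thesis, the analogue of V is identified once on a reference elementary section and reused along the rope.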
Ceroi, Stéphan. "Ordres et géométrie plane : application au nombre de sauts". Montpellier 2, 2000. http://www.theses.fr/2000MON20114.
Leroy, Julien. "Contribution à la résolution de la conjecture S-adique". Amiens, 2012. http://www.theses.fr/2012AMIE0101.
This thesis is about the S-adic conjecture, which supposes the existence of a stronger notion of S-adicity that would be equivalent to having a sub-linear factor complexity. A sequence w over a finite alphabet A is said to be S-adic if S is a set of morphisms that allows w to be indefinitely de-substituted. Without additional conditions, the factor complexity of an S-adic sequence might be arbitrarily large. However, many well-known families of sequences have a sub-linear complexity and admit S-adic expansions with a finite set S. The S-adic conjecture is therefore a natural attempt to link these two notions. In this thesis, we study in detail a method based on Rauzy graphs that provides an S-adic expansion of uniformly recurrent sequences with sub-linear complexity. In this way we are able to determine some necessary (but not sufficient) conditions on these expansions. In the particular case of uniformly recurrent sequences whose first difference of complexity is bounded by two, the method is studied in even greater detail, which makes the necessary conditions sufficient.
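A toy illustration of these notions, using the single Fibonacci morphism (a classical S-adic example, not specific to this thesis): iterating it yields a sequence whose factor complexity is p(n) = n + 1, the lowest possible for an aperiodic sequence and in particular sub-linear.

```python
# S-adic generation with S = {fib}, the Fibonacci morphism 0 -> 01, 1 -> 0.
def apply_morphism(sigma, word):
    return "".join(sigma[c] for c in word)

fib = {"0": "01", "1": "0"}
w = "0"
for _ in range(15):                 # long prefix of the Fibonacci word
    w = apply_morphism(fib, w)

def factor_complexity(word, n):
    """Number of distinct factors (substrings) of length n in word."""
    return len({word[i:i + n] for i in range(len(word) - n + 1)})

# Sturmian sequences such as the Fibonacci word satisfy p(n) = n + 1.
complexities = [factor_complexity(w, n) for n in range(1, 11)]
```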
Simoncini, David. "Sélection topologique dans les algorithmes évolutionnaires cellulaires : étude du compromis exploration exploitation". Nice, 2009. http://www.theses.fr/2009NICE4079.
Evolutionary algorithms are stochastic optimization methods manipulating a population of solutions. Their behaviour is inspired by Darwin's theory of evolution. The combined application of stochastic operators and selection mechanisms renews the population by exploring the search space and exploiting the solutions already found. The convergence speed of an evolutionary algorithm relies on its ability to generate efficient solutions by leading the search toward promising regions of the search space, and on the ability of solutions to survive according to their fitness, defined by the selective pressure. The latter allows dealing with the exploration / exploitation trade-off and prevents the algorithm from converging prematurely toward a local optimum. Cellular evolutionary algorithms introduce a notion of geographical neighborhood by embedding the solutions on a grid. This adds a topological level between the phenotypic and genotypic ones. In this context, we define new selection methods that allow controlling the topology and obtaining complex dynamics thanks to a single continuous and bounded parameter. Instead of restricting solutions to evolve on a uniform grid, we propose to enhance the topology with notions of anisotropy and locality. We study the influence of the topological selection on the preservation of genotypic diversity. Experiments on two classes of NP-complete problems show that taking the topological level into account leads to a fine equilibrium between exploration and exploitation. In order to study the search dynamics, and especially to analyze the efficiency of the observed trade-offs, we define a model based on the notion of punctuated equilibria. Finally, we propose adaptive algorithms with the intent of dynamically controlling the selective pressure and thus managing the relation between exploration and exploitation phases without any knowledge of the studied problems.
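A minimal sketch of the cellular idea on the OneMax problem, with local selection on a ring topology; the parameters and operators here are illustrative choices of ours, not those studied in the thesis:

```python
import random

# Toy cellular evolutionary algorithm: individuals live on a ring and mate
# only within a local neighborhood (the topological level described above).
random.seed(1)
N, L, RADIUS, GENS = 20, 32, 1, 60

def fitness(ind):
    return sum(ind)                  # OneMax: count of 1-bits

pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(N)]
for _ in range(GENS):
    new_pop = []
    for i in range(N):
        # Local selection: best individual among the cell and its neighbors.
        hood = [pop[(i + d) % N] for d in range(-RADIUS, RADIUS + 1)]
        parent = max(hood, key=fitness)
        child = parent[:]
        child[random.randrange(L)] ^= 1          # one-bit mutation
        # Replace the cell only if the child is at least as fit.
        new_pop.append(child if fitness(child) >= fitness(pop[i]) else pop[i])
    pop = new_pop

best = max(map(fitness, pop))
```

The small neighborhood radius slows the spread of good genotypes across the grid, which is precisely the exploration/exploitation lever the topological selection methods of the thesis control with a continuous parameter.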
Afif, Mohammed. "Approches algorithmiques pour la résolution de certains problèmes NP-Complets". Paris 9, 1999. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=1999PA090026.
Monnot, Jérôme. "Familles d'instances critiques et approximation polynomiale". Paris 9, 1998. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=1998PA090028.
Grunspan, Cyril. "Théorie de Toda discrète et espaces homogènes quantiques". Palaiseau, École polytechnique, 2000. http://www.theses.fr/2000EPXX0042.
Turuani, Mathieu. "Sécurité des protocoles cryptographiques : décidabilité et complexité". Nancy 1, 2003. http://www.theses.fr/2003NAN10223.
Letombe, Florian. "De la validité des formules booléennes quantifiées : étude de complexité et exploitation de classes traitables au sein d'un prouveur QBF". Artois, 2005. http://www.theses.fr/2005ARTO0407.
This thesis is centered on QBF, the validity problem for quantified Boolean formulae: given a formula of the form Σ = ∀y1 ∃x1 . . . ∀yn ∃xn . φ, where φ is a propositional formula built on {x1, y1, . . . , xn, yn} (the matrix of Σ), is it the case that for each assignment of a truth value to y1 in φ, there exists an assignment of a truth value to x1 in φ, . . . , for each assignment of a truth value to yn in φ, there exists an assignment of a truth value to xn in φ that makes φ valid? Since QBF is computationally hard (PSPACE-complete), it is important to point out specific cases for which the practical solving of QBF can be achieved. In this thesis, we have considered restrictions of QBF based on the matrices of instances. Our main purpose was (1) to identify the complexity of QBF for some restrictions not considered so far and (2) to explore how to take advantage of polynomial classes for QBF within a general QBF solver in order to increase its efficiency. As to the first point, we have shown that QBF, when restricted to the target fragments for knowledge compilation studied in (Darwiche & Marquis 2002), remains typically PSPACE-complete. We have shown a close connection between this study and the compilability issue for QBF. As to the second point, we have presented a new branching heuristic Δ which aims at promoting the generation of quantified renamable Horn formulae in the search tree developed by a Q-DPLL procedure for QBF. We have obtained experimental results showing that, in practice, state-of-the-art QBF solvers, except our solver Qbfl, are unable to solve quantified Horn instances or quantified renamable Horn instances of medium size. This observation is sufficient to show the interest of our approach. Our experiments have also shown that the heuristic Δ improves the efficiency of Qbfl, even if this solver does not appear as one of the best QBF solvers at this time.
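To make the validity problem concrete, here is the brute-force recursive semantics of QBF; this is the naive exponential evaluation, not the Q-DPLL procedure or the Qbfl solver mentioned above:

```python
# Naive recursive evaluator for quantified Boolean formulae.
def eval_qbf(prefix, matrix, assignment=None):
    """prefix: list of ('A'|'E', var); matrix: dict-of-values -> bool."""
    assignment = dict(assignment or {})
    if not prefix:
        return matrix(assignment)
    quant, var = prefix[0]
    results = []
    for value in (False, True):
        assignment[var] = value
        results.append(eval_qbf(prefix[1:], matrix, assignment))
    # A universal variable needs both branches; an existential needs one.
    return all(results) if quant == 'A' else any(results)

# ∀y ∃x . (x <-> y) is valid: x can always copy y.
valid = eval_qbf([('A', 'y'), ('E', 'x')], lambda a: a['x'] == a['y'])
# ∃x ∀y . (x <-> y) is not: no single x matches both values of y.
invalid = eval_qbf([('E', 'x'), ('A', 'y')], lambda a: a['x'] == a['y'])
```

The two example formulae differ only in quantifier order, which is exactly what makes QBF harder than SAT.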
Aubert, Clément. "Logique linéaire et classes de complexité sous-polynominales". Paris 13, 2013. https://theses.hal.science/tel-00957653.
This research in theoretical computer science builds new bridges between linear logic and complexity theory. It proposes two abstract machine models that make it possible to capture new complexity classes with linear logic: the classes of efficiently parallelizable problems (NC and AC) and those of problems solvable in small space, in their deterministic and non-deterministic versions (L and NL). The representation of linear logic proofs as proof nets is used to represent efficiently the parallel computation of Boolean circuits, including those of constant depth. The second study draws on the geometry of interaction, a delicate reconstruction of linear logic by means of operators of a von Neumann algebra. We detail how the interaction of operators representing integers with operators representing programs can be recognized as nilpotent in logarithmic space. We then show how their iteration represents a computation performed by pointer machines, which we define and relate to other, more classical models. These two studies make it possible to capture implicitly new complexity classes below polynomial time.
Rossi, Fabrice. "Calcul de différentielles dans les réseaux de neurones généralisés : algorithmes, complexité, implantation logicielle et applications". Paris 9, 1996. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=1996PA090043.
Neural networks are non-linear regression tools. They are asymptotically more efficient than B-spline or polynomial regression. Classical types of neural network have been studied rather completely. This is the case, for instance, for the Multi-Layer Perceptron and for Radial Basis Function based networks. Less common neural network types, such as wavelet networks or high order neurons, have not been studied as completely as classical ones. We introduce a general mathematical framework that allows every neural network type to be described uniformly. We show that classic neural networks are particular cases of our general model. The proposed mathematical framework allows the analysis of neural networks without precise knowledge of the functions implemented by each neuron, while keeping the architecture of the network built into the mathematical definition. We introduce in this way algorithms that compute first and second derivatives of different functions computed with this model. We study the time complexities of these algorithms and show that the back-propagation algorithm (extended to our general definition) is not always the fastest algorithm. We finally describe an object-oriented implementation of our generalized neural networks. We thus illustrate a practical use of our framework. It unifies neural network training methods, which are considered as function minimization algorithms. It also yields an easy to extend neural network simulator, which solves one of the important problems of neural network simulation in a research setting: the implementation of a new neural network type in an already existing simulator.
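The kind of derivative computation at stake can be illustrated on a one-unit-per-layer network, checking a hand-derived backward pass against a finite difference; this is a toy special case of the general framework, with our own notation:

```python
import math

# Tiny two-layer network: out(x) = w2 * tanh(w1 * x).
def forward(x, w1, w2):
    return w2 * math.tanh(w1 * x)

def grad_w1(x, w1, w2):
    # Backward pass (chain rule): d out / d w1 = w2 * (1 - tanh(w1 x)^2) * x
    h = math.tanh(w1 * x)
    return w2 * (1 - h * h) * x

x, w1, w2 = 0.7, 0.3, -1.2
analytic = grad_w1(x, w1, w2)
eps = 1e-6
numeric = (forward(x, w1 + eps, w2) - forward(x, w1 - eps, w2)) / (2 * eps)
```

The generalized algorithms of the thesis automate exactly this kind of propagation, for arbitrary neuron functions and network architectures.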
Ayad, Ali. "Complexité de la résolution des systèmes algébriques paramétriques". Phd thesis, Université Rennes 1, 2006. http://tel.archives-ouvertes.fr/tel-00127383.
Maire, Sylvain. "Réduction de variance pour l'intégration numérique et pour le calcul critique en transport neutronique". Toulon, 2001. http://www.theses.fr/2001TOUL0013.
This work deals with Monte Carlo methods and is especially devoted to variance reduction. In the first part, we study a probabilistic algorithm, based on iterated control variates, which enables the computation of mean-square approximations. Using it with periodized Fourier bases and with Legendre and Chebyshev polynomial bases, we obtain Monte Carlo estimators with an increased convergence rate for one-dimensional regular functions. It is then extended to the multidimensional case, trying to attenuate the dimensional effect by making a good choice of the basis functions. Various numerical examples and applications are studied. The second part deals with criticality in neutron transport theory. We develop a numerical method to compute the principal eigenvalue of the neutron transport operator by combining the Monte Carlo computation of the solution of the associated Cauchy problem with its formal eigenfunction expansion. Various variance-reduction methods are tested on both homogeneous and inhomogeneous models. The stochastic representation of the principal eigenvalue is obtained for a particular homogeneous model.
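The control-variate idea can be demonstrated on a one-dimensional integral; this is the generic single-step estimator, not the iterated scheme of the thesis:

```python
import random, math

# Estimate I = ∫_0^1 e^x dx = e - 1 by Monte Carlo, with g(x) = 1 + x
# (whose integral is 3/2) as a control variate for f(x) = e^x.
random.seed(0)
N = 10_000
plain, controlled = [], []
for _ in range(N):
    x = random.random()
    f = math.exp(x)
    g = 1 + x
    plain.append(f)
    controlled.append(f - g + 1.5)       # same mean as f, smaller variance

mean_plain = sum(plain) / N
mean_ctrl = sum(controlled) / N

def variance(xs, m):
    return sum((v - m) ** 2 for v in xs) / (len(xs) - 1)
```

Because g closely follows f, subtracting it removes most of the fluctuation while leaving the expectation unchanged; iterating this with better and better approximating bases is what raises the convergence rate.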
Thomasse, Rémy. "Analyse de complexité d'enveloppes convexes aléatoires". Thesis, Nice, 2015. http://www.theses.fr/2015NICE4116/document.
In this thesis, we give new results about the average size of convex hulls made of points chosen in a convex body. This size is known when the points are chosen uniformly (and independently) in a convex polytope or in a sufficiently "smooth" convex body. This average size is also known if the points are independently chosen according to a centered Gaussian distribution. In the first part of this thesis, we introduce a technique that gives new results when the points are chosen arbitrarily in a convex body and then perturbed by random noise. This kind of analysis, called smoothed analysis, was initially developed by Spielman and Teng in their study of the simplex algorithm. For an arbitrary set of points in a ball, we obtain lower and upper bounds on this smoothed complexity, in the case of uniform perturbations in a ball (in arbitrary dimension) and in the case of Gaussian perturbations in dimension 2. The asymptotic behavior of the expected size of the convex hull of uniformly random points in a convex body is polynomial for a "smooth" body and polylogarithmic for a polytope. In the second part, we construct a convex body such that the expected size of the convex hull of points uniformly chosen in that body oscillates between these two behaviors as the number of points increases. In the last part, we present an algorithm to generate efficiently a random convex hull made of points chosen uniformly and independently in a disk. We also compute its average time and space complexity. This algorithm can generate a random convex hull without explicitly generating all the points. It has been implemented in C++ and integrated in the CGAL library.
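The polynomial behaviour for a smooth body (Θ(n^{1/3}) hull vertices for a disk) is easy to observe empirically; this is a quick experiment of ours, unrelated to the CGAL generator described above:

```python
import random

# Hull size of uniform points in a disk, via Andrew's monotone chain.
random.seed(0)

def convex_hull_size(points):
    """Returns the number of vertices of the convex hull."""
    pts = sorted(points)
    def half(seq):
        chain = []
        for p in seq:
            while len(chain) >= 2:
                (ox, oy), (ax, ay) = chain[-2], chain[-1]
                # Keep p only after popping non-left turns.
                if (ax - ox) * (p[1] - oy) - (ay - oy) * (p[0] - ox) > 0:
                    break
                chain.pop()
            chain.append(p)
        return chain
    lower, upper = half(pts), half(list(reversed(pts)))
    return len(lower) + len(upper) - 2

def disk_sample(n):
    pts = []
    while len(pts) < n:                     # rejection sampling in the disk
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x * x + y * y <= 1:
            pts.append((x, y))
    return pts

sizes = [convex_hull_size(disk_sample(2000)) for _ in range(5)]
```

With n = 2000 one observes a few dozen hull vertices, consistent with the c·n^{1/3} law of Rényi and Sulanke, whereas points in a square would give Θ(log n).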
Osswald, Christophe. "Classification, analyse de la similitude et hypergraphes". Paris, EHESS, 2003. http://www.theses.fr/2003EHES0079.
A classical purpose of clustering is to make explicit the relationships between elements, and between clusters, i.e. sets of elements. Partitions and hierarchies are often used. The clusters of a pyramid are parts of some path; trees and arboreal set systems are common in biology. Most of these models require an a priori choice of structure and lead to NP-hard problems. Within similarity analysis (Flament et al., 1962-1981), we search for a smallest 'rigidity graph' of a given hypergraph, such that the hyperedges are connected classes of the graph. Many methods are classically used to generate a set system from a dissimilarity: maximal cliques, balls, 2-balls, realizations (Brucker, 2003). We prove that our problem is NP-hard in general and for the first three methods; for realizations there is an algorithm in O(n^4) operations. We identify hypercycles, which admit a cycle as rigidity graph, in O(n^4) operations, and study circular dissimilarities, whose clusters form a hypercycle, in relation with the model of Hubert et al. (1998). We apply these methods to memory psychology, text analysis and genetic data.
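As a concrete instance of generating a set system from a dissimilarity, here is the simplest thresholding construction, where classes are the connected components of the graph keeping pairs at dissimilarity at most t; this is a generic textbook method, not the realizations algorithm of the thesis:

```python
# Classes from a dissimilarity d at threshold t: connected components of
# the graph whose edges are the pairs with d(p, q) <= t.
def threshold_classes(points, d, t):
    adj = {p: [q for q in points if q != p and d(p, q) <= t] for p in points}
    seen, classes = set(), []
    for p in points:
        if p in seen:
            continue
        stack, comp = [p], set()        # depth-first search from p
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u])
        seen |= comp
        classes.append(comp)
    return classes

pts = [0.0, 0.1, 0.2, 5.0, 5.1]
classes = threshold_classes(pts, lambda a, b: abs(a - b), 0.5)
```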
Dang, Minh Dung. "Autour des primitifs quantiques pour le calcul sécurisé à deux parties". Paris, ENST, 2008. http://pastel.archives-ouvertes.fr/pastel-00005098.
In this thesis, we are interested in the theory of unconditionally secure two-party computations, of which Oblivious Transfer (OT) and Bit Commitment (BC) are the central primitives. On one hand, my work is inspired by Crépeau et al.'s framework for building OT protocols from noisy communication channels. The principle of this framework is to conceive, from noisy channels, an intermediate erasure model which is a variant of OT. We contributed to this framework by proposing a more general intermediate model, the Binary Symmetric Multi-Error-Rate Channel, which can also be built from noisy channels. With this intermediate model, we can build OT protocols from noisy channels more effectively. In addition, we expose some case studies on emulating noisy models with a quantum non-orthogonal coding (QNOC) scheme, which uses two non-orthogonal pure states to encode the two values of a classical bit. On the other hand, we revise the quantum model for general two-party protocols involving classical and quantum computations and communications. We state that in the general model, a classical channel is inevitably macroscopic and its decoherence is so strong that quantum information cannot be transferred over it. Thus, the quantum model for two-party protocols becomes three-party, including the environment of the channel. Indeed, with the faithful interpretation of general quantum two-party protocols in this three-party model, we reaffirm the no-go theorems of Mayers and Lo & Chau on the impossibility of quantum OT and BC. In addition, we can go further and apply these negative results to protocols using quantum trusted oracles, such as coin flipping.
Martin, Sébastien. "Analyse structurelle des systèmes algébro-différentiels conditionnels : complexité, modèles et polyèdres". Paris 9, 2011. https://basepub.dauphine.psl.eu/handle/123456789/9724.
Differential algebraic systems are used for modeling complex physical systems such as electrical networks and dynamic movements. They are often large and difficult to solve. Structural analysis of differential algebraic systems makes it possible to detect whether such systems cannot be solved with numerical methods. It consists in solving an underlying matching problem in graphs. In this thesis, we consider the structural analysis problem for differential algebraic systems with conditional equations. We show that this problem reduces to what we call the perfect matching free subgraph problem, and we show the NP-completeness of the latter. We propose a formulation in terms of graphs and two integer programming formulations. We study the polytope associated with this problem and describe several classes of valid inequalities. We give necessary and sufficient conditions for these inequalities to be facet defining. We also discuss separation algorithms for these constraints. We develop a branch-and-cut algorithm based on these results. We also study an extension of this problem to differential algebraic systems with conditional embedded equations. We generalize the results obtained for the first variant and give new valid inequalities for this more general problem. In a second part, we study the parallelization problem for differential algebraic systems. This problem reduces to what is called the separator problem. We give several integer programming formulations, and for one of them we study the associated polytope. We give a few experimental results associated with this polyhedral study.
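The matching problem underlying structural analysis can be sketched as follows: a system is structurally nonsingular only if its equations can be matched one-to-one with the variables occurring in them. This is a plain augmenting-path search on a toy system, without the conditional equations treated in the thesis:

```python
# Bipartite perfect matching between equations and the variables they use.
def has_perfect_matching(n_eqs, vars_of):
    match = {}                        # variable -> equation currently matched

    def try_assign(eq, seen):
        """Try to match eq, reassigning earlier equations if needed."""
        for v in vars_of[eq]:
            if v in seen:
                continue
            seen.add(v)
            if v not in match or try_assign(match[v], seen):
                match[v] = eq
                return True
        return False

    return all(try_assign(eq, set()) for eq in range(n_eqs))

# eq0 uses {x}, eq1 uses {x, y}: matchable (x -> eq0, y -> eq1).
ok = has_perfect_matching(2, [{"x"}, {"x", "y"}])
# Both equations use only x: structurally singular, no numerical solver
# can determine two unknowns from equations that mention only one.
bad = has_perfect_matching(2, [{"x"}, {"x"}])
```

In the conditional setting of the thesis, one must instead certify that *no* admissible subsystem admits a perfect matching, which is where the NP-completeness arises.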
Cervelle, Julien. "Complexité structurelle et algorithmique des pavages et des automates cellulaires". Aix-Marseille 1, 2002. https://hal.archives-ouvertes.fr/tel-01208375.
Férée, Hugo. "Complexité d'ordre supérieur et analyse récursive". Thesis, Université de Lorraine, 2014. http://www.theses.fr/2014LORR0173/document.
While first order complexity is well defined and studied, higher order computation lacks a satisfactory notion of complexity. Such a theory already exists at order 2 and provides a complexity class analogous to the usual polynomial time computable functions. This is already especially interesting in the domain of computable analysis, where real numbers or real functions, for example, can be represented by first order functions. In particular, there is a clear link between computability and continuity, and we also illustrate, in the case of real operators, that complexity can be related to certain analytical properties. However, we prove that, from a complexity point of view, some mathematical spaces cannot be faithfully represented by order 1 functions and require higher order ones. This result underlines the need to develop a notion of complexity at higher types, which will be useful in particular, but not only, to computable analysis. We have developed a computational model for higher type sequential computations based on a game semantics approach, where the application of two functions is represented by the game opposing two strategies. By defining the size of such strategies, we are able to define a robust and meaningful notion of complexity at all types, together with a class of polynomial time computable higher order functionals which seems to be a good candidate for a class of feasible functionals at higher types.
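A small example of the order 1 representations mentioned above, in the style of computable analysis: a real number is encoded by a function that returns dyadic-precision rational approximations. This is our own toy encoding of √2 by bisection, not the representation framework of the thesis:

```python
from fractions import Fraction

# Encode sqrt(2) as a first order function: sqrt2(n) is a rational
# within 2^-n of the real number, computed by bisection.
def sqrt2(n):
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo > Fraction(1, 2 ** n):
        mid = (lo + hi) / 2
        if mid * mid <= 2:
            lo = mid
        else:
            hi = mid
    return lo

approx = sqrt2(20)
```

The complexity of a real *operator* is then the complexity of a type-2 functional transforming such approximation functions, and the thesis shows why some spaces force one to go beyond even that.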
Cegarra, Julien. "La gestion de la complexité dans la planification : le cas de l'ordonnancement". Paris 8, 2004. http://www.theses.fr/2004PA082447.
This research relates to cognitive planning activities. It was conducted in a multi-disciplinary field associating cognitive ergonomics and operational research. Within these activities, scheduling constitutes an interesting problem, in particular due to the complexity of field problems. In order to better understand the human aspects of scheduling, three studies were undertaken. The first experiment was designed to identify the nature of scheduling expertise through the study of novice and expert schedulers. The results led to a second experiment investigating the effect of task demands (e.g. complexity level) on operators' strategies. The influence of external representation on problem complexity was then studied in a third experiment. Finally, the combination of these studies led to the proposal of guidelines for interface design.
Poteaux, Adrien. "Calcul de développements de Puiseux et application au calcul du groupe de monodromie d'une courbe algébrique plane". Limoges, 2008. https://aurore.unilim.fr/theses/nxfile/default/b6d84d62-1bad-4a2f-9405-b23ed771029e/blobholder:0/2008LIMO4032.pdf.
We present a new symbolic-numeric algorithm to compute numerical approximations of Puiseux expansions above critical points. We use this algorithm to compute monodromy groups of plane algebraic curves. In essence, we compute numerical approximations of Puiseux expansions in the following way: computations modulo a well chosen prime number p are used to obtain the exact information required to guide floating point computations. We also give complexity bounds for the symbolic part of our algorithm. We then propose a new strategy to compute monodromy groups of plane algebraic curves using these numerical approximations of Puiseux expansions.
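The numeric side of a monodromy computation can be illustrated on the simplest curve, y² − x = 0: tracking its two y-roots while x loops once around the critical point x = 0 reveals that they are exchanged, i.e. the local monodromy is a transposition. This toy continuation is far simpler than the general algorithm of the thesis:

```python
import cmath

# Follow the two roots of y^2 = x as x runs around the unit circle.
def track_roots(steps=200):
    roots = [1.0 + 0j, -1.0 + 0j]           # the two roots at x = 1
    for k in range(1, steps + 1):
        x = cmath.exp(2j * cmath.pi * k / steps)
        s = cmath.sqrt(x)                    # principal square root
        candidates = [s, -s]                 # full root set at this x
        # Continuation: move each tracked root to the nearest candidate.
        roots = [min(candidates, key=lambda c: abs(c - r)) for r in roots]
    return roots

final = track_roots()
# After one loop the roots have swapped: the Puiseux expansion y = x^(1/2)
# has ramification index 2, so the monodromy permutation is (1 2).
```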