Dissertations / Theses on the topic 'Complexity'
Consult the top 50 dissertations / theses for your research on the topic 'Complexity.'
Baumler, Raphaël. "La sécurité de marché et son modèle maritime : entre dynamiques du risque et complexité des parades : les difficultés pour construire la sécurité." Thesis, Evry-Val d'Essonne, 2009. http://www.theses.fr/2009EVRY0024/document.
Models of development, capitalism and industrialism are also major dynamics of risk, through their capacity to alter the social world. At the level of firms, innovation and competition require ongoing adjustment. Subject to their owners, companies focus on financial risk; other risks are subordinate to this primary target. The dynamics of risk reshape the firm at the pace of external demands. Competition justifies harmful cost reductions and destabilizing re-engineering. The aim of safety is to reduce the conditions under which risk arises. Safety is a complex social construction. Locally, safety appears as a blend of people and tools within an organization. Overall, the control of safety is a trade-off between risk and cost within the unit. Between cost and efficiency, management makes its own choices. Like the shipowner and his vessel, factory management holds the keys to safety: it arbitrates between budgets and plays territories off against one another. By ensuring impunity, equivalence and non-discrimination, international law guarantees competition between all States and flags. With globalization, we have entered the era of the safety market. Safety is one of the production factors in global competition. Business leaders incorporate it into their overall strategies; with this factor in mind they choose the geographical location of their factories as well as the allocation of budgets inside the firm. In selecting safety participants, executives create a picture of safety that corresponds to their own paradigms. The rebuilding of safety in production units thus plays out locally as well as globally, by uncovering the complexities of the dynamics of risk and of the way safety is built.
Rubiano, Thomas. "Implicit Computational Complexity and Compilers." Thesis, Sorbonne Paris Cité, 2017. http://www.theses.fr/2017USPCD076/document.
Complexity theory studies the resources, time or space, consumed by a program during its execution. Static analysis lets us look for certain syntactic criteria in order to categorize families of programs. One of the most fruitful approaches in the field consists in observing the potential behavior of the data being manipulated. For example, the detection of "non size increasing" programs rests on the very simple principle of counting the number of memory allocations and deallocations, in particular within loops, and in this way one can detect programs that compute in constant space. This method is naturally expressed as a property of control-flow graphs. Since implicit computational complexity methods rely on purely syntactic criteria, these analyses can be performed at compile time. Because they are purely static, these analyses are not always computable, or easily computable, and compromises must be made by allowing approximations. In the wake of the "Size-Change Principle" of C. S. Lee, N. D. Jones and A. M. Ben-Amram, much research has taken up this method of predicting termination by observing the evolution of resources. So far, these methods from implicit computational complexity have mostly been applied to more or less toy languages. This thesis aims to bring these methods to "real" programming languages by applying them at the level of intermediate representations in widely used compilers. It provides the community with a tool that can process a large number of examples and give a more precise idea of the real expressiveness of these analyses. Moreover, this thesis builds a bridge between two communities, that of implicit computational complexity and that of compilation, showing that each can contribute to the other.
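The allocation-counting criterion described in this abstract can be sketched in a few lines. The snippet below is a toy illustration only, with assumed instruction names; the thesis itself works on compiler intermediate representations, not on this simplified form.

```python
# Toy sketch of the "non size increasing" criterion: flag a program as
# constant-space if no loop allocates more memory than it frees per pass.
# "alloc" and "free" are hypothetical opcode names used only here.

def net_allocations(loop_body):
    """Net number of allocations in one pass over a loop body."""
    balance = 0
    for instr in loop_body:
        if instr == "alloc":
            balance += 1
        elif instr == "free":
            balance -= 1
    return balance

def is_non_size_increasing(loops):
    """A program each of whose loops frees at least as much as it allocates
    runs in constant extra space, per the syntactic criterion sketched above."""
    return all(net_allocations(body) <= 0 for body in loops)

# One loop that allocates and frees a cell per iteration, one that leaks.
print(is_non_size_increasing([["alloc", "work", "free"]]))  # True
print(is_non_size_increasing([["alloc", "work"]]))          # False
```

Because the check is purely syntactic, it can run at compile time on a control-flow graph, which is exactly why such analyses fit naturally inside a compiler.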
Pankratov, Denis. "Communication complexity and information complexity." Thesis, The University of Chicago, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3711791.
Information complexity enables the use of information-theoretic tools in communication complexity theory. Prior to the results presented in this thesis, information complexity was mainly used for proving lower bounds and direct-sum theorems in the setting of communication complexity. We present three results that demonstrate new connections between information complexity and communication complexity.
In the first contribution we thoroughly study the information complexity of the smallest nontrivial two-party function: the AND function. While computing the communication complexity of AND is trivial, computing its exact information complexity presents a major technical challenge. In overcoming this challenge, we reveal that information complexity gives rise to rich geometrical structures. Our analysis of information complexity relies on new analytic techniques and new characterizations of communication protocols. We also uncover a connection between information complexity and the theory of elliptic partial differential equations. Once we compute the exact information complexity of AND, we can compute the exact communication complexity of several related functions on n-bit inputs with some additional technical work. Previous combinatorial and algebraic techniques could only prove bounds of the form Θ(n). Interestingly, this level of precision is typical in the area of information theory, so our result demonstrates that this meta-property of precise bounds carries over to information complexity and in certain cases even to communication complexity. Our result not only strengthens the lower bound on the communication complexity of disjointness by making it more exact, but also shows that information complexity provides the exact upper bound on communication complexity. In fact, this result is more general and applies to a whole class of communication problems.
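As a small, hedged illustration of the quantities involved (not the thesis's optimal protocol, which requires continuous analysis): the internal information cost of the naive one-round protocol for AND, in which Alice sends x and Bob replies with x AND y, can be computed exactly under uniform inputs.

```python
from itertools import product
from math import log2

# Toy computation (illustrative, not from the thesis): internal information
# cost I(X;T|Y) + I(Y;T|X) of the naive protocol for AND, where the
# transcript is T = (x, x AND y), under uniformly distributed inputs.

def H(dist):
    """Shannon entropy of a {outcome: probability} dict."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def cond_mutual_info(samples, a, b, c):
    """I(A;B|C) = H(A,C) + H(B,C) - H(A,B,C) - H(C), from weighted records."""
    def marginal(keys):
        d = {}
        for p, record in samples:
            k = tuple(record[key] for key in keys)
            d[k] = d.get(k, 0.0) + p
        return d
    return (H(marginal(a + c)) + H(marginal(b + c))
            - H(marginal(a + b + c)) - H(marginal(c)))

samples = []
for x, y in product([0, 1], repeat=2):
    transcript = (x, x & y)            # Alice sends x, Bob answers the AND
    samples.append((0.25, {"x": x, "y": y, "t": transcript}))

ic = (cond_mutual_info(samples, ["x"], ["t"], ["y"])
      + cond_mutual_info(samples, ["y"], ["t"], ["x"]))
print(ic)  # 1.5 bits for this naive protocol
```

The thesis's contribution is to pin down the best value achievable by any protocol, which is far harder than evaluating one fixed protocol as done here.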
In the second contribution, we use self-reduction methods to prove strong lower bounds on the information complexity of two of the most studied functions in the communication complexity literature: Gap Hamming Distance (GHD) and Inner Product mod 2 (IP). In our first result we affirm the conjecture that the information complexity of GHD is linear even under the uniform distribution. This strengthens the Ω(n) bound shown by Kerenidis et al. (2012) and answers an open problem by Chakrabarti et al. (2012). We also prove that the information complexity of IP is arbitrarily close to the trivial upper bound n as the permitted error tends to zero, again strengthening the Ω(n) lower bound proved by Braverman and Weinstein (2011). More importantly, our proofs demonstrate that self-reducibility makes the connection between information complexity and communication complexity lower bounds a two-way connection. Whereas numerous results in the past used information complexity techniques to derive new communication complexity lower bounds, we explore a generic way, in which communication complexity lower bounds imply information complexity lower bounds in a black-box manner.
In the third contribution we consider the roles that private and public randomness play in the definition of information complexity. In communication complexity, private randomness can be trivially simulated by public randomness. Moreover, the communication cost of simulating public randomness with private randomness is well understood due to Newman's theorem (1991). In information complexity, the roles of public and private randomness are reversed: public randomness can be trivially simulated by private randomness. However, the information cost of simulating private randomness with public randomness is not understood. We show that protocols that use only public randomness admit a rather strong compression. In particular, efficient simulation of private randomness by public randomness would imply a version of a direct sum theorem in the setting of communication complexity. This establishes yet another connection between the two areas. (Abstract shortened by UMI.)
Smith, Peter. "Adaptive leadership: fighting complexity with complexity." Thesis, Monterey, California: Naval Postgraduate School, 2014. http://hdl.handle.net/10945/42728.
Contemporary crises have become increasingly complex, and the methods of leading through them have failed to keep pace. If it is assumed that leadership matters, that is, that it has a legitimate effect on the outcome of a crisis, then leaders have a duty to respond to that evolution with adaptations of their own. Using literature sources, the research explores crisis complexity, crisis leadership, and alternative leadership strategies. Specifically, the research evaluates the applicability of complexity science to current crises. Having identified the manner in which crises have changed, it focuses on the gap between contemporary crises and the current methods of crisis leadership. The paper pursues adaptive methods of leading in complex crises and examines a number of alternative strategies for addressing the gap. The research suggests that a combination of recognizing the complexity of contemporary crises, applying resourceful solutions, and continually reflecting on opportunities to innovate may be an effective way to lead through complex crises using complex leadership.
Chen, Lijie S. M. Massachusetts Institute of Technology. "Fine-grained complexity meets communication complexity." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122754.
Includes bibliographical references (pages 215-229).
Fine-grained complexity aims to understand the exact exponent of the running time of fundamental problems in P. Based on several important conjectures such as the Strong Exponential Time Hypothesis (SETH), the All-Pairs Shortest Path conjecture, and the 3-Sum conjecture, tight conditional lower bounds have been proved for numerous exact problems from all fields of computer science, showing that many textbook algorithms are in fact optimal. For many natural problems, a fast approximation algorithm would be as important as fast exact algorithms, so it would be interesting to show hardness for approximation algorithms as well. But we had few techniques to prove tight hardness for approximation problems in P. In particular, the celebrated PCP theorem, which proves similar approximation hardness in the world of NP-completeness, is not fine-grained enough to yield interesting conditional lower bounds for approximation problems in P.
In 2017, a breakthrough work of Abboud, Rubinstein and Williams [12] established a framework called "Distributed PCP" and applied it to show conditional hardness (under SETH) for several fundamental approximation problems in P. The most interesting aspect of their work is a connection between fine-grained complexity and communication complexity, which shows that Merlin-Arthur communication protocols can be utilized to give fine-grained reductions between exact and approximation problems. In this thesis, we further explore the connection between fine-grained complexity and communication complexity. More specifically, we have two sets of results. In the first set of results, we consider communication protocols other than the Merlin-Arthur protocols of [12] and show that they can be used to construct other fine-grained reductions between problems.
· Σ₂ Protocols and an Equivalence Class for Orthogonal Vectors (OV). First, we observe that efficient Σ₂^cc protocols for a function imply fine-grained reductions from a certain related problem to OV. Together with other techniques including locality-sensitive hashing, we establish an equivalence class for OV with O(log n) dimensions, including Max-IP/Min-IP, approximate Max-IP/Min-IP, and approximate bichromatic closest/furthest pair.
· NP·UPP Protocols and Hardness for Computational Geometry Problems in 2^O(log* n) Dimensions. Second, we consider NP·UPP protocols, a relaxation of Merlin-Arthur protocols in which Alice and Bob only need to be convinced with probability > 1/2 instead of > 2/3. We observe that NP·UPP protocols are closely connected to the Z-Max-IP problem in very small dimensions, and show that Z-Max-IP, ℓ₂-Furthest Pair and Bichromatic ℓ₂-Closest Pair in 2^O(log* n) dimensions require n^(2-o(1)) time under SETH, by constructing an efficient NP·UPP protocol for the Set-Disjointness problem. This improves on the previous hardness result for these problems in ω(log² log n) dimensions by Williams [172].
· IP Protocols and Hardness for Approximation Problems Under Stronger Conjectures. Third, building on the connection between IP^cc protocols and a certain alternating product problem observed by Abboud and Rubinstein [11], and on the classical IP = PSPACE theorem [123, 155], we show that several fine-grained problems are hard under conjectures much stronger than SETH (e.g., that the satisfiability of n^o(1)-depth circuits requires 2^((1-o(1))n) time).
In the second set of results, we utilize communication protocols to construct new algorithms.
· BQP^cc Protocols and Approximate Counting Algorithms. Our first connection is that a fast BQP^cc protocol for a function f implies a fast deterministic additive approximate counting algorithm for a related pair counting problem. Applying known BQP^cc protocols, we get fast deterministic additive approximate counting algorithms for Count-OV (#OV), Sparse Count-OV and Formula of SYM circuits.
· AM^cc/PH^cc Protocols and Efficient SAT Algorithms. Our second connection is that a fast AM^cc (or PH^cc) protocol for a function f implies a faster-than-brute-force algorithm for a related problem. In particular, we show that if the Longest Common Subsequence (LCS) problem admits a fast (computationally efficient) PH^cc protocol (polylog(n) complexity), then polynomial-size Formula-SAT admits a 2^(n - n^(1-δ))-time algorithm for any constant δ > 0, which is conjectured to be unlikely by a recent work of Abboud and Bringmann [6].
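The Orthogonal Vectors and Max-IP problems that anchor the equivalence class above have simple quadratic-time brute-force algorithms. The sketch below is a hedged, self-contained illustration (not from the thesis); fine-grained results of this kind say that, under SETH, no n^(2-ε) algorithm is expected for these problems once the dimension is slightly super-logarithmic.

```python
# Textbook quadratic-time algorithms for OV and Max-IP, for illustration.

def has_orthogonal_pair(A, B):
    """OV: is there a in A and b in B (0/1 vectors) with inner product 0?"""
    return any(all(x * y == 0 for x, y in zip(a, b)) for a in A for b in B)

def max_ip(A, B):
    """Max-IP: the maximum inner product over all pairs a in A, b in B."""
    return max(sum(x * y for x, y in zip(a, b)) for a in A for b in B)

A = [(1, 0, 1), (0, 1, 1)]
B = [(0, 1, 0), (0, 1, 1)]
print(has_orthogonal_pair(A, B))  # True: (1,0,1) and (0,1,0) are orthogonal
print(max_ip(A, B))               # 2: (0,1,1) with (0,1,1)
```

Both loops make n² inner-product tests; the thesis's reductions show that speeding up one of these problems substantially would speed up a whole class of others.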
by Lijie Chen.
S.M. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
Gopalakrishnan, K. S. "Complexity cores in average-case complexity theory." [Ames, Iowa : Iowa State University], 2009. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:1473222.
Brochenin, Rémi. "Separation logic : expressiveness, complexity, temporal extension." PhD thesis, École normale supérieure de Cachan - ENS Cachan, 2013. http://tel.archives-ouvertes.fr/tel-00956587.
Otto, James R. (James Ritchie). "Complexity doctrines." Thesis, McGill University, 1995. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=29104.
Ada, Anil. "Communication complexity." Thesis, McGill University, 2014. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=121119.
Communication complexity studies how many bits a given set of players need to exchange in order to compute a function whose input is distributed among the players. Although it is a natural research area motivated by practical considerations, the main motivation comes from its numerous theoretical applications. This thesis has three main parts, studying three aspects of communication complexity. 1. The first part discusses the "number on the forehead" (NOF) model in multiparty communication complexity. This is a fundamental model, with applications to circuit complexity, proof complexity, branching programs and Ramsey theory. In this model, we study composed functions f of g. These functions include most of the well-known functions studied in the communication complexity literature. A major goal is to understand which combinations of f and g produce compositions that are hard from the communication point of view. In particular, because of the importance of applications to circuits, it is interesting to understand the power of the NOF model when the number of players reaches or exceeds log n. Motivated by these goals, we show the existence of an efficient simultaneous k-player protocol of cost O(log^3 n) for sym of g whenever k > 1 + log n, where sym is any symmetric function and g is an arbitrary function. We also give applications of our efficient protocol to Ramsey theory. In the setting where k < log n, we take a closer look at functions of the form majority of g, mod_m of g and nor of g, where the last two are generalizations of the well-known and much-studied Inner Product and Disjointness functions respectively. We characterize the communication complexity of these functions with respect to the choice of g.
2. The second part considers applications of the Fourier analysis of symmetric functions to communication complexity and other domains. The spectral norm of a Boolean function f: {0,1}^n -> {0,1} is the sum of the absolute values of its Fourier coefficients. We give a combinatorial characterization of the spectral norm of symmetric functions. We show that the logarithm of the spectral norm is of the same order of magnitude as r(f) log(n/r(f)), where r(f) = max(r_0, r_1) and r_0, r_1 are the minimal integers smaller than n/2 such that f(x), or f(x)·parity(x), is constant for all x with x_1 + ... + x_n in [r_0, n - r_1]. We present some applications to decision trees and to the communication complexity of symmetric functions. 3. The third part studies privacy in the context of communication complexity: how much information do the players reveal about their inputs when following a given protocol? The unattainability of perfect privacy for many functions motivates the study of approximate privacy. Feigenbaum et al. (Proceedings of the 11th Conference on Electronic Commerce, 167-178, 2010) defined notions of worst-case and average-case approximate privacy, and presented several interesting upper bounds as well as some open questions. In this thesis, we obtain tight asymptotic bounds, in the worst case as well as in the average case, on the trade-off between the approximate privacy of protocols and communication cost for the Vickrey auction, the canonical example of a truthful auction. We also prove exponential lower bounds on the approximate privacy of protocols computing the Intersection function, independently of communication cost. This resolves a conjecture of Feigenbaum et al.
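The spectral-norm definition in the abstract above can be sanity-checked by brute force for tiny n. The sketch below is purely illustrative (exponential time), computing the spectral norm of the 3-bit majority function.

```python
from itertools import product

# Brute-force spectral norm: sum over all S of |f-hat(S)|, where
# f-hat(S) = E_x[(-1)^(f(x) + sum_{i in S} x_i)] for f into {0,1}.

def fourier_coefficient(f, S, n):
    total = 0.0
    for x in product([0, 1], repeat=n):
        total += (-1) ** (f(x) + sum(x[i] for i in S))
    return total / 2 ** n

def spectral_norm(f, n):
    subsets = [[i for i in range(n) if (mask >> i) & 1] for mask in range(2 ** n)]
    return sum(abs(fourier_coefficient(f, S, n)) for S in subsets)

# Majority on 3 bits: a symmetric function, as in the characterization above.
maj3 = lambda x: 1 if sum(x) >= 2 else 0
print(spectral_norm(maj3, 3))  # 2.0 (three level-1 coefficients of 1/2, one level-3 of 1/2)
```

For symmetric functions like maj3, the thesis's combinatorial characterization predicts the order of magnitude of this quantity without enumerating all 2^n inputs.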
Mariotti, Humberto, and Cristina Zauhy. "Managing Complexity." Universidad Peruana de Ciencias Aplicadas (UPC), 2014.
Sharp, L. Kathryn. "Text Complexity." Digital Commons @ East Tennessee State University, 2014. https://dc.etsu.edu/etsu-works/4290.
Wennberg, Andreas, and Emil Persson. "Coopetition and Complexity : Exploring a Coopetitive Relationship with Complexity." Thesis, Umeå universitet, Handelshögskolan vid Umeå universitet (USBE), 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-52689.
Okabe, Yasuo. "Parallel Computational Complexity and Date-Transfer Complexity of Supercomputing." Kyoto University, 1994. http://hdl.handle.net/2433/74658.
Raynard, Mia. "Deconstructing Complexity: Configurations of Institutional Complexity and Structural Hybridity." SAGE Publications, 2016. http://dx.doi.org/10.1177/1476127016634639.
Colijn, Caroline. "Addressing complexity, exploring social change through chaos and complexity theory." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/mq43374.pdf.
Large, David. "Complexity and communities : the application of complexity to community studies." Thesis, Northumbria University, 2015. http://nrl.northumbria.ac.uk/25244/.
Uden, Jacobus Cornelis van. "Organisation & complexity : using complexity science to theorise organisational aliveness /." [S. l. : s. n.], 2004. http://catalogue.bnf.fr/ark:/12148/cb39270773j.
Below, Alexander. "Complexity of triangulation /." [S.l.] : [s.n.], 2002. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=14672.
Rezaei, Hengameh. "Models complexity measurement." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-68701.
Mayhew, Dillon. "Matroids and complexity." Thesis, University of Oxford, 2005. http://ora.ox.ac.uk/objects/uuid:23640923-17c3-4ad8-9845-320e3b662910.
Chew, Leroy Nicholas. "QBF proof complexity." Thesis, University of Leeds, 2017. http://etheses.whiterose.ac.uk/18281/.
Beheshti, Soosan, 1969. "Minimum description complexity." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/8012.
Full textIncludes bibliographical references (p. 136-140).
The classical problem of model selection among parametric model sets is considered. The goal is to choose a model set which best represents observed data. The critical task is the choice of a criterion for model set comparison. Pioneering information-theoretic approaches to this problem are the Akaike information criterion (AIC) and different forms of minimum description length (MDL). The prior assumption in these methods is that the unknown true model is a member of all the competing sets. We introduce a new method of model selection: minimum description complexity (MDC). The approach is motivated by the Kullback-Leibler information distance. The method suggests choosing the model set for which the model set relative entropy is minimum. We provide a probabilistic method of MDC estimation for a class of parametric model sets. In this calculation the key factor is our prior assumption: unlike the existing methods, no assumption that the true model is a member of the competing model sets is needed. The main strength of the MDC calculation is its method of extracting information from the observed data.
Interesting results exhibit the advantages of MDC over MDL and AIC both theoretically and practically. It is illustrated that, under particular conditions, AIC is a special case of MDC. Application of MDC to system identification and signal denoising is investigated. The proposed method answers the challenging question of quality evaluation in identification of stable LTI systems under a fair prior assumption on the unmodeled dynamics. MDC also provides a new solution to a class of denoising problems. We elaborate on the theoretical superiority of MDC over existing thresholding denoising methods.
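A hedged sketch of the baseline criterion the abstract compares against (AIC, not MDC itself, whose estimation procedure is the thesis's contribution): AIC = 2k - 2 ln(L-hat) trades the maximized likelihood of a model set against its number of free parameters. The toy setup below, with assumed data, chooses between a zero-mean and a free-mean Gaussian model set.

```python
from math import log, pi

def gaussian_aic(data, mean, k):
    """AIC for a Gaussian model with the given (fixed or fitted) mean;
    the variance is maximized out analytically, and k counts free parameters."""
    n = len(data)
    var = sum((x - mean) ** 2 for x in data) / n
    log_likelihood = -0.5 * n * (log(2 * pi * var) + 1)
    return 2 * k - 2 * log_likelihood

data = [2.1, 1.9, 2.3, 2.2, 1.8, 2.0]   # toy samples centered near 2, not 0
aic_zero_mean = gaussian_aic(data, 0.0, k=1)                    # variance free
aic_free_mean = gaussian_aic(data, sum(data) / len(data), k=2)  # mean, variance free
print(aic_free_mean < aic_zero_mean)  # True: AIC selects the free-mean model set
```

Both AIC and MDL presume the true model lies in one of the competing sets; dropping that assumption is precisely what distinguishes the MDC approach described above.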
by Soosan Beheshti.
Ph.D.
Uzuner, Tolga. "Effective network complexity." Thesis, University of Cambridge, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.612749.
Washburn, Fred AlDean. "Supervisee cognitive complexity." Diss., University of Iowa, 2015. https://ir.uiowa.edu/etd/1791.
Winerip, Jason. "Graph Linear Complexity." Scholarship @ Claremont, 2008. https://scholarship.claremont.edu/hmc_theses/216.
Aleo, Ignazio. "Complexity in motion." Doctoral thesis, Università di Catania, 2012. http://hdl.handle.net/10761/1072.
Dervic, Amina, and Alexander Rank. "ATC complexity measures: Formulas measuring workload and complexity at Stockholm TMA." Thesis, Linköpings universitet, Kommunikations- och transportsystem, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-114534.
Addy, Robert. "Cost of complexity : mitigating transition complexity in mixed-model assembly lines." Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/126942.
Thesis: S.M., Massachusetts Institute of Technology, Department of Mechanical Engineering, in conjunction with the Leaders for Global Operations Program at MIT, May 2020
Includes bibliographical references (page 72).
The Nissan Smyrna automotive assembly plant is a mixed-model production facility which currently produces six different vehicle models. This mixed-model assembly strategy enables production levels of different vehicles to be adjusted to match changing market demand, but it requires a trained workforce familiar with the different parts and processes needed for each vehicle. Currently, the mixed-model production process is not batched; assembly line technicians might switch between assembling different vehicles several times every hour. When a switch or 'transition' occurs between different models, variations in the defect rate can occur as technicians must familiarize themselves with a different set of parts and processes. This thesis identifies this confusion as the consequence of 'transition' complexity, which results not only from variety but also from familiarity: how quickly a new situation can be recognized, and how quickly associates can remember what to do and recover the skills needed to succeed. Recommendations follow to mitigate the impact of transition complexity on associate performance, thereby improving vehicle production quality. Transition complexity is an important factor in determining the performance of the assembly system (with respect to defect rates) and could supplement existing models of complexity measurement in assembly systems. Several mitigation measures at the assembly plant level are recommended to limit the impact of transition complexity on system performance. These measures include improvements to the offline kitting system to reduce errors, such as reconfiguring the physical layout and implementing a visual error detection system. Additionally, we recommend altering the production scheduling system to ensure low-volume models are produced at more regular intervals and with consistently low sequence gaps.
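The batching discussion above can be made concrete with a simple count of model changes in a production sequence. This is purely illustrative; the function and sequences below are assumptions for the sketch, not the thesis's measure of transition complexity.

```python
# Count how often consecutive jobs in a mixed-model sequence change model:
# each change is a 'transition' that exposes technicians to unfamiliarity.

def transition_count(sequence):
    """Number of positions where the model differs from the previous job."""
    return sum(1 for prev, cur in zip(sequence, sequence[1:]) if prev != cur)

unbatched = ["A", "B", "A", "C", "B", "A"]   # hypothetical hourly sequence
batched   = ["A", "A", "A", "B", "B", "C"]   # same mix, grouped into runs
print(transition_count(unbatched))  # 5 transitions
print(transition_count(batched))    # 2 transitions
```

Grouping the same product mix into runs cuts the number of transitions, which is the intuition behind the scheduling recommendation in the abstract.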
by Robert Addy.
M.B.A. Massachusetts Institute of Technology, Sloan School of Management
S.M. Massachusetts Institute of Technology, Department of Mechanical Engineering
Lacayo, Virginia. "Communicating Complexity: A Complexity Science Approach to Communication for Social Change." Ohio University / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1367522049.
Pontoizeau, Thomas. "Community detection : computational complexity and approximation." Thesis, Paris Sciences et Lettres (ComUE), 2018. http://www.theses.fr/2018PSLED007/document.
This thesis deals with community detection in the context of social networks. A social network can be modeled by a graph in which vertices represent members and edges represent relationships. In particular, I study four different definitions of a community. First, a community structure can be defined as a partition of the vertices such that each vertex has a greater proportion of neighbors in its part than in any other part. This definition can be adapted in order to study only one community. Then, a community can be viewed as a subgraph in which every two vertices are at distance 2 in this subgraph. Finally, in the context of online meetup services, I investigate a definition for potential communities in which members do not know each other but are related through their common neighbors. For these proposed definitions, I study the computational complexity and approximability of problems that either ask whether such communities exist or ask to find them in graphs.
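One reading of the first definition above can be sketched as a checker. The proportional comparison used here (neighbor count normalized by part size, excluding the vertex itself) is my interpretation for illustration; the thesis makes the definitions precise.

```python
# Check whether a partition is a "community structure": every vertex has a
# strictly larger fraction of neighbors in its own part than in any other.

def is_community_structure(adj, parts):
    """adj: {vertex: set of neighbors}; parts: list of disjoint vertex sets."""
    part_of = {v: i for i, part in enumerate(parts) for v in part}
    for v, nbrs in adj.items():
        def density(i):
            others = parts[i] - {v}
            return len(nbrs & others) / len(others) if others else 0.0
        own = density(part_of[v])
        if any(density(i) >= own for i in range(len(parts)) if i != part_of[v]):
            return False
    return True

# Two triangles joined by a bridge edge 3-4: the triangles are communities.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4},
       4: {3, 5, 6}, 5: {4, 6}, 6: {4, 5}}
good_parts = [{1, 2, 3}, {4, 5, 6}]
bad_parts = [{1, 2, 4}, {3, 5, 6}]
print(is_community_structure(adj, good_parts))  # True
print(is_community_structure(adj, bad_parts))   # False
```

Verifying a candidate partition is easy; the complexity questions studied in the thesis concern deciding whether such a partition exists and finding one.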
Melkebeek, Dieter van. "Randomness and completeness in computational complexity." New York : Springer, 2000. http://www.springerlink.com/openurl.asp?genre=issue&issn=0302-9743&volume=1950.
Monet, Mikaël. "Combined complexity of probabilistic query evaluation." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLT003/document.
Query evaluation over probabilistic databases (probabilistic query evaluation, or PQE) is known to be intractable in many cases, even in data complexity, i.e., when the query is fixed. Although some restrictions of the queries and instances have been proposed to lower the complexity, these known tractable cases usually do not apply to combined complexity, i.e., when the query is not fixed. My thesis investigates the question of which queries and instances ensure the tractability of PQE in combined complexity. My first contribution is to study PQE of conjunctive queries on binary signatures, which we rephrase as a probabilistic graph homomorphism problem. We restrict the query and instance graphs to be trees and show the impact on the combined complexity of diverse features such as edge labels, branching, or connectedness. While the restrictions imposed in this setting are quite severe, my second contribution shows that, if we are ready to increase the complexity in the query, then we can evaluate a much more expressive language on more general instances. Specifically, I show that PQE for a particular class of Datalog queries on instances of bounded treewidth can be solved with linear complexity in the instance and doubly exponential complexity in the query. To prove this result, we use techniques from tree automata and knowledge compilation. The third contribution is to show the limits of some of these techniques by proving general lower bounds on knowledge compilation and tree automata formalisms.
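A naive possible-worlds evaluation conveys why PQE is expensive: on a tuple-independent probabilistic graph, the probability of a query is a sum over exponentially many worlds. The sketch below is a hedged, self-contained illustration with made-up edges, not the thesis's algorithm.

```python
from itertools import product

# Brute-force PQE on a tuple-independent graph: each edge is present
# independently with its probability; enumerate all 2^m possible worlds.

def query_probability(prob_edges, query_holds):
    """prob_edges: {edge: probability}; query_holds: set of edges -> bool."""
    edges = list(prob_edges)
    total = 0.0
    for world in product([False, True], repeat=len(edges)):
        present = {e for e, keep in zip(edges, world) if keep}
        weight = 1.0
        for e, keep in zip(edges, world):
            weight *= prob_edges[e] if keep else 1 - prob_edges[e]
        if query_holds(present):
            total += weight
    return total

# Query: does the two-step pattern a -> b -> c hold?
prob_edges = {("a", "b"): 0.5, ("b", "c"): 0.8, ("a", "c"): 0.3}
holds = lambda E: ("a", "b") in E and ("b", "c") in E
print(round(query_probability(prob_edges, holds), 6))  # 0.4 = 0.5 * 0.8
```

The enumeration is exponential in the number of uncertain tuples, which is why the thesis's question of when PQE becomes tractable, in the instance and in the query, matters.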
Osberg, Deborah Carol. "Curriculum, complexity and representation : rethinking the epistemology of schooling through complexity theory." Thesis, Open University, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.417476.
De Coning, Cedric Hattingh. "Complexity, peacebuilding and coherence : implications of complexity for the peacebuilding coherence dilemma." Thesis, Stellenbosch : Stellenbosch University, 2012. http://hdl.handle.net/10019.1/71891.
ENGLISH ABSTRACT: This dissertation explores the utility of using Complexity studies to improve our understanding of peacebuilding and the coherence dilemma, which is regarded as one of the most significant problems facing peacebuilding interventions. Peacebuilding is said to be complex, and this study investigates what this implies, and asks whether Complexity could be of use in improving our understanding of the assumed causal link between coherence, effectiveness and sustainability. Peacebuilding refers to all actions undertaken by the international community and local actors to consolidate the peace – to prevent a (re)lapse into violent conflict – in a given conflict-prone system. The nexus between development, governance, politics and security has become a central focus of the international effort to manage transitions, and peacebuilding is increasingly seen as the collective framework within which these diverse dimensions of conflict management can be brought together in one common framework. The coherence dilemma refers to the persistent gap between policy-level assumptions about the value and causal role of coherence in the effectiveness of peacebuilding and empirical evidence to the contrary from peacebuilding practice. The dissertation argues that the peacebuilding process is challenged by enduring and deep-rooted tensions and contradictions, and that there are thus inherent limits and constraints regarding the degree to which coherence can be achieved in any particular peacebuilding context. On the basis of the application of the general characteristics of Complexity to peacebuilding, the following three recommendations reflect the core findings of the study: (1) Peacebuilders need to concede that they cannot, from the outside, definitively analyse complex conflicts and design 'solutions' on behalf of a local society.
Instead, they should facilitate inductive processes that assist knowledge to emerge from the local context, and such knowledge needs to be understood as provisional and subject to a continuous process of refinement and adaptation. (2) Peacebuilders have to recognise that self-sustainable peace is directly linked to, and influenced by, the extent to which a society has the capacity, and space, to self-organise. For peace consolidation to be self-sustainable, it has to be the result of a home-grown, bottom-up and context-specific process. (3) Peacebuilders need to acknowledge that they cannot defend the choices they make on the basis of pre-determined models or lessons learned elsewhere. The ethical implications of their choices have to be considered in the local context, and the effects of their interventions, intended and unintended, need to be continuously assessed against the lived experience of the societies they are assisting. Peacebuilding should be guided by the principle that those who will have to live with the consequences should have the agency to make decisions about their own future. The art of peacebuilding lies in pursuing the appropriate balance between international support and home-grown solutions. The dissertation argues that the international community has, to date, failed to find this balance. As a result, peacebuilding has often contributed to the very societal weaknesses and fragilities that it was meant to resolve. On the basis of these insights, the dissertation concludes with a call for a significant re-balancing of the relationship between international influence and local agency, where the role of the external peacebuilder is limited to assisting, facilitating and stimulating the capacity of the local society to self-organise. The dissertation thus argues for reframing peacebuilding as something that must be essentially local.
AFRIKAANSE OPSOMMING: Hierdie proefskrif ondersoek die toepaslikheid van Kompleksiteitstudies om ons begrip van vredesbou en die dilemma van koherensie te verbeter, wat as een van die gewigtigste probleme vir die toetrede tot vredesbou beskou kan word. Vredesbou word as kompleks beskou en die implikasies van hierdie siening word in hierdie proefskrif ondersoek. Dienooreenkomstig word die vraag na die nut van Kompleksiteitstudies vir die verbetering van ons begrip van die veronderstelde kousale verband tussen koherensie, doeltreffendheid en volhoubaarheid aangespreek. Vredesbou verwys na alle handelinge wat deur die internasionale gemeenskap en plaaslike belanghebbendes onderneem word om vrede binne ʼn gegewe sisteem, wat neig na konflik, te konsolideer om sodoende ’n (her)verval in gewelddadige konflik te voorkom. Die aanknopingspunt tussen ontwikkeling, staatsbestuur, staatkunde en sekuriteit is tans die sentrale fokus van die internasionale poging om sodanige oorgange te beheer, en vredesbou word toenemend as ’n kollektiewe raamwerk beskou, waarbinne hierdie onderskeie dimensies van konflikbestuur in een gemeenskaplike raamwerk saamgebring kan word. Die koherensiedilemma verwys na die voortdurende gaping tussen beleidsvlakaannames ten opsigte van die waarde en kousale rol van koherensie vir die doeltreffendheid van vredesboupogings en empiriese data vanuit die vredesboupraktyk wat hierdie aanvaarde kousale verband weerspreek. Die proefskrif toon dat vredesboupogings uitgedaag word deur voortdurende en diepgewortelde spanninge en teenstrydighede, en dat daar dus inherente beperkings en stremmings is ten opsigte van die mate waartoe koherensie binne enige spesifieke vredesboukonteks moontlik is. 
Op grond van die toepassing van die algemene kenmerke van Kompleksiteitstudies op die vredesbouproses, weerspieël die volgende drie aanbevelings die kernbevindings van die studie: (1) Vredesbouers moet toegee dat hulle nie daartoe in staat is om komplekse konflikte van buite af bepalend te analiseer en ‘oplossings’ namens ’n plaaslike gemeenskap te ontwerp nie. Hulle behoort eerder induktiewe prosesse te fasiliteer om ondersteuning te bied sodat kennis uit die plaaslike konteks na vore kom, en sodanige kennis moet as voorlopig en onderhewig aan ’n voortdurende proses tot verfyning en aanpassing, verstaan word. (2) Vredesbouers moet besef dat die selfvolhoubaarheid van vrede direk verband hou met, en beïnvloed word deur, die mate waartoe ’n gemeenskap oor die vermoë tot en ruimte vir selforganisering beskik. Vir vredeskonsolidering om selfvolhoubaar te wees, moet die proses wat daartoe aanleiding gee inheems, van ‘onder-na-bo’ en konteks-spesifiek wees. (3) Vredesbouers moet aanvaar dat hulle nie die besluite wat hulle neem op grond van voorafbestaande modelle of lesse wat elders geleer is kan regverdig nie. Die etiese implikasies van hulle besluite moet in terme van die plaaslike konteks beoordeel word, en die effekte van hulle ingrepe – bepland en onbepland – moet voortdurend opgeweeg word teen die daaglikse ervaring van die samelewings wat bygestaan word. Vredesbehoupogings behoort gelei te word deur die beginsel dat diegene wat met die gevolge van die proses sal moet saamleef, die agentskap behoort te hê om besluite oor hulle eie toekoms te neem. Die kuns van vredesbou lê in die vasstel van ’n toepaslike balans tussen internasionale ondersteuning en inheemse oplossings. Die proefskrif se argument is dat die internasionale gemeenskap tot dusver daarin gefaal het om hierdie balans te vind. As gevolg hiervan het pogings tot vredesbou dikwels bygedra tot die presiese swakhede en broosheid in die gemeenskap wat dit veronderstel was om aan te spreek. 
Op grond van hierdie insigte sluit die proefskrif af met ’n beroep tot ’n betekenisvolle herbalansering van die verhouding tussen internasionale invloed en plaaslike agentskap, waarin die rol van die eksterne vredesbouer beperk moet word tot die ondersteuning, fasilitering en stimulering van die plaaslike gemeenskap se vermoë tot selforganisering. Die proefskrif bepleit dus dat vredesbou herontwerp word binne ’n essensieel plaaslike raamwerk.
Falcioni, Valentina. "Complexity of Seifert manifolds." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/17054/.
Full text
Esteban Ángeles, Juan Luis. "Complexity measures for resolution." Doctoral thesis, Universitat Politècnica de Catalunya, 2003. http://hdl.handle.net/10803/6642.
Full text
Mejoramos separaciones de tamaño anteriores entre las versiones generales y arbóreas de Resolución y Planos Secantes. Para hacerlo, extendemos una cota inferior de tamaño para circuitos monótonos booleanos de Raz y McKenzie a circuitos monótonos reales. Este tipo de separaciones es interesante porque algunos demostradores automáticos se basan en la versión arbórea de sistemas de demostración, por tanto la separación indica que no es siempre una buena idea restringirnos a la versión arbórea.
Tras la reciente aparición de R(k), que es un sistema de demostración entre Resolución y Frege con profundidad acotada, era importante estudiar cuán potente es y su relación con otros sistemas de demostración. Resolvemos un problema abierto propuesto por Krajícek, concretamente mostramos que R(2) no tiene la propiedad de la interpolación monótona factible. Para hacerlo, mostramos que R(2) es estrictamente más potente que Resolución.
Una pregunta natural es averiguar si se pueden separar sucesivos niveles de R(k) o R(k) arbóreo. Mostramos separaciones exponenciales entre niveles sucesivos de lo que podemos llamar la jerarquía R(k) arbórea. Esto significa que hay formulas que requieren refutaciones de tamaño exponencial en R(k) arbóreo, pero tienen refutaciones de tamaño polinómico en R(k+1) arbóreo.
Propusimos una nueva definición de espacio para Resolución mejorando la anterior de Kleine-Büning y Lettmann. Dimos resultados generales sobre el espacio para Resolución y Resolución arbórea y también una caracterización combinatoria del espacio para Resolución arbórea usando un juego con dos adversarios para fórmulas en FNC. La caracterización permite demostrar cotas inferiores de espacio para la Resolución arbórea sin necesidad de usar el concepto de Resolución o Resolución arbórea. Durante mucho tiempo no se supo si el espacio para Resolución y Resolución arbórea coincidían o no. Hemos demostrado que no coinciden al haber dado la primera separación entre el espacio para Resolución y Resolución arbórea.
También hemos estudiado el espacio para R(k). Demostramos que al igual que pasaba con el tamaño, R(k) arbóreo también forma una jerarquía respecto al espacio. Por tanto, hay fórmulas que necesitan espacio casi lineal en R(k) arbóreo mientras que tienen refutaciones en R(k+1) arbóreo con espacio constante. Extendemos todas las cotas inferiores de espacio para Resolución conocidas a R(k) de una forma sencilla y unificada, que también sirve para Resolución, usando el concepto de satisfactibilidad dinámica presentado en esta obra.
This work is a contribution to the field of Proof Complexity, which studies the complexity of proof systems in terms of the resources needed to prove or refute propositional formulas. Proof Complexity is an interesting field with several connections to other areas of Computer Science, such as Computational Complexity and Automatic Theorem Proving, among others. This work focuses on complexity measures for refutational proof systems for CNF formulas. We consider several proof systems, namely Resolution, R(k) and Cutting Planes, and our results mainly concern the size and space complexity measures.
We improve previous size separations between the treelike and general versions of Resolution and Cutting Planes. To do so we extend a size lower bound for monotone boolean circuits by Raz and McKenzie to monotone real circuits. This kind of separation is interesting because some automated theorem provers rely on the treelike version of proof systems, so the separations show that it is not always a good idea to restrict to the treelike version.
After the recent appearance of R(k), a proof system lying between Resolution and bounded-depth Frege, it was important to study how powerful it is and how it relates to other proof systems. We solve an open problem posed by Krajícek, namely we show that R(2) does not have the feasible monotone interpolation property. To do so, we show that R(2) is strictly more powerful than Resolution.
A natural question is whether we can separate successive levels of R(k) or treelike R(k). We show exponential separations between successive levels of what we can now call the treelike R(k) hierarchy. That means that there are formulas that require exponential size treelike R(k) refutations whereas they have polynomial size treelike R(k+1) refutations.
We have proposed a new definition of Resolution space improving a previous one by Kleine-Büning and Lettmann. We give general results for Resolution and treelike Resolution space, and also a combinatorial characterization of treelike Resolution space via a Player-Adversary game over CNF formulas. The characterization makes it possible to prove lower bounds for treelike Resolution space with no need to use the concept of Resolution or Resolution refutations at all. For a long time it was not known whether Resolution space and treelike Resolution space coincide. We answer this question in the negative by giving the first space separation between Resolution and treelike Resolution.
We have also studied space for R(k). We show that, as happens with size, treelike R(k) forms a hierarchy with respect to space. So, there are formulas that require nearly linear space in treelike R(k) whereas they have constant space treelike R(k+1) refutations. We extend all known Resolution space lower bounds to R(k) in a simple and unified way, which also holds for Resolution, using the concept of dynamical satisfiability introduced in this work.
Chan, Ming-Yan. "Video encoder complexity reduction /." View abstract or full-text, 2005. http://library.ust.hk/cgi/db/thesis.pl?ELEC%202005%20CHANM.
Full text
Widmer, Steven. "Topics in word complexity." Thesis, Lyon 1, 2010. http://www.theses.fr/2010LYO10287/document.
Full text
The main topics of interest in this thesis are two types of complexity: abelian complexity and permutation complexity. Abelian complexity has been investigated over the past decades. Permutation complexity is a relatively new type of word complexity based on the lexicographical ordering of the shifts of an aperiodic word. We investigate two topics in the area of abelian complexity. First, we consider an abelian variation of maximal pattern complexity. Second, we consider an upper bound for words with the C-balance property. In the area of permutation complexity, we compute the permutation complexity function for a number of words. A formula for the complexity of the Thue-Morse word is established by studying patterns in subpermutations and the action of the Thue-Morse morphism on the subpermutations. We then give a method to calculate the complexity of the image of certain words under the doubling map. The permutation complexity functions of the images of the Thue-Morse word and of a Sturmian word under the doubling map are established.
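As a concrete illustration of the first notion (the code and names below are illustrative, not taken from the thesis), the abelian complexity of a word at length n counts the distinct Parikh vectors, i.e. letter-frequency vectors, among its length-n factors. A minimal Python sketch for the Thue-Morse word:

```python
def thue_morse(length):
    """Prefix of the Thue-Morse word: t(i) = parity of the number of 1-bits of i."""
    return [bin(i).count("1") % 2 for i in range(length)]

def abelian_complexity(w, n):
    """Number of distinct Parikh vectors among the length-n factors of w.
    Over a binary alphabet the Parikh vector of a factor is determined by
    its number of 1s, so a window sum is enough."""
    return len({sum(w[i:i + n]) for i in range(len(w) - n + 1)})
```

Since the Thue-Morse word is uniformly recurrent, a long prefix contains all of its short factors; on a prefix of length 1024 the computed values alternate between 2 for odd n and 3 for even n ≥ 2, in line with the known characterisation of the abelian complexity of Thue-Morse.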
Chan, Siu Man. "Pebble Games and Complexity." Thesis, University of California, Berkeley, 2013. http://pqdtopen.proquest.com/#viewpdf?dispub=3593787.
Full text
We study the connection between pebble games and complexity.
First, we derive complexity results using pebble games. It is shown that three pebble games used for studying computational complexity are equivalent: namely, the two-person pebble game of Dymond-Tompa, the two-person pebble game of Raz-McKenzie, and the one-person reversible pebble game of Bennett have the same pebble costs over any directed acyclic graph. The three pebble games have been used for studying parallel complexity and for proving lower bounds under restricted settings, and we show one more such lower bound on circuit-depth.
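Bennett's one-person reversible game mentioned here is simple enough to solve exactly on toy DAGs by searching the state space. The sketch below is my own illustrative code, assuming the usual convention that the game starts with no pebbles, ends with only the sink pebbled, and a pebble may be placed on or removed from a vertex only when all its predecessors carry pebbles:

```python
from itertools import count

def reversible_pebbling_number(preds, sink):
    """Minimum pebble budget in the one-person reversible pebble game on a
    DAG given as {vertex: set of predecessors}.  Tries budgets 1, 2, ...
    and searches the reachable states under each budget."""
    start, goal = frozenset(), frozenset({sink})
    for budget in count(1):
        seen, stack = {start}, [start]
        while stack:
            state = stack.pop()
            if state == goal:
                return budget
            for v, ps in preds.items():
                if ps <= state:              # all predecessors pebbled: toggle is legal
                    nxt = state ^ {v}        # place or remove the pebble on v
                    if len(nxt) <= budget and nxt not in seen:
                        seen.add(nxt)
                        stack.append(nxt)
```

On the chain 1 → 2 → 3, for instance, the search returns 3: to clear the pebble from vertex 2 at the end, vertex 1 must be re-pebbled, so two pebbles never suffice.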
Second, the pebble costs are applied to proof complexity. Concerning a family of unsatisfiable CNFs called pebbling contradictions, the pebble cost in any of the pebble games controls the scaling of some parameters of resolution refutations. Namely, the pebble cost controls the minimum depth of resolution refutations and the minimum size of tree-like resolution refutations.
Finally, we study the space complexity of computing the pebble costs and of computing the minimum depth of resolution refutations. It is PSPACE-complete to compute the pebble cost in any of the three pebble games, and to compute the minimum depth of resolution refutations.
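The pebbling contradictions mentioned above are straightforward to generate in their basic one-variable-per-vertex form (without the substitution often applied to them). The sketch below, with helper names of my own, builds the CNF for a DAG and checks a toy instance by brute force:

```python
from itertools import product

def pebbling_contradiction(preds, sink):
    """CNF of the basic pebbling contradiction of a DAG: one variable per
    vertex, sources asserted true, truth propagated from the predecessors
    of each vertex to the vertex itself, and the sink asserted false.
    Clauses use DIMACS-style signed integer literals."""
    clauses = [[-p for p in ps] + [v] for v, ps in preds.items()]
    return clauses + [[-sink]]

def satisfiable(clauses, n_vars):
    """Brute-force SAT check; fine for the toy instances used here."""
    return any(
        all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses)
        for bits in product([False, True], repeat=n_vars)
    )
```

For the two-source pyramid {1: set(), 2: set(), 3: {1, 2}} with sink 3, the formula is [[1], [2], [-1, -2, 3], [-3]]: unsatisfiable as a whole, and satisfiable as soon as the sink clause is dropped.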
Viyuygin, Mikhail. "Mixability and predictive complexity." Thesis, Royal Holloway, University of London, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.414435.
Full text
Cooper, D. "Classes of low complexity." Thesis, University of Oxford, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.375251.
Full text
Dam, Wim van. "Nonlocality and communication complexity." Thesis, University of Oxford, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.325982.
Full text
Farr, Graham E. "Topics in computational complexity." Thesis, University of Oxford, 1986. http://ora.ox.ac.uk/objects/uuid:ad3ed1a4-fea4-4b46-8e7a-a0c6a3451325.
Full text
Hardman, Mark. "Complexity and classroom learning." Thesis, Canterbury Christ Church University, 2015. http://create.canterbury.ac.uk/14466/.
Full text
Preda, Daniel C. (Daniel Ciprian) 1979. "Quantum query complexity revisited." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/29689.
Full text
Includes bibliographical references (leaves 30-31).
In this thesis, we look at the polynomial method for quantum query complexity and relate it to the BQP^A = P^A question for a random oracle A. We will also look at some open problems and improve some bounds relating classical and quantum complexity.
by Daniel C. Preda.
M.Eng. and S.B.
Kim, Christopher Eric. "Composites cost modeling : complexity." Thesis, Massachusetts Institute of Technology, 1993. http://hdl.handle.net/1721.1/12357.
Full text
Collender, Michael. "Complexity and hermeneutic phenomenology." Thesis, Stellenbosch : Stellenbosch University, 2008. http://hdl.handle.net/10019.1/1084.
Full text
This thesis argues that the study of the brain as a system, which includes the disciplines of cognitive science and neuroscience, is a kind of textual exegesis, like literary criticism. Drawing on research in scientific modeling in the 20th and early 21st centuries, on the advances of nonlinear science, on cognitive science and neuroscience, and on the work of Aristotle, Saussure, and Paul Ricoeur, I argue that the parts of the brain have multiple functions, like words have multiple uses. Ricoeur, through Aristotle, argues that words only have meaning in the act of predication, the sentence. Likewise, a brain act must corporately employ a certain set of parts in the brain system. Using Aristotle, I make the case that human cognition cannot be reduced to mere brain events because the parts, the whole, and the context are integrally important to understanding the function of any given brain process. It follows then that to understand any given brain event we need to know the fullness of human experience as lived experience, not lab experience. Science should progress from what is best known to what is least known. The methodology of reductionist neuroscience does the exact opposite, at times leading to the denial of personhood or even intelligence. I advocate that the relationship between the phenomenology of human experience (which Merleau-Ponty explored famously) and brain science should be that of data to model. When neuroscience interprets the brain as separated from the lived human world, it “reads into the text” in a sense. The lived human world must intersect intimately with whatever the brain and body are doing. The cognitive science research project has traditionally required the researcher to artificially segment human experience into its pure material constituents and then reassemble it. Is the creature reanimated at the end of the dissections really human consciousness?
I will suggest that we not assemble the whole out of the parts; rather human brain science should be an exegesis inward. So, brain activities are aspects of human acts, because they are performed by humans, as humans, and interpreting them is a human activity.
De Villiers, Tanya. "Complexity and the self." Thesis, Stellenbosch : Stellenbosch University, 2002. http://hdl.handle.net/10019.1/52744.
Full text
ENGLISH ABSTRACT: In this thesis it is argued that the age-old philosophical "Problem of the Self" can benefit from being approached from the perspective of a relatively recent science, namely Complexity Theory. With this in mind the conceptual features of this theory are highlighted and summarised. Furthermore, the argument is made that the predominantly dualistic approach to the self that is characteristic of the Western philosophical tradition serves to hinder, rather than edify, our understanding of the phenomenon. The benefits of approaching the self as an emergent property of a complex system are elaborated upon, principally with the help of work done by Sigmund Freud, Richard Dawkins, Daniel Dennett, and Paul Cilliers. The aim is to develop a materialistic conception of the self that is plausible in terms of current empirical information and resists the temptation to see the self as one or other metaphysical entity within the brain, without "reducing" the self to a crude materialism. The final chapter attempts to formulate a possible foil against the accusation of crude materialism by emphasising that the self is part of a greater system that includes the mental apparatus and its environment (conceived as culture). In accordance with Dawkins's theory, the medium of interaction in this system is conceived of as memes, and the self is then conceived of as a meme-complex, with culture as a medium for meme-transference. The conclusion drawn from this is that the self should be studied through narrative, which provides an approach to the self that is material without being crudely physicalistic.
AFRIKAANSE OPSOMMING: In hierdie tesis word daar aangevoer dat die relatiewe jong wetenskap van Kompleksiteitsteorie 'n nuttige bydra kan lewer tot die eeue-oue filosofiese "Probleem van die Self'. Met die oog hierop word die konseptueie kenmerke van hierdie teorie na vore gebring en opgesom. Die argument word gemaak dat die meerendeels dualistiese benadering van die Westerse filosofiese tradisie tot die self ons verstaan van die fenomeen belemmer eerder as om dit te bemiddel. Die voordele van dié nuwe benadering, wat die self sien as 'n ontluikende (emergent) eienskap van In komplekses sisteem, word bespreek met verwysing na veral die werke van Sigmund Freud, Richard Dawkins, Daniel Dennett en Paul Cilliers. Daar word beoog om In verstaan van die self te ontwikkel wat kontemporêre empiriese insigte in ag neem en wat die versoeking weerstaan om ongeoorloofde metafisiese eienskappe aan die self toe te ken. Terselfdetyd word daar gepoog om geensins die uniekheid van die self te "reduseer" na 'n kru materialisme nie. In die finale hoofstuk word daar gepoog om 'n teenargument vir die voorsiene beswaar van kru materialisme te ontwikkel. Dit word gedoen deur te benadruk dat die self gesien word as deel van 'n groter, komplekse sisteem, wat die masjienerie van denke en die omgewing (wat as kultuur gekonseptualiseer word) insluit. Insgelyks, in die teorie van Dawkins word die medium van interaksie in hierdie sisteem gesien as "memes", waar die self dan n meme-kompleks vorm, en kultuur die medium van meme-oordrag is. Daar word tot die konklusie gekom dat die self op 'n narratiewe manier bestudeer behoort te word, wat dan 'n benadering tot die self voorsien wat materialisties is, sonder om kru fisikalisties te wees.
Gurr, Douglas J. "Semantic frameworks for complexity." Thesis, University of Edinburgh, 1990. http://hdl.handle.net/1842/13968.
Full text
Jones, Charles H., and Lee S. Gardner. "COMPLEXITY OF PCM FORMATTING." International Foundation for Telemetering, 1997. http://hdl.handle.net/10150/609697.
Full text
How difficult is it to develop a pulse code modulation (PCM) stream data format? Specifically, given a size, in bits, and a set of parameter sample rates, how hard is it to find a mapping of the sample rates that fits into the frame size, if one even exists? Using telemetry trees, this paper shows that the number of possible mappings for a given set of parameters and sample rates grows exponentially in the number of parameters. The problem can thus be stated in terms of finding a specific instance, or showing that no such instance exists, among an exponentially large number of potential mappings. Although not a proof, this provides strong evidence that the PCM format design problem is NP-complete (that is, solvable in nondeterministic polynomial time but, unless P = NP, not in deterministic polynomial time), meaning a computer could take years or centuries to solve relatively small instances. However, if the problem requirements are relaxed slightly, telemetry trees can be used to reduce the PCM formatting problem to linear time in terms of the number of parameters. This paper describes a technique that can provide an optimal and fully packed PCM format.
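To see the flavour of the relaxed, tractable case, here is a toy model of my own (not the paper's telemetry-tree technique): a major frame of a fixed number of word slots, where each parameter's per-frame sample count is assumed to be a power of two dividing the frame length and its samples must sit at an even stride. A greedy, highest-rate-first placement then fills the frame whenever one exists under these assumptions:

```python
def pack_pcm(frame_slots, rates):
    """Greedy slot assignment for a simplistic PCM frame model: `rates`
    maps parameter name -> samples per major frame (assumed powers of two
    dividing `frame_slots`).  Each parameter gets equally spaced slots.
    Returns {name: offset} on success, or None if no placement is found."""
    free = [True] * frame_slots
    layout = {}
    # place the highest-rate parameters first, as when descending a telemetry tree
    for name, rate in sorted(rates.items(), key=lambda kv: -kv[1]):
        stride = frame_slots // rate
        for offset in range(stride):
            slots = range(offset, frame_slots, stride)
            if all(free[s] for s in slots):
                for s in slots:
                    free[s] = False
                layout[name] = offset
                break
        else:
            return None   # no conflict-free offset for this parameter
    return layout
```

For example, a frame of 8 slots with rates {a: 4, b: 2, c: 1, d: 1} packs fully (the total bandwidth is exactly 8 samples), while {a: 8, b: 1} is over-subscribed and yields None.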