Doctoral dissertations on the topic "Complexity"


Create a correct reference in APA, MLA, Chicago, Harvard, and many other citation styles.


Consult the top 50 scholarly doctoral dissertations on the topic "Complexity".

An "Add to bibliography" button is available next to each work in the list. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication in .pdf format and read its abstract online, provided the relevant parameters are available in the work's metadata.

Browse doctoral dissertations from a wide variety of disciplines and compile your bibliography correctly.

1

Baumler, Raphaël. "La sécurité de marché et son modèle maritime : entre dynamiques du risque et complexité des parades : les difficultés pour construire la sécurité". Thesis, Evry-Val d'Essonne, 2009. http://www.theses.fr/2009EVRY0024/document.

Full text of the source
Abstract:
Development models, capitalism and industrialism are major dynamics of risk through their capacity to transform the social world. At the level of firms, continuous innovation and competition force permanent adjustment. Subject to their owners, firms focus on financial risk; all other risks are subordinated to this primary target. The internal dynamics of risk evolve at the pace of external demands, and competition justifies harmful cost reductions and destabilizing re-engineering. The aim of safety is to limit the conditions under which risks materialize. Safety is a complex social construction: locally, it merges people and tools within an organization; globally, the challenge is to control both the level of risk and its cost. Like the shipowner with his vessel, the management of a production unit holds the keys to safety. It arbitrates between budgets and plays territories off against one another. By ensuring impunity, equivalence and non-discrimination, international law guarantees competition between all states and flags. With globalization we have entered the era of market safety: safety is treated as a factor of production. Business leaders incorporate it into their overall strategies, notably when choosing where to locate plants and how to allocate budgets within the firm. By selecting who participates in safety, management produces a single picture of safety that matches its own paradigms. Rebuilding safety in production units therefore plays out locally but also globally, by uncovering the complexities of risk dynamics and of the construction of safety.
2

Rubiano, Thomas. "Implicit Computational Complexity and Compilers". Thesis, Sorbonne Paris Cité, 2017. http://www.theses.fr/2017USPCD076/document.

Full text of the source
Abstract:
Complexity theory helps us predict and control the resources, usually time and space, consumed by programs. Static analysis of specific syntactic criteria allows us to categorize some programs. A common approach is to observe the behavior of the program's data. For instance, the detection of non-size-increasing programs is based on a simple principle: counting memory allocations and deallocations, particularly in loops. This way, we can detect programs that compute within a constant amount of space. This method can easily be expressed as a property on control flow graphs. Because analyses of data behavior are syntactic, they can be done at compile time. Because they are purely static, these analyses are not always computable, or not easily so, and approximations are needed. The "Size-Change Principle" of C. S. Lee, N. D. Jones and A. M. Ben-Amram presented a method to predict termination by observing the evolution of resources, and a large body of research grew out of this theory. Until now, these implicit complexity theories were essentially applied to more or less toy languages. This thesis applies implicit computational complexity methods to "real life" programs by manipulating intermediate representation languages in compilers. This gives an accurate idea of the actual expressivity of these analyses and shows that the implicit computational complexity and compiler communities can fuel each other fruitfully. As we show in this thesis, the methods developed are quite general and open the way to several new applications.
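
To make the counting principle concrete, here is a minimal Python sketch of the idea on a hypothetical toy IR; the instruction names and the flat loop representation are invented for illustration and are not the thesis's compiler infrastructure, where the check is phrased as a property of the control flow graph.

    # Toy loop body in a made-up three-address IR.
    LOOP_BODY = [
        ("alloc", "t1"),    # memory allocation inside the loop
        ("use", "t1"),
        ("dealloc", "t1"),  # matching deallocation
    ]

    def net_allocations(instructions):
        """Net allocations per loop iteration: +1 per alloc, -1 per dealloc."""
        balance = 0
        for op, _ in instructions:
            if op == "alloc":
                balance += 1
            elif op == "dealloc":
                balance -= 1
        return balance

    def is_non_size_increasing(loop_body):
        # A loop that frees at least as much as it allocates keeps the
        # program within a constant amount of extra space.
        return net_allocations(loop_body) <= 0

    print(is_non_size_increasing(LOOP_BODY))  # True: space-constant loop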
3

Pankratov, Denis. "Communication complexity and information complexity". Thesis, The University of Chicago, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3711791.

Full text of the source
Abstract:

Information complexity enables the use of information-theoretic tools in communication complexity theory. Prior to the results presented in this thesis, information complexity was mainly used for proving lower bounds and direct-sum theorems in the setting of communication complexity. We present three results that demonstrate new connections between information complexity and communication complexity.

In the first contribution we thoroughly study the information complexity of the smallest nontrivial two-party function: the AND function. While computing the communication complexity of AND is trivial, computing its exact information complexity presents a major technical challenge. In overcoming this challenge, we reveal that information complexity gives rise to rich geometrical structures. Our analysis of information complexity relies on new analytic techniques and new characterizations of communication protocols. We also uncover a connection between information complexity and the theory of elliptic partial differential equations. Once we have computed the exact information complexity of AND, some additional technical work lets us compute the exact communication complexity of several related functions on n-bit inputs. Previous combinatorial and algebraic techniques could only prove bounds of the form Θ(n). Interestingly, this level of precision is typical in information theory, so our result demonstrates that this meta-property of precise bounds carries over to information complexity and, in certain cases, even to communication complexity. Our result not only strengthens the lower bound on the communication complexity of disjointness by making it exact, but also shows that information complexity provides the exact upper bound on communication complexity. In fact, this result is more general and applies to a whole class of communication problems.
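
As a rough illustration of the quantity being optimized (not the thesis's actual computation for AND, which optimizes over all protocols and requires far more delicate analysis), the following Python sketch computes the internal information cost IC = I(M; X | Y) + I(M; Y | X) of one naive protocol for AND, in which Alice simply sends her input bit, under the uniform input distribution:

    import math
    from collections import defaultdict
    from itertools import product

    # Joint distribution of (x, y, transcript) for the trivial one-message
    # protocol where Alice reveals her whole input bit; Bob can then
    # compute AND(x, y) himself. Inputs are uniform and independent.
    joint = defaultdict(float)
    for x, y in product((0, 1), repeat=2):
        m = x                     # the transcript is just Alice's bit
        joint[(x, y, m)] += 0.25

    def cond_mutual_info(p, a_idx, b_idx, c_idx):
        """I(A; B | C) in bits for a finite joint distribution p."""
        def marg(idxs):
            out = defaultdict(float)
            for key, pr in p.items():
                out[tuple(key[i] for i in idxs)] += pr
            return out
        p_abc = marg([a_idx, b_idx, c_idx])
        p_ac, p_bc, p_c = marg([a_idx, c_idx]), marg([b_idx, c_idx]), marg([c_idx])
        total = 0.0
        for (a, b, c), pr in p_abc.items():
            total += pr * math.log2(pr * p_c[(c,)] / (p_ac[(a, c)] * p_bc[(b, c)]))
        return total

    # Internal information cost: what the transcript tells each player
    # about the other player's input.
    ic = cond_mutual_info(joint, 2, 0, 1) + cond_mutual_info(joint, 2, 1, 0)
    print(round(ic, 6))  # 1.0 bit: this naive protocol leaks Alice's entire input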

In the second contribution, we use self-reduction methods to prove strong lower bounds on the information complexity of two of the most studied functions in the communication complexity literature: Gap Hamming Distance (GHD) and Inner Product mod 2 (IP). In our first result we affirm the conjecture that the information complexity of GHD is linear even under the uniform distribution. This strengthens the Ω(n) bound shown by Kerenidis et al. (2012) and answers an open problem by Chakrabarti et al. (2012). We also prove that the information complexity of IP is arbitrarily close to the trivial upper bound n as the permitted error tends to zero, again strengthening the Ω(n) lower bound proved by Braverman and Weinstein (2011). More importantly, our proofs demonstrate that self-reducibility makes the connection between information complexity and communication complexity lower bounds a two-way connection. Whereas numerous results in the past used information complexity techniques to derive new communication complexity lower bounds, we explore a generic way, in which communication complexity lower bounds imply information complexity lower bounds in a black-box manner.

In the third contribution we consider the roles that private and public randomness play in the definition of information complexity. In communication complexity, private randomness can be trivially simulated by public randomness. Moreover, the communication cost of simulating public randomness with private randomness is well understood due to Newman's theorem (1991). In information complexity, the roles of public and private randomness are reversed: public randomness can be trivially simulated by private randomness. However, the information cost of simulating private randomness with public randomness is not understood. We show that protocols that use only public randomness admit a rather strong compression. In particular, efficient simulation of private randomness by public randomness would imply a version of a direct sum theorem in the setting of communication complexity. This establishes yet another connection between the two areas. (Abstract shortened by UMI.)

4

Smith, Peter. "Adaptive leadership: fighting complexity with complexity". Thesis, Monterey, California: Naval Postgraduate School, 2014. http://hdl.handle.net/10945/42728.

Full text of the source
Abstract:
Contemporary crises have become increasingly complex, and the methods of leading through them have failed to keep pace. If it is assumed that leadership matters, that is, that it has a legitimate effect on the outcome of a crisis, then leaders have a duty to respond to that adaptation with modifications of their own. Using literature sources, the research explores crisis complexity, crisis leadership, and alternative leadership strategies. Specifically, the research evaluates the applicability of complexity science to current crises. Having identified the manner in which crises have changed, it focuses on the gap between contemporary crises and the current methods of crisis leadership. The paper pursues adaptive methods of leading in complex crises and examines a number of alternative strategies for addressing the gap. The research suggests that a combination of recognizing the complexity of contemporary crises, applying resourceful solutions, and continually reflecting on opportunities to innovate may be an effective way to lead through complex crises using complex leadership.
5

Chen, Lijie S. M. Massachusetts Institute of Technology. "Fine-grained complexity meets communication complexity". Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122754.

Full text of the source
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 215-229).
Fine-grained complexity aims to understand the exact exponent of the running time of fundamental problems in P. Based on several important conjectures such as the Strong Exponential Time Hypothesis (SETH), the All-Pairs Shortest Paths conjecture, and the 3-SUM conjecture, tight conditional lower bounds have been proved for numerous exact problems from all fields of computer science, showing that many textbook algorithms are in fact optimal. For many natural problems, a fast approximation algorithm would be as important as a fast exact algorithm, so it would be interesting to show hardness for approximation algorithms as well. But we had few techniques to prove tight hardness for approximation problems in P. In particular, the celebrated PCP theorem, which proves similar approximation hardness in the world of NP-completeness, is not fine-grained enough to yield interesting conditional lower bounds for approximation problems in P.

In 2017, a breakthrough work of Abboud, Rubinstein and Williams [12] established a framework called "Distributed PCP" and applied it to show conditional hardness (under SETH) for several fundamental approximation problems in P. The most interesting aspect of their work is a connection between fine-grained complexity and communication complexity, which shows that Merlin-Arthur communication protocols can be utilized to give fine-grained reductions between exact and approximation problems. In this thesis, we further explore the connection between fine-grained complexity and communication complexity. More specifically, we have two sets of results. In the first set of results, we consider communication protocols other than the Merlin-Arthur protocols of [12] and show that they can be used to construct other fine-grained reductions between problems.

· Σ₂ protocols and an equivalence class for Orthogonal Vectors (OV). First, we observe that efficient Σ₂^cc protocols for a function imply fine-grained reductions from a certain related problem to OV. Together with other techniques, including locality-sensitive hashing, we establish an equivalence class for OV with O(log n) dimensions, including Max-IP/Min-IP, approximate Max-IP/Min-IP, and approximate bichromatic closest/furthest pair.

· NP·UPP protocols and hardness for computational geometry problems in 2^O(log* n) dimensions. Second, we consider NP·UPP protocols, which relax Merlin-Arthur protocols so that Alice and Bob only need to be convinced with probability > 1/2 instead of > 2/3. We observe that NP·UPP protocols are closely connected to the Z-Max-IP problem in very small dimensions, and we show that Z-Max-IP, ℓ₂-Furthest Pair and Bichromatic ℓ₂-Closest Pair in 2^O(log* n) dimensions require n^(2-o(1)) time under SETH, by constructing an efficient NP·UPP protocol for the Set-Disjointness problem. This improves on the previous hardness result for these problems in ω(log² log n) dimensions by Williams [172].

· IP protocols and hardness for approximation problems under stronger conjectures. Third, building on the connection between IP^cc protocols and a certain alternating product problem observed by Abboud and Rubinstein [11], and on the classical IP = PSPACE theorem [123, 155], we show that several fine-grained problems are hard under conjectures much stronger than SETH (e.g., that the satisfiability of n^o(1)-depth circuits requires 2^((1-o(1))n) time).

In the second set of results, we utilize communication protocols to construct new algorithms.

· BQP^cc protocols and approximate counting algorithms. Our first connection is that a fast BQP^cc protocol for a function f implies a fast deterministic additive approximate counting algorithm for a related pair-counting problem. Applying known BQP^cc protocols, we obtain fast deterministic additive approximate counting algorithms for Count-OV (#OV), Sparse Count-OV and Formulas of SYM circuits.

· AM^cc/PH^cc protocols and efficient SAT algorithms. Our second connection is that a fast AM^cc (or PH^cc) protocol for a function f implies a faster-than-brute-force algorithm for a related problem. In particular, we show that if the Longest Common Subsequence (LCS) problem admits a fast (computationally efficient) PH^cc protocol (polylog(n) complexity), then polynomial-size Formula-SAT admits a 2^(n - n^(1-δ))-time algorithm for any constant δ > 0, which is conjectured to be unlikely by a recent work of Abboud and Bringmann [6].
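
For readers coming from outside the area, the Orthogonal Vectors problem at the center of the equivalence class above is easy to state. A minimal brute-force Python sketch follows; its O(n² · d) running time is what the SETH-based lower bounds say cannot be improved to n^(2-ε) for suitable dimension d (the example vectors are ours):

    from itertools import product

    def orthogonal_pair(A, B):
        """Is there a in A and b in B with <a, b> = 0 over the integers?
        Checks all n^2 pairs, coordinate by coordinate."""
        return any(all(x * y == 0 for x, y in zip(a, b))
                   for a, b in product(A, B))

    A = [(1, 0, 1), (0, 1, 0)]
    B = [(1, 1, 0), (0, 0, 1)]
    print(orthogonal_pair(A, B))  # True: (0, 1, 0) and (0, 0, 1) are orthogonal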
6

Gopalakrishnan, K. S. "Complexity cores in average-case complexity theory". [Ames, Iowa : Iowa State University], 2009. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:1473222.

Full text of the source
7

Brochenin, Rémi. "Separation logic : expressiveness, complexity, temporal extension". Phd thesis, École normale supérieure de Cachan - ENS Cachan, 2013. http://tel.archives-ouvertes.fr/tel-00956587.

Full text of the source
Abstract:
This thesis studies logics that express properties of programs. These logics were originally intended for the formal verification of programs with pointers. Overall, no automated verification method is proved tractable here; rather, we give new insight into separation logic. The complexity and decidability of some essential fragments of this logic for Hoare triples were not known before this work, and its combination with other verification methods had been little studied. Firstly, we isolate the operator of separation logic that makes it undecidable, and we describe the expressive power of this logic by comparing it to second-order logics. Secondly, we extend decidable subsets of separation logic with a temporal logic and with the ability to describe data. This allows us to draw boundaries on the use of separation logic, in particular on the creation of decidable logics combining it with a temporal logic or with the ability to describe data.
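
To fix intuition about the separating conjunction at the heart of separation logic, here is a small Python sketch of its standard semantics: a heap satisfies P * Q exactly when it splits into two disjoint sub-heaps satisfying P and Q. The predicates are invented examples, not the fragments studied in the thesis.

    from itertools import combinations

    def splits(heap):
        """All ways to split a heap (finite map loc -> val) into two disjoint parts."""
        locs = list(heap)
        for r in range(len(locs) + 1):
            for part in combinations(locs, r):
                left = {l: heap[l] for l in part}
                right = {l: heap[l] for l in heap if l not in part}
                yield left, right

    def sep_conj(p, q):
        # h |= P * Q  iff  h = h1 . h2 (disjoint) with h1 |= P and h2 |= Q
        return lambda heap: any(p(h1) and q(h2) for h1, h2 in splits(heap))

    points_to = lambda loc, val: lambda heap: heap == {loc: val}

    heap = {1: 7, 2: 9}
    phi = sep_conj(points_to(1, 7), points_to(2, 9))
    print(phi(heap))  # True: the heap splits into the two singleton cells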
8

Otto, James R. (James Ritchie). "Complexity doctrines". Thesis, McGill University, 1995. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=29104.

Full text of the source
Abstract:
We characterize various complexity classes as the images in set^2, set^V, and set^3 of categories initial in various complexity doctrines. (A doctrine consists of the models of a theory of theories.) We so characterize the linear-time, P-space, linear-space, P-time, and Kalmar elementary functions, as well as the linear-time hierarchy relations. (Our machine model is multi-tape Turing machines with a constant number of tapes.) These doctrines extend, using comprehensions, the first-order doctrines GM and JB. We show, using dependent product diagrams, how to so extend the higher-order doctrine LCC. However, using Church numerals, we show that the resulting LCC comprehensions do not provide enough control over higher-order types to characterize complexity classes. We also show how to use sketches and orthogonality for almost equational specification.
9

Ada, Anil. "Communication complexity". Thesis, McGill University, 2014. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=121119.

Full text of the source
Abstract:
Communication complexity studies how many bits a certain number of parties need to communicate with each other in order to compute a function whose input is distributed among those parties. Although it is a natural area of investigation based on practical considerations, the main motivation comes from the myriad of applications in theoretical computer science. This thesis has three main parts, studying three different aspects of communication complexity.

1. The first part is concerned with the k-party communication complexity of functions F:({0,1}^n)^k -> {0,1} in the 'number on the forehead' (NOF) model. This is a fundamental model with many applications. In this model we study composed functions f of g. These functions include most of the well-known and studied functions in the communication complexity literature. A major goal is to understand which combinations of f and g lead to hard communication functions. In particular, due to important circuit applications, it is of great interest to understand how powerful the NOF model becomes when k is log n or more. Motivated by these goals, we show that there is an efficient O(log^3 n) cost simultaneous protocol for sym of g when k > 1 + log n, where sym is any symmetric function and g is any function. This class of functions includes some functions that were previously conjectured to be hard, and our result rules this class out for possible very important circuit complexity applications. We also give Ramsey-theoretic applications of our efficient protocol. In the setting of k < log n, we study more closely functions of the form majority of g, mod_m of g, and nor of g, where the latter two are generalizations of the well-known functions Inner Product and Disjointness respectively. We characterize the communication complexity of these functions with respect to the choice of g. As the main application, we answer a question posed by Babai et al. (SIAM Journal on Computing, 33:137-166, 2004) and determine the communication complexity of majority of qcsb, where qcsb is the "quadratic character of the sum of the bits" function.

2. The second part is about Fourier analysis of symmetric Boolean functions and its applications in communication complexity and other areas. The spectral norm of a Boolean function f:{0,1}^n -> {0,1} is the sum of the absolute values of its Fourier coefficients. This quantity provides useful upper and lower bounds on the complexity of a function in areas such as communication complexity, learning theory and circuit complexity. We give a combinatorial characterization of the spectral norm of symmetric functions. We show that the logarithm of the spectral norm is of the same order of magnitude as r(f)log(n/r(f)), where r(f) = max(r_0, r_1), and r_0 and r_1 are the smallest integers less than n/2 such that f(x) or f(x)parity(x) is constant for all x with x_1 + ... + x_n in [r_0, n-r_1]. We present some applications to the decision tree and communication complexity of symmetric functions.

3. The third part studies privacy in the context of communication complexity: how much information do the players reveal about their input when following a communication protocol? The unattainability of perfect privacy for many functions motivates the study of approximate privacy. Feigenbaum et al. (Proceedings of the 11th Conference on Electronic Commerce, 167-178, 2010) defined notions of worst-case as well as average-case approximate privacy, and presented several interesting upper bounds and some open problems for further study.
In this thesis, we obtain asymptotically tight bounds on the trade-offs between both the worst-case and average-case approximate privacy of protocols and their communication cost for Vickrey Auction, which is the canonical example of a truthful auction. We also prove exponential lower bounds on the approximate privacy of protocols computing the Intersection function, independent of its communication cost. This proves a conjecture of Feigenbaum et al.
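
The spectral norm defined in part 2 above can be computed directly by brute force for small n, which makes the quantity being characterized concrete. A Python sketch, using the Fourier expansion over {0,1}^n with characters (-1)^(sum of the bits in S); the choice of majority as a test function is ours:

    from itertools import product

    def spectral_norm(f, n):
        """Sum of the absolute Fourier coefficients of f: {0,1}^n -> {0,1},
        computed by brute force; feasible only for small n."""
        points = list(product((0, 1), repeat=n))
        norm = 0.0
        for s in points:                      # s indexes a character chi_S
            coeff = sum(f(x) * (-1) ** sum(xi * si for xi, si in zip(x, s))
                        for x in points) / 2 ** n
            norm += abs(coeff)
        return norm

    majority = lambda x: int(sum(x) > len(x) / 2)
    print(spectral_norm(majority, 5))  # spectral norm of MAJ on 5 bits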
10

Mariotti, Humberto, and Cristina Zauhy. "Managing Complexity". Universidad Peruana de Ciencias Aplicadas (UPC), 2014.

Find the full text of the source
Abstract:
This article is a brief introduction to complexity, complex thinking and complexity management. Its purpose is to present an update on the applications of the complexity sciences, particularly to the universe of corporations and management. It includes an example taken from the globalized world and two more stories from the corporate environment. Some details on how to think about complexity and how to apply the conceptual and operative tools of complex thinking are provided. The article ends with some remarks on the personal, interpersonal and corporate benefits of complex thinking.
11

Sharp, L. Kathryn. "Text Complexity". Digital Commons @ East Tennessee State University, 2014. https://dc.etsu.edu/etsu-works/4290.

Full text of the source
12

Wennberg, Andreas, and Emil Persson. "Coopetition and Complexity : Exploring a Coopetitive Relationship with Complexity". Thesis, Umeå universitet, Handelshögskolan vid Umeå universitet (USBE), 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-52689.

Full text of the source
Abstract:
Cooperation has in previous research been seen as having a negative impact on competition, and vice versa. This thesis builds on a concept called coopetition, in which cooperation and competition are studied simultaneously. Coopetition has been studied in terms of the level of cooperation and competition; however, we found a possible link between coopetition and complexity in previous literature. Thus, the purpose of this study is to explore whether complexity can develop an understanding of what organizations within a company group cooperate and compete about, as well as what they want to cooperate and compete about. The four main cornerstones of the theoretical frame of reference are cooperation, competition, coopetition and complexity. We begin by defining these concepts, describing previous research and discussing various factors of the concepts. Finally, we further develop the possible link between coopetition and complexity. To fulfil our purpose, we study a company group in the travel industry, conducting unstructured interviews with people in leading positions in the company group. Our analysis is done by thematic network analysis in six steps. The empirical data is coded, basic themes are found, and a condensed version of the interviews is presented together with a short presentation of the company group. In the analysis we present two global themes, cooperation and competition, both derived from the basic themes and organizing themes; the factors of complexity are the organizing themes. Our conclusion is that complexity can categorize wanted and actual cooperation in the company group, in the sense that complexity has to be lowered for cooperation to exist. Regarding competition, we did not draw any conclusion related to our purpose due to lack of data; however, we find that competition is mainly seen as negative in the coopetitive situation studied. The implication of this thesis is that complexity can further refine the concept of coopetition, but the causality has to be further tested. From a managerial perspective, leaders should focus on decreasing the factors of complexity if cooperation is wanted. We also suggest that it is important to understand whether people in organizations are positive towards cooperation and negative towards competition, or the other way around.
13

Okabe, Yasuo. "Parallel Computational Complexity and Data-Transfer Complexity of Supercomputing". Kyoto University, 1994. http://hdl.handle.net/2433/74658.

Full text of the source
14

Raynard, Mia. "Deconstructing Complexity: Configurations of Institutional Complexity and Structural Hybridity". SAGE Publications, 2016. http://dx.doi.org/10.1177/1476127016634639.

Full text of the source
Abstract:
This article unpacks the notion of institutional complexity and highlights the distinct sets of challenges confronting hybrid structural arrangements. The framework identifies three factors that contribute to the experience of complexity - namely, the extent to which the prescriptive demands of logics are incompatible, whether there is a settled or widely accepted prioritization of logics within the field, and the degree to which the jurisdictions of the logics overlap. The central thesis is that these "components" of complexity variously combine to produce four distinct institutional landscapes, each with differing implications for the challenges organizations face and for how they might respond. The article explores the situational relevance of an array of hybridizing responses and discusses their implications for organizational legitimacy and performance. It concludes by specifying the boundary conditions of the framework and highlighting fruitful directions for future scholarship.
15

Colijn, Caroline. "Addressing complexity, exploring social change through chaos and complexity theory". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/mq43374.pdf.

Full text of the source
16

Large, David. "Complexity and communities : the application of complexity to community studies". Thesis, Northumbria University, 2015. http://nrl.northumbria.ac.uk/25244/.

Full text of the source
Abstract:
Understanding community dynamics has always been a challenge for policy-makers, and community policy has often been ineffective and wasteful. This research explores and sets out an alternative, complexity-informed approach to community studies. It develops an innovative two-stage interview methodology informed by complexity considerations, which is applied to two case studies of community-based organisations in Newcastle upon Tyne. The two case studies allow a comparative assessment of the complexity-informed methodology. In this way, the research uses a complexity-informed approach to produce a holistic and realistic view of the community being examined. By analysing the contributions of those present, the research is able to capture information that is relevant and that may be used to bring about change. Complexity-informed approaches are thus shown to be open, flexible, insightful, confidence-building and engaging when considering people living and working in communities. The research finds, through complexity considerations, that to be effective, public policy needs to offer choices to local people as to how they want to interpret local government policy in their area. This requires more than evidence gathering and assessment of the evidence gathered; it requires the active involvement of the community. Complexity factors such as interaction and emergence are used to identify important relationships and to assess social, economic and environmental changes from the community's point of view, considered in the context in which they occur and for as long as the situation applies. A complexity-informed approach is shown to open the way for community interventions based on community views and needs, and in doing so it can support genuine decision-making and action by communities for communities. Through discussion and reflection, the thesis finds this to be a suitable basis for public policy formation.
17

Uden, Jacobus Cornelis van. "Organisation & complexity : using complexity science to theorise organisational aliveness /". [S. l. : s. n.], 2004. http://catalogue.bnf.fr/ark:/12148/cb39270773j.

Full text of the source
18

Below, Alexander. "Complexity of triangulation /". [S.l.] : [s.n.], 2002. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=14672.

Full text of the source
19

Rezaei, Hengameh. "Models complexity measurement". Thesis, Linköpings universitet, Institutionen för datavetenskap, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-68701.

Full text of the source
Abstract:
The demand for measuring quality aspects and the need for higher maintainability and understandability of models are increasing within the field of software engineering and management. Among these, complex models are of special interest to designers, as they are more strongly correlated with the eventual reliability of the system and are therefore considered very important. This study presents a method for measuring the complexity of existing software models at Ericsson, seeking to raise the maintainability and understandability of the software engineering project in progress. A literature survey was performed to compile a list of all potentially useful metrics. The long list of metrics was narrowed down through interviews with designers at Ericsson, followed by statistical analysis of the interview results. In addition, workshops were used to evaluate the reliability of the preliminary data analysis, and an empirical formula was generated for predicting model complexity. Metrics such as "non-self-transitions", "transitions per state", and "state depth" are the most important for calculating a model's complexity score (rank), and threshold values were set for these metrics. Challenges and experiences gained in this study demonstrated the importance of incorporating user-generated feedback in empirical complexity modeling studies.
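
A weighted score over such metrics can be illustrated as follows; the weights and the threshold in this Python sketch are placeholders, since the study's empirical formula and threshold values are specific to the Ericsson data:

    # Placeholder weights for the metrics named in the abstract.
    METRIC_WEIGHTS = {
        "non_self_transitions": 0.5,
        "transitions_per_state": 0.3,
        "state_depth": 0.2,
    }

    def complexity_rank(metrics, weights=METRIC_WEIGHTS):
        """Weighted sum of metric values for one model."""
        return sum(weights[name] * value for name, value in metrics.items())

    model = {"non_self_transitions": 14, "transitions_per_state": 3.5, "state_depth": 2}
    score = complexity_rank(model)
    print(score, "-> review for refactoring" if score > 8 else "-> acceptable")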
20

Mayhew, Dillon. "Matroids and complexity". Thesis, University of Oxford, 2005. http://ora.ox.ac.uk/objects/uuid:23640923-17c3-4ad8-9845-320e3b662910.

Full text of the source
Abstract:
We consider different ways of describing a matroid to a Turing machine by listing the members of various families of subsets, and we construct an order on these different methods of description. We show that, under this scheme, several natural matroid problems are complete in classes thought not to be equal to P. We list various results linking parameters of basis graphs to parameters of their associated matroids. For small values of k we determine which matroids have the clique number, chromatic number, or maximum degree of their basis graphs bounded above by k. If P is a class of graphs that is closed under isomorphism and induced subgraphs, then the set of matroids whose basis graphs belong to P is closed under minors. We characterise the minor-closed classes that arise in this way, and exhibit several examples. One way of choosing a basis of a matroid at random is to select a total ordering of the ground set uniformly at random and use the greedy algorithm. We consider the class of matroids having the property that this procedure chooses a basis uniformly at random. Finally we consider a problem mentioned by Oxley. He asked if, for every two elements and n - 2 cocircuits in an n-connected matroid, there is a circuit that contains both elements and that meets every cocircuit. We show that a slightly stronger property holds for regular matroids.
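
The random-basis procedure mentioned in the abstract is simple to state in code. Below is a Python sketch using an independence oracle; the uniform matroid U(2,4) in the example is our own choice of test case (by symmetry, this procedure does choose its bases uniformly at random):

    import random

    def greedy_random_basis(ground_set, is_independent):
        """Pick a matroid basis via the procedure described above: order the
        ground set uniformly at random, then greedily keep each element that
        preserves independence. `is_independent` is an independence oracle."""
        order = list(ground_set)
        random.shuffle(order)                 # uniformly random total order
        basis = []
        for e in order:
            if is_independent(basis + [e]):
                basis.append(e)
        return set(basis)

    # Example oracle: the uniform matroid U(2, 4), in which any set of at
    # most two of the four elements is independent.
    u24 = lambda s: len(s) <= 2
    print(greedy_random_basis(range(4), u24))  # a random 2-element basis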
21

Chew, Leroy Nicholas. "QBF proof complexity". Thesis, University of Leeds, 2017. http://etheses.whiterose.ac.uk/18281/.

Full text of the source
Abstract:
Quantified Boolean Formulas (QBF) and their proof complexity are not as well understood as propositional formulas, yet remain an area of interest due to their relation to QBF solving. Proof systems for QBF provide a theoretical underpinning for the performance of these solvers. We define a novel calculus IR-calc, which enables unification of the principal existing resolution-based QBF calculi and applies to the more powerful Dependency QBF (DQBF). We completely reveal the relative power of important QBF resolution systems, settling in particular the relationship between the two different types of resolution-based QBF calculi. The most challenging part of this comparison is to exhibit hard formulas that underlie the exponential separations of the proof systems. In contrast to classical proof complexity we are currently short of lower bound techniques for QBF proof systems. To this end we exhibit a new proof technique for showing lower bounds in QBF proof systems based on strategy extraction. We also find that the classical lower bound techniques of the prover-delayer game and feasible interpolation can be lifted to a QBF setting and provide new lower bounds. We investigate more powerful proof systems such as extended resolution and Frege systems. We define and investigate new QBF proof systems that mix propositional rules with a reduction rule, we find the strategy extraction technique also works and directly lifts lower bounds from circuit complexity. Such a direct transfer from circuit to proof complexity lower bounds has often been postulated, but had not been formally established for propositional proof systems prior to this work. This leads to strong lower bounds for restricted versions of QBF Frege, in particular an exponential lower bound for QBF Frege systems operating with AC0[p] circuits. In contrast, any non-trivial lower bound for propositional AC0[p]-Frege constitutes a major open problem.
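
For context, the objects these calculi reason about can be evaluated naively by quantifier expansion. The following Python sketch is just the textbook semantics of prenex QBF, not one of the resolution calculi studied in the thesis; it takes exponential time, in line with the PSPACE-completeness of QBF:

    def eval_qbf(prefix, clauses, assign=()):
        """Evaluate a prenex QBF by recursive expansion. `prefix` lists
        ('forall' | 'exists') quantifiers for variables 1..n in order;
        `clauses` is a CNF matrix with DIMACS-style signed literals."""
        if len(assign) == len(prefix):
            return all(any((lit > 0) == assign[abs(lit) - 1] for lit in clause)
                       for clause in clauses)
        branches = (eval_qbf(prefix, clauses, assign + (v,)) for v in (False, True))
        return all(branches) if prefix[len(assign)] == 'forall' else any(branches)

    # forall x1 exists x2 : (x1 or x2) and (not x1 or not x2) -- true, take x2 = not x1
    print(eval_qbf(['forall', 'exists'], [[1, 2], [-1, -2]]))  # True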
22

Beheshti, Soosan 1969. "Minimum description complexity". Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/8012.

Full text of the source
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2002.
Includes bibliographical references (p. 136-140).
The classical problem of model selection among parametric model sets is considered. The goal is to choose a model set which best represents observed data. The critical task is the choice of a criterion for model set comparison. Pioneering information-theoretic approaches to this problem are the Akaike information criterion (AIC) and different forms of minimum description length (MDL). The prior assumption in these methods is that the unknown true model is a member of all the competing sets. We introduce a new method of model selection: minimum description complexity (MDC). The approach is motivated by the Kullback-Leibler information distance. The method suggests choosing the model set for which the model set relative entropy is minimum. We provide a probabilistic method of MDC estimation for a class of parametric model sets. In this calculation the key factor is our prior assumption: unlike the existing methods, no assumption that the true model is a member of the competing model sets is needed. The main strength of the MDC calculation is its method of extracting information from the observed data. Interesting results exhibit the advantages of MDC over MDL and AIC both theoretically and practically. It is illustrated that, under particular conditions, AIC is a special case of MDC. Application of MDC in system identification and signal denoising is investigated. The proposed method answers the challenging question of quality evaluation in identification of stable LTI systems under a fair prior assumption on the unmodeled dynamics. MDC also provides a new solution to a class of denoising problems. We elaborate the theoretical superiority of MDC over existing thresholding denoising methods.
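
For comparison, the classical AIC criterion that MDC is measured against has a one-line form, AIC = 2k - 2 ln L for k parameters and maximized likelihood L. A Python sketch with invented numbers follows (MDC itself, being defined through the Kullback-Leibler distance to the model set, has no such closed form):

    def aic(k, log_likelihood):
        """Akaike information criterion: 2k - 2 ln L."""
        return 2 * k - 2 * log_likelihood

    # Invented log-likelihoods for two candidate model sets.
    candidates = {
        "order-2 model": aic(k=2, log_likelihood=-120.3),
        "order-5 model": aic(k=5, log_likelihood=-118.9),
    }
    best = min(candidates, key=candidates.get)
    print(candidates, "->", best)  # the lower AIC score wins the comparison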
23

Uzuner, Tolga. "Effective network complexity". Thesis, University of Cambridge, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.612749.

Full text of the source
24

Washburn, Fred AlDean. "Supervisee cognitive complexity". Diss., University of Iowa, 2015. https://ir.uiowa.edu/etd/1791.

Full text of the source
Abstract:
Supervision literature has indicated the importance of the supervisory working alliance in the development of effective supervision (Ladany, Ellis, & Friedlander, 1999). While there has been a wealth of research on the role of the supervisory working alliance within supervision, there is a dearth of information on how this alliance is formed (Cooper & Ng, 2009). The purpose of this study is to examine whether supervision cognitive complexity is a unique aspect of cognitive complexity within counseling, and to better understand its role in the formation of the supervisory working alliance. Forty-two participants were selected from CACREP-accredited master's and doctoral programs located in the North Central region of the Association for Counselor Education and Supervision (NCACES). Cognitive complexity was measured via two different instruments: the Counselor Cognitions Questionnaire (CCQ) and the Supervision Cognitive Complexity Questionnaire (SCCQ). The supervisory working alliance was measured by the Supervisory Working Alliance Inventory-Trainee (SWAI-T), which measures the supervisory working alliance from the perspective of the trainee. Results indicated a strong correlation between counseling cognitive complexity and supervision cognitive complexity. Further, the supervisory working alliance was not significantly correlated with either measure of cognitive complexity. Supervision cognitive complexity did provide a significant contribution to the variance accounted for in the client-focus subscale of the SWAI-T. Implications for counselor educators and supervisors are discussed.
25

Winerip, Jason. "Graph Linear Complexity". Scholarship @ Claremont, 2008. https://scholarship.claremont.edu/hmc_theses/216.

Full text of the source
Abstract:
This thesis expands on the notion of linear complexity for a graph as defined by Michael Orrison and David Neel in their paper "The Linear Complexity of a Graph." It considers additional classes of graphs and provides upper bounds for additional types of graphs and graph operations.
26

Aleo, Ignazio. "Complexity in motion". Doctoral thesis, Università di Catania, 2012. http://hdl.handle.net/10761/1072.

Full text of the source
Abstract:
In the last few years a great deal of work has been done in the fields of motor control and motion analysis. Several different hypotheses have been described and reviewed to understand motor coordination in living beings. What is commonly referred to as motor control is indeed a composite problem that, at least from a robotic perspective, is often more suitably divided into: sensing (perception, cognition), deliberation, planning, kinematic control and dynamic control. Through the pages of this work, several different problems related to motion control, to the motion of living beings and to its robotic counterpart are addressed. The strong underlying motif of all the proposed algorithms and architectures (both software and hardware) is the presence of real environment interaction. From reflexes to motion planning, from architecture definition to smart sensor design, and from kinematic modelling of the human body to the implementation of the action-perception loop, this thesis is intended as a first gathering of information and ideas in this interesting and complex field.
27

Dervic, Amina, and Alexander Rank. "ATC complexity measures: Formulas measuring workload and complexity at Stockholm TMA". Thesis, Linköpings universitet, Kommunikations- och transportsystem, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-114534.

Full text of the source
Abstract:
Workload and complexity measures are, as of today, often imprecise and subjective. Currently, two commonly used workload and complexity measuring tools are the Monitor Alert Parameter and the "Bars", both using the same measurement variables: number of aircraft, and time. This study creates formulas for quantifying ATC complexity. The study is done in an approach environment and is developed and tested on the Stockholm TMA through the creation of 20 traffic scenarios. Ten air traffic controllers working in the Stockholm TMA studied the complexity of the scenarios individually and ranked the scenarios relative to each other. Five controllers evaluated scenarios A1-A10; these scenarios were used as references when creating the formulas. The other half of the scenarios, B1-B10, ranked by another five controllers, were used as validation scenarios. Factors relevant to an approach environment were identified, and the data from the scenarios were extracted according to the identified factors. Moreover, a regression analysis was made with the ambition of revealing appropriate weights for each variable. In the first regression, called formula #1, some parameter values were identical and some parameter weights became negative; the basic requirements were not met, and consequently additional regressions were done, eventually forming formula #2. Formula #2 showed stable values and plausible parameter weights. When compared to a current workload measuring model, formula #2 showed better performance. Despite the small number of data samples, we were able to demonstrate a genuine relation between three mutually independent variables and traffic complexity.
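
The regression step can be sketched as an ordinary least-squares fit; the factor names, scenario data and rankings below are invented stand-ins, not the variables or weights identified in the thesis:

    import numpy as np

    # Hypothetical factor matrix: one row per traffic scenario, one column
    # per complexity factor (e.g. aircraft count, altitude-changing pairs,
    # sector transitions). All values are invented for the sketch.
    X = np.array([[5.0, 1, 2], [8, 3, 4], [12, 5, 7], [6, 2, 2], [10, 4, 5]])
    rank = np.array([1.0, 3, 5, 2, 4])   # controllers' relative complexity ranking

    # Least-squares fit rank ~ X @ w + b, yielding one weight per factor.
    design = np.column_stack([X, np.ones(len(X))])
    w, *_ = np.linalg.lstsq(design, rank, rcond=None)

    def complexity_score(factors):
        """Predicted complexity of a new scenario from its factor values."""
        return float(np.append(factors, 1.0) @ w)

    print(complexity_score([9, 3, 4]))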
28

Addy, Robert. "Cost of complexity : mitigating transition complexity in mixed-model assembly lines". Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/126942.

Full text of the source
Abstract:
Thesis: M.B.A., Massachusetts Institute of Technology, Sloan School of Management, in conjunction with the Leaders for Global Operations Program at MIT, May, 2020
Thesis: S.M., Massachusetts Institute of Technology, Department of Mechanical Engineering, in conjunction with the Leaders for Global Operations Program at MIT, May, 2020
Cataloged from the official PDF of thesis.
Includes bibliographical references (page 72).
The Nissan Smyrna automotive assembly plant is a mixed-model production facility which currently produces six different vehicle models. This mixed-model assembly strategy enables the production levels of different vehicles to be adjusted to match changing market demand, but it necessitates a trained workforce familiar with the different parts and processes required for each vehicle. Currently, the mixed-model production process is not batched; assembly line technicians might switch between assembling different vehicles several times every hour. When a switch or 'transition' occurs between different models, variations in the defect rate can occur as technicians must familiarize themselves with a different set of parts and processes. This thesis identifies this confusion as the consequence of 'transition' complexity, which results not only from variety but also from familiarity: how quickly a new situation can be recognized, and how quickly associates can remember what to do and recover the skills needed to succeed. Recommendations follow to mitigate the impact of transition complexity on associate performance, thereby improving vehicle production quality. Transition complexity is an important factor in determining the performance of the assembly system (with respect to defect rates) and could supplement existing models of complexity measurement in assembly systems. Several mitigation measures at the assembly plant level are recommended to limit the impact of transition complexity on system performance. These measures include improvements to the offline kitting system to reduce errors, such as reconfiguring the physical layout and implementing a visual error detection system. Additionally, we recommend altering the production scheduling system to ensure low-volume models are produced at more regular intervals and with consistently low sequence gaps.
29

Lacayo, Virginia. "Communicating Complexity: A Complexity Science Approach to Communication for Social Change". Ohio University / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1367522049.

Full text of the source
30

Pontoizeau, Thomas. "Community detection : computational complexity and approximation". Thesis, Paris Sciences et Lettres (ComUE), 2018. http://www.theses.fr/2018PSLED007/document.

Full text of the source
Abstract:
This thesis deals with community detection in the context of social networks. A social network can be modeled by a graph in which vertices represent members and edges represent relationships. In particular, I study four different definitions of a community. First, a community structure can be defined as a partition of the vertices such that each vertex has a greater proportion of neighbors in its part than in any other part. This definition can be adapted in order to study only one community. Then, a community can be viewed as a subgraph in which every two vertices are at distance 2 in this subgraph. Finally, in the context of online meetup services, I investigate a definition for potential communities, in which members do not know each other but are related by their common neighbors. With regard to these proposed definitions, I study the computational complexity and approximation of problems that relate either to the existence of such communities or to finding them in graphs.
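
The first definition above is easy to check mechanically. A Python sketch follows; the graph and its partition are a toy example of ours:

    from collections import Counter

    def is_community_structure(adj, parts):
        """Every vertex must have strictly more neighbors in its own part
        than in any other part. `adj` maps vertex -> neighbor set;
        `parts` maps vertex -> part id."""
        for v, neighbors in adj.items():
            counts = Counter(parts[u] for u in neighbors)
            own = counts.get(parts[v], 0)
            if any(own <= c for p, c in counts.items() if p != parts[v]):
                return False
        return True

    # Two triangles joined by one edge: each triangle is a community.
    adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
    parts = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
    print(is_community_structure(adj, parts))  # True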
31

Melkebeek, Dieter van. "Randomness and completeness in computational complexity". New York : Springer, 2000. http://www.springerlink.com/openurl.asp?genre=issue&issn=0302-9743&volume=1950.

32

Monet, Mikaël. "Combined complexity of probabilistic query evaluation". Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLT003/document.

Abstract:
Query evaluation over probabilistic databases (probabilistic query evaluation, or PQE) is known to be intractable in many cases, even in data complexity, i.e., when the query is fixed. Although some restrictions on the queries and instances have been proposed to lower the complexity, these known tractable cases usually do not apply to combined complexity, i.e., when the query is not fixed. My thesis investigates the question of which queries and instances ensure the tractability of PQE in combined complexity. My first contribution is to study PQE of conjunctive queries on binary signatures, which we rephrase as a probabilistic graph homomorphism problem. We restrict the query and instance graphs to be trees and show the impact on the combined complexity of diverse features such as edge labels, branching, or connectedness. While the restrictions imposed in this setting are quite severe, my second contribution shows that, if we are ready to increase the complexity in the query, then we can evaluate a much more expressive language on more general instances. Specifically, I show that PQE for a particular class of Datalog queries on instances of bounded treewidth can be solved with linear complexity in the instance and doubly exponential complexity in the query. To prove this result, we use techniques from tree automata and knowledge compilation. The third contribution is to show the limits of some of these techniques, by proving general lower bounds on the size of the representation formalisms used in knowledge compilation and tree automata theory.
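To see where the blow-up comes from, consider the possible-worlds semantics directly: on a tuple-independent graph with m probabilistic edges, brute-force PQE sums over 2^m worlds. The Python sketch below (a toy illustration, not an algorithm from the thesis; the query, edge list, and probabilities are assumptions) evaluates a fixed "path of length 2" conjunctive query this way:

    # Toy possible-worlds evaluation on a tuple-independent directed graph.
    from itertools import product

    def query_prob(edges, prob, holds):
        total = 0.0
        for world in product([False, True], repeat=len(edges)):  # 2^m worlds
            w = 1.0
            for e, keep in zip(edges, world):
                w *= prob[e] if keep else 1.0 - prob[e]
            if holds([e for e, keep in zip(edges, world) if keep]):
                total += w
        return total

    def has_path2(present):  # the CQ: exists x, y, z with E(x, y) and E(y, z)
        out = {}
        for u, v in present:
            out.setdefault(u, set()).add(v)
        return any(v in out for vs in out.values() for v in vs)

    edges = [('a', 'b'), ('b', 'c'), ('a', 'c')]
    prob = {e: 0.5 for e in edges}
    print(query_prob(edges, prob, has_path2))  # 0.25: needs both (a,b) and (b,c)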
33

Osberg, Deborah Carol. "Curriculum, complexity and representation : rethinking the epistemology of schooling through complexity theory". Thesis, Open University, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.417476.

34

De Coning, Cedric Hattingh. "Complexity, peacebuilding and coherence : implications of complexity for the peacebuilding coherence dilemma". Thesis, Stellenbosch : Stellenbosch University, 2012. http://hdl.handle.net/10019.1/71891.

Abstract:
Thesis (PhD)--Stellenbosch University, 2012.
This dissertation explores the utility of using Complexity studies to improve our understanding of peacebuilding and the coherence dilemma, which is regarded as one of the most significant problems facing peacebuilding interventions. Peacebuilding is said to be complex, and this study investigates what this implies and asks whether Complexity could be of use in improving our understanding of the assumed causal link between coherence, effectiveness and sustainability. Peacebuilding refers to all actions undertaken by the international community and local actors to consolidate the peace – to prevent a (re)lapse into violent conflict – in a given conflict-prone system. The nexus between development, governance, politics and security has become a central focus of the international effort to manage transitions, and peacebuilding is increasingly seen as the collective framework within which these diverse dimensions of conflict management can be brought together in one common framework. The coherence dilemma refers to the persistent gap between policy-level assumptions about the value and causal role of coherence in the effectiveness of peacebuilding and empirical evidence to the contrary from peacebuilding practice. The dissertation argues that the peacebuilding process is challenged by enduring and deep-rooted tensions and contradictions, and that there are thus inherent limits and constraints on the degree to which coherence can be achieved in any particular peacebuilding context. On the basis of the application of the general characteristics of Complexity to peacebuilding, the following three recommendations reflect the core findings of the study: (1) Peacebuilders need to concede that they cannot, from the outside, definitively analyse complex conflicts and design 'solutions' on behalf of a local society. Instead, they should facilitate inductive processes that assist knowledge to emerge from the local context, and such knowledge needs to be understood as provisional and subject to a continuous process of refinement and adaptation. (2) Peacebuilders have to recognise that self-sustainable peace is directly linked to, and influenced by, the extent to which a society has the capacity, and space, to self-organise. For peace consolidation to be self-sustainable, it has to be the result of a home-grown, bottom-up and context-specific process. (3) Peacebuilders need to acknowledge that they cannot defend the choices they make on the basis of pre-determined models or lessons learned elsewhere. The ethical implications of their choices have to be considered in the local context, and the effects of their interventions, intended and unintended, need to be continuously assessed against the lived experience of the societies they are assisting. Peacebuilding should be guided by the principle that those who will have to live with the consequences should have the agency to make decisions about their own future. The art of peacebuilding lies in pursuing the appropriate balance between international support and home-grown solutions. The dissertation argues that the international community has, to date, failed to find this balance. As a result, peacebuilding has often contributed to the very societal weaknesses and fragilities that it was meant to resolve.
On the basis of these insights, the dissertation concludes with a call for a significant re-balancing of the relationship between international influence and local agency, in which the role of the external peacebuilder is limited to assisting, facilitating and stimulating the capacity of the local society to self-organise. The dissertation thus argues for reframing peacebuilding as something that must be essentially local.
35

Falcioni, Valentina. "Complexity of Seifert manifolds". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/17054/.

Abstract:
In this thesis, we give an overview of the theory of Seifert fibre spaces and of complexity theory. We start with some preliminary notions about 2-dimensional orbifolds, fibre bundles and circle bundles, needed to understand the part of the thesis concerning the theory of Seifert fibre spaces. We first give the definition and properties of Seifert fibre spaces and, after giving a combinatorial description, classify them up to fibre-preserving homeomorphism and up to homeomorphism. Afterwards, we introduce complexity theory, first in a general way concerning all compact 3-manifolds and then focusing on estimates for the complexity of Seifert fibre spaces. We also give some examples of spine constructions for manifolds with boundary having complexity zero.
36

Esteban, Ángeles Juan Luis. "Complexity measures for resolution". Doctoral thesis, Universitat Politècnica de Catalunya, 2003. http://hdl.handle.net/10803/6642.

Abstract:
This work is a contribution to the field of Proof Complexity, which studies the complexity of proof systems in terms of the resources needed to prove or refute propositional formulas. Proof Complexity is an interesting field with connections to other areas of Computer Science, such as Computational Complexity and Automated Theorem Proving, among others. This work focuses on complexity measures for refutational proof systems for CNF formulas. We consider several proof systems, namely Resolution, R(k) and Cutting Planes, and our results concern mainly the size and space complexity measures.

We improve previous size separations between the treelike and general versions of Resolution and Cutting Planes. To do so, we extend a size lower bound for monotone boolean circuits by Raz and McKenzie to monotone real circuits. This kind of separation is interesting because some automated theorem provers rely on the treelike version of proof systems, so the separations show that it is not always a good idea to restrict to the treelike version.

After the recent appearance of R(k), a proof system lying between Resolution and bounded-depth Frege, it was important to study how powerful it is and how it relates to other proof systems. We solve an open problem posed by Krajícek: we show that R(2) does not have the feasible monotone interpolation property. To do so, we show that R(2) is strictly more powerful than Resolution.

A natural question is whether successive levels of R(k), or of treelike R(k), can be separated. We show exponential separations between successive levels of what we can now call the treelike R(k) hierarchy: there are formulas that require exponential-size treelike R(k) refutations but have polynomial-size treelike R(k+1) refutations.

We propose a new definition of Resolution space, improving on a previous one by Kleine-Büning and Lettmann. We give general results on space for Resolution and treelike Resolution, as well as a combinatorial characterization of treelike Resolution space via a Player-Adversary game over CNF formulas. The characterization makes it possible to prove lower bounds on treelike Resolution space without using the concept of Resolution or Resolution refutations at all. For a long time it was not known whether Resolution space and treelike Resolution space coincide. We answer this question in the negative by giving the first space separation between Resolution and treelike Resolution.

We also study space for R(k). We show that, as happens with size, treelike R(k) forms a hierarchy with respect to space: there are formulas that require nearly linear space in treelike R(k) but have constant-space treelike R(k+1) refutations. We extend all known Resolution space lower bounds to R(k) in a simple and unified way, which also holds for Resolution, using the concept of dynamical satisfiability introduced in this work.
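Since several of these results concern Resolution refutations, here is a bare-bones Python sketch of the Resolution rule itself, saturating a clause set until the empty clause appears (purely illustrative of the proof system the thesis studies; the code and clause encoding are ours, not the thesis's):

    # Clauses are frozensets of signed integers: 1 means x1, -1 means NOT x1.
    def resolvents(c1, c2):
        for lit in c1:
            if -lit in c2:
                yield (c1 - {lit}) | (c2 - {-lit})  # resolve on this literal

    def has_refutation(cnf):
        clauses = {frozenset(c) for c in cnf}
        while True:
            new = set()
            for c1 in clauses:
                for c2 in clauses:
                    for r in resolvents(c1, c2):
                        if not r:
                            return True   # empty clause derived: CNF refuted
                        if r not in clauses:
                            new.add(frozenset(r))
            if not new:
                return False              # saturated without a refutation
            clauses |= new

    # x1 OR x2, NOT x1 OR x2, x1 OR NOT x2, NOT x1 OR NOT x2: unsatisfiable
    print(has_refutation([{1, 2}, {-1, 2}, {1, -2}, {-1, -2}]))  # True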
37

Chan, Ming-Yan. "Video encoder complexity reduction /". View abstract or full-text, 2005. http://library.ust.hk/cgi/db/thesis.pl?ELEC%202005%20CHANM.

38

Widmer, Steven. "Topics in word complexity". Thesis, Lyon 1, 2010. http://www.theses.fr/2010LYO10287/document.

Abstract:
The main topics of this thesis are two notions of complexity for infinite words: abelian complexity and permutation complexity. Abelian complexity has been investigated over the past decades. Permutation complexity is a relatively new type of word complexity that associates with each aperiodic word, in a natural way, an infinite permutation given by the lexicographical ordering of its shifts. We investigate two topics in the area of abelian complexity. First, we consider an abelian variant of the maximal pattern complexity defined by T. Kamae. Second, we consider an upper bound on the abelian complexity of words with the C-balance property. In the area of permutation complexity, we compute the permutation complexity function of a number of words. A formula for the permutation complexity of the Thue-Morse word, conjectured by Makarov, is established by studying patterns in subpermutations and the action of the Thue-Morse morphism on subpermutations. We then give a general method to calculate the permutation complexity of the image of certain words under the letter-doubling morphism, and use it to determine the permutation complexity of the images of the Thue-Morse word and of a Sturmian word under this morphism.
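As a quick illustration of the abelian complexity function itself, here is a small Python sketch (ours, not from the thesis): over a binary alphabet a factor's Parikh vector is determined by its number of 1s, and computing from a long finite prefix is exact only for windows far shorter than the prefix:

    def thue_morse(n):
        return [bin(i).count('1') & 1 for i in range(n)]  # t[i] = parity of popcount(i)

    def abelian_complexity(word, n):
        # number of distinct Parikh vectors among length-n factors
        return len({sum(word[i:i + n]) for i in range(len(word) - n + 1)})

    t = thue_morse(1 << 12)
    print([abelian_complexity(t, n) for n in range(1, 11)])
    # [2, 3, 2, 3, 2, 3, 2, 3, 2, 3]: the abelian complexity of Thue-Morse is
    # known to take only the values 2 and 3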
39

Chan, Siu Man. "Pebble Games and Complexity". Thesis, University of California, Berkeley, 2013. http://pqdtopen.proquest.com/#viewpdf?dispub=3593787.

Abstract:

We study the connection between pebble games and complexity.

First, we derive complexity results using pebble games. We show that three pebble games used for studying computational complexity are equivalent: the two-person pebble game of Dymond-Tompa, the two-person pebble game of Raz-McKenzie, and the one-person reversible pebble game of Bennett have the same pebble cost over any directed acyclic graph. These three pebble games have been used for studying parallel complexity and for proving lower bounds in restricted settings, and we prove one more such lower bound, on circuit depth.

Second, the pebble costs are applied to proof complexity. For a family of unsatisfiable CNFs called pebbling contradictions, the pebble cost in any of the pebble games controls the scaling of some parameters of resolution refutations: namely, the minimum depth of resolution refutations and the minimum size of tree-like resolution refutations.

Finally, we study the space complexity of computing the pebble costs and the minimum depth of resolution refutations. It is PSPACE-complete to compute the pebble cost in any of the three pebble games, and to compute the minimum depth of resolution refutations.
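For intuition, here is a brute-force Python sketch of Bennett's one-person reversible pebble game (ours, for tiny DAGs only; the move rule is the standard one and the example graph is an assumption). A pebble may be placed on or removed from v only when all predecessors of v are pebbled; the cost is the smallest pebble budget that still lets us pebble the sink:

    def reversible_pebble_cost(preds, sink):
        nodes = list(preds)
        for k in range(1, len(nodes) + 1):        # iterative deepening on budget
            start = frozenset()
            seen, stack = {start}, [start]
            while stack:
                conf = stack.pop()
                if sink in conf:
                    return k
                for v in nodes:
                    if preds[v] <= conf:          # all predecessors pebbled
                        nxt = conf ^ {v}          # toggle: place or remove v
                        if len(nxt) <= k and nxt not in seen:
                            seen.add(nxt)
                            stack.append(nxt)
        return None

    path = {'a': set(), 'b': {'a'}, 'c': {'b'}}   # a -> b -> c
    print(reversible_pebble_cost(path, 'c'))      # 2: place a, place b, remove a, place c

The search makes the minimum cost explicit, but the configuration space grows exponentially, consistent with the PSPACE-completeness result above.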

40

Viyuygin, Mikhail. "Mixability and predictive complexity". Thesis, Royal Holloway, University of London, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.414435.

41

Cooper, D. "Classes of low complexity". Thesis, University of Oxford, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.375251.

42

Dam, Wim van. "Nonlocality and communication complexity". Thesis, University of Oxford, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.325982.

43

Farr, Graham E. "Topics in computational complexity". Thesis, University of Oxford, 1986. http://ora.ox.ac.uk/objects/uuid:ad3ed1a4-fea4-4b46-8e7a-a0c6a3451325.

44

Hardman, Mark. "Complexity and classroom learning". Thesis, Canterbury Christ Church University, 2015. http://create.canterbury.ac.uk/14466/.

Abstract:
This thesis provides a theoretical basis for applying complexity theory to classroom learning. Existing accounts of complexity in social systems fail to adequately situate human understanding within those systems. Human understanding and action are embedded within the complex systems that we inhabit; as such, we cannot achieve a full and accurate representation of those systems. This challenges epistemological positions which characterise learning as a simple mechanistic process, those which see it as approaching a view of the world 'as it is', and also positions which see learning as a purely social activity. This thesis develops a materialist position which characterises understandings as emergent from, but not reducible to, the material world. The roles of embodied neural networks, as well as of our linguistic and symbolic systems, are considered in order to develop this materialist position. Context and history are shown to be important within complex systems and to allow novel understandings to emerge. Furthermore, shared understandings are seen as emergent from processes of response, replication and manipulation of patterns of behaviour and patterns of association. Thus the complexity of learning is accounted for within a coherent ontological and epistemological framework. The implications of this materialist position for classroom learning are then expounded. First, our models and descriptions of classrooms are reconciled with the view of our understandings as sophisticated yet incomplete models within complex social systems; models are characterised as themselves material entities which emerge within social systems and may go on to influence behaviour. Second, contemporary accounts of learning as the conceptual representation of the world are challenged.
45

Preda, Daniel C. (Daniel Ciprian) 1979. "Quantum query complexity revisited". Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/29689.

Abstract:
Thesis (M.Eng. and S.B.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2003.
Includes bibliographical references (leaves 30-31).
In this thesis, we look at the polynomial method for quantum query complexity and relate it to the BQP^A = P^A question for a random oracle A. We also look at some open problems and improve some bounds relating classical and quantum complexity.
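The polynomial method rests on the fact that the acceptance probability of a T-query quantum algorithm is a multilinear polynomial of degree at most 2T in the input bits, so deg(f)/2 lower-bounds exact quantum query complexity. Here is a small Python sketch (ours, not the thesis's) that computes deg(f) from a truth table by Möbius inversion:

    from itertools import combinations

    def multilinear_degree(f, n):
        # degree of the unique multilinear real polynomial agreeing with f on {0,1}^n
        deg = 0
        for size in range(n + 1):
            for S in combinations(range(n), size):
                c = 0
                for k in range(size + 1):
                    for T in combinations(S, k):
                        x = [0] * n
                        for i in T:
                            x[i] = 1
                        c += (-1) ** (size - k) * f(x)  # Mobius coefficient of monomial S
                if c != 0:
                    deg = max(deg, size)
        return deg

    OR3 = lambda x: int(any(x))
    print(multilinear_degree(OR3, 3))  # 3, so exact quantum algorithms need >= 2 queries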
by Daniel C. Preda.
M.Eng. and S.B.
46

Kim, Christopher Eric. "Composites cost modeling : complexity". Thesis, Massachusetts Institute of Technology, 1993. http://hdl.handle.net/1721.1/12357.

47

Collender, Michael. "Complexity and hermeneutic phenomenology". Thesis, Stellenbosch : Stellenbosch University, 2008. http://hdl.handle.net/10019.1/1084.

Abstract:
Thesis (DPhil (Philosophy))--Stellenbosch University, 2008.
This thesis argues that the study of the brain as a system, which includes the disciplines of cognitive science and neuroscience, is a kind of textual exegesis, like literary criticism. Drawing on research in scientific modeling in the 20th and early 21st centuries, advances in nonlinear science, work in cognitive science and neuroscience, and the writings of Aristotle, Saussure, and Paul Ricoeur, I argue that the parts of the brain have multiple functions, just as words have multiple uses. Ricoeur, through Aristotle, argues that words only have meaning in the act of predication, the sentence. Likewise, a brain act must corporately employ a certain set of parts in the brain system. Using Aristotle, I make the case that human cognition cannot be reduced to mere brain events because the parts, the whole, and the context are integrally important to understanding the function of any given brain process. It follows that to understand any given brain event we need to know the fullness of human experience as lived experience, not lab experience. Science should progress from what is best known to what is least known; the methodology of reductionist neuroscience does the exact opposite, at times leading to the denial of personhood or even intelligence. I advocate that the relationship between the phenomenology of human experience (which Merleau-Ponty famously explored) and brain science should be that of data to model. When neuroscience interprets the brain as separated from the lived human world, it "reads into the text" in a sense. The lived human world must intersect intimately with whatever the brain and body are doing. The cognitive science research project has traditionally required the researcher to artificially segment human experience into its pure material constituents and then reassemble it. Is the creature reanimated at the end of the dissections really human consciousness? I suggest that we not assemble the whole out of the parts; rather, human brain science should be an exegesis inward. Brain activities are aspects of human acts, because they are performed by humans, as humans, and interpreting them is a human activity.
48

De Villiers, Tanya. "Complexity and the self". Thesis, Stellenbosch : Stellenbosch University, 2002. http://hdl.handle.net/10019.1/52744.

Abstract:
Thesis (MA)--University of Stellenbosch, 2002.
In this thesis it is argued that the age-old philosophical 'Problem of the Self' can benefit from being approached from the perspective of a relatively recent science, namely Complexity Theory. With this in mind, the conceptual features of this theory are highlighted and summarised. Furthermore, the argument is made that the predominantly dualistic approach to the self that is characteristic of the Western philosophical tradition serves to hinder, rather than edify, our understanding of the phenomenon. The benefits of approaching the self as an emergent property of a complex system are elaborated upon, principally with the help of work done by Sigmund Freud, Richard Dawkins, Daniel Dennett, and Paul Cilliers. The aim is to develop a materialistic conception of the self that is plausible in terms of current empirical information and resists the temptation to see the self as one or another metaphysical entity within the brain, without 'reducing' the self to a crude materialism. The final chapter attempts to formulate a possible foil against the accusation of crude materialism by emphasising that the self is part of a greater system that includes the mental apparatus and its environment (conceived as culture). In accordance with Dawkins's theory, the medium of interaction in this system is conceived of as memes, and the self is then conceived of as a meme-complex, with culture as a medium for meme-transference. The conclusion drawn from this is that the self should be studied through narrative, which provides an approach to the self that is material without being crudely physicalistic.
49

Gurr, Douglas J. "Semantic frameworks for complexity". Thesis, University of Edinburgh, 1990. http://hdl.handle.net/1842/13968.

Abstract:
This thesis extends denotational semantics to take account of the resource requirements of programs. We describe the approach we have taken in modelling the resource requirements of programs, and motivate the definition of a monoid M of resource values. A connection is established with Moggi's categorical semantics of computations, and this connection is exploited to study complexity as a monad constructor. A formal system, the λcom-calculus, for reasoning about the resource requirements of programs is developed. Operational and denotational semantics are defined for this system, and we prove a correspondence theorem. We show that Moggi's framework is not sufficiently general to capture all the examples of interest to us; therefore, we define a new class of models based on the idea of an external datum, and we investigate the relationship between the two approaches. The new framework is used to investigate various concepts of importance in complexity theory and the analysis of algorithms. In particular, we show how to capture the notions of input measures, upper bounds on complexity and non-exact complexity.
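The idea of pairing a value with an element of a resource monoid is essentially a writer-style computation in Moggi's sense. A minimal Python sketch (our illustration, not the thesis's formalism; here M is the monoid (int, +, 0) of step counts, and all names are assumptions):

    class Costed:
        def __init__(self, value, cost=0):   # 'return': zero cost, the unit of M
            self.value, self.cost = value, cost

        def bind(self, f):                   # sequence computations, combining costs
            out = f(self.value)
            return Costed(out.value, self.cost + out.cost)

    def add(x, y):
        return Costed(x + y, 1)              # charge one step per addition

    prog = Costed(2).bind(lambda a: add(a, 3)).bind(lambda b: add(b, b))
    print(prog.value, prog.cost)             # 10 2

The monad laws hold precisely because (int, +, 0) is a monoid; swapping in another monoid (e.g. max, for a worst-case measure) changes the cost discipline without touching the program.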
50

Jones, Charles H., and Lee S. Gardner. "COMPLEXITY OF PCM FORMATTING". International Foundation for Telemetering, 1997. http://hdl.handle.net/10150/609697.

Abstract:
International Telemetering Conference Proceedings / October 27-30, 1997 / Riviera Hotel and Convention Center, Las Vegas, Nevada
How difficult is it to develop a pulse code modulation (PCM) stream data format? Specifically, given a frame size, in bits, and a set of parameter sample rates, how hard is it to find a mapping of the sample rates that fits into the frame size -- if one even exists? Using telemetry trees, this paper shows that the number of possible mappings for a given set of parameters and sample rates grows exponentially in the number of parameters. The problem can thus be stated as finding a specific instance, or showing that no such instance exists, among an exponentially large number of potential mappings. Although not a proof, this provides strong evidence that the PCM format design problem is NP-complete (that is, it is in NP -- a candidate format can be checked in polynomial time -- and is at least as hard as every problem in NP). In that case the problem presumably cannot be solved in polynomial time, and even relatively small instances could take a computer years or centuries to solve. However, if the problem requirements are relaxed slightly, telemetry trees can be used to reduce the PCM formatting problem to linear time in the number of parameters. This paper describes a technique that can provide an optimal and fully packed PCM format.
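To make the search problem concrete, here is a toy backtracking model in Python (ours, not the paper's telemetry-tree technique; the frame model and names are assumptions): a frame has a fixed number of word slots, and each parameter must occupy equally spaced slots matching its sample rate, with no two parameters sharing a slot:

    def pack_frame(frame_size, rates):
        used = [False] * frame_size
        placement = {}

        def place(i):
            if i == len(rates):
                return True
            if frame_size % rates[i]:
                return False                      # equal spacing impossible
            stride = frame_size // rates[i]
            for start in range(stride):
                slots = range(start, frame_size, stride)
                if not any(used[s] for s in slots):
                    for s in slots:
                        used[s] = True
                    placement[i] = start
                    if place(i + 1):
                        return True
                    for s in slots:               # undo and try the next offset
                        used[s] = False
                    del placement[i]
            return False

        return placement if place(0) else None

    print(pack_frame(8, [4, 2, 1, 1]))  # {0: 0, 1: 1, 2: 3, 3: 7}

The backtracking search is exponential in the worst case, which is exactly the behavior the paper's complexity argument predicts for the unrelaxed problem.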