Dissertations / Theses on the topic 'Complexity theory'

Consult the top 50 dissertations / theses for your research on the topic 'Complexity theory.'

Next to every source in the list of references there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Gomaa, Walid. "Model theory and complexity theory." College Park, Md. : University of Maryland, 2007. http://hdl.handle.net/1903/7227.

Full text
Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2007.
Thesis research directed by: Computer Science. Title from title page of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
2

Gopalakrishnan, K. S. "Complexity cores in average-case complexity theory." [Ames, Iowa : Iowa State University], 2009. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:1473222.

Full text
3

Batista, Sandra Leonidas. "Martingales and complexity theory." Diss., Restricted to subscribing institutions, 2009. http://proquest.umi.com/pqdweb?did=1971757781&sid=1&Fmt=2&clientId=1564&RQT=309&VName=PQD.

Full text
4

Hansen, Claire Gwendoline. "Shakespeare and Complexity Theory." Thesis, The University of Sydney, 2015. http://hdl.handle.net/2123/13667.

Full text
Abstract:
This thesis argues that Shakespeare is a complex system and that the framework of complexity theory can be of use to Shakespeare studies and literary studies more broadly. Moving from smaller subsystems to the global Shakespeare system itself, this thesis explores how Shakespeare’s narrative, play composition, pedagogy and cultural presence can be re-examined through a complexivist lens. By adopting different methodological approaches across the chapters, this thesis also refines the application of complexity theory and trials implementation strategies for the humanities. The Introduction offers a foundation for complexity theory in literary studies, including core characteristics and methods of implementation. Chapter One reads dance in A Midsummer Night’s Dream as a series of complex interactions which create or respond to moments of crisis or ‘bounded instability’. Chapter Two conceptualises Titus Andronicus as a self-organised complex system and interrogates three self-organising interactions: the relationships between co-authors; authors and space; and fictional and environmental space. Chapter Three’s pedagogical focus reconsiders the role of unexpected emergence in the teaching of Shakespeare and in The Merchant of Venice. Chapter Four examines the function of system attractors in the real-world system of Shakespeare in Stratford-upon-Avon and in the systems of Julius Caesar. Each chapter demonstrates complexivism as an illuminating framework for Shakespeare studies, identifying complex behavioural patterns in the plays, their contexts, and in literary criticism. This thesis also demonstrates complexity theory’s interdisciplinary applicability in fields of inquiry including the philosophy of dance, authorship studies, ecocriticism and cultural studies. ‘Shakespeare and Complexity Theory’ offers a novel and valuable framework to enrich our understanding of Shakespeare, and lays the foundation for complexity theory in Shakespeare studies and the humanities.
5

Böhler, Elmar. "Algebraic closures in complexity theory." [S.l.] : [s.n.], 2005. http://deposit.ddb.de/cgi-bin/dokserv?idn=978707176.

Full text
6

Webb, Paul, and Pam Austin. "Family Maths and Complexity Theory." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-81109.

Full text
Abstract:
The importance of family involvement is highlighted by findings that parents’ behaviours, beliefs and attitudes affect children’s behaviour in a major way. The Family Maths programme, which is the focus of this study, provides support for the transformative education practices targeted by the South African Department of Education by offering an intervention which includes teachers, learners and their families in an affirming learning community. In this study participating parents were interviewed to investigate their perceptions of the Family Maths programme mainly in terms of their engagement, enjoyment and confidence levels. The major themes and ideas that were generated in this study include the development of positive attitudes, parents and children working and talking together, and the skills exhibited by Family Maths facilitators. These findings are analysed within the parameters of complexity science and the pre-requisite conditions for developing a complex learning community, viz. internal diversity, redundancy, decentralized control, organised randomness and neighbour interactions.
7

Yamakami, Tomoyuki. "Average case computational complexity theory." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq28091.pdf.

Full text
8

Lee, Tae-Sik 1974. "Complexity theory in axiomatic design." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/29631.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2003.
Includes bibliographical references (p. 177-182).
During the last couple of decades, the term complexity has come into common use in many fields of science, sometimes as a measurable quantity with a rigorous but narrow definition and other times as merely an ad hoc label. With an emphasis on pragmatic engineering applications, this thesis investigates the complexity concept defined in axiomatic design theory in order to avoid vague use of the term 'complexity' in engineering system design, to provide deeper insight into possible causes of complexity, and to develop a systematic approach to complexity reduction. The complexity concept in axiomatic design theory is defined as a measure of uncertainty in achieving a desired set of functional requirements. In this thesis, it is revisited to refine its definition. Four different types of complexity are identified in axiomatic design complexity theory: time-independent real complexity, time-independent imaginary complexity, time-dependent combinatorial complexity and time-dependent periodic complexity. Time-independent real complexity is equivalent to the information content, which is a measure of the probability of achieving functional requirements. Time-independent imaginary complexity is defined as the uncertainty due to ignorance of the interactions between functional requirements and design parameters. Time-dependent complexity consists of combinatorial complexity and periodic complexity, depending on whether the uncertainty increases indefinitely or occasionally stops increasing at a certain point and returns to the initial level of uncertainty. In this thesis, existing definitions for each of the types of complexity are further elaborated with a focus on time-dependent complexity. In particular, time-dependent complexity is clearly defined using the concepts of time-varying system ranges and time-dependent sets of functional requirements.
Clear definition of the complexity concept that properly addresses the causes of complexity leads to a systematic approach for complexity reduction. As techniques for reducing time-independent complexity are known within and beyond axiomatic design theory, this thesis focuses on dealing with time-dependent complexity. From the definition of time-dependent complexity, combinatorial complexity must be transformed into periodic complexity to prevent the uncertainty from growing unboundedly. The time-dependence of complexity is attributed to two factors: a time-varying system range, and a time-dependent set of functional requirements. This thesis shows that achieving periodicity in time-varying system ranges and maintaining functional periodicity of time-dependent sets of functional requirements prevent a system from developing time-dependent combinatorial complexity. Following this argument, a re-initialization concept as a means to achieve and maintain periodicity is presented. Three examples are drawn from different fields (tribology, manufacturing systems, and cell biology) to support the periodicity argument and illustrate the re-initialization concept.
by Taesik Lee.
Ph.D.
9

Colijn, Caroline. "Addressing complexity, exploring social change through chaos and complexity theory." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/mq43374.pdf.

Full text
10

Causley, Trisha Kathleen. "Complexity and markedness in optimality theory." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape9/PQDD_0004/NQ41121.pdf.

Full text
11

Rajan, Angelique Chettiparamb. "Complexity theory and planning : methodological insights." Thesis, Cardiff University, 2005. http://orca.cf.ac.uk/55567/.

Full text
Abstract:
The main research question that the thesis addresses is 'what is the relevance of complexity theory for planning?'. Having set out to examine a theoretical question, the thesis is guided by the nature of theory development. The realm of generalised discourse, theory contextualisation and empirical examination are thus addressed. The argument starts from an understanding of the nature of complexity theory as it emerges from within the natural sciences. The philosophical grounds of the theory and the way in which complexity theory might relate to the social realm are then discussed. Planning is conceptualised in specific ways and the relevance for second-order planning is advanced. The use of complexity theory in the non-quantitative stream within planning is discussed, leading to the formulation of a methodology for theory transfer derived from the theory of metaphors. Two concepts for theory transfer and contextualisation are chosen on methodological grounds: fractals and autopoiesis. The chapter on fractals uses the derived methodology and advances a causal claim for use in the second level of planning defined and argued for earlier, demonstrated empirically by re-conceptualising a case study: the People's Planning Campaign of Kerala, India. The chapters on autopoiesis focus on the use of concepts from autopoiesis to raise separate sets of questions for planning, illustrated by discussing secondary case studies. Ways in which answers might be found to these questions in actual planning practice are then discussed through re-interpreting the case study. In summary, the thesis advances an argument for the relevance of complexity theory for planning and sees this relevance as a contribution to methodological issues that arise from a systemic conception of planning in the second level, foregrounding society such that planning as an activity is undertaken by society, leading to an ordering that emerges out of local specificity and detail.
12

Larsson, Erik. "Topological Lower Bounds in Complexity Theory." Thesis, KTH, Matematik (Avd.), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-161067.

Full text
Abstract:
The first goal of this thesis is to present two different methods, originally developed by Björner, Lovász and Yao [4], for proving lower bounds on the worst-case complexity in the linear decision tree model, by applying them to the k-EQUAL PROBLEM. Both methods are based on the computation of topological properties of the intersection lattice of a subspace arrangement. The second goal is to use one of these methods (based on estimates of Betti numbers) to improve upon a lower bound, due to Linusson [12], on the linear decision tree complexity c′(n,k) of the k-of-EACH PROBLEM on n elements. We show that c′(n,k) ≥ n log₃(n/6k).
13

Rowan-Button, Wendy Jane. "Method and metanota : exploring complexity theory." Thesis, University of Salford, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.402114.

Full text
14

Eickmeyer, Kord. "Randomness in complexity theory and logics." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2011. http://dx.doi.org/10.18452/16364.

Full text
Abstract:
This thesis comprises two main parts whose common theme is the question of how powerful randomness as a computational resource is. In the first part we deal with random structures which possess -- with high probability -- properties that can be exploited by computer algorithms. We then give two new deterministic constructions for such structures: We derandomise a randomised reduction due to Alekhnovich and Razborov by constructing certain unbalanced bipartite expander graphs, and we give a reduction from a problem concerning bipartite graphs to the problem of computing the minmax-value in three-player games. In the second part we study the expressive power of various logics when they are enriched by random relation symbols. Our goal is to bridge techniques from descriptive complexity with the study of randomised complexity classes, and indeed we show that our randomised logics do capture complexity classes under study in complexity theory. Using strong results on the expressive power of first-order logic and the computational power of bounded-depth circuits, we give both positive and negative derandomisation results for our logics. On the negative side, we show that randomised first-order logic gains expressive power over standard first-order logic even on structures with a built-in addition relation. Furthermore, it is not contained in monadic second-order logic on ordered structures, nor in infinitary counting logic on arbitrary structures. On the positive side, we show that randomised first-order logic can be derandomised on structures with a unary vocabulary and is contained in monadic second-order logic on additive structures.
15

Yang, Jing. "Designing Superior Evolutionary Algorithms via Insights From Black-Box Complexity Theory." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLX054/document.

Full text
Abstract:
It has been observed that the runtime of randomized search heuristics depends on one or more parameters. A number of results show an advantage of dynamic parameter settings, that is, settings in which the parameters of the algorithm are changed during its execution. In this work, we prove that the unary unbiased black-box complexity of the OneMax benchmark function class is $n \ln(n) - cn \pm o(n)$ for a constant $c$ between $0.2539$ and $0.2665$. This runtime can be achieved with a simple (1+1)-type algorithm using a fitness-dependent mutation strength. When translated into the fixed-budget perspective, our algorithm finds solutions which are roughly 13% closer to the optimum than those of the best previously known algorithms. Based on the analyzed optimal mutation strength for OneMax, we show that a self-adjusting choice of the number of bits to be flipped attains the same runtime (apart from $o(n)$ lower-order terms) and the same (asymptotic) 13% fitness-distance improvement over RLS. The adjusting mechanism is to adaptively learn the currently optimal mutation strength from previous iterations. This aims both at exploiting that different problems may need different mutation strengths and that, for a fixed problem, different strengths may become optimal in different stages of the optimization process. We then extend our self-adjusting strategy to population-based evolutionary algorithms in discrete search spaces. Roughly speaking, it consists of creating half the offspring with a mutation rate that is twice the current mutation rate and the other half with half the current rate. The mutation rate is then updated to the rate used in that subpopulation which contains the best offspring. We analyze how the $(1+\lambda)$ evolutionary algorithm with this self-adjusting mutation rate optimizes the OneMax test function. We prove that this dynamic version of the $(1+\lambda)$ EA finds the optimum in an expected optimization time (number of fitness evaluations) of $O(n\lambda/\log\lambda + n\log n)$. This time is asymptotically smaller than the optimization time of the classic $(1+\lambda)$ EA. Previous work shows that this performance is best-possible among all $\lambda$-parallel mutation-based unbiased black-box algorithms. We also propose and analyze a self-adaptive version of the $(1,\lambda)$ evolutionary algorithm in which the current mutation rate is part of the individual and thus also subject to mutation. A rigorous runtime analysis on the OneMax benchmark function reveals that a simple local mutation scheme for the rate leads to an expected optimization time of the best possible $O(n\lambda/\log\lambda + n\log n)$. Our result shows that self-adaptation in evolutionary computation can find complex optimal parameter settings on the fly. At the same time, it proves that a relatively complicated self-adjusting scheme for the mutation rate can be replaced by our simple endogenous scheme.
16

Osberg, Deborah Carol. "Curriculum, complexity and representation : rethinking the epistemology of schooling through complexity theory." Thesis, Open University, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.417476.

Full text
17

Hiltgen, Alain P. L. "Cryptographically relevant contributions to combinational complexity theory /." Zürich, 1993. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=10382.

Full text
18

Lotz, Martin. "On numerical invariants in algebraic complexity theory." [S.l.] : [s.n.], 2005. http://deposit.ddb.de/cgi-bin/dokserv?idn=976761483.

Full text
19

Tourlakis, Iannis. "Some results in non-uniform complexity theory." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0009/MQ53393.pdf.

Full text
20

Bürgisser, Peter. "Completeness and reduction in algebraic complexity theory /." Berlin [u.a.] : Springer, 2000. http://www.loc.gov/catdir/enhancements/fy0817/00029647-d.html.

Full text
21

Baikovicius, Jimmy. "Theory and applications of predictive stochastic complexity." Thesis, McGill University, 1992. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=39533.

Full text
Abstract:
The objective of this thesis is to solve model order selection and change-point detection problems in real time using a form of predictive stochastic complexity. A consistent method for finding the best model order for certain kinds of ARMA processes is presented. An interesting fact is that the estimator "badness" obtained when using a fixed gain increases the "badness" of overparametrization. Model order selection simulations involving AR processes illustrate this fact very clearly. A successful model order selection simulation for ARMA processes is presented.
A change-point detection method for certain kinds of ARMA processes is obtained for time-varying change-points. The abrupt-jump parameter case and change-point detection using undermodeling are also considered. A novel result is that undermodeling could in many cases improve the performance of the change-point detection scheme. Some results of the analysis of the change-point detection scheme are obtained, and extensive simulations show that the approach exhibits surprisingly good detection capabilities.
Lastly, we prove that the original form of the adaptive controller for linear time invariant systems, as obtained in (Ger90a), can be computed in a much less expensive manner. Simulations for an ARX system exemplify the stability and tracking capability of the adaptive controller. Moreover, the effect of dithering on closed loop performance is illustrated.
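
As a rough illustration of order selection by a predictive criterion (the general idea behind predictive stochastic complexity; the thesis's fixed-gain estimators and ARMA setting are not reproduced), the sketch below accumulates one-step-ahead squared prediction errors of sequentially refitted AR(p) models and picks the order with the smallest total. The AR(2) test signal and all constants are assumptions.

    import numpy as np

    def accumulated_prediction_error(x, p):
        """Sum of one-step-ahead squared prediction errors of an AR(p)
        model refitted at each step on past data only."""
        err = 0.0
        for t in range(12, len(x)):            # common warm-up for p <= 5
            X = np.array([x[s - p:s][::-1] for s in range(p, t)])
            coef, *_ = np.linalg.lstsq(X, x[p:t], rcond=None)
            err += (x[t] - coef @ x[t - p:t][::-1]) ** 2
        return err

    rng = np.random.default_rng(0)
    x = np.zeros(400)
    for t in range(2, len(x)):                 # simulate an AR(2) process
        x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.normal()

    scores = {p: accumulated_prediction_error(x, p) for p in range(1, 6)}
    print(min(scores, key=scores.get))         # typically selects order 2
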
22

Mittmann, Johannes [Verfasser]. "Independence in Algebraic Complexity Theory / Johannes Mittmann." Bonn : Universitäts- und Landesbibliothek Bonn, 2013. http://d-nb.info/1045872121/34.

Full text
23

Abusidualghoul, Victoria Jemma. "Complexity theory & the measure of organisations." Thesis, University of Leicester, 2014. http://hdl.handle.net/2381/28920.

Full text
Abstract:
This dissertation explores the literature relating to organisational complexity, organisational measurement, educational institution measurement and qualitative research methods, with a specific focus on participant-centredness. This is with a view to seeing whether complexity can provide a suitable underpinning for the exploration of educational institution assessment; whether effictility is a more useful and measurable construct than efficiency for school assessment; and whether the participant-guided tour is a viable first-round research tool for recognising effictility. Early on, apparently immeasurable efficiency is replaced with measurable effictility: the efficient and effective utility of human and non-human resources within the constraints of a spatial and temporal context. The study is cross-disciplinary because it draws from such fields as management, human geography, sociology, educational management theory, education policy and philosophy, and the theoretical and real threads of complexity, space and time wend their way through the discourse. The first four literature-based chapters build together to provide the foundation for the practicalities explored in two case studies. These are set up as a four-phase process at two technically similar and yet operationally very different schools. Greatly contrasting measures of success are achieved, which in turn richly inform the discussion on the realities of institutional measurement. The research process also throws up some interesting themes through experimentation with innovative interview stimuli. Thus, the study's contribution to knowledge is four-fold. It juxtaposes a theory and context that have rarely been put together, namely complexity and education. It provides evidence to support the controversial notion that organisational efficiency cannot be measured. It introduces the concept of effictility and the methodological innovation of the participant-guided tour.
24

Parsa, Mahdi. "Parameterized Complexity Applied in Algorithmic Game Theory." Thesis, Griffith University, 2011. http://hdl.handle.net/10072/367212.

Full text
Abstract:
The modern mathematical treatment of the study of decisions taken by participants whose interests are in conflict is now generally labeled "game theory". To understand these interactions, the theory provides solution concepts. An important such concept is the notion of Nash equilibrium, which provides a way of predicting the behavior of strategic participants in situations of conflict. However, many decision problems regarding the computation of Nash equilibria are computationally hard. Motivated by these hardness results, we study the parameterized complexity of Nash equilibrium. In parameterized complexity one considers computational problems in a two-dimensional setting: the first dimension is the usual input size n, the second dimension is a positive integer k, the parameter. A problem is fixed-parameter tractable (FPT) if it can be solved in time f(k)·n^O(1), where f denotes a computable, possibly exponential, function. We show that some decision problems regarding the computation of Nash equilibria are hard even in parameterized complexity theory. However, we provide FPT algorithms for some other problems relevant to the computation of Nash equilibria.
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
Institute for Integrated and Intelligent Systems
Science, Environment, Engineering and Technology
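
To make the FPT definition quoted in the abstract concrete, here is the textbook bounded-search-tree algorithm for k-Vertex Cover, which runs in roughly O(2^k · |E|) time, i.e. f(k) · poly(n). This is a standard illustration, not an algorithm from the thesis.

    def has_vertex_cover(edges, k):
        """FPT branching for k-Vertex Cover on a simple graph: decides
        whether <= k vertices touch every edge; 2^k branches, each doing
        polynomial work."""
        if not edges:
            return True
        if k == 0:
            return False
        u, v = edges[0]           # one endpoint of this edge must be chosen
        rest_u = [e for e in edges if u not in e]
        rest_v = [e for e in edges if v not in e]
        return has_vertex_cover(rest_u, k - 1) or has_vertex_cover(rest_v, k - 1)

    print(has_vertex_cover([(1, 2), (2, 3), (3, 4)], 2))  # True, e.g. {2, 3}
    print(has_vertex_cover([(1, 2), (3, 4), (5, 6)], 2))  # False: 3 needed
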
25

Simpson, Mark Aloysius. "Complexity Theory of Leadership and Management Information." ScholarWorks, 2018. https://scholarworks.waldenu.edu/dissertations/6121.

Full text
Abstract:
Implementing effective leadership strategies in management of information systems (MIS) can positively influence overall organizational performance. This study was an exploration of the general problem of failure to lead effectively in the current knowledge-based economy and the resulting deleterious effects on organizational performance and threats to continuing organizational viability. The specific problem was the lack of understanding regarding the interaction of leadership processes with MIS functions and the impact on organizational success. Managers' and employees' lived experiences of leadership in small- to medium-sized enterprises were explored, as well as how those experiences influenced the organization's adaptive responses regarding technology and performance in the knowledge-based economy. The complexity theory of leadership was applied as the theoretical foundation for this study. A phenomenological methodology was used. Data were collected through semi-structured interviews and analyzed through open coding to identify emergent themes from the data. The themes were leaders motivate employees' positive work-related behaviors, effective communication skills ensure accessibility and efficiency of the organizational information system, and leadership practices influence business productivity. This study contributes to social change by providing insights for managers and employees regarding effective strategies for working as teams and networks via the use of nontraditional leadership theory, which promotes company sustainability by demonstrating the benefits of responding to the changing economy.
26

Zuppiroli, Sara <1979&gt. "Probabilistic Recursion Theory and Implicit Computational Complexity." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amsdottorato.unibo.it/6723/1/thesis.pdf.

Full text
Abstract:
In this thesis we provide a characterization of probabilistic computation in itself, from a recursion-theoretical perspective, without reducing it to deterministic computation. More specifically, we show that probabilistic computable functions, i.e., those functions which are computed by Probabilistic Turing Machines (PTM), can be characterized by a natural generalization of Kleene's partial recursive functions which includes, among initial functions, one that returns identity or successor with probability 1/2. We then prove the equi-expressivity of the obtained algebra and the class of functions computed by PTMs. In the second part of the thesis we investigate the relations existing between our recursion-theoretical framework and sub-recursive classes, in the spirit of Implicit Computational Complexity. More precisely, endowing predicative recurrence with a random base function is proved to lead to a characterization of polynomial-time computable probabilistic functions.
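
The probabilistic initial function mentioned above (return the identity or the successor, each with probability 1/2) is easy to simulate. A minimal sketch; the sampling experiment is ours, not the thesis's.

    import random
    from collections import Counter

    def rand_base(x):
        """Initial function from the abstract: identity or successor of x,
        each with probability 1/2."""
        return x if random.random() < 0.5 else x + 1

    # composing probabilistic functions yields a distribution over outputs;
    # estimate the law of rand_base(rand_base(0)) by sampling
    samples = Counter(rand_base(rand_base(0)) for _ in range(10_000))
    print({k: v / 10_000 for k, v in sorted(samples.items())})
    # roughly {0: 0.25, 1: 0.50, 2: 0.25}
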
27

Zuppiroli, Sara <1979&gt. "Probabilistic Recursion Theory and Implicit Computational Complexity." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amsdottorato.unibo.it/6723/.

Full text
Abstract:
In this thesis we provide a characterization of probabilistic computation in itself, from a recursion-theoretical perspective, without reducing it to deterministic computation. More specifically, we show that probabilistic computable functions, i.e., those functions which are computed by Probabilistic Turing Machines (PTM), can be characterized by a natural generalization of Kleene's partial recursive functions which includes, among initial functions, one that returns identity or successor with probability 1/2. We then prove the equi-expressivity of the obtained algebra and the class of functions computed by PTMs. In the second part of the thesis we investigate the relations existing between our recursion-theoretical framework and sub-recursive classes, in the spirit of Implicit Computational Complexity. More precisely, endowing predicative recurrence with a random base function is proved to lead to a characterization of polynomial-time computable probabilistic functions.
28

Chan, Ming-Yan. "Video encoder complexity reduction /." View abstract or full-text, 2005. http://library.ust.hk/cgi/db/thesis.pl?ELEC%202005%20CHANM.

Full text
29

Hearn, John. "Kolmogorov Complexity of Graphs." Scholarship @ Claremont, 2006. https://scholarship.claremont.edu/hmc_theses/182.

Full text
Abstract:
Kolmogorov complexity is a theory based on the premise that the complexity of a binary string can be measured by its compressibility; that is, a string’s complexity is the length of the shortest program that produces that string. We explore applications of this measure to graph theory.
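
Kolmogorov complexity itself is uncomputable, but compressed length gives a computable upper bound in the spirit of the 'compressibility' premise above. A toy sketch; the use of zlib and the adjacency-matrix encoding idea are our assumptions, not the thesis's method.

    import random
    import zlib

    def complexity_upper_bound(s: bytes) -> int:
        """Length of a zlib-compressed encoding: an upper bound (up to an
        additive constant) on the Kolmogorov complexity of s."""
        return len(zlib.compress(s, 9))

    regular = b"01" * 500                    # highly regular: compresses well
    random.seed(0)
    noisy = bytes(random.getrandbits(8) for _ in range(1000))

    print(complexity_upper_bound(regular), complexity_upper_bound(noisy))
    # the same idea applies to graphs by compressing a canonical encoding,
    # e.g. the concatenated rows of an adjacency matrix
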
30

Fournier, Bradford M. "Towards a Theory of Recursive Function Complexity: Sigma Matrices and Inverse Complexity Measures." ScholarWorks@UNO, 2015. http://scholarworks.uno.edu/td/2072.

Full text
Abstract:
This paper develops a data structure based on preimage sets of functions on a finite set. This structure, called the sigma matrix, is shown to be particularly well-suited for exploring the structural characteristics of recursive functions relevant to investigations of complexity. The matrix is easy to compute by hand, defined for any finite function, reflects intrinsic properties of its generating function, and the map taking functions to sigma matrices admits a simple polynomial-time algorithm. Finally, we develop a flexible measure of preimage complexity using the aforementioned matrix. This measure naturally partitions all functions on a finite set by characteristics inherent in each function's preimage structure.
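
The preimage data underlying the construction is easy to compute. Below is a sketch of the raw preimage sets of a function on a finite set; the sigma matrix itself is built from such data, but its exact definition is not reproduced here, and the example function is an arbitrary choice.

    from collections import defaultdict

    def preimage_sets(f, domain):
        """Map each attained value y to its preimage f^{-1}(y)."""
        pre = defaultdict(set)
        for x in domain:
            pre[f(x)].add(x)
        return dict(pre)

    f = lambda x: (x * x) % 6                # a function on {0, ..., 5}
    print(preimage_sets(f, range(6)))
    # {0: {0}, 1: {1, 5}, 4: {2, 4}, 3: {3}}
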
31

Ho, Hai Pang. "Low complexity decoding of cyclic codes." Thesis, University of Surrey, 1998. http://epubs.surrey.ac.uk/844222/.

Full text
Abstract:
This thesis presents three novel low-complexity decoding algorithms for cyclic codes: the Extended Kasami Algorithm (EKA), Permutation Error Trapping (PET) and the Modified Dorsch Algorithm (MDA). The Extended Kasami Algorithm is a novel decoding algorithm combining the Error Trapping Algorithm with cover polynomial techniques. With a revised searching method to locate the best combination of cover positions, the Extended Kasami Algorithm can achieve bounded-distance performance with complexity many times lower than other efficient decoding algorithms. In comparison with the Minimum Weight Decoding (MWD) Algorithm on (31,16) BCH codes, the complexity of EKA is only 5% of MWD at 0 dB Eb/No. Comparing EKA with the Kasami Algorithm on the (23,12) Golay code, EKA reduces the complexity consistently for all values of Eb/No. When dealing with Reed Solomon codes, it is found that the additional complexity incurred by finding the error values is a function that increases exponentially with the number of bits in each symbol. To eliminate the problem of finding the error values, Permutation Error Trapping uses a specific cyclic code property to re-shuffle symbol positions. This complements the Error Trapping approach well, and most decodable error patterns can be trapped by this simple approach. PET achieves performance close to that of MWD on the (15,9) RS code with much lower complexity. For more complex codes, like the four-symbol-error-correcting (15,7) RS code, Modified Permutation Error Trapping combines part of the cover polynomial approach of EKA with PET, retaining good performance with low complexity. For decoding Reed Solomon codes using soft-decision values, the application of a modified Dorsch Algorithm has been evaluated on various issues. Using a binary form of Reed Solomon codes is found to achieve near-maximum-likelihood performance with very few decodings.
32

Mayhew, Dillon. "Matroids and complexity." Thesis, University of Oxford, 2005. http://ora.ox.ac.uk/objects/uuid:23640923-17c3-4ad8-9845-320e3b662910.

Full text
Abstract:
We consider different ways of describing a matroid to a Turing machine by listing the members of various families of subsets, and we construct an order on these different methods of description. We show that, under this scheme, several natural matroid problems are complete in classes thought not to be equal to P. We list various results linking parameters of basis graphs to parameters of their associated matroids. For small values of k we determine which matroids have the clique number, chromatic number, or maximum degree of their basis graphs bounded above by k. If P is a class of graphs that is closed under isomorphism and induced subgraphs, then the set of matroids whose basis graphs belong to P is closed under minors. We characterise the minor-closed classes that arise in this way, and exhibit several examples. One way of choosing a basis of a matroid at random is to select a total ordering of the ground set uniformly at random and use the greedy algorithm. We consider the class of matroids having the property that this procedure chooses a basis uniformly at random. Finally we consider a problem mentioned by Oxley. He asked if, for every two elements and n - 2 cocircuits in an n-connected matroid, there is a circuit that contains both elements and that meets every cocircuit. We show that a slightly stronger property holds for regular matroids.
33

Horton, Steven Bradish. "Graph approximation : issues and complexity." Thesis, Georgia Institute of Technology, 1991. http://hdl.handle.net/1853/24830.

Full text
34

Williams, Richard G. C. "Low complexity block coded modulation." Thesis, University of Manchester, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.329600.

Full text
35

Raynard, Mia. "Deconstructing Complexity: Configurations of Institutional Complexity and Structural Hybridity." SAGE Publications, 2016. http://dx.doi.org/10.1177/1476127016634639.

Full text
Abstract:
This article unpacks the notion of institutional complexity and highlights the distinct sets of challenges confronting hybrid structural arrangements. The framework identifies three factors that contribute to the experience of complexity - namely, the extent to which the prescriptive demands of logics are incompatible, whether there is a settled or widely accepted prioritization of logics within the field, and the degree to which the jurisdictions of the logics overlap. The central thesis is that these "components" of complexity variously combine to produce four distinct institutional landscapes, each with differing implications for the challenges organizations face and for how they might respond. The article explores the situational relevance of an array of hybridizing responses and discusses their implications for organizational legitimacy and performance. It concludes by specifying the boundary conditions of the framework and highlighting fruitful directions for future scholarship.
36

Annan, J. D. "The complexity of counting problems." Thesis, University of Oxford, 1994. http://ora.ox.ac.uk/objects/uuid:52070098-14fa-4cf1-ae6e-9f9ce6a626d8.

Full text
37

Bonmatí, Coll Ester. "Study of brain complexity using information theory tools." Doctoral thesis, Universitat de Girona, 2016. http://hdl.handle.net/10803/404384.

Full text
Abstract:
The human brain is a complex network that shares and processes information by using the structural paths between areas in order to perform a function. The connectome models the brain as a graph where nodes correspond to brain regions and edges to structural or functional connections. In this thesis, we investigate and provide new methods to study brain complexity and improve the understanding of brain functioning by using information theory. Firstly, we focus on brain parcellation, which is a key step in brain studies since it determines the regions to be analyzed. Secondly, we focus on the definition of measures to characterize the complexity of brain networks. Finally, the consistency of the results across healthy subjects using functional or structural connectivity data demonstrates the flexibility and robustness of the proposed methods.
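
As a toy illustration of applying an information-theoretic measure to a brain-like graph (the thesis's actual measures are more involved; the tiny network and the choice of degree-distribution entropy are our assumptions):

    import math
    from collections import Counter

    def degree_entropy(adj):
        """Shannon entropy of a graph's degree distribution -- a simple
        information-theoretic summary of network heterogeneity."""
        degrees = [len(nbrs) for nbrs in adj.values()]
        counts = Counter(degrees)
        n = len(degrees)
        return -sum(c / n * math.log2(c / n) for c in counts.values())

    # tiny 'connectome': nodes are regions, edges are connections
    adj = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B", "D"}, "D": {"C"}}
    print(round(degree_entropy(adj), 3))     # 1.5 bits for this toy graph
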
38

Koch, Andreas. "Five essays on economic theory : complexity and coordination /." Copenhagen, 2005. http://www.gbv.de/dms/zbw/480791325.pdf.

Full text
39

Williams, Alan John. "Theoretical and empirical studies in VLSI complexity theory." Thesis, University of Leeds, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.329174.

Full text
40

Kamath, Pritish. "Some hardness escalation results in computational complexity theory." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/128290.

Full text
Abstract:
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2020
Cataloged from student-submitted PDF of thesis. "February 2020."
Includes bibliographical references (pages 92-105).
In this thesis, we prove new hardness escalation results in computational complexity theory; a phenomenon where hardness results against seemingly weak models of computation for any problem can be lifted, in a black-box manner, to much stronger models of computation by considering a simple gadget-composed version of the original problem. For any unsatisfiable CNF formula F that is hard to refute in the Resolution proof system, we show that a gadget-composed version of F is hard to refute in any proof system whose lines are computed by efficient communication protocols. This allows us to prove new lower bounds for: -- Monotone Circuit Size: we get an exponential lower bound for an explicit monotone function computable by linear-sized monotone span programs and also in (non-monotone) NC². -- Real Monotone Circuit Size: our proof technique extends to real communication protocols, which yields similar lower bounds against real monotone circuits. -- Cutting Planes Length: we get an exponential lower bound for an explicit CNF contradiction that is refutable with logarithmic Nullstellensatz degree. Finally, we describe an intimate connection between computational models and communication complexity analogs of the sub-classes of TFNP, the class of all total search problems in NP. We show that the communication analog of PPA_p captures span programs over F_p for any prime p. This complements previously known results that communication FP captures formulas (Karchmer-Wigderson, 1988) and that communication PLS captures circuits (Razborov, 1995).
by Pritish Kamath.
Ph. D.
41

Pavan, A. "Average-case complexity theory and polynomial-time reductions." Buffalo, N.Y. : Dept. of Computer Science, State University of New York at Buffalo, 2001. http://www.cse.buffalo.edu/tech%2Dreports/2001%2D10.ps.Z.

Full text
42

BAIGUERA, STEFANO. "Developments in non-relativistic field theory and complexity." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2020. http://hdl.handle.net/10281/258694.

Full text
Abstract:
This thesis focuses on the investigation of two broad research areas: non-relativistic field theories and holographic complexity. In the first part we review the general classification of the trace anomaly for 2+1 dimensional field theories coupled to a Newton-Cartan background, and we also review the heat kernel method, which is used to study one-loop effective actions and thus allows us to compute anomalies for a given theory. We apply this technique to extract the exact coefficients of the curvature terms of the trace anomaly for both a non-relativistic free scalar and a fermion, finding a relation with the conformal anomaly of the 3+1 dimensional relativistic counterpart which suggests the existence of a non-relativistic version of the a-theorem, on which we comment. We continue the analysis of the non-relativistic free scalar and fermion with the heat kernel method by turning on a source for the particle mass: on this background, we find that there is no gravitational anomaly, but the trace anomaly is not gauge invariant. We then consider a specific model realizing an N=2 supersymmetric extension of the Bargmann group in 2+1 dimensions with non-vanishing superpotential, obtained by null reduction of a relativistic Wess-Zumino model. We check that the superpotential is protected against quantum corrections as in the relativistic parent theory, thus finding a non-relativistic version of the non-renormalization theorem. Moreover, we find strong evidence that the theory is one-loop exact, due to the causal structure of the non-relativistic propagator together with mass conservation. In the second part of the thesis we review the holographic conjectures proposed by Susskind to describe the time evolution of the Einstein-Rosen bridge in gravitational theories: complexity=volume and complexity=action. These quantities may be used as a tool to investigate dualities, and we investigate both the volume and the action for black holes living in warped AdS_3 spacetime, which is a non-trivial modification of usual AdS_3 with non-relativistic boundary isometries. In particular, we analytically compute the time dependence of complexity, finding an asymptotic growth rate proportional to the product of the Hawking temperature and the Bekenstein-Hawking entropy. In this context, there exist extensions of the holographic proposals when the dual state on the field theory side is mixed, i.e. we consider only a subregion on the boundary. We study the structure of UV divergences, the sub/super-additivity behaviour of complexity and its temperature dependence for warped black holes in 2+1 dimensions when the subregion is taken to be one of the two disconnected boundaries. Finally, we analytically compute the subregion action complexity for a general segment on the boundary in the BTZ black hole background, finding that it is equal to the sum of a linearly divergent term proportional to the size of the subregion and a term proportional to the entanglement entropy. While this result suggests a strong relation of complexity with entanglement entropy, we find, after investigating the case of two disjoint segments in the BTZ background, that there are additional finite contributions; the previous elegant structure therefore holds only for the divergent parts.
43

Hardman, Mark. "Complexity and classroom learning." Thesis, Canterbury Christ Church University, 2015. http://create.canterbury.ac.uk/14466/.

Full text
Abstract:
This thesis provides a theoretical basis for applying complexity theory to classroom learning. Existing accounts of complexity in social systems fail to adequately situate human understanding within those systems. Human understanding and action is embedded within the complex systems that we inhabit. As such, we cannot achieve a full and accurate representation of those systems. This challenges epistemological positions which characterise learning as a simple mechanistic process, those which see it as approaching a view of the world 'as it is' and also positions which see learning as a purely social activity. This thesis develops a materialist position which characterises understandings as emergent from, but not reducible to, the material world. The roles of embodied neural networks as well as our linguistic and symbolic systems are considered in order to develop this materialist position. Context and history are shown to be important within complex systems and allow novel understandings to emerge. Furthermore, shared understandings are seen as emergent from processes of response, replication and manipulation of patterns of behaviour and patterns of association. Thus the complexity of learning is accounted for within a coherent ontological and epistemological framework. The implications of this materialist position for considering classroom learning are expounded. Firstly, our models and descriptions of classrooms are reconciled with the view of our understandings as sophisticated yet incomplete models within complex social systems. Models are characterised as themselves material entities which emerge within social systems and may go on to influence behaviour. Secondly, contemporary accounts of learning as the conceptual representation of the world are challenged.
44

Sanchez-Arroyo, Abdon. "Colourings, complexity, and some related topics." Thesis, University of Oxford, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.280009.

Full text
45

Di, Clemente Riccardo. "Essays on economic and social complexity." Thesis, IMT Alti Studi Lucca, 2014. http://e-theses.imtlucca.it/135/1/Di_Clemente_phdthesis.pdf.

Full text
Abstract:
The multidisciplinary approach to problem solving involves drawing appropriately from different viewpoints to redefine problems outside of normal boundaries and reach solutions based on a new understanding of complex situations. The social and economic sciences have always borrowed and embraced tools and instruments from mathematics and physics to develop their theories. Historically, a genuinely multidisciplinary methodology for economic and social issues was neglected by academic researchers due to a widespread gap in their formal approaches. Recently a new interdisciplinary framework has been developed connecting social and economic theories with complex systems analysis; thanks to its multiple-level viewpoint, this approach reveals new conceptual perspectives and methodologies able to disclose novel challenges and problems. This thesis collects three different multidisciplinary approaches to social and economic behaviors formalized with complex systems tools.

Chapter 2: We introduce a statistical agent-based model to describe the phenomenon of drug abuse and its dynamical evolution at the individual and global level. The agents are heterogeneous with respect to their intrinsic inclination to drugs, their budget attitude and their social environment. The various levels of drug use were inspired by the professional description of the phenomenon, which permits a direct comparison with all available data. We show that certain elements are of great importance in initiating drug use, for example rare events in personal experience which occasionally allow the barrier to drug use to be overcome. The analysis of how the system reacts to perturbations is very important for understanding its key elements, and it provides strategies for effective policy making. The present model represents the first step towards a realistic description of this phenomenon and can easily be generalized in various directions.

Chapter 3: We characterize the statistical law according to which Italian primary school sizes are distributed. We find that school size can be approximated by a log-normal distribution, with a fat lower tail that collects a large number of very small schools. The upper tail of the school-size distribution decreases exponentially and the growth rates are distributed with a Laplace PDF. These distributions are similar to those observed for firms and are consistent with a Bose-Einstein preferential attachment process. The body of the distribution features a bimodal shape suggesting some source of heterogeneity in school organization, which we uncover by an in-depth analysis of the relation between school size and city size. We propose a novel cluster methodology and a new spatial interaction approach among schools which outline the variety of policies implemented in Italy. Different regional policies are also discussed, shedding light on the relation between policy and geographical features.

Chapter 4: By analyzing the distribution of revenues across the production sectors of quoted firms, we suggest a novel dimension that drives the firms' diversification process at the country level. Data show a non-trivial macro-regional clustering of the diversification process, which underlines the relevance of geopolitical environments in determining the microscopic dynamics of economic entities. These findings demonstrate the possibility of singling out in complex ecosystems those micro-features that emerge at macro-levels, which could be of particular relevance for decision-makers in selecting the appropriate parameters to act upon in order to achieve desirable results. The understanding of this micro-macro information exchange is further deepened through the introduction of a simplified dynamic model.
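The distributional fits described in Chapter 3 can be illustrated with a short sketch (synthetic data and placeholder parameters only, not the thesis's school records; the Laplace fit is shown for the procedure, since the placeholder growth rates below are merely Gaussian):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic "school sizes": a log-normal body, as the abstract describes.
sizes = rng.lognormal(mean=5.0, sigma=0.8, size=5000)

# Fit a log-normal (location pinned at 0) and report its parameters.
shape, loc, scale = stats.lognorm.fit(sizes, floc=0)
print(f"fitted sigma ~ {shape:.2f}, median size ~ {scale:.0f}")

# Growth rates g = log(S_next / S); the abstract reports a Laplace PDF
# for these, so we fit one to placeholder year-on-year rates.
next_year = sizes * rng.lognormal(mean=0.0, sigma=0.1, size=sizes.size)
growth = np.log(next_year / sizes)
loc_l, scale_l = stats.laplace.fit(growth)
print(f"Laplace location {loc_l:.3f}, scale {scale_l:.3f}")
```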
APA, Harvard, Vancouver, ISO, and other styles
46

Gabay, Michael. "High-multiplicity Scheduling and Packing Problems : Theory and Applications." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM086/document.

Full text
Abstract:
High-multiplicity encoding is a natural encoding of data. In scheduling, it simply consists in stating the characteristics of each task type once, together with the number of tasks of that type. This encoding is very compact and natural, but it is generally assumed in scheduling that all tasks are specified separately. High-multiplicity scheduling, when considered, raises many complexity issues because of the small input size. The aim of this thesis is to provide insights on how to cope with high-multiplicity scheduling problems. We also seek to contribute to scheduling and packing in general. We expose different techniques and approaches and use them to solve specific scheduling and packing problems. We study the high-multiplicity single machine scheduling problem with forbidden start and completion times and show that this problem is polynomial for large-diversity instances, where the number of task types exceeds the number of forbidden instants. We present the identical coupled-task scheduling problem and use it to illustrate many of the major difficulties and issues arising in high-multiplicity scheduling. We improve the best upper and lower bounds for the bin stretching problem. We study the vector packing problem with heterogeneous bins and propose heuristics for it. Finally, we present a general reduction algorithm for packing problems which can be applied in polynomial time, even with a high-multiplicity encoding of the input.
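As a purely illustrative sketch of the kind of problem the vector-packing chapter addresses (not the thesis's heuristics; the item and bin types below are hypothetical, and note that expanding items one by one, as done here, is exponential in the compact high-multiplicity encoding, which is precisely the difficulty the thesis tackles):

```python
def first_fit_vector_packing(item_types, bin_types):
    """Greedy first-fit for vector packing with heterogeneous bins,
    reading its input in high-multiplicity form.

    item_types: list of (size_vector, count) pairs
    bin_types:  list of (capacity_vector, count) pairs
    Returns the opened bins as [capacity, remaining, contents] triples.
    """
    bins = []                                    # opened bins
    pool = [[list(c), n] for c, n in bin_types]  # unopened bins per type

    # First-fit decreasing: treat item types by decreasing total size.
    order = sorted(range(len(item_types)),
                   key=lambda i: -sum(item_types[i][0]))
    for i in order:
        size, count = item_types[i]
        for _ in range(count):                   # NB: explicit expansion
            placed = False
            for b in bins:                       # try every open bin first
                if all(r >= s for r, s in zip(b[1], size)):
                    b[1] = [r - s for r, s in zip(b[1], size)]
                    b[2][i] = b[2].get(i, 0) + 1
                    placed = True
                    break
            if not placed:                       # otherwise open a new bin
                for p in pool:
                    if p[1] > 0 and all(c >= s for c, s in zip(p[0], size)):
                        p[1] -= 1
                        remaining = [c - s for c, s in zip(p[0], size)]
                        bins.append([list(p[0]), remaining, {i: 1}])
                        placed = True
                        break
            if not placed:
                raise ValueError(f"no bin fits an item of type {i}")
    return bins

# Two item types and two bin types over a 2-dimensional resource vector.
print(first_fit_vector_packing([((3, 2), 4), ((1, 1), 6)],
                               [((5, 5), 2), ((10, 8), 1)]))
```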
APA, Harvard, Vancouver, ISO, and other styles
47

Twist, Benjamin Robert John. "Taking the complexity turn to steer carbon reduction policy : applying practice theory, complexity theory and cultural practices to policies addressing climate change." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/31451.

Full text
Abstract:
Achieving the Scottish Government's carbon reduction targets requires not only the decarbonisation of industry and electricity generation, which is now largely underway, but also significant changes in the actions and decisions of millions of individuals, whose carbon emissions fall outside the areas which Government can control. Transport, much of it undertaken by individuals, accounts for around 20% of Scotland's carbon emissions. Policy aimed at changing individual travel behaviours will therefore become increasingly important. Commonly applied behaviour change strategies based on rational actor theory face conceptual problems and cannot overcome the lack of agency experienced by individuals buffeted by a range of influences in a complex world. Practice theory relocates the site of analysis from the individual to the social and helps to overcome these problems, but it is not clear how to deliberately change practices to achieve the carbon reductions required. Understanding practices as emergent properties of complex social systems suggests that working to alter the complex social system may lead to different emergent properties, i.e. more sustainable practices. My research explored this approach by conducting an experiment in Aberdeen that sought to influence the complex social system within which audiences travel to a large theatre in the city. Emergent properties of the system encouraged travel by private car: problems of (in)convenience and insecurity were shaping individuals' travel practices. Collaboration between actors powerful enough to affect the system - a transport provider, a local authority and the theatre itself - was needed to influence it sufficiently to bring about a change in the main travel mode from private cars to public transport. Analysis of this case identifies the need to acknowledge the relevance of complexity theory when developing carbon reduction policy. Perverse incentives encouraging public organisations to focus on their own 'direct' carbon emissions need to be replaced with a duty to collaborate with others to reduce society's overall carbon emissions. Those making policy and those implementing it will therefore need to understand and apply complexity theory, and will need highly developed skills in managing long-term collaborative projects rather than 'delivering' one-off changes. These attributes may be found in practitioners from diverse and less obvious fields, including the cultural sector.
APA, Harvard, Vancouver, ISO, and other styles
48

Lovelace, April L. "On the Complexity of Scheduling University Courses." DigitalCommons@CalPoly, 2010. https://digitalcommons.calpoly.edu/theses/245.

Full text
Abstract:
It has often been said that the problem of creating timetables for scheduling university courses is hard, even as hard as solving an NP-Complete problem. There are many papers in the literature that make this assertion, but rarely are precise problem definitions provided, and no papers were found which offered proofs that the university course scheduling problem being discussed is NP-Complete. This thesis defines a scheduling problem that has realistic constraints. It schedules professors to sections of courses they are willing to teach, at times when they are available, without overloading them. Both decision and optimization versions are precisely defined. An algorithm is then provided which solves the optimization problem in polynomial time. From this it follows that the decision problem is in P, and hence it cannot be NP-Complete unless P = NP. A second, more complex timetable design problem, which additionally seeks to assign appropriate rooms in which the professors can teach the courses, is then introduced. Here too, both decision and optimization versions are defined. The second major contribution of this thesis is to prove that this decision problem is NP-Complete, and hence the corresponding optimization problem is NP-Hard.
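The abstract does not reproduce the thesis's polynomial-time algorithm, but membership of the first decision problem in P can be made concrete with a standard reduction to maximum flow. A minimal sketch under simplified assumptions (each section is a (course, time) pair, each professor has hypothetical willing/avail sets and a load cap; Edmonds-Karp runs in polynomial time):

```python
from collections import defaultdict, deque

def assign_sections(sections, professors):
    """Assign each section a willing, available professor without
    exceeding any professor's load cap, via maximum flow (Edmonds-Karp).

    sections:   list of (course, time) pairs
    professors: dict name -> {"willing": set, "avail": set, "cap": int}
    Returns {section_index: professor} or None if no full assignment exists.
    """
    S, T = "source", "sink"
    cap = defaultdict(int)   # (u, v) -> capacity
    adj = defaultdict(set)   # residual-graph adjacency

    def add_edge(u, v, c):
        cap[(u, v)] += c
        adj[u].add(v)
        adj[v].add(u)        # reverse edge carries residual capacity

    for i, (course, time) in enumerate(sections):
        add_edge(S, ("sec", i), 1)
        for name, p in professors.items():
            if course in p["willing"] and time in p["avail"]:
                add_edge(("sec", i), ("prof", name), 1)
    for name, p in professors.items():
        add_edge(("prof", name), T, p["cap"])

    flow = defaultdict(int)

    def shortest_augmenting_path():
        parent = {S: None}
        queue = deque([S])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] - flow[(u, v)] > 0:
                    parent[v] = u
                    if v == T:
                        return parent
                    queue.append(v)
        return None

    # Push unit flow along shortest augmenting paths until none remain.
    while (parent := shortest_augmenting_path()) is not None:
        v = T
        while parent[v] is not None:
            u = parent[v]
            flow[(u, v)] += 1
            flow[(v, u)] -= 1
            v = u

    assignment = {i: name
                  for i in range(len(sections))
                  for name in professors
                  if flow[(("sec", i), ("prof", name))] > 0}
    return assignment if len(assignment) == len(sections) else None

# A feasible toy instance; which eligible professor gets a section may vary.
profs = {
    "ada":  {"willing": {"CS101", "CS201"}, "avail": {"MWF9", "TR10"}, "cap": 2},
    "alan": {"willing": {"CS101"},          "avail": {"MWF9"},         "cap": 1},
}
print(assign_sections([("CS101", "MWF9"), ("CS201", "TR10")], profs))
```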
APA, Harvard, Vancouver, ISO, and other styles
49

Smith, Justin N. "Computational complexity, bounded rationality and the theory of games." Thesis, University of Oxford, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.365642.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Dziemba, Friederike Anna. "Disturbed witnesses in quantum complexity theory." Hannover : Gottfried Wilhelm Leibniz Universität Hannover, 2019. http://d-nb.info/1179909658/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles