
Dissertations / Theses on the topic 'Computation'


Consult the top 50 dissertations / theses for your research on the topic 'Computation.'


Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Cattinelli, I. "INVESTIGATIONS ON COGNITIVE COMPUTATION AND COMPUTATIONAL COGNITION." Doctoral thesis, Università degli Studi di Milano, 2011. http://hdl.handle.net/2434/155482.

Abstract:
This Thesis describes our work at the boundary between Computer Science and Cognitive (Neuro)Science. In particular, (1) we have worked on methodological improvements to clustering-based meta-analysis of neuroimaging data, a technique that allows one to collectively assess, in a quantitative way, activation peaks from several functional imaging studies, in order to extract the most robust results in the cognitive domain of interest. Hierarchical clustering is often used in this context, yet it is prone to the problem of non-uniqueness of the solution: a different permutation of the same input data may yield a different clustering result. In this Thesis, we propose a new version of hierarchical clustering that solves this problem. We also show the results of a meta-analysis, carried out using this algorithm, aimed at identifying specific cerebral circuits involved in single word reading. Moreover, (2) we describe preliminary work on a new connectionist model of single word reading, named the two-component model because it postulates a cascaded information flow from a more cognitive component, which computes a distributed internal representation for the input word, to an articulatory component, which translates this code into the corresponding sequence of phonemes. Output production is started when the internal code, which evolves in time, reaches a sufficient degree of clarity; this mechanism has been advanced as a possible explanation for behavioral effects consistently reported in the literature on reading, with a specific focus on the so-called serial effects. The model's strengths and weaknesses are discussed here. Finally, (3) we have turned to consider how features that are typical of human cognition can inform the design of improved artificial agents; here, we have focused on modelling concepts inspired by emotion theory.
A model of emotional interaction between artificial agents, based on probabilistic finite state automata, is presented: in this model, agents have personalities and attitudes that can change through the course of interaction (e.g. by reinforcement learning) to achieve autonomous adaptation to the interaction partner. Markov chain properties are then applied to derive reliable predictions of the outcome of an interaction. Taken together, these works show how the interplay between Cognitive Science and Computer Science can be fruitful, both for advancing our knowledge of the human brain and for designing increasingly intelligent artificial systems.
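The Markov-chain prediction step lends itself to a small illustration. The sketch below is our own, with an invented two-state transition matrix rather than the thesis's automata: the long-run outcome of an interaction is predicted as the chain's stationary distribution, found by power iteration.

```python
import numpy as np

# Illustrative two-state interaction chain (states: "cooperative", "hostile").
# The transition probabilities are invented for this sketch.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

def stationary(P, iters=1000):
    """Predict the long-run outcome of an interaction by power iteration:
    propagate a distribution through the chain until it stops changing."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(iters):
        pi = pi @ P
    return pi

pi = stationary(P)  # converges to (5/6, 1/6) for this matrix
```

For an ergodic chain the result is independent of the starting distribution, which is what makes such predictions "reliable" regardless of the agents' initial moods.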
2

Berzowska, Joanna Maria 1972. "Computational expressionism : a study of drawing with computation." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/61101.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, February 1999.
Includes bibliographical references (leaves 68-73).
This thesis presents computational expressionism, an exploration of drawing using a computer that redefines the concepts of line and composition for the digital medium. It examines the artistic process involved in computational drawing, addressing the issues of skill, algorithmic style, authorship, re-appropriation, interactivity, dynamism, and the creative/evaluative process. The computational line augments the traditional concept of line making as a direct deposit or a scratching on a surface. Digital representation is based on computation; appearance is procedurally determined. The computational line embodies not only an algorithmic construction, but also dynamic and interactive behavior. A computer allows us to construct drawing instruments that take advantage of the dynamism, interactivity, behavioral elements and other features of a programming environment. Drawing becomes a two-fold process, at two distinct levels of interaction with the computer. The artist has to program the appearance and behavior of lines and subsequently draw with these lines by dragging a mouse or gesturing with some other input device. The compositions combine the beauty of computation with the creative impetus of the hand, whose apparent mistakes, hesitations and inspirations form a complex and critical component of visual expression.
by Joanna Maria Berzowska.
S.M.
3

Miller, Jacob K. "Disentanglement Puzzles and Computation." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1500630352520138.

4

Bogan, Nathaniel Rockwood. "Economic allocation of computation time with computation markets." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/32603.

Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1994.
Includes bibliographical references (leaves 88-91).
by Nathaniel Rockwood Bogan.
M.Eng.
5

Giannakopoulos, Dimitrios. "Quantum computation." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1999. http://handle.dtic.mil/100.2/ADA365665.

6

Brekne, Tønnes. "Encrypted Computation." Doctoral thesis, Norwegian University of Science and Technology, Faculty of Information Technology, Mathematics and Electrical Engineering, 2001. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-27.

Abstract:

The ability to construct software, call it a functional ciphertext, which can be remotely executed in encrypted form as an entirely self-contained unit, has the potential for some interesting applications. One such application is the construction of autonomous mobile agents capable of entering into certain types of legally binding contracts on behalf of the sender. At a premium in such circumstances is the ability to protect secret cryptographic keys or other secret information, which typically is necessary for legally binding contracts. Also important is the ability to do powerful computations that are more than just one-off secure function evaluations.

The problem of constructing computation systems that achieve this has been attempted by many, with little or no success. This thesis presents three similar cryptographic systems that take a step closer to making such encrypted software a reality.

First, it is demonstrated how one can construct mappings from finite automata that, through iteration, can perform computations. A stateless storage construction, called a Turing platform, is defined, and it is shown that such a platform, in conjunction with a functional representation of a finite automaton, can perform Turing-universal computation.

The univariate, multivariate, and parametric ciphers for the encryption of multivariate mappings are presented and cryptanalyzed. Cryptanalysis of these ciphers shows that they must be used very carefully in order to resist cryptanalysis. Entirely new to cryptography is the ability to remotely and securely re-encrypt functional ciphertexts made with either univariate or multivariate encryption.

Lastly it is shown how the ciphers presented can be applied to the automaton representations in the form of mappings, to do general encrypted computation. Note: many of the novel constructions in this thesis are covered by a patent application.
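The automaton-to-mapping idea can be made concrete with a toy sketch. This is entirely our own plaintext illustration, not the thesis's encrypted construction: the automaton (here a two-state parity checker) is expressed as a transition mapping, and computation is nothing but repeated application of that mapping.

```python
# Toy illustration of computing by iterating a finite automaton's transition
# mapping. The automaton is a 2-state parity checker.
def step(state, symbol):
    # Transition mapping of the DFA: flip state on a 1, hold on a 0.
    return state ^ symbol

def run(bits, state=0):
    # "Computation" is repeated application of the mapping, one symbol per step.
    for b in bits:
        state = step(state, b)
    return state

run([1, 0, 1, 1])  # odd number of 1-bits, so the run ends in state 1
```

In the thesis's setting it is `step` itself, as a multivariate mapping, that would be encrypted, so that the iteration can be carried out remotely without revealing the automaton.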

7

Barenco, Adriano. "Quantum computation." Thesis, University of Oxford, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.360152.

8

Gourlay, Iain. "Quantum computation." Thesis, Heriot-Watt University, 2000. http://hdl.handle.net/10399/568.

9

Li, Fulu 1970. "Community computation." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/63016.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Materials Science and Engineering, 2009.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 171-186).
In this thesis we lay the foundations for a distributed, community-based computing environment that taps the resources of a community to better perform tasks, whether computationally hard, economically prohibitive, or physically inconvenient, that no one individual can accomplish efficiently. We introduce community coding, where information systems meet social networks, to tackle some of the challenges in this new paradigm of community computation. We design algorithms and protocols and build system prototypes to demonstrate the power of community computation to better deal with reliability, scalability and security issues, which are the main challenges in many emerging community-computing environments, in several application scenarios such as community storage, community sensing and community security. For example, we develop a community storage system based upon a distributed P2P (peer-to-peer) storage paradigm, where we take an array of small, periodically accessible, individual computers/peer nodes and create a secure, reliable and large distributed storage system. The goal is for each one of them to act as if it had immediate access to a pool of information larger than it could hold itself, and into which it can contribute new material in a manner both open and secure. Such a contributory and self-scaling community storage system is particularly useful where reliable infrastructure is not readily available, since it facilitates easy ad hoc construction and easy portability. In another application scenario, we develop a novel framework of community sensing with a group of image sensors. The goal is to present a set of novel tools in which software, rather than humans, examines the collection of images sensed by a group of image sensors to determine what is happening in the field of view. We also present several design principles in the aspects of community security.
In one application example, we present a community-based email spam detection approach to deal with email spam more efficiently.
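The redundancy at the heart of community storage can be sketched with plain XOR parity. The thesis's "community coding" is its own scheme; this illustration only shows the contributory idea: data shares are spread over peers together with a parity block, so the community can rebuild any single lost share.

```python
from functools import reduce

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def add_parity(shares):
    # Parity block = XOR of all equal-length data shares.
    return shares + [reduce(xor, shares)]

def recover(stored, lost_index):
    # XOR of every surviving block reconstructs the missing one.
    rest = [s for i, s in enumerate(stored) if i != lost_index]
    return reduce(xor, rest)

stored = add_parity([b"ab", b"cd", b"ef"])
recover(stored, 1)  # rebuilds b"cd" from the other peers
```

Real systems of this kind use stronger erasure codes so that several simultaneous peer failures can be tolerated; single-parity XOR is the simplest member of that family.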
by Fulu Li.
Ph.D.
10

Pratt, Gill. "Pulse computation." Thesis, Massachusetts Institute of Technology, 1990. http://hdl.handle.net/1721.1/14260.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1990.
Includes bibliographical references (leaves 134-135).
by Gill Andrews Pratt.
Ph.D.
11

Amos, Martyn. "DNA computation." Thesis, University of Warwick, 1997. http://wrap.warwick.ac.uk/4238/.

Abstract:
This is the first ever doctoral thesis in the field of DNA computation. The field has its roots in the late 1950s, when the Nobel laureate Richard Feynman first introduced the concept of computing at a molecular level. Feynman's visionary idea was only realised in 1994, when Leonard Adleman performed the first ever truly molecular-level computation using DNA combined with the tools and techniques of molecular biology. Since Adleman reported the results of his seminal experiment, there has been a flurry of interest in the idea of using DNA to perform computations. The potential benefits of using this particular molecule are enormous: by harnessing the massive inherent parallelism of performing concurrent operations on trillions of strands, we may one day be able to compress the power of today's supercomputer into a single test tube. However, if we compare the development of DNA-based computers to that of their silicon counterparts, it is clear that molecular computers are still in their infancy. Current work in this area is concerned mainly with abstract models of computation and simple proof-of-principle experiments. The goal of this thesis is to present our contribution to the field, placing it in the context of the existing body of work. Our new results concern a general model of DNA computation, an error-resistant implementation of the model, experimental investigation of the implementation and an assessment of the complexity and viability of DNA computations. We begin by recounting the historical background to the search for the structure of DNA. By providing a detailed description of this molecule and the operations we may perform on it, we lay down the foundations for subsequent chapters. We then describe the basic models of DNA computation that have been proposed to date. In particular, we describe our parallel filtering model, which is the first to provide a general framework for the elegant expression of algorithms for NP-complete problems. 
The implementation of such abstract models is crucial to their success. Previous experiments that have been carried out suffer from their reliance on various error-prone laboratory techniques. We show for the first time how one particular operation, hybridisation extraction, may be replaced by an error-resistant enzymatic separation technique. We also describe a novel solution read-out procedure that utilizes cloning, and is sufficiently general to allow it to be used in any experimental implementation. The results of preliminary tests of these techniques are then reported. Several important conclusions are to be drawn from these investigations, and we report these in the hope that they will provide useful experimental guidance in the future. The final contribution of this thesis is a rigorous consideration of the complexity and viability of DNA computations. We argue that existing analyses of models of DNA computation are flawed and unrealistic. In order to obtain more realistic measures of the time and space complexity of DNA computations we describe a new strong model, and reassess previously described algorithms within it. We review the search for "killer applications": applications of DNA computing that will establish the superiority of this paradigm within a certain domain. We conclude the thesis with a description of several open problems in the field of DNA computation.
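The filtering style of computation can be sketched in silico (our illustration: in the laboratory each pass would be a molecular separation step, and the graph instance here is made up). A "tube" of all candidate solution strands is repeatedly filtered, discarding strands that violate a constraint; whatever survives solves the problem.

```python
from itertools import product

# 3-colour a triangle by parallel filtering: start with every candidate
# colouring and discard offending "strands", one filtering pass per constraint.
edges = [(0, 1), (1, 2), (0, 2)]           # triangle graph
tube = list(product(range(3), repeat=3))   # all 27 candidate "strands"

for u, v in edges:                         # one filtering pass per constraint
    tube = [c for c in tube if c[u] != c[v]]

len(tube)  # the 6 proper 3-colourings survive
```

The appeal of the DNA setting is that each filtering pass acts on all strands at once, so the number of laboratory steps grows with the number of constraints rather than the number of candidates.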
12

Lucas, Christoph. "Combining computational and information-theoretic security in multi-party computation." Zurich : ETH, Swiss Federal Institute of Technology, Department of Computer Science, Institute of Theoretical Computer Science, 2008. http://e-collection.ethbib.ethz.ch/show?type=dipl&nr=426.

13

Sanders, Tom. "Sensory computation and decision making in C. elegans : a computational approach." Thesis, University of Leeds, 2016. http://etheses.whiterose.ac.uk/15442/.

Abstract:
In Caenorhabditis elegans (C. elegans) and in neuroscience generally, a hierarchical view of nervous systems prevails. Roughly speaking, sensory neurons encode the external environment, interneurons encode internal state and decisions, and motor neurons encode muscle activation. Here, using an integrated approach to model sensory computation and decision making in C. elegans, I show a striking phenomenon. Via the simplest modulation possible, sensitization and desensitization, sensory neurons in C. elegans can also encode the animal's internal state. In this thesis, I present a modeling framework, and use it to implement two detailed models of sensory adaptation and decision making. In the first model I consider a decision making task, in which worms need to cross a lethal barrier in order to reach an attractant on the other side. My model captures the experimental results, and predicts a minimal set of requirements. This model's mechanism is reminiscent of similar top-down attention modulation motifs in mammalian cortex. In the second model, I consider a form of plasticity in which animals alternate their perception of a signal from attractive to repulsive. I show how the model encodes high and low-level behavioral states, balancing attraction and aversion, exploration and exploitation, pushing the 'decision making' into the sensory layer. Furthermore, this model predicts that specific sensory neurons may have the capacity to selectively control distinct motor programs. To accomplish these results, the modeling framework was designed to simulate a full sensory motor pathway and an in silico simulation arena, allowing it to reproduce experimental findings from multiple assays. Hopefully, this allows the model to be used by the C. elegans community and to be extended, bringing us closer to the larger aim of understanding distributed computation and the integrated neural control of behavior in a whole animal.
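The sensitization/desensitization idea admits a minimal numerical sketch. This is our own illustration with invented rates, not the thesis's model: a sensory neuron's gain is depleted by sustained input and recovers without it, so identical stimuli evoke history-dependent responses; in that sense the neuron encodes internal state.

```python
# Minimal adaptation sketch: response = gain * stimulus, where the gain
# relaxes toward 1 but is depleted in proportion to recent drive.
def respond(stimuli, gain=1.0, decay=0.3, recover=0.1):
    responses = []
    for s in stimuli:
        responses.append(gain * s)
        gain = max(gain + recover * (1.0 - gain) - decay * gain * s, 0.0)
    return responses

r = respond([1.0] * 5)  # repeated identical stimulus: the response wanes
```

Because the gain variable carries memory of past stimulation, downstream circuits reading this neuron's output receive state information without any dedicated interneuron encoding it.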
14

Rinker, Robert E. "Reducing Computational Expense of Ray-Tracing Using Surface Oriented Pre-Computation." UNF Digital Commons, 1991. http://digitalcommons.unf.edu/etd/26.

Abstract:
The technique of rendering a scene using the method of ray-tracing is known to produce excellent graphic quality, but is also generally computationally expensive. Most of this computation involves determining intersections between objects in the scene and ray projections. Previous work to reduce this expense has been directed towards ray-oriented optimization techniques. This paper presents a different approach, one that bases pre-computation on the characteristics of the scene itself, making the results independent of the position of the observer. This means that the results of one pre-computation run can be applied to renderings of the scene from multiple viewpoints. Using this method on a scene of random triangular planar patches, impressive reductions in the number of intersection computations were realized, along with significant reductions in the time required to render the scene.
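The general idea of view-independent pre-computation can be sketched with ordinary bounding boxes (the thesis's surface-oriented scheme is its own; the geometry below is invented): per-surface data are computed once from the scene alone and then prune ray-surface intersection tests from any viewpoint.

```python
# View-independent pre-computation: one bounding box per triangle, reused for
# rays cast from any observer position.
def bbox(tri):
    xs, ys, zs = zip(*tri)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def ray_hits_box(origin, direction, lo, hi, eps=1e-12):
    """Standard slab test: a cheap, conservative ray/box overlap check."""
    tmin, tmax = 0.0, float("inf")
    for o, d, l, h in zip(origin, direction, lo, hi):
        if abs(d) < eps:
            if o < l or o > h:
                return False
        else:
            t1, t2 = (l - o) / d, (h - o) / d
            tmin, tmax = max(tmin, min(t1, t2)), min(tmax, max(t1, t2))
    return tmin <= tmax

tris = [[(0, 0, 5), (1, 0, 5), (0, 1, 5)],        # near the ray
        [(10, 10, 5), (11, 10, 5), (10, 11, 5)]]  # far off-axis
boxes = [bbox(t) for t in tris]       # pre-computation: observer-independent
ray = ((0.2, 0.2, 0.0), (0.0, 0.0, 1.0))
candidates = [t for t, b in zip(tris, boxes) if ray_hits_box(*ray, *b)]
```

Only surfaces whose boxes the ray enters need the expensive exact intersection test; moving the camera changes the rays but not the precomputed boxes.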
15

Delgado, Pin Jordi. "On Collective Computation." Doctoral thesis, Universitat Politècnica de Catalunya, 1997. http://hdl.handle.net/10803/6662.

16

Roland, Jérémie. "Adiabatic quantum computation." Doctoral thesis, Universite Libre de Bruxelles, 2004. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/211148.

Abstract:
The development of the theory of quantum computation stems from the idea that a computer is first and foremost a physical system, so that the laws of Nature themselves constitute the ultimate limit on what can and cannot be computed. Interest in the discipline was stimulated by Peter Shor's discovery of a fast quantum algorithm for factoring a number, whereas no such algorithm is currently known in the theory of classical computation. Another important result was Lov Grover's construction of an algorithm capable of finding an element in an unstructured database with a quadratic complexity gain over any classical algorithm. While these quantum algorithms are expressed in the "standard" model of quantum computation, in which the register evolves discretely in time under the successive application of quantum gates, a new type of algorithm was recently introduced, in which the register evolves continuously in time under the action of a Hamiltonian. The idea underlying adiabatic quantum computation, proposed by Edward Farhi and his collaborators, is to use a traditional tool of quantum mechanics, namely the Adiabatic Theorem, to design quantum algorithms in which the register evolves under the influence of a very slowly varying Hamiltonian, ensuring an adiabatic evolution of the system. In this thesis, we first show how to reproduce the quadratic speed-up of Grover's algorithm by means of an adiabatic quantum algorithm. We then show that this new adiabatic algorithm, as well as another search algorithm with Hamiltonian evolution, can be translated into the quantum circuit formalism, yielding three quantum search algorithms that are very close in principle.
We then use these results to construct an adiabatic algorithm for solving structured problems, using a technique called "nesting" developed earlier in the context of circuit-model quantum algorithms. Finally, we analyse the noise resistance of these adiabatic algorithms by introducing a noise model based on random matrix theory and studying its effect using perturbation theory.
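The quadratic speed-up can be summarised in standard notation (a textbook-style sketch of the adiabatic search argument, not a quotation of the thesis's own derivation):

```latex
% Search over N items: interpolate between projectors onto the uniform
% superposition |\psi_0\rangle and the marked state |m\rangle.
H(s) = (1-s)\bigl(I - |\psi_0\rangle\langle\psi_0|\bigr)
     + s\,\bigl(I - |m\rangle\langle m|\bigr), \qquad s(0)=0,\; s(T)=1.
% The spectral gap between the two lowest levels is
g(s) = \sqrt{1 - 4\Bigl(1-\tfrac{1}{N}\Bigr)s(1-s)}, \qquad
g_{\min} = g\bigl(\tfrac{1}{2}\bigr) = \tfrac{1}{\sqrt{N}}.
% A uniform schedule obeying the adiabatic condition needs T = O(N);
% adapting the evolution rate locally to the gap,
\frac{ds}{dt} \propto g^{2}(s) \quad\Longrightarrow\quad T = O(\sqrt{N}),
% recovering Grover's quadratic speed-up.
```

The key point is that the gap is tiny only near s = 1/2, so slowing down just there, rather than everywhere, suffices for adiabaticity.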
Doctorate in applied sciences
info:eu-repo/semantics/nonPublished
17

Thorup, Mikkel. "Topics in computation." Thesis, University of Oxford, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.357621.

18

Lie, Nga-sze, and 李雅詩. "Abduction and computation." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2012. http://hub.hku.hk/bib/B4819928X.

Abstract:
In the thesis, Fodor's arguments against computationalism are defeated. His arguments appeal to syntactic constraints and intractability. We argue that arguments based on syntactic constraints are not satisfactory. We then argue that the argument via intractability is not satisfactory either. We also discuss various approaches to the problem of abduction in a computationalist setting. We argue that the social solution, that human everyday cognitive activity is not isotropic and Quinean, is correct. Secondly, we argue that the local solution is too preliminary a proposal. We give our objections concerning the calculation of the effect-to-effort ratio and the claim that memory organization leads one to relevant information. Thirdly, we argue that the natural language approach is circular. Fourthly, we argue that the web search approach provides a partial account of finding relevant information but leaves out the key problem of evaluating the search results. Fifthly, we argue that the global workspace approach relegates the most important part of the solution to consciousness. In the end, we give a framework sketching mechanisms that could solve the problem of abduction.
published_or_final_version
Philosophy
Doctoral
Doctor of Philosophy
19

Woodfin, Thomas R. (Thomas Richard) 1979. "Self-distributing computation." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/8074.

Abstract:
Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2002.
Includes bibliographical references (leaves 54-55).
In this thesis, I propose a new model for distributing computational work in a parallel or distributed system. This model relies on exposing the topology and performance characteristics of the underlying architecture to the application. Responsibility for task distribution is divided between a run-time system, which determines when tasks should be distributed or consolidated, and the application, which specifies to the runtime system its first-choice distribution based on a representation of the current state of the underlying architecture. Discussing my experience in implementing this model as a Java-based simulator, I argue for the advantages of this approach as they relate to performance on changing architectures and ease of programming.
by Thomas R. Woodfin.
M.Eng.
20

Mansinghka, Vikash Kumar. "Natively probabilistic computation." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/47892.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Brain and Cognitive Sciences, 2009.
Includes bibliographical references (leaves 129-135).
I introduce a new set of natively probabilistic computing abstractions, including probabilistic generalizations of Boolean circuits, backtracking search and pure Lisp. I show how these tools let one compactly specify probabilistic generative models, generalize and parallelize widely used sampling algorithms like rejection sampling and Markov chain Monte Carlo, and solve difficult Bayesian inference problems. I first introduce Church, a probabilistic programming language for describing probabilistic generative processes that induce distributions, which generalizes Lisp, a language for describing deterministic procedures that induce functions. I highlight the ways randomness meshes with the reflectiveness of Lisp to support the representation of structured, uncertain knowledge, including nonparametric Bayesian models from the current literature, programs for decision making under uncertainty, and programs that learn very simple programs from data. I then introduce systematic stochastic search, a recursive algorithm for exact and approximate sampling that generalizes a popular form of backtracking search to the broader setting of stochastic simulation and recovers widely used particle filters as a special case. I use it to solve probabilistic reasoning problems from statistical physics, causal reasoning and stereo vision. Finally, I introduce stochastic digital circuits that model the probability algebra just as traditional Boolean circuits model the Boolean algebra.
I show how these circuits can be used to build massively parallel, fault-tolerant machines for sampling and allow one to efficiently run Markov chain Monte Carlo methods on models with hundreds of thousands of variables in real time. I emphasize the ways in which these ideas fit together into a coherent software and hardware stack for natively probabilistic computing, organized around distributions and samplers rather than deterministic functions. I argue that by building uncertainty and randomness into the foundations of our programming languages and computing machines, we may arrive at ones that are more powerful, flexible and efficient than deterministic designs, and are in better alignment with the needs of computational science, statistics and artificial intelligence.
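Rejection sampling, one of the algorithms the abstract says these abstractions generalize, can be shown as a tiny Bayesian inference engine. This is an illustrative example in ordinary Python, not code from the thesis: infer a coin's bias after observing 9 heads in 10 flips by proposing from the prior and keeping only proposals whose simulated data reproduce the observation.

```python
import random

random.seed(0)  # reproducible sketch

def posterior_samples(heads=9, flips=10, n=20000):
    kept = []
    while len(kept) < n:
        p = random.random()                               # uniform prior on bias
        sim = sum(random.random() < p for _ in range(flips))
        if sim == heads:                                  # reject on mismatch
            kept.append(p)
    return kept

samples = posterior_samples()
mean = sum(samples) / len(samples)  # near the Laplace estimate (9+1)/(10+2)
```

The accepted proposals are exact samples from the posterior; the thesis's point is that such samplers, rather than deterministic functions, become the primitive the whole stack is organized around.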
by Vikash Kumar Mansinghka.
Ph.D.
21

Pritchard, David (David Alexander Griffith). "Robust network computation." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/37069.

Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005.
Includes bibliographical references (p. 91-98).
In this thesis, we present various models of distributed computation and algorithms for these models. The underlying theme is to come up with fast algorithms that can tolerate faults in the underlying network. We begin with the classical message-passing model of computation, surveying many known results. We give a new, universally optimal, edge-biconnectivity algorithm for the classical model. We also give a near-optimal sub-linear algorithm for identifying bridges, when all nodes are activated simultaneously. After discussing some ways in which the classical model is unrealistic, we survey known techniques for adapting the classical model to the real world. We describe a new balancing model of computation. The intent is that algorithms in this model should be automatically fault-tolerant. Existing algorithms that can be expressed in this model are discussed, including ones for clustering, maximum flow, and synchronization. We discuss the use of agents in our model, and give new agent-based algorithms for census and biconnectivity. Inspired by the balancing model, we look at two problems in more depth.
First, we give matching upper and lower bounds on the time complexity of the census algorithm, and we show how the census algorithm can be used to name nodes uniquely in a faulty network. Second, we consider using discrete harmonic functions as a computational tool. These functions are a natural exemplar of the balancing model. We prove new results concerning the stability and convergence of discrete harmonic functions, and describe a method which we call Eulerization for speeding up convergence.
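A discrete harmonic function can be computed by the simplest balancing scheme imaginable (a generic sketch of the idea; the thesis's Eulerization speed-up is not reproduced here): interior nodes repeatedly replace their value with the average of their neighbours' values, and the fixed boundary values drive the network to the unique harmonic solution, which on a path is linear interpolation.

```python
# Discrete harmonic relaxation on a path graph with fixed endpoints.
def relax(values, fixed, iters=2000):
    for _ in range(iters):
        for i in range(1, len(values) - 1):
            if i not in fixed:
                values[i] = 0.5 * (values[i - 1] + values[i + 1])
    return values

f = relax([0.0, 0.0, 0.0, 0.0, 1.0], fixed={0, 4})
# f converges to [0.0, 0.25, 0.5, 0.75, 1.0]
```

Each update uses only local information, which is why such functions suit faulty networks: a node that crashes and restarts simply resumes averaging and the whole network still converges.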
by David Pritchard.
M.Eng.
22

Margolus, Norman. "Physics and computation." Thesis, Massachusetts Institute of Technology, 1987. http://hdl.handle.net/1721.1/14862.

23

Delgado, Jordi. "On Collective Computation." Doctoral thesis, Universitat Politècnica de Catalunya, 1997. http://hdl.handle.net/10803/6662.

24

Bader, Christoph Ph D. Massachusetts Institute of Technology. "Translational design computation." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/112913.

Abstract:
Thesis: S.M., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2017
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 175-183).
This thesis introduces, demonstrates and implements translational design computation: a theoretical approach and technical framework for mediating living and nonliving matter through design computation. I propose that computational design can act as a "language" for the enablement of design at the intersection of the material and the biological domains. I support and validate this proposition by formulating, deploying and evaluating a triad of strategies as follows: (1) Programmable Matter-utilizing computational design in combination with synthetic material systems to enable biologically inspired and informed design; (2) Programmable Templating-utilizing computational design in combination with, and at the intersection of, synthetic and biological systems in order to facilitate their synergetic relationships; and (3) Programmable Growth-utilizing computational design in combination with biological systems to grow material architectures.
Each of these design strategies is demonstrated through specific design challenges. For Programmable Matter, a data-driven material modeling method that allows one to reinterpret visual complexities found in nature is presented and subsequently extended to a design framework for the 3D printing of functionally graded structures. For Programmable Templating, a design approach for creating a macrofluidic habitat, exploring phototrophic and heterotrophic bacterial augmentation templated by continuous opacity gradients, is presented. Next, spatio-temporal templating of engineered microorganisms via 3D printed diffusion gradients is investigated. Finally, for Programmable Growth, a framework is proposed with the objective of importing computer-aided design capabilities to biology. Reinforcing the design-centric approach, a design collection called Vespers, a reinterpretation of the practice of the ancient death mask, is presented and discussed in the context of the introduced concepts.
Thesis contributions are not limited to innovations in computational design and digital fabrication; the work also contributes to materials engineering and biology by proposing new ecological perspectives on and for design.
by Christoph Bader.
S.M.
S.M. Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences
APA, Harvard, Vancouver, ISO, and other styles
25

Bader, Christoph Ph D. Massachusetts Institute of Technology. "Translational design computation." Thesis, Massachusetts Institute of Technology, 2021. https://hdl.handle.net/1721.1/130836.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, February, 2021
Cataloged from the official PDF of thesis.
Includes bibliographical references (pages 218-240).
Synergetic tensions have evolved the dichotomy between the physical and digital design domains into a symbiotic unity. New capabilities in digital fabrication give rise to sophisticated tools of computational design, while new affordances in computational design inspire innovation in digital fabrication. The role of design in this process is that of synthesis through mediation. As designers, we mediate between different principles and fields, and their synergies and conflicts generate new elements of design. The challenge of mediating in a universal language across domains becomes critical as a third domain encompassing biological entities grows more amenable to design. Biological systems offer reproduction, self-organization and growth -- among other features and benefits -- which in turn bring previously unattainable properties to designed systems. At the same time, their own modes of intelligence, expression, and agency demand a promising shift in design thinking. This thesis hypothesizes that the relations across design domains can be established through translational design computation, a framework that uses computational design as a language to mediate between physical, digital, and biological entities. We build this framework in two parts --
Systems and Mediations. The first part, Systems, explores whether computational design can serve as a mediating language between the three entities. The second part, Mediations, examines how these mediations can occur. In Systems, we show that computational design can mediate between living and nonliving matter along the spectrum of biomimetic, biointegrated, and biosynthetic systems. As part of this, we demonstrate three systems of computational mediation: (i) programmable matter applies computational design to physical systems to enable biologically inspired design strategies, (ii) programmable templating applies computational design to the intersection of physical and biological systems to facilitate synergistic relationships, and (iii) programmable growth applies computational design to biological systems to give rise to material architectures.
In Mediations, we present dynamic, synergetic, and emergent strategies for how computational mediations can occur within cocreation systems. The living and nonliving parts of any cocreation system may interact to form synergies. Combined, these synergies produce complexes that give rise to new macro-level organizations -- products of the synergies of the parts and not simply of the parts themselves. Thus, the mediation between physical, digital, and biological entities needs to address the design of dynamic relations guiding synergetic behaviors, the design of the synergetic behaviors themselves, or ultimately the design of emergent self-expression of the system. Throughout this thesis, the framework is developed theoretically and applied in practice. It is documented in publications such as Making Data Matter and Hybrid Living Materials and projects such as Wanderers, Living Mushtari, the Vespers Series, Rottlace, Lazarus, Totems, Fiberbots, and Silk Pavilion II.
by Christoph Bader.
Ph. D.
Ph.D. Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences
APA, Harvard, Vancouver, ISO, and other styles
26

Dong, Renren. "Secure multiparty computation." Bowling Green, Ohio : Bowling Green State University, 2009. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=bgsu1241807339.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Barker, Blake. "Evans function computation /." Diss., CLICK HERE for online access, 2009. http://contentdm.lib.byu.edu/ETD/image/etd3004.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Barker, Blake H. "Evans Function Computation." BYU ScholarsArchive, 2009. https://scholarsarchive.byu.edu/etd/1796.

Full text
Abstract:
In this thesis, we review the stability problem for traveling waves and discuss the Evans function, an emerging tool in the stability analysis of traveling waves. We describe some recent developments in the numerical computation of the Evans function and discuss STABLAB, an interactive MATLAB based tool box that we developed. In addition, we verify the Evans function for shock layers in Burgers equation and the p-system with and without capillarity, as well as pulses in the generalized Korteweg-de Vries (gKdV) equation. We conduct a new study of parallel shock layers in isentropic magnetohydrodynamics (MHD) obtaining results consistent with stability.
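As an aside that may help place the example problems (this is not material from the thesis): the Burgers shock layer admits the explicit profile u(x) = -tanh(x/2), which satisfies the once-integrated traveling-wave ODE u' = (u^2 - 1)/2 for end states u(-inf) = 1, u(+inf) = -1. A few lines confirm this numerically:

```python
import numpy as np

# Traveling-wave ODE for viscous Burgers, u_t + u u_x = u_xx,
# integrated once with end states u(-inf) = 1, u(+inf) = -1:
#     u'(x) = (u(x)**2 - 1) / 2
# The explicit shock-layer profile is u(x) = -tanh(x / 2).
x = np.linspace(-10.0, 10.0, 2001)
u = -np.tanh(x / 2.0)

lhs = np.gradient(u, x)           # numerical derivative u'
rhs = (u**2 - 1.0) / 2.0          # right-hand side of the profile ODE

residual = np.max(np.abs(lhs - rhs))   # small: profile solves the ODE
```

The Evans function itself then tracks the spectral stability of such a profile; computing it robustly is the harder numerical problem the thesis addresses.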
APA, Harvard, Vancouver, ISO, and other styles
29

BIANCHI, MARCO STEFANO. "Superspace computation 3D." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2011. http://hdl.handle.net/10281/27052.

Full text
Abstract:
In my thesis I describe applications of superspace techniques to perturbative aspects of supersymmetric field theories in three dimensions. Namely I discuss the computation of the exactly marginal deformations of three dimensional Chern-Simons conformal field theories, and then I focus on the computation of scattering amplitudes in the ABJM model.
APA, Harvard, Vancouver, ISO, and other styles
30

Dau, Hai Dang. "Sequential Bayesian Computation." Electronic Thesis or Diss., Institut polytechnique de Paris, 2022. http://www.theses.fr/2022IPPAG006.

Full text
Abstract:
Cette thèse est composée de deux parties. La première concerne les échantillonneurs dits de Monte-Carlo séquentiel (les échantillonneurs SMC). Il s'agit d'une famille d'algorithmes pour produire des échantillons venant d'une suite de distributions, grâce à une combinaison de l'échantillonnage pondéré et la méthode de Monte-Carlo par chaîne de Markov (MCMC). Nous proposons une version améliorée qui exploite les particules intermédiaires engendrées par l'application de plusieurs pas de MCMC. Elle a une meilleure performance, est plus robuste et permet la construction d'estimateurs de la variance. La deuxième partie analyse des algorithmes de lissage existants et en propose des nouveaux pour les modèles espace-état. Le lissage étant coûteux en temps de calcul, l'échantillonnage par rejet a été proposé dans la littérature comme une solution. Cependant, nous démontrons que son temps d'exécution est très variable. Nous développons des algorithmes ayant des coûts de calcul plus stables et ainsi plus adaptés aux architectures parallèles. Notre cadre peut aussi traiter des modèles dont la densité de transition n'est pas calculable
This thesis is composed of two parts. The first part focuses on Sequential Monte Carlo samplers, a family of algorithms to sample from a sequence of distributions using a combination of importance sampling and Markov chain Monte Carlo (MCMC). We propose an improved version of these samplers which exploits intermediate particles created by the application of multiple MCMC steps. The resulting algorithm has a better performance, is more robust and comes with variance estimators. The second part analyses existing and develops new smoothing algorithms in the context of state space models. Smoothing is a computationally intensive task. While rejection sampling has been proposed as a solution, we prove that it has a highly variable execution time. We develop algorithms which have a more stable computational cost and thus are more suitable for parallel environments. We also extend our framework to handle models with intractable transition densities
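As background (a generic sketch, not the thesis's improved algorithm), a minimal SMC sampler bridges from a tractable initial distribution to the target through tempered intermediates, alternating reweighting, resampling, and one Metropolis move per stage:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_prior(x):                 # initial distribution: N(0, 3^2)
    return -0.5 * x**2 / 9.0

def log_target(x):                # unnormalized target: N(0, 1)
    return -0.5 * x**2

def smc_sampler(n=4000, n_stages=10):
    betas = np.linspace(0.0, 1.0, n_stages + 1)
    x = rng.normal(0.0, 3.0, size=n)          # particles from the prior
    for b_prev, b in zip(betas[:-1], betas[1:]):
        # incremental weights for pi_b ∝ prior^(1-b) * target^b
        logw = (b - b_prev) * (log_target(x) - log_prior(x))
        w = np.exp(logw - logw.max())
        w /= w.sum()
        x = x[rng.choice(n, size=n, p=w)]     # multinomial resampling
        # one random-walk Metropolis move invariant for pi_b
        prop = x + rng.normal(0.0, 1.0, size=n)
        log_acc = ((1.0 - b) * (log_prior(prop) - log_prior(x))
                   + b * (log_target(prop) - log_target(x)))
        accept = np.log(rng.uniform(size=n)) < log_acc
        x = np.where(accept, prop, x)
    return x

samples = smc_sampler()
```

The thesis's first contribution refines exactly the MCMC-move stage of such a sampler: rather than keeping only the final state of several MCMC steps, the intermediate particles are also exploited.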
APA, Harvard, Vancouver, ISO, and other styles
31

Vadivelu, Somasundaram. "Sensor data computation in a heavy vehicle environment : An Edge computation approach." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-235486.

Full text
Abstract:
In a heavy vehicle, the internet connection is not reliable, primarily because the truck often travels to remote locations where no network is available. The data generated by the vehicle's sensors might not reach the internet while the connection is poor, so it is appropriate to store and perform some basic computation on those data in the heavy vehicle itself and send the results to the cloud once there is a good network connection. The process of performing computation near the place where data is generated is called edge computing. Scania has its own edge computing solution, which it uses for computations such as preprocessing and storing sensor data. Scania's solution is compared with a commercial edge computing platform called AWS (Amazon Web Services) Greengrass. The comparison covers data efficiency, CPU load, and memory footprint. The conclusion shows that the Greengrass solution works better than the current Scania solution in terms of CPU load and memory footprint; in data efficiency, even though the Scania solution is more efficient than the Greengrass solution, it is shown that as trucks generate ever larger volumes of data the Greengrass solution may become competitive with the Scania solution. One more topic explored in this thesis is the digital twin. A digital twin is the virtual form of a physical entity, formed by obtaining real-time values from sensors attached to the physical device. With the help of these sensor values, a system with an approximate state of the device can be framed, which can then act as the digital twin. The digital twin can be considered an important use case of edge computing; it is realized here with the help of the AWS Device Shadow.
I ett tungt fordonsscenario är internetanslutningen inte tillförlitlig, främst eftersom lastbilen ofta reser på avlägsna platser nätverket kanske inte är tillgängligt. Data som genereras av sensorer kan inte skickas till internet när anslutningen är dålig och det är därför bra att ackumulera och göra en viss grundläggande beräkning av data i det tunga fordonet och skicka det till molnet när det finns en bra nätverksanslutning. Processen att göra beräkning nära den plats där data genereras kallas Edge computing. Scania har sin egen Edge Computing-lösning, som den använder för att göra beräkningar som förbehandling av sensordata, lagring av data etc. Jämförelsen skulle vara vad gäller data efficiency, CPU load och memory consumption. I slutsatsen visar det sig att Greengrass-lösningen fungerar bättre än den nuvarande Scania-lösningen när det gäller CPU-belastning och minnesfotavtryck, medan det i data-effektivitet trots att Scania-lösningen är effektivare jämfört med Greengrass-lösningen visades att när lastbilen går vidare i Villkor för att öka datastorleken kan Greengrass-lösningen vara konkurrenskraftig för Scania-lösningen. För att realisera Edge computing används en mjukvara som heter Amazon Web Service (AWS) Greengrass.Ett annat ämne som utforskas i denna avhandling är digital twin. Digital twin är den virtuella formen av någon fysisk enhet, den kan bildas genom att erhålla realtidssensorvärden som är anslutna till den fysiska enheten. Med hjälp av sensorns värden kan ett system med ungefärligt tillstånd av enheten inramas och som sedan kan fungera som digital twin. Digital twin kan betraktas som ett viktigt användningsfall vid kantkalkylering. Den digital twin realiseras med hjälp av AWS Device Shadow.
APA, Harvard, Vancouver, ISO, and other styles
32

Dodd, Jennifer L. "Universality in quantum computation /." [St. Lucia, Qld], 2004. http://www.library.uq.edu.au/pdfserve.php?image=thesisabs/absthe18197.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Conde, Pueyo Núria 1983. "Biological computation in yeast." Doctoral thesis, Universitat Pompeu Fabra, 2014. http://hdl.handle.net/10803/320193.

Full text
Abstract:
Ongoing efforts within synthetic biology have been directed towards building artificial computational devices using engineered biological units as basic building blocks. Such efforts are limited by the wiring problem: each connection between the basic computational units (logic gates) must be implemented by a different molecule. We propose a non-standard way of implementing logic computations that reduces wiring requirements thanks to a multicellular design with distribution of the output among cells. Practical implementations are presented using a library of engineered yeast cells, in which each genetic construct defines a logic function. This shows the great potential for re-utilization of genetic elements to build distinct cells. The cells are combined in multiple ways to easily build different complex synthetic circuits. In the first manuscript, we propose a multi-layer design. The engineered cells can perform the IDENTITY, NOT, AND and N-IMPLIES logics and are able to communicate with two different wiring molecules. As a proof of principle, we have implemented many logic gates and more complex circuits such as a 1-bit adder with carry. In the second manuscript, a general architecture to engineer cellular consortia that is independent of the circuit's complexity is proposed. This design involves cells, performing IDENTITY and NOT logics, organized in two layers. The key aspect of the architecture is spatial insulation. This design permits implementation of complex logical functions, such as a 4-to-1 multiplexer, with only one wiring molecule.
En el camp de la biologia sintètica els esforços s'han dirigit a construir dispositius computacionals artificials connectant les unitats lògiques bàsiques (portes lògiques). Aquests esforços, estan limitats per l'anomenat “wiring problem”: cada connexió entre les unitats lògiques s'ha d'implementar amb una molècula diferent. En aquesta tesi es mostra una manera no-estàndard d'implementar funcions lògiques que redueix el nombre de cables necessaris gràcies a un disseny multicel·lular amb una distribució de la sortida en diferents cèl·lules. Es presenta una implementació pràctica utilitzant una llibreria de cèl·lules de llevat enginyeritzades, on cada constructe genètic defineix una funció lògica. Això posa de manifest el gran potencial que suposa la re-utilització dels elements genètics per construir les diferents cèl·lules. Al mateix temps, les cèl·lules es poden combinar de múltiples maneres permetent la construcció fàcil de diferents circuits sintètics complexes. En el primer article, proposem un disseny en múltiples capes. Les cèl·lules modificades genèticament poden realitzar les lògiques: IDENTITY, NOT, AND i NIMPLIES i són capaces de comunicar-se utilitzant dues connexions diferents. Com a demostració experimental, s'han implementat varies portes lògiques i circuits més complexos tals com un sumador d'un bit. En el segon article, es proposa una arquitectura general, que defineix un consorci cèl·lular, capaç d'implementar qualsevol circuit independentment de la seva complexitat. Aquest disseny es basa en cèl·lules que realitzen les lògiques IDENTITY i NOT, organitzades en dues capes. L’aspecte clau d’aquesta arquitectura és l’aïllament espaial. Aquest disseny permet implementar funcions lògiques molt complexes tals com multiplexor—4a1 utilitzant una sola molècula cable.
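The distributed-output idea above (each cell computes one elementary function of the shared inputs, and the consortium's answer is whether any cell emits the output signal) can be sketched in a few lines. This is an illustrative logical model only; the function names are not from the thesis, and the real circuits are of course genetic, not Boolean:

```python
def n_implies(a, b):
    # N-IMPLIES gate: a AND NOT b (one of the cell logics in the thesis)
    return a and not b

def xor_consortium(a, b):
    # Two N-IMPLIES cells; pooling their outputs (implicit OR) gives XOR
    # without any dedicated wiring molecule for the OR.
    return any([n_implies(a, b), n_implies(b, a)])

def full_adder(a, b, c):
    # Sum bit: four minterm cells (the odd-parity input patterns),
    # pooled by the implicit OR of the consortium.
    sum_cells = [a and not b and not c,
                 (not a) and b and not c,
                 (not a) and not b and c,
                 a and b and c]
    # Carry bit: three AND cells, likewise pooled.
    carry_cells = [a and b, a and c, b and c]
    return any(sum_cells), any(carry_cells)
```

The point of the construction is visible in the code: complex functions are obtained by mixing simple cells, and the final OR costs nothing because it is realized by the shared output readout rather than by a signaling molecule.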
APA, Harvard, Vancouver, ISO, and other styles
34

Fox, Paul James. "Massively parallel neural computation." Thesis, University of Cambridge, 2013. https://www.repository.cam.ac.uk/handle/1810/245013.

Full text
Abstract:
Reverse-engineering the brain is one of the US National Academy of Engineering's "Grand Challenges." The structure of the brain can be examined at many different levels, spanning many disciplines from low-level biology through psychology and computer science. This thesis focusses on real-time computation of large neural networks using the Izhikevich spiking neuron model. Neural computation has been described as "embarrassingly parallel" as each neuron can be thought of as an independent system, with behaviour described by a mathematical model. However, the real challenge lies in modelling neural communication. While the connectivity of neurons has some parallels with that of electrical systems, its high fan-out results in massive data processing and communication requirements when modelling neural communication, particularly for real-time computations. It is shown that memory bandwidth is the most significant constraint to the scale of real-time neural computation, followed by communication bandwidth, which leads to a decision to implement a neural computation system on a platform based on a network of Field Programmable Gate Arrays (FPGAs), using commercial off-the-shelf components with some custom supporting infrastructure. This brings implementation challenges, particularly lack of on-chip memory, but also many advantages, particularly high-speed transceivers. An algorithm to model neural communication that makes efficient use of memory and communication resources is developed and then used to implement a neural computation system on the multi-FPGA platform. Finding suitable benchmark neural networks for a massively parallel neural computation system proves to be a challenge. A synthetic benchmark that has biologically-plausible fan-out, spike frequency and spike volume is proposed and used to evaluate the system. It is shown to be capable of computing the activity of a network of 256k Izhikevich spiking neurons with a fan-out of 1k in real-time using a network of 4 FPGA boards. This compares favourably with previous work, with the added advantage of scalability to larger neural networks using more FPGAs. It is concluded that communication must be considered as a first-class design constraint when implementing massively parallel neural computation systems.
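The Izhikevich model referenced above is compact enough to state directly. The sketch below simulates one neuron with forward Euler; the parameter values are the standard regular-spiking set from Izhikevich's published model, not values taken from the thesis, whose contribution concerns scaling the communication, not the neuron equations:

```python
def izhikevich(I=10.0, a=0.02, b=0.2, c=-65.0, d=8.0,
               t_max=1000.0, dt=0.25):
    """Euler simulation of one Izhikevich neuron; returns spike times (ms).

    Dynamics: v' = 0.04 v^2 + 5 v + 140 - u + I,  u' = a (b v - u);
    on a spike (v >= 30 mV) reset v = c and u = u + d.
    """
    v, u = -65.0, b * -65.0
    spikes = []
    for step in range(int(t_max / dt)):
        dv = 0.04 * v * v + 5.0 * v + 140.0 - u + I
        du = a * (b * v - u)
        v += dt * dv
        u += dt * du
        if v >= 30.0:                 # spike: record time and reset
            spikes.append(step * dt)
            v, u = c, u + d
    return spikes

spike_times = izhikevich()            # tonic firing for this input current
```

Each neuron is this cheap; as the abstract notes, the cost that dominates at scale is distributing every spike to its roughly 1k fan-out targets, not integrating the equations.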
APA, Harvard, Vancouver, ISO, and other styles
35

Forrest, Michael. "Biophysics of Purkinje computation." Thesis, University of Warwick, 2008. http://wrap.warwick.ac.uk/84008/.

Full text
Abstract:
Although others have reported and characterised different patterns of Purkinje firing (Womack and Khodakhah, 2002, 2003, 2004; McKay and Turner, 2005) this thesis is the first study that moves beyond their description and investigates the actual basis of their generation. Purkinje cells can intrinsically fire action potentials in a repeating trimodal or bimodal pattern. The trimodal pattern consists of tonic spiking, bursting and quiescence. The bimodal pattern consists of tonic spiking and quiescence. How these firing patterns are generated, and what determines which firing pattern is selected, has not been determined to date. We have constructed a detailed biophysical Purkinje cell model that can replicate these patterns and which shows that Na+/K+ pump activity sets the model's operating mode. We propose that Na+/K+ pump modulation switches the Purkinje cell between different firing modes in a physiological setting and so innovatively hypothesise the Na+/K+ pump to be a computational element in Purkinje information coding. We present supporting in vitro Purkinje cell recordings in the presence of ouabain, which irreversibly blocks the Na+/K+ pump. Climbing fiber (CF) input has been shown experimentally to toggle a Purkinje cell between an up (firing) and down (quiescent) state and set the gain of its response to parallel fiber (PF) input (Mckay et al., 2007). Our Purkinje cell model captures these toggle and gain computations with a novel intracellular calcium computation that we hypothesise to be applicable in real Purkinje cells. So notably, our Purkinje cell model can compute, and importantly, relates biophysics to biological information processing. Our Purkinje cell model is biophysically detailed and as a result is very computationally intensive. This means that, whilst it is appropriate for studying properties of the individual Purkinje cell (e.g. relating channel densities to firing properties), it is unsuitable for incorporation into network simulations. We have overcome this by deploying mathematical transforms to produce a simpler, surrogate version of our model that has the same electrical properties, but a lower computational overhead. Our hope is that this model, of intermediate biological fidelity and medium computational complexity, will be used in the future to bridge cellular and network studies and identify how distinctive Purkinje behaviours are important to network and system function.
APA, Harvard, Vancouver, ISO, and other styles
36

Hurlbert, Anya C. "The Computation of Color." Thesis, Massachusetts Institute of Technology, 1989. http://hdl.handle.net/1721.1/7021.

Full text
Abstract:
This thesis takes an interdisciplinary approach to the study of color vision, focussing on the phenomenon of color constancy formulated as a computational problem. The primary contributions of the thesis are (1) the demonstration of a formal framework for lightness algorithms; (2) the derivation of a new lightness algorithm based on regularization theory; (3) the synthesis of an adaptive lightness algorithm using "learning" techniques; (4) the development of an image segmentation algorithm that uses luminance and color information to mark material boundaries; and (5) an experimental investigation into the cues that human observers use to judge the color of the illuminant. Other computational approaches to color are reviewed and some of their links to psychophysics and physiology are explored.
APA, Harvard, Vancouver, ISO, and other styles
37

Cho, Eun Hea. "Computation for Markov Chains." NCSU, 2000. http://www.lib.ncsu.edu/theses/available/etd-20000303-164550.

Full text
Abstract:

A finite, homogeneous, irreducible Markov chain with a given transition probability matrix possesses a unique stationary distribution vector. The questions one can pose in the area of computation of Markov chains include the following:
- How does one compute the stationary distributions?
- How accurate is the resulting answer?
In this thesis, we try to provide answers to these questions.

The thesis is divided into two parts. The first part deals with the perturbation theory of finite, homogeneous, irreducible Markov chains, which is related to the second question above. The purpose of this part is to analyze the sensitivity of the stationary distribution vector to perturbations in the transition probability matrix. The second part gives answers to the question of computing the stationary distributions of nearly uncoupled Markov chains (NUMC).
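As a minimal illustration of the computational question (not code from the thesis), the stationary distribution of a small chain can be found by solving the balance equations pi P = pi together with the normalization constraint; the matrix P below is an arbitrary example:

```python
import numpy as np

def stationary_distribution(P):
    # Solve pi P = pi with sum(pi) = 1 by stacking the balance
    # equations (P^T - I) pi = 0 with the normalization row.
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.0, 0.3, 0.7]])
pi = stationary_distribution(P)   # -> approximately [0.6, 0.3, 0.1]
```

The thesis's perturbation-theory part then asks how much such a pi can move when the entries of P are perturbed, which is exactly the accuracy question posed above.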

APA, Harvard, Vancouver, ISO, and other styles
38

Block, Aaron. "Quantum computation an introduction /." Diss., Connect to the thesis, 2002. http://hdl.handle.net/10066/1468.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Vines, Susan Karen. "Bayesian computation in epidemiology." Thesis, University of Cambridge, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.285259.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Chaplin, Jack Christopher. "Computation with photochromic memory." Thesis, University of Nottingham, 2013. http://eprints.nottingham.ac.uk/13850/.

Full text
Abstract:
Unconventional computing is an area of research in which novel materials and paradigms are utilised to implement computation and data storage. This includes attempts to embed computation into biological systems, which could allow the observation and modification of living processes. This thesis explores the storage and computational capabilities of a biocompatible light-sensitive (photochromic) molecular switch (NitroBIPS) that has the potential to be embedded into both natural and synthetic biological systems. To achieve this, NitroBIPS was embedded in a polydimethylsiloxane (PDMS) polymer matrix and an optomechanical setup was built in order to expose the sample to optical stimulation and record fluorescent emission. NitroBIPS has two stable forms - one fluorescent and one non-fluorescent - and can be switched between the two via illumination with ultraviolet or visible light. By exposing NitroBIPS samples to specific stimulus pulse sequences and recording the intensity of fluorescence emission, data could be stored in registers and logic gates and circuits implemented. In addition, by moving the area of illumination, sub-regions of the sample could be addressed. This enabled parallel registers, Turing machine tapes and elementary cellular automata to be implemented. It has been demonstrated, therefore, that photochromic molecular memory can be used to implement conventional universal computation in an unconventional manner. Furthermore, because registers, Turing machine tapes, logic gates, logic circuits and elementary cellular automata all utilise the same samples and same hardware, it has been shown that photochromic computational devices can be dynamically repurposed. NitroBIPS and related molecules have been shown elsewhere to be capable of modifying many biological processes. This includes inhibiting protein binding, perturbing lipid membranes and binding to DNA in a manner that is dependent on the molecule's form.
The implementation of universal computation demonstrated in this thesis could, therefore, be used in combination with these biological manipulations as key components within synthetic biology systems or in order to monitor and control natural biological processes.
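The storage scheme can be caricatured in a few lines (a purely illustrative toy, with hypothetical names, abstracting away the optics described above): ultraviolet light switches an addressed sub-region to the fluorescent form, visible light switches it back, and a bit is read out as the presence or absence of fluorescent emission:

```python
class PhotochromicRegister:
    """Toy model of the register: each addressable sub-region of the
    sample holds one bit, encoded in whether the molecules there are
    in the fluorescent form (written by UV) or the non-fluorescent
    form (erased by visible light)."""

    def __init__(self, n_regions):
        self.fluorescent = [False] * n_regions

    def pulse(self, region, wavelength):
        if wavelength == "UV":        # write 1: switch to fluorescent form
            self.fluorescent[region] = True
        elif wavelength == "VIS":     # write 0: switch back
            self.fluorescent[region] = False

    def read(self, region):
        # Reading = exciting the region and checking for emission.
        return self.fluorescent[region]
```

The same two-state, optically addressed primitive underlies the registers, tapes and gates the thesis builds; repurposing the device is just a matter of changing the pulse sequences and addressing pattern.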
APA, Harvard, Vancouver, ISO, and other styles
41

Urban, Christian. "Classical logic and computation." Thesis, University of Cambridge, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.621950.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Kırlı, Zeliha D. "Mobile computation with functions." Thesis, University of Edinburgh, 2002. http://hdl.handle.net/1842/369.

Full text
Abstract:
The practice of computing has reached a stage where computers are seen as parts of a global computing platform. The possibility of exploiting resources on a global scale has given rise to a new paradigm -- the mobile computation paradigm -- for computation in large-scale distributed networks. Languages which enable the mobility of code over the network are becoming widely used for building distributed applications. This thesis explores distributed computation with languages which adopt functions as the main programming abstraction and support code mobility through the mobility of functions between remote sites. It aims to highlight the benefits of using languages of this family in dealing with the challenges of mobile computation. The possibility of exploiting existing static analysis techniques suggests that having functions at the core of a mobile code language is a particularly apt choice. A range of problems which have impact on the safety, security and performance of systems are discussed here. It is shown that types extended with effects and other annotations can capture a significant amount of information about the dynamic behaviour of mobile functions and offer solutions to the problems under investigation. The thesis presents a survey of the languages Concurrent ML, Facile and PLAN, which remain loyal to the principles of the functional language ML and hence inherit its strengths in the context of concurrent and distributed computation. The languages which are defined in the subsequent chapters have their roots in these languages. Two chapters focus on using types to statically predict whether functions are used locally or may become mobile at runtime. Types are exploited for distributed call tracking to estimate which functions are invoked at which sites in the system.
Compilers for mobile code languages would benefit from such estimates in dealing with the heterogeneity of the network nodes, in providing static profiling tools and in estimating the resource-consumption of programs. Two chapters are devoted to the use of types in controlling the flow of values in a system where users have different trust levels. The confinement of values within a specified mobility region is the subject of one of these. The other focuses on systems where values are classified with respect to their confidentiality level. The sources of undesirable flows of information are identified and a solution based on noninterference is proposed.
APA, Harvard, Vancouver, ISO, and other styles
43

Grimmelmann, James Taylor Lewis. "Quantum Computation: An Introduction." Thesis, Harvard University, 1999. http://nrs.harvard.edu/urn-3:HUL.InstRepos:14485381.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Heggarty, Jonathan W. "Parallel R-matrix computation." Thesis, Queen's University Belfast, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.287468.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Marshall, Joseph. "Computation in hyperbolic groups." Thesis, University of Warwick, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.369403.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Shelley, A. J. "Reconfigurable logic for computation." Thesis, University of Sheffield, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.287658.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Marshall, G. S. "Multicomponent fluid flow computation." Thesis, Teesside University, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.384659.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Hearn, Robert A. (Robert Aubrey) 1965. "Games, puzzles, and computation." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/37913.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006.
Includes bibliographical references (p. 147-153).
There is a fundamental connection between the notions of game and of computation. At its most basic level, this is implied by any game complexity result, but the connection is deeper than this. One example is the concept of alternating nondeterminism, which is intimately connected with two-player games. In the first half of this thesis, I develop the idea of game as computation to a greater degree than has been done previously. I present a general family of games, called Constraint Logic, which is both mathematically simple and ideally suited for reductions to many actual board games. A deterministic version of Constraint Logic corresponds to a novel kind of logic circuit which is monotone and reversible. At the other end of the spectrum, I show that a multiplayer version of Constraint Logic is undecidable. That there are undecidable games using finite physical resources is philosophically important, and raises issues related to the Church-Turing thesis. In the second half of this thesis, I apply the Constraint Logic formalism to many actual games and puzzles, providing new hardness proofs. These applications include sliding-block puzzles, sliding-coin puzzles, plank puzzles, hinged polygon dissections, Amazons, Konane, Cross Purposes, TipOver, and others.
(cont.) Some of these have been well-known open problems for some time. For other games, including Minesweeper, the Warehouseman's Problem, Sokoban, and Rush Hour, I either strengthen existing results, or provide new, simpler hardness proofs than the original proofs.
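The single-player (nondeterministic) variant of Constraint Logic has a rule set small enough to state in code. The sketch below follows the published formulation in a simplified form, not text from the thesis: edges of a directed graph carry weight 1 or 2, every vertex must always have total incoming weight at least 2, and a move reverses one edge subject to that constraint.

```python
def inflow(edges, v):
    # Total weight of edges currently directed into vertex v.
    return sum(w for (_, head, w) in edges if head == v)

def legal_moves(edges):
    """Indices of edges that may be reversed while keeping every
    vertex's inflow >= 2. Reversing (u, v, w) lowers v's inflow by w
    and raises u's, so only v's constraint can be violated."""
    return [i for i, (u, v, w) in enumerate(edges)
            if inflow(edges, v) - w >= 2]

# Toy configuration: a directed 4-cycle plus one chord, all edges of
# weight 2; every vertex starts with inflow >= 2.
edges = [("a", "b", 2), ("b", "c", 2), ("c", "d", 2),
         ("d", "a", 2), ("a", "c", 2)]
moves = legal_moves(edges)   # only the two edges into c may flip
```

Hardness reductions then ask whether a given target edge can ever be reversed from a given configuration; the thesis encodes puzzle mechanics as such constraint graphs.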
by Robert Aubrey Hearn.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
49

Streeter, Kenneth Brett. "A partitioned computation machine." Thesis, Massachusetts Institute of Technology, 1990. http://hdl.handle.net/1721.1/80459.

Full text
Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1990.
Includes bibliographical references (leaves 86-88).
by Kenneth Brett Streeter.
M.S.
APA, Harvard, Vancouver, ISO, and other styles
50

Idsardi, William James. "The computation of prosody." Thesis, Massachusetts Institute of Technology, 1992. http://hdl.handle.net/1721.1/12897.

Full text
APA, Harvard, Vancouver, ISO, and other styles