Dissertations / Theses on the topic 'Turing machines'




Consult the top 50 dissertations / theses for your research on the topic 'Turing machines.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Chen, Yin Fu. "SIMTM Turing machine simulator." CSUSB ScholarWorks, 1995. https://scholarworks.lib.csusb.edu/etd-project/1229.

Full text
2

Krebs, Peter R. History & Philosophy of Science UNSW. "Turing machines, computers and artificial intelligence." Awarded by: University of New South Wales. History & Philosophy of Science, 2002. http://handle.unsw.edu.au/1959.4/19053.

Full text
Abstract:
This work investigates some of the issues and consequences for the fields of artificial intelligence and cognitive science that are related to the perceived limits of computation with current digital equipment. The Church-Turing thesis and the specific properties of Turing machines are examined, and some of the philosophical 'in principle' objections, such as the application of Gödel's incompleteness theorem, are discussed. It is argued that the misinterpretation of the Church-Turing thesis has led to unfounded assumptions about the limitations of computing machines in general. Modern digital computers, which are based on the von Neumann architecture, can typically be programmed so that they interact effectively with the real world. It is argued that digital computing machines are supersets of Turing machines if they are, for example, programmed to interact with the real world. Moreover, computing is not restricted to the domain of discrete state machines. Analog computers and real or simulated neural nets exhibit properties that may not be accommodated in a definition of computing that is based on Turing machines. Consequently, some of the philosophical 'in principle' objections to artificial intelligence may not apply to engineering efforts in artificial intelligence.
3

Davidsdottir, Agnes. "Algorithms, Turing machines and algorithmic undecidability." Thesis, Uppsala universitet, Algebra och geometri, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-441282.

Full text
4

Lin, Jack Chen-Hung. "Structural properties of one-tape Turing machines." Thesis, University of Ottawa (Canada), 2002. http://hdl.handle.net/10393/6125.

Full text
Abstract:
The model of Turing machines has been studied since its birth in 1936, and researchers have continuously proposed variants of it. Upon imposing different constraints, the power of each model varies or remains the same accordingly. Some well-known results (for example, the equivalence of finite state automata and one-tape linear-time deterministic Turing machines) have shown that the abilities to overwrite the tape content and to scan it more than once provide no advantage under certain restrictions. In this thesis, we study the behaviors and fundamental properties of variants of one-tape Turing machines, such as deterministic, reversible, nondeterministic, probabilistic, and quantum Turing machines. This gives us a better understanding of the strengths and weaknesses of each machine type. For example, under the one-tape linear-time restriction, reversible, nondeterministic, co-nondeterministic, and bounded-error probabilistic computations recognize exactly the regular languages, whereas probabilistic and quantum Turing machines can recognize even non-regular languages.
5

Michel, Pascal. "Étude de machines de Turing et complexité algorithmique." Paris 7, 1992. http://www.theses.fr/1992PA077129.

Full text
Abstract:
The thesis consists of three independent parts. Part one: behaviour of some busy beavers and an open problem in number theory. A busy beaver is a Turing machine that takes a long time to halt when started on the blank tape. A link is established between the halting problem for several busy beavers and the problem of the evolution of the iterates of a function defined by cases according to congruences. Part two: complexity of logical theories containing the coprimality relation. Upper bounds are given for the algorithmic complexity of several logical theories, including the theory of the set of natural numbers equipped with the coprimality relation, with or without the equality relation. Part three: an NP-complete language accepted in linear time by a one-tape Turing machine.
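The first part relates the halting of busy beaver candidates to the iterates of functions defined by cases on congruences. As a rough, purely illustrative sketch of that second notion (not the function studied in the thesis), the Collatz-style map below is defined by cases modulo 2 and iterated until it reaches 1 or a step budget runs out.

```python
# A minimal sketch (not from the thesis): iterating a function defined by
# cases on congruences, in the spirit of the Collatz problem. The specific
# function is a hypothetical example chosen only for illustration.

def f(n: int) -> int:
    """One step of a Collatz-style map defined by cases modulo 2."""
    return n // 2 if n % 2 == 0 else 3 * n + 1

def trajectory_length(n: int, limit: int = 10**6) -> int:
    """Number of iterations of f needed to reach 1 (-1 if the budget is hit)."""
    steps = 0
    while n != 1:
        n = f(n)
        steps += 1
        if steps >= limit:
            return -1  # undecided within the step budget
    return steps

if __name__ == "__main__":
    for start in (6, 7, 27):
        print(start, "->", trajectory_length(start), "steps to reach 1")
```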
6

Atger, Dominique. "A Turing machines simulator using a Microsoft Windows' interface." Virtual Press, 1993. http://liblink.bsu.edu/uhtbin/catkey/865965.

Full text
Abstract:
The purpose of this thesis is to develop a software system simulating Turing machines using a Microsoft Windows' interface. Developed in the 1930s by Alan Turing and Emil Post, Turing machines are defined as "abstract computers". These machines seem able to solve all problems a modern computer can solve, however complex the problems may be. A Turing machine is a basic computational model for algorithms. The software provides a practical tool for students with a basic notion of Turing machines. It contains an introduction and general information on Turing machines that give the beginner enough background to use the program. The user can create, modify or run Turing machines saved in MS-DOS files. Some examples of Turing machines are preloaded; these examples give further help to the beginner. An on-line help facility is provided in order to direct and inform the learning student at each level of the software. The Microsoft Windows' interface makes the software easy and friendly to use, and the software's modularity will ease any future enhancement.
Department of Computer Science
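For readers unfamiliar with what such a simulator does, here is a minimal, hypothetical sketch (unrelated to the Windows program described above): a Turing machine given as a transition table is stepped over a tape until it reaches a halting state. The example machine and all names are illustrative only.

```python
# Minimal Turing machine simulator sketch (illustrative only, not the
# software described in the thesis). A machine is a transition table:
# (state, symbol) -> (new_state, written_symbol, head_move).

def run_tm(transitions, tape, state="q0", halt="halt", max_steps=10_000):
    tape = dict(enumerate(tape))   # sparse tape; blank cells default to '_'
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = tape.get(head, "_")
        state, written, move = transitions[(state, symbol)]
        tape[head] = written
        head += 1 if move == "R" else -1
    return state, "".join(tape[i] for i in sorted(tape))

# Example machine: move right past the 1s, append one more 1, then halt.
APPEND_ONE = {
    ("q0", "1"): ("q0", "1", "R"),
    ("q0", "_"): ("halt", "1", "R"),
}

print(run_tm(APPEND_ONE, "111"))   # ('halt', '1111')
```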
7

Goutefangea, Patrick. "Alan Turing : La "pensée" des machines et l'idée de pratique." Nantes, 1999. http://www.theses.fr/1999NANT3003.

Full text
Abstract:
The plausibility of the equivalence, stated by the Church-Turing thesis, between effective calculation procedures and mechanical procedures rests on the possibility, for a "universal machine", of simulating the intuitive conditions of calculation for a human being. In that sense, the universal machine refers to man not only insofar as he calculates, but also insofar as he is its creator: the machine must be able to simulate the conditions of its own construction. Establishing this possibility is the role of the "imitation game" proposed by Turing. According to Turing, the machine can surprise its human adversary during the imitation game. Furthermore, the adversary has no means of distinguishing the unpredictability he is thus confronted with from the unpredictability he expects from a human individual whom he assumes to be thinking. During the game, the human adversary of the machine is led to act as if his mechanical interlocutor were a fellow creature, that is, an other. In this way, the hypothesis of a victory of the machine in the imitation game appeals to the problematic that governs, from a Kantian point of view, the examination of the conditions of possibility of recognizing an other. A human individual is defeated in the imitation game because he raises his mechanical adversary to the dignity of the Kantian subject. Within the framework of the experiment imagined by Turing, the idea of the subject thus takes on meaning as a necessary moment of the enunciation-communication process. From this angle, Turing's hypothesis leads to asserting the primacy of the enunciation-communication process over the subject. The Kantian idea of practice is thereby altered: it contributes to shaping the enunciation-communication process, but through the error of the machine's human adversary. It is therefore through the critique of its own history that the philosophical idea of practice can account for the enunciation-communication process.
8

Dalle, Vedove Nosaki Gregorio. "Chaos and Turing Machines on Bidimensional Models at Zero Temperature." Thesis, Bordeaux, 2020. http://www.theses.fr/2020BORD0309.

Full text
Abstract:
In equilibrium statistical mechanics, or thermodynamic formalism, one of the main objectives is to describe the behaviour of families of equilibrium measures for a potential parametrized by the inverse temperature. Here we consider an equilibrium measure to be a shift-invariant measure that maximizes the pressure. Earlier constructions already establish the chaotic behaviour of these measures when the system freezes, that is, when the temperature goes to zero. One of the most important examples was given by Chazottes and Hochman, who prove the non-convergence of the equilibrium measures for a locally constant potential when the dimension is 3 or higher. In this work we present a construction of a two-dimensional example, described over a finite alphabet by a locally constant potential, for which there exists a sequence eta_k such that non-convergence holds for every sequence of equilibrium measures at inverse temperatures eta_k as the temperature goes to zero. For this we use the construction described by Aubrun and Sablik, which improves the result of Hochman used in the construction of Chazottes and Hochman.
9

Kalyanasundaram, Subrahmanyam. "Turing machine algorithms and studies in quasi-randomness." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42808.

Full text
Abstract:
Randomness is an invaluable resource in theoretical computer science. However, pure random bits are hard to obtain. Quasi-randomness is a tool that has been widely used in eliminating or reducing the randomness of randomized algorithms. In this thesis, we study some aspects of quasi-randomness in graphs. Specifically, we provide an algorithm and a lower bound for two different kinds of regularity lemmas. Our algorithm for FK-regularity is derived using a spectral characterization of quasi-randomness. We use a similar spectral connection to answer an open question about quasi-random tournaments. We then provide a "Wowzer" type lower bound (for the number of parts required) for the strong regularity lemma. Finally, we study the derandomization of complexity classes using Turing machine simulations. 1. Connections between quasi-randomness and graph spectra. Quasi-random (or pseudo-random) objects are deterministic objects that behave almost like truly random objects. These objects have been widely studied in various settings (graphs, hypergraphs, directed graphs, set systems, etc.). In many cases, quasi-randomness is very closely related to the spectral properties of the combinatorial object under study. In this thesis, we discover spectral characterizations of quasi-randomness in two different cases to solve open problems. A deterministic algorithm for Frieze-Kannan regularity: the Frieze-Kannan regularity lemma asserts that any given graph of large enough size can be partitioned into a number of parts such that, across parts, the graph is quasi-random. It was unknown whether there was a deterministic algorithm that could produce a partition satisfying the conditions of the Frieze-Kannan regularity lemma in deterministic sub-cubic time. In this thesis, we answer this question by designing an O(n^ω) time algorithm for constructing such a partition, where ω is the exponent of fast matrix multiplication. Even cycles and quasi-random tournaments: Chung and Graham provided several equivalent characterizations of quasi-randomness in tournaments. One of them concerns the number of "even" cycles, where even is defined in the following sense: a cycle is said to be even if, when walking along it, an even number of edges point in the wrong direction. Chung and Graham showed that if close to half of the 4-cycles in a tournament T are even, then T is quasi-random. They asked whether the same statement is true if, instead of 4-cycles, we consider k-cycles for an even integer k. We resolve this open question by showing that for every fixed even integer k ≥ 4, if close to half of the k-cycles in a tournament T are even, then T must be quasi-random. 2. A Wowzer type lower bound for the strong regularity lemma. The regularity lemma of Szemeredi asserts that one can partition every graph into a bounded number of quasi-random bipartite graphs. Alon, Fischer, Krivelevich and Szegedy obtained a variant of the regularity lemma that allows one to have arbitrary control on this measure of quasi-randomness. However, their proof only guarantees a partition where the number of parts is given by the Wowzer function, which is the iterated version of the Tower function. We show here that a bound of this type is unavoidable by constructing a graph H with the property that, even if one wants only a very mild control on the quasi-randomness of a regular partition, any such partition of H must have a number of parts given by a Wowzer-type function. 3. How fast can we deterministically simulate nondeterminism? We study an approach towards derandomizing complexity classes using Turing machine simulations. We look at the problem of deterministically counting the exact number of accepting computation paths of a given nondeterministic Turing machine. We provide a deterministic algorithm which runs in time roughly O(√S), where S is the size of the configuration graph. The best of the previously known methods required time linear in S. Our result implies a simulation of probabilistic time classes like PP, BPP and BQP in the same running time. This is an improvement over the currently best known simulation by van Melkebeek and Santhanam.
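To make the even-cycle statistic concrete, here is a small illustrative sketch (my code, not the thesis's): it builds a random tournament and checks that, as expected for a quasi-random tournament, close to half of its 4-cycles are even, i.e. have an even number of edges pointing against the direction of traversal.

```python
import itertools, random

def random_tournament(n, seed=0):
    """Orient each pair {i, j} uniformly at random: beats[i][j] means i -> j."""
    rng = random.Random(seed)
    beats = [[False] * n for _ in range(n)]
    for i, j in itertools.combinations(range(n), 2):
        if rng.random() < 0.5:
            beats[i][j] = True
        else:
            beats[j][i] = True
    return beats

def even_cycle_fraction(beats, k=4):
    """Fraction of k-cycles (as cyclic vertex sequences) with an even number
    of edges pointing against the direction of traversal."""
    n = len(beats)
    even = total = 0
    for verts in itertools.combinations(range(n), k):
        for perm in itertools.permutations(verts[1:]):
            cycle = (verts[0],) + perm
            backwards = sum(
                not beats[cycle[i]][cycle[(i + 1) % k]] for i in range(k)
            )
            total += 1
            even += (backwards % 2 == 0)
    return even / total

if __name__ == "__main__":
    t = random_tournament(30)
    print(f"fraction of even 4-cycles: {even_cycle_fraction(t):.3f}")  # close to 0.5
```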
10

Thathachar, Jayram S. "Time-space tradeoffs and functional representations via branching programs and their generalizations /." Thesis, Connect to this title online; UW restricted, 1998. http://hdl.handle.net/1773/6951.

Full text
11

Araújo, Anderson. "Uma abordagem modelo-teórica da computabilidade de Turing clássica." [s.n.], 2011. http://repositorio.unicamp.br/jspui/handle/REPOSIP/281062.

Full text
Abstract:
Advisor: Walter Alexandre Carnielli
Doctoral thesis - Universidade Estadual de Campinas, Instituto de Filosofia e Ciências Humanas
This PhD thesis proposes a new approach to classical Turing computability, called the model-theoretic approach. In this approach, structures and theories are associated with Turing machines in order to study the characteristics of their computations. A model-theoretic approach to Turing computability through first-order logic is developed, and results on correspondence, soundness, representation and completeness among Turing machines, structures and theories are proved. In this line, the results about properties such as stability, absoluteness, universality and logicality emphasize the potential of the first-order model-theoretic standpoint. It is shown that the underlying logic of Turing theories is a minimal intuitionistic logic, which is even able to internalize a classical negation operator. The techniques developed in the thesis also make it possible to examine Turing computability over nonstandard models of arithmetic. In this context, a new perspective on Tennenbaum's phenomenon and a critical evaluation of Dershowitz and Gurevich's account of the Church-Turing thesis are given. As a consequence, an arithmetic internality principle is postulated, according to which the concept of computation itself is relative to the arithmetic model in which Turing machines operate. In this way, the thesis unifies the existing model-arithmetic characterizations of the P versus NP problem, leading, as a by-product, to a model-arithmetic barrier to the solvability of that central problem in computational complexity with respect to certain techniques. As a whole, the thesis argues that crucial characteristics of the concept of computation can be understood through the duality between finiteness and infiniteness inherent in the distinction between standard and nonstandard natural numbers.
Doctorate
Philosophy
Doctor in Philosophy
12

Durand, Bruno. "Automates cellulaires : réversibilité et complexité." Lyon 1, 1994. http://www.theses.fr/1994LYO10030.

Full text
Abstract:
We study cellular automata from several points of view. We first consider them as dynamical systems and present a new proof of a famous result due to Richardson: cellular automata are exactly the continuous functions that commute with translations. This proof allows us to study the minimization of the representation of cellular automata. Next, we present a classification of cellular automata according to their limit behaviour, a classification that we prove to be partially decidable in dimension 1. We also prove a probabilistic extension of a theorem due to Karel Culik in 1989. The methods used come from topology and combinatorics. The second part is oriented more towards algorithmics and complexity. We present two reductions from tiling problems to problems concerning cellular automata. Using these reductions, we give a simple proof that the surjectivity of cellular automata is undecidable in dimension two (a theorem due to Jarkko Kari in 1989), as well as the co-NP-completeness and average-case co-NP-completeness of the following decision problems: given a two-dimensional cellular automaton, is it bijective when restricted to finite configurations smaller than its size? Given a two-dimensional cellular automaton, is it bijective when restricted to periodic configurations of period smaller than its size?
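As a generic reminder of the object under study (an illustration of my own, not code from the thesis), an elementary cellular automaton applies the same local rule to every cell simultaneously; because the rule only looks at a cell's neighbourhood, the global update commutes with translations, which is one half of the Richardson characterization mentioned above.

```python
def ca_step(cells, rule=110):
    """One synchronous update of an elementary (radius-1, binary) cellular
    automaton on a cyclic configuration, e.g. Wolfram rule 110."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

config = [0] * 20 + [1] + [0] * 20
for _ in range(5):
    print("".join(".#"[c] for c in config))
    config = ca_step(config)
```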
13

Almeida, João Paulo da Cruz [UNESP]. "Indução finita, deduções e máquina de Turing." Universidade Estadual Paulista (UNESP), 2017. http://hdl.handle.net/11449/151718.

Full text
Abstract:
This work deals with the teaching and practice of formal deductive thinking in Mathematics. Three essentially interconnected themes are presented within the setting of the natural numbers: induction/well-ordering, deduction, and computation schemes represented by the theoretical Turing machine. The three themes come together in the logical theory of deduction and touch upon the foundations of Mathematics, its own undecidability, and the extent and limits of what can be deduced using Aristotle's logic, the path so deeply explored in the works of Gödel, Church, Turing, Robinson, and others. A large number of deduction schemes are presented for the "formulas" and theorems that permeate elementary and basic education, with an appropriate language intended to train students (and teachers) in an approach more pertinent to Mathematics.
14

Grabowski, Łukasz. "On Turing machines, groupoids, and Atiyah problem." Reviewers: Andreas Thom, Thomas Schick; supervisor: Andreas Thom. Göttingen: Niedersächsische Staats- und Universitätsbibliothek Göttingen, 2011. http://d-nb.info/1042968675/34.

Full text
15

Raymundo, F., G. Quispe, and C. Raymundo-Ibañeez. "Heavy Object Lifting Platform to Correct Human Balance and Posture." IOP Publishing Ltd, 2019. http://hdl.handle.net/10757/656295.

Full text
Abstract:
This research document on technological development is aimed at workers in small companies, those that are developing well and will soon emerge as companies positioned at the top of their labor sector. Starting from the current problem of poor load handling, it sets out to solve it, with the stated objective of correcting body posture when lifting an object by means of a machine that replaces the functions of the worker. The design and the solution were based on the requirements of workers who need to lift objects to a certain height; here the man-machine relationship and the demands and desires that help achieve the general objective are analyzed. The main function of the machine is to lift heavy objects of up to 150 kg of mass, thanks to a hydraulic piston in the lifting platform. The proposed solution is an adaptation of a platform in the form of a truck, forming an angle of 47.17° between the structure of the truck and the horizontal surface; this position is the most adequate so that the back does not suffer injuries during load handling at work. The results give assurance that the machine is reliable and easy to handle, since it only requires one operator to control the objects and keeps the back flat without damaging the spine.
16

Portier, Natacha. "Complexité algébrique : stabilité polynomiale des corps différentiels & résolutions universelles pour des problèmes NP-complets." Lyon 1, 1998. http://www.theses.fr/1998LYO10240.

Full text
Abstract:
The first part is devoted to the study of the algebraic complexity of differential fields of characteristic zero. The classes P and NP can be defined over a differential field K (cf. B. Poizat's book "Les petits cailloux"). Using the witness theorem of L. Blum, F. Cucker, M. Shub and S. Smale, we prove the P-stability of the theory of differential fields of characteristic zero: the restriction of a P problem over a differential field K to a subfield of K is still P. We deduce that the question P = NP? has the same answer in every differentially closed field of characteristic zero. We then show that the derivative does not allow us to compute large roots and large powers of an element any faster. The second part is devoted to the study of a generalization of a notion defined by M. Agrawal and S. Biswas: universal resolutions. For Boolean computations, these authors define a notion stronger than NP-completeness. By definition, with each NP-complete problem X is associated a problem Y, which we call a resolution associated with X, which is in P and such that x is in X if and only if there exists a y, of length polynomial in that of x, such that (x, y) is in Y. Such a y is called a solution of x for X. If every resolution R reduces to Y in such a way that solutions for R can be derived from solutions for Y, then Y is called a universal resolution. The authors then prove a theorem which makes it possible to show that a resolution is universal. We generalize this definition and this theorem to computations carried out over an arbitrary structure, in particular to computations over the reals as defined in the article "On a theory of computation and complexity over the real numbers: NP-completeness, recursive functions and universal machines" by L. Blum, M. Shub and S. Smale, and then we study examples.
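To make the quoted definition concrete, here is a small illustrative sketch (my example, not the thesis's universal-resolution machinery): for the NP-complete problem SAT, the associated resolution Y can be taken to be the polynomial-time checkable set of pairs (formula, assignment) in which the assignment satisfies the formula.

```python
# Illustrative sketch of a "resolution" in the quoted sense, for SAT.
# A CNF formula is a list of clauses; a clause is a list of signed variable
# indices (positive = the variable, negative = its negation).

def in_resolution(formula, assignment):
    """Polynomial-time check that (formula, assignment) is in Y_SAT,
    i.e. that 'assignment' (a dict var -> bool) satisfies every clause."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in formula
    )

# (x1 or not x2) and (x2 or x3)
phi = [[1, -2], [2, 3]]
print(in_resolution(phi, {1: True, 2: True, 3: False}))   # True: a solution of phi
print(in_resolution(phi, {1: False, 2: True, 3: False}))  # False: not a solution
```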
17

Tan, Sovanna. "Calcul et vérification parallèles de problèmes d'algèbre linéaire." Paris 11, 1994. http://www.theses.fr/1994PA112360.

Full text
Abstract:
The subject of this thesis is the study of linear algebra problems from the point of view of their computation and their verification. The computation problems are in NC2. Their verification is said to be easy when it is in NC1. We present various verification models, including probabilistic models such as instance checkers (Blum), interactive proofs and transparent proofs, and then parallel computation models. We then describe the class Det of problems reducible to computing the determinant of an integer matrix. While some problems are easy to verify, the determinant verification problem v-Det is hard to compute in the following sense: if v-Det is in NC1 then the complexity classes L and NL are equal. We isolate the class of problems reducible to v-Det, which contains other hard problems. We then show the existence of NC1 parallel instance checkers for every program P that computes a complete function f in the class Det, and we give a procedure to construct them. By definition, an instance checker uses P as an oracle to verify with high probability that P actually computes f. The checkers we introduce verify in parallel with probability 1. The last part of the study is devoted to the implementation of these algorithms on a massively parallel machine, the Connection Machine 2. We aim to show the importance of probabilistic routing algorithms for obtaining algorithms whose behaviour follows the theoretical complexity analysis.
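The gap between computing and verifying a linear-algebra result that motivates the thesis can be illustrated by a classic example in the same spirit (Freivalds' probabilistic check, not a construction from the thesis): verifying a claimed matrix product costs roughly O(n²) arithmetic per round, while recomputing the product takes O(n³) (or O(n^ω)) time.

```python
import random

def freivalds_check(A, B, C, rounds=20):
    """Probabilistically verify that C == A @ B over the integers.

    Each round multiplies by a random 0/1 vector r and compares A(Br)
    with Cr, which costs O(n^2) instead of recomputing the product."""
    n = len(A)
    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]
        if matvec(A, matvec(B, r)) != matvec(C, r):
            return False          # certainly wrong
    return True                   # correct with high probability

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
good = [[19, 22], [43, 50]]
bad = [[19, 22], [43, 51]]
print(freivalds_check(A, B, good), freivalds_check(A, B, bad))
```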
18

Feuillade, Guillaume. "Spécification logique de réseaux de Petri." Rennes 1, 2005. http://www.theses.fr/2005REN1S168.

Full text
Abstract:
This thesis deals with the logical specification of concurrent systems and the synthesis of a model: the systems are unlabelled Petri nets and the specifications are temporal formulas. By introducing counter nets and "zero-test" formulas for a counter, we reduce the undecidable problem of the emptiness of the language of a two-counter machine to that of net synthesis from a Mu-Calculus formula. We weaken the logic to the Conjunctive Nu-Calculus (NCC) and associate a recognizer with it, namely modal specifications; we exhibit a subclass of these for which synthesis is decidable. We introduce the structural theory of a given net, expressible in NCC, whose models are all the nets containing the given one. Using this theory, we establish the undecidability of synthesis for NCC by reduction from the bounded-execution problem for counter machines.
19

Ouazzani, Sabrina. "Construction de liens entre algorithmique et logique par du calcul à temps infini." Thesis, Montpellier, 2016. http://www.theses.fr/2016MONTT280.

Full text
Abstract:
This thesis is centred on the study of infinite time computations. Infinite time here means having a time axis indexed by ordinals, which are convenient objects to count along. The infinite time Turing machine model was introduced by Hamkins and Lewis in 2000. It generalises the Turing machine computation model to ordinal time: stages are indexed by (countable) ordinals, even though the tape is indexed by the integers as in the classical model, so the inputs are infinite strings. The main focus of this thesis is the study of some of the new and surprising behaviours that these machines exhibit. Naturally, the longer the computations run (that is, the greater the ordinal), the more powerful the model is: it computes and recognizes more sets. If the computations run beyond certain ordinal times, new structural properties also appear. One of these properties is the existence of gaps in the halting times of the machines. When these gaps were discovered, first links were established between them and the admissible character of the ordinals starting them. Our approach uses algorithmics as a means to emphasize the links between the logical properties of the ordinals involved and the computational properties of infinite time Turing machines. Moreover, thanks to specific algorithms, we discover and prove new properties of these gaps, leading to a better understanding of their structure. We show in particular that gaps can have precisely every possible writable (limit) ordinal size, and that there are gaps whose length is at least as great as their starting ordinal. Up to the first such gap, the structure of the gaps is quite close to that of the ordinals: they appear in increasing order of size, and admissible ordinals appear exactly at the beginnings of gaps. Beyond that special gap, the gap structure becomes quite disordered, with admissible ordinals appearing not only at the beginning but also inside some (huge) gaps.
20

Agudelo, Juan Carlos Agudelo. "Computação paraconsistente : uma abordagem logica a computação quantica." [s.n.], 2009. http://repositorio.unicamp.br/jspui/handle/REPOSIP/280061.

Full text
Abstract:
Advisor: Walter Alexandre Carnielli
Doctoral thesis - Universidade Estadual de Campinas, Instituto de Filosofia e Ciências Humanas
This work provides evidence for viewing computational complexity as logic-relative, by introducing new models of computation based on non-classical logics and by studying their features with respect to computational expressivity and efficiency. From this point of view, we suggest a new way to study the efficiency of quantum computational models, consisting in the analysis of an underlying logic. The contents of the thesis are structured as follows: the first chapter presents a conceptual analysis of the notion of 'computation', showing how this concept has evolved since the foundational work of the 1930s and discussing whether it should be considered a purely physical concept, a purely logico-mathematical one, or a combination of both. Chapter 2 introduces two versions of 'paraconsistent Turing machines', using different logic systems and obtaining models with different computational capabilities (with respect to efficiency); this result constitutes a first piece of evidence in favour of the logical relativity of computation defended here. Another piece of evidence in the same direction is presented in Chapter 3, through a generalization of Boolean circuits to non-classical logics, in particular to the paraconsistent logic mbC and to the modal logic S5, and through an analysis of the computational power of such generalizations. Chapter 4 is an introduction to quantum computation, which is then used in Chapter 5 to establish some relationships between quantum and paraconsistent models of computation, in order to propose a logical interpretation of quantum models. The final chapter (Chapter 6) describes several connections between quantum mechanics and paraconsistent logic; these connections suggest highly relevant potential for the paraconsistent approach to quantum computational phenomena and encourage further exploration of this alternative.
Doctorate
Logic
Doctor in Philosophy
21

Koiran, Pascal. "Puissance de calcul des réseaux de neurones artificiels." Lyon 1, 1993. http://www.theses.fr/1993LYO19003.

Full text
Abstract:
In recent years there has been considerable interest in solving engineering problems with artificial neural networks (for example in pattern recognition, robotics, time-series prediction, optimization, etc.). Most of this work is empirical in nature and includes little or no rigorous mathematical analysis. This thesis takes a completely different perspective: it studies the relationships between neural networks and the classical or less classical models of computation of theoretical computer science (finite automata, Turing machines, Boolean circuits, the real Turing machines of Blum, Shub and Smale). The main results are the following: 1) simulation of a universal Turing machine by recurrent networks; 2) a general study of the computational power of dynamical systems defined by iterated functions, in particular in small dimensions; 3) a study of models of computation over the real numbers, with applications to recurrent and acyclic networks. We show that the class of (discrete) functions computable in polynomial time is P/poly.
22

Munnich, Nicolas. "Operational and categorical models of PCF : addressing machines and distributing semirings." Electronic Thesis or Diss., Paris 13, 2024. http://www.theses.fr/2024PA131015.

Full text
Abstract:
Despite being introduced over 60 years ago, PCF remains of interest. Though the quest for a satisfactory fully abstract model of PCF was resolved around the turn of the millennium, new models of PCF still frequently appear in the literature, investigating unexplored avenues or using PCF as a lens or tool to investigate some other mathematical construct. In this thesis, we build upon our knowledge of models of PCF in two distinct ways: constructing a brand new model, and building upon existing models. Addressing Machines are a relatively new type of abstract machine taking inspiration from Turing machines. These machines have previously been shown to model the full untyped λ-calculus. We build upon these machines to construct Extended Addressing Machines (EAMs) and endow them with a type system. We then show that these machines can be used to obtain a new and distinct fully abstract model of PCF: we show that the machines faithfully simulate PCF in such a way that a PCF term terminates in a numeral exactly when the corresponding Extended Addressing Machine terminates in the same numeral. Likewise, we show that every typed Extended Addressing Machine can be transformed into a PCF program with the same observational behaviour. From these two results, it follows that the model of PCF obtained by quotienting typable EAMs by a suitable logical relation is fully abstract. There exists a plethora of sound categorical models of PCF, due to its close relationship with the λ-calculus. We consider two similar models (which are also models of Linear Logic) that are based on semirings: weighted models, using semirings to quantify some internal value, and multiplicity models, using semirings to linearly model functions (to model the exponential !). We investigate the intersection between these two models by studying the conditions under which two monads derived from specific semirings distribute. We discover that whether or not a semiring has an idempotent sum makes a large difference to its ability to distribute. Our investigation leads us to the notion of an unnatural distribution, which forms a monad on a Kleisli category. Finally, we present precise conditions under which a particular distribution can form between two semirings.
23

Park, Seongmin. "A hypertext learning system for theory of computation." Virtual Press, 1993. http://liblink.bsu.edu/uhtbin/catkey/897499.

Full text
Abstract:
The Hypertext concept was introduced about 50 years ago. This thesis presents the development of a reference system using the Hypertext concept. HYATS (HYpertext Automata and Turing Theory Learning System) is a system which helps users learn many topics in the area of theory of computation. The system is implemented in Guide, a general-purpose Hypertext system running in the PC-Windows environment. HYATS also includes a Turing machine simulation program written by Dominique Atger as her Master's thesis in 1993, so that users can actually experiment with the Turing machines learned through HYATS. HYATS will be not only a reference system but also a complete learning package. The motivation behind this project is to study the basic concepts of a Hypertext system so that it will also contribute to G-Net research. HYATS can be used as a prototype for the future development of versions built with other Hypertext systems such as NoteCards.
Department of Computer Science
24

Feuillade, Guillaume; Pinchinat, Sophie. "Spécification logique de réseaux de Petri." [S.l.] : [s.n.], 2005. ftp://ftp.irisa.fr/techreports/theses/2005/feuillade.pdf.

Full text
25

Meyer, Antoine; Caucal, Didier. "Graphes infinis de présentation finie." [S.l.] : [s.n.], 2005. ftp://ftp.irisa.fr/techreports/theses/2005/meyer.pdf.

Full text
26

Lassègue, Jean. "L'intelligence artificielle et la question du continu : remarques sur le modèle de Turing." Phd thesis, Université de Nanterre - Paris X, 1994. http://tel.archives-ouvertes.fr/tel-00011541.

Full text
Abstract:
The thesis has two aims: to describe the notion of Turing machine as a concept and as a model. The starting epistemological hypothesis is the following: for the notion of Turing machine to have a psychological meaning, it must be related to the notion of the continuum, in both of the directions considered, concept and model.
We describe the notion of Turing machine as a concept in computability theory. We study the epistemological context in which the concept was born in the 1930s: philosophically, the dispute over the foundations of mathematics; mathematically, the appearance of the various formalisms accounting for the computability of functions, among them the Turing machine formalism.
In the second part we describe how the concept of Turing machine is transformed into a model for psychological theory. The justification of this transformation is studied through the thought experiment devised by Turing in the "imitation game". We interpret the meaning of this game from a formalist, probabilistic and psychological point of view. We conclude that, contrary to what is generally believed, there is no "Turing test" in the game.
The third part studies the way in which the notion of Turing machine has served as a foundation for artificial intelligence. The Turing model as it has been used so far generates two types of dualist theories of mind: a Platonist theory and a functionalist theory. We justify a non-dualist interpretation by emphasizing the role played by language in the constitution of the model. Finally, we place the model within a broader historical tradition, running from C. Babbage to R. Thom.
27

Payet, Etienne. "Produit Synchronisé pour Quelques Classes de Graphes Infinis." Phd thesis, Université de la Réunion, 2000. http://tel.archives-ouvertes.fr/tel-00468099.

Full text
Abstract:
This thesis is set within the framework of the specification and verification of distributed, concurrent or reactive computer systems by means of infinite graphs associated with Thue specifications and with certain machines. We show that the class of graphs of Thue specifications is closed under synchronized product. We also establish this fact for the class of graphs of Turing machines and for some of its subclasses. We are also interested in the preservation, under synchronized product, of the decidability of the first-order theory of infinite graphs. We show that the synchronized product of pushdown machine graphs, restricted to the part accessible from a given vertex, has a first-order theory that is not decidable. However, the synchronized product of graphs without a distinguished root whose first-order theories are decidable has a decidable first-order theory. Finally, we highlight links between the graphs of Turing machines and those of Thue specifications.
28

Agudelo, Juan Carlos Agudelo. "Da computação paraconsistente a computação quantica." [s.n.], 2006. http://repositorio.unicamp.br/jspui/handle/REPOSIP/279515.

Full text
Abstract:
Advisor: Walter Alexandre Carnielli
Master's dissertation - Universidade Estadual de Campinas, Instituto de Filosofia e Ciências Humanas
The interpretations of quantum mechanics raise serious philosophical questions about the nature of the physical world and about the status of physical theories. Such interpretations play an important role in the understanding of models of quantum computing, and models of quantum computing, in turn, open possibilities to confront and test the philosophical theses that dare to address such problems. Despite the promising pragmatic and technological applications of quantum computing, this work goes another way: what is offered here is a new paradigm of computation based on the paraconsistency paradigm, together with a new interpretation of quantum computing through this model, thereby contributing simultaneously to the philosophical discussion about the notions of computability and of quantum mechanics. This work introduces the definition of what will be called the model of paraconsistent Turing machines (PTMs). This computational model is a generalization of the classical model of Turing machines. In the PTM model, unlike the classical one, the simultaneous execution of multiple instructions is allowed, giving rise to a multiplicity of symbols on different cells of the tape, a multiplicity of machine states, and a multiplicity of machine positions. Such a multiplicity of configurations, though essential to PTMs, can be seen as incoherencies with respect to classical Turing machines; to compensate, PTMs may operate with consistency and inconsistency conditions that control the global state of incoherence of the system. After introducing PTMs, the model of quantum Turing machines (QTMs) and the model of quantum circuits (QCs) are presented. These models are due to David Deutsch and are, respectively, generalizations of the model of classical Turing machines and of classical Boolean circuits, using the laws of quantum mechanics. Finally, relations between the model of PTMs and models of quantum computing are established, which permits simple quantum algorithms (a QC that solves the so-called Deutsch problem and a QC that solves the so-called Deutsch-Jozsa problem) to be simulated by PTMs. It is also shown that quantum parallelism, an essential characteristic of quantum computation, may in some cases be simulated by PTMs. Although the particular PTM model presented here has some restrictions in its capacity to simulate certain features of quantum computing, our work opens the possibility of defining other PTM models that could simulate such features. To sum up, the present work, by offering a new paradigm of computation (namely, paraconsistent computation) and a new interpretation of quantum computing (namely, an interpretation of quantum computation by means of paraconsistent computation), contributes to the philosophical discussion about the interpretation of quantum computation and of quantum mechanics itself. No less important is the fact that, although it does not aim at purely technical questions of computability theory, it opens a new line of research on the possibilities of logic-relativized computation and its implications - in the present case, relativized to paraconsistent logics.
Master's degree
Logic
Master in Philosophy
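As a loose illustration of the "multiplicity of configurations" mentioned in the abstract above, the following Python sketch keeps a set of simultaneously held (state, head, tape) configurations and fires every enabled instruction at each step. This is only a breadth-first enumeration that approximates the multiplicity idea under simplifying assumptions; it is not Agudelo's formal PTM definition, and the transition table is a hypothetical toy:

def ptm_style_step(configs, delta):
    """One step over a multiplicity of configurations.
    configs: set of (state, head, tape) triples, with tape a tuple of symbols.
    delta[(state, symbol)]: list of (new_state, written_symbol, head_move);
    every enabled instruction fires, so several configurations may coexist."""
    nxt = set()
    for state, head, tape in configs:
        for new_state, written, move in delta.get((state, tape[head]), []):
            new_tape = tape[:head] + (written,) + tape[head + 1:]
            new_head = min(max(head + move, 0), len(tape) - 1)  # toy: bounded tape
            nxt.add((new_state, new_head, new_tape))
    return nxt

# hypothetical table: in state 'q' reading '0', write '0' and move right, and
# simultaneously write '1' and stay put, so two branches coexist after one step
delta = {("q", "0"): [("q", "0", 1), ("q", "1", 0)]}
configs = {("q", 0, ("0", "0", "0"))}
for _ in range(2):
    configs = ptm_style_step(configs, delta)
print(configs)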
APA, Harvard, Vancouver, ISO, and other styles
29

De, La Cruz Anthony, Jaime Cardenas, and Leonardo Vinces. "Development of a Machine to Control the Level of Washing in Panca Chili Seeds." Universidad Peruana de Ciencias Aplicadas (UPC), 2021. http://hdl.handle.net/10757/653774.

Full text
Abstract:
The full text of this work is not available in the UPC Academic Repository due to restrictions imposed by the publisher.
The washing of Panca chili seeds requires innovative solutions that allow this process to be controlled. Variables such as conductivity, pH and colorimetry must be handled in the face of the challenge of working with small seeds. At present there are no machines dedicated to washing this type of seed: in many companies the work is done manually, which is not adequate because manual washing cannot guarantee homogeneity. In addition, direct handling of these seeds can cause irritation to the eyes and skin of the person in contact with them. For this reason, a scale machine based on a motorized rotary agitator inside a tank is proposed, in order to guarantee the homogeneity of the mixture when washing the seeds. This work determines which of two different types of agitator (axial and radial) is the more efficient for washing Panca chili seeds; to achieve this, the pH and electrical conductivity of the water are measured after the mixture has been stirred. Finally, the analysis of the tests performed on the mixture obtained and washed by each type of agitator identified the turbine-type radial agitator as the more efficient in seed washing, compared with the helical and paddle agitators designed for this work; it also confirmed that this type of agitator, together with conductivity control, guarantees the homogeneity of the mixture during washing. © 2021, The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG.
Peer reviewed
APA, Harvard, Vancouver, ISO, and other styles
30

Ben-Azza, Hussain. "Automates cellulaires et pavages vus comme des réseaux booléens." Lyon 1, 1995. http://www.theses.fr/1995LYO10235.

Full text
Abstract:
One of the aims of this thesis is to compare certain complexity measures associated with the following two models of computation: Boolean networks and cellular automata. The first chapter describes the approach adopted and gives a brief overview of related work. The second chapter introduces Boolean networks and cellular automata in relation to Turing machines, together with some unavoidable facts illustrated in the first chapters of P. Dunne's monograph (absorption, the Shannon-Lupanov effect, gaps). The third chapter studies synchronous Boolean networks as objects of independent interest. The fourth chapter compares Boolean networks and cellular automata. The fifth chapter defines the tiling problem more precisely and proves the facts described at the end of this summary. A Boolean network can be transformed into a synchronous Boolean network whose size is at most squared and whose depth is unchanged. The synchronous version of evaluating a Boolean network on an input is complete for deterministic polynomial time. The Shannon-Lupanov effect also holds for synchronous Boolean networks. Moreover, a statistical argument shows that both the width and the depth of a Boolean network can influence its size. A language recognized in cellular time t is recognized by a u_E-uniform synchronous family of Boolean networks of size O(t^2) and depth O(t). A u_C-uniform synchronous family of Boolean networks of size z, depth d and width l can be simulated in cellular time O(d z^2 log z) and cellular width O(log d + log l). Deciding whether a grid is tileable by tiles of size k = 2 is in NC^2. A game based on k-tiles (k > 2) is shown to be complete for deterministic polynomial space.
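As a side note on terminology (an illustrative sketch only, unrelated to the specific constructions and bounds of the thesis), "synchronous" evaluation of a Boolean network means that all gates of a level fire in the same round; a minimal Python sketch of such level-by-level evaluation:

from typing import Callable, Dict, List, Tuple

# a gate is (operation, list of input wire names); wires are evaluated level by level
Gate = Tuple[Callable[..., bool], List[str]]

def eval_synchronous(levels: List[Dict[str, Gate]], inputs: Dict[str, bool]) -> Dict[str, bool]:
    values = dict(inputs)
    for level in levels:                      # one "round" per level
        new_values = {}
        for wire, (op, args) in level.items():
            new_values[wire] = op(*(values[a] for a in args))
        values.update(new_values)             # all gates of the level fire together
    return values

AND = lambda x, y: x and y
OR  = lambda x, y: x or y
NOT = lambda x: not x

# small leveled example circuit computing XOR of a and b
levels = [
    {"na": (NOT, ["a"]), "nb": (NOT, ["b"])},
    {"t1": (AND, ["a", "nb"]), "t2": (AND, ["na", "b"])},
    {"xor": (OR, ["t1", "t2"])},
]
print(eval_synchronous(levels, {"a": True, "b": False})["xor"])   # True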
APA, Harvard, Vancouver, ISO, and other styles
31

Lefevre, Jonas. "Protocoles de population : une hiérarchie des variantes. Calcul de couplages autostabilisants." Palaiseau, Ecole polytechnique, 2014. http://www.theses.fr/2014EPXX0068.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Aubert, Clément. "Logique linéaire et classes de complexité sous-polynominales." Paris 13, 2013. https://theses.hal.science/tel-00957653.

Full text
Abstract:
This research in theoretical computer science extends the gateways between linear logic and complexity theory by introducing two innovative models of computation. It focuses on sub-polynomial classes of complexity: AC and NC, the classes of efficiently parallelizable problems, and L and NL, the classes of problems solvable deterministically and non-deterministically in logarithmic space. Linear logic is used through its proof-net representation to mimic efficiently the parallel computation of Boolean circuits, including but not restricted to their constant-depth variants. In a second step, we investigate how operators on a von Neumann algebra can be used to model computation, thanks to the method provided by the geometry of interaction, a subtle reconstruction of linear logic. This characterization of computation in logarithmic space with matrices can naturally be understood as wandering over simple structures using pointers, parsing them without modifying them. We make this intuition formal by introducing non-deterministic pointer machines and relating them to other well-known pointer-like machines. We thereby obtain new implicit characterizations of sub-polynomial classes of complexity.
APA, Harvard, Vancouver, ISO, and other styles
33

Toister, Yanai. "Photography from the Turin Shroud to the Turing Machine." Thesis, The University of Sydney, 2015. http://hdl.handle.net/2123/14911.

Full text
Abstract:
Photography has always been a migratory system of representation. Today, it is integrated into numerous systems, in a profusion of specialties, sub-disciplines and disciplines. Within many of these domains, the exponentially growing powers of information processing enable the manufacturing of images that are seemingly photographic, yet partly (or fully) synthetic. How do we define these images? Are traditional disciplinary accounts relevant? Photography’s cultural value is most often measured in terms of its products, the various kinds of pictures that it generates. Instead, photography can be interrogated by studying the dynamic relationships between its components: the electromagnetic, optical, mechanical, chemical and recently mathematical elements and procedures that combine as a process that produces images. This dissertation utilizes two metaphors for defining photography: the Turin Shroud and the Universal Turing Machine. The former is presented as a set of propositions that facilitate new understandings about the history and theory of photography. The latter is introduced as a conceptual model that expands the theory and philosophy of photography into new realms, most notably those of new media and media philosophy. In support of this novel exposition, and to more adequately portray the trajectory of photography’s reincarnation as a form of computation, various terms from Vilém Flusser’s philosophy are reinterpreted and further developed. Through these it is suggested that photography is a family of programs wherein both ‘analogue’ and ‘digital’ characteristics always coexist. These are not to be seen as mutually exclusive qualities but as complementary discursive modalities. Further, because photography has always had mathematical qualities and potentialities, the recent technological turmoil does not designate the ‘end’ of the medium, but rather its coming of age. Importantly, now that the medium of photography has become media, photographic images should no longer be understood as bearers of ontological qualities but only as epistemic containers.
APA, Harvard, Vancouver, ISO, and other styles
34

Skácel, Jiří. "Systémy převodníků." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2016. http://www.nusl.cz/ntk/nusl-255379.

Full text
Abstract:
This thesis defines systems of pushdown transducers. The idea of cooperating distributed grammar systems, in which components work on a single word, is adapted to use transducers instead of grammars: the transducers cooperate by passing the output of one component to the input of another. The thesis discusses the descriptive power of such systems and the equivalence between systems with arbitrary numbers of components. The main result is a comparison of their descriptive power with Turing machines, with respect to both the translations they define and the languages they accept.
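The cooperation scheme described above, passing the output of one component to the input of the next, can be pictured with a small Python sketch; the two toy transducers below (upper_case and run_length) are hypothetical finite transducers chosen purely for illustration and are much weaker than the pushdown transducers of the thesis:

def upper_case(symbols):
    # toy transducer (hypothetical): maps each input symbol to its upper-case form
    for s in symbols:
        yield s.upper()

def run_length(symbols):
    # toy transducer (hypothetical): collapses runs of equal symbols into (symbol, count) pairs
    prev, count = None, 0
    for s in symbols:
        if s == prev:
            count += 1
        else:
            if prev is not None:
                yield (prev, count)
            prev, count = s, 1
    if prev is not None:
        yield (prev, count)

def compose(*transducers):
    # cooperation by piping: the output of component i becomes the input of component i+1
    def pipeline(symbols):
        for t in transducers:
            symbols = t(symbols)
        return symbols
    return pipeline

print(list(compose(upper_case, run_length)("aabbb")))  # [('A', 2), ('B', 3)]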
APA, Harvard, Vancouver, ISO, and other styles
35

Ricquebourg, Frédéric. "Archéologie de l'informatique." Lyon 3, 2008. https://scd-resnum.univ-lyon3.fr/out/theses/2008_out_ricquebourg_f.pdf.

Full text
Abstract:
The computer, an electronic digital instrument for calculation and information processing, was born at the end of the Second World War in an American university laboratory working for the armed forces of that country. Materially unreliable, extremely power-hungry, and accessible only to highly qualified, hand-picked personnel, the first computer systems and their software were designed mostly to meet the specific needs of the military. Initially conceived as artificial analogues of the human brain, they occupied dozens of square metres of floor space and weighed tons, while their cost usually reached several hundred thousand dollars; their computing power was derisory compared with what anyone can have today. In barely sixty years, however, the computer has literally conquered human societies, transforming them so radically and so deeply that it is now common to speak of a digital revolution. Cheap, ultra-miniaturized, polymorphous, extremely powerful and networked on a planetary scale, it is now found absolutely everywhere, in every aspect of human existence. It has become so essential, so coextensive with ourselves, our time and our economy that, paradoxically, we have ended up paying it hardly any attention. After questioning its provenance and its logico-mathematical essence, we show how the computer, this prodigal son engendered and perfected in wartime by the powerful army-university-industry triangle, has undergone spectacular qualitative and performative metamorphoses while retroactively inducing remarkable transformations in the institutions, groups and individuals that make up society. Through this archaeology of computing, the aim is to better understand the universal machine, its history and its science, in the hope of better understanding man and the world in which he lives today.
APA, Harvard, Vancouver, ISO, and other styles
36

Meyer, Antoine. "Graphes infinis de présentation finie." Phd thesis, Université Rennes 1, 2005. http://tel.archives-ouvertes.fr/tel-00325707.

Full text
Abstract:
This thesis is concerned with the study of families of finitely presented infinite graphs, their structural properties, and the comparisons between these families. Given a finite alphabet Σ, an infinite graph labelled by Σ can be characterized by a finite set of binary relations (R_a), a ∈ Σ, over an arbitrary countable domain V. Many finite characterizations of such sets of relations exist, either explicitly, via rewriting systems or various formalisms from automata theory, or implicitly. After a survey of the main existing results, we focus on three problems. First, we define three families of term rewriting systems whose derivation relations can be represented finitely. These results raise several questions about the corresponding families of infinite graphs. Second, we study two families of graphs whose sets of traces form the family of context-sensitive languages, namely rational graphs and linearly bounded graphs. We are particularly interested in the case of deterministic context-sensitive languages, as well as in the structural comparison of these two families. Finally, from a perspective closer to verification, we propose an algorithm for computing the set of predecessors for a family of higher-order pushdown automata.
APA, Harvard, Vancouver, ISO, and other styles
37

Capuni, Ilir. "A fault-tolerant Turing machine." Thesis, Boston University, 2013. https://hdl.handle.net/2144/13608.

Full text
Abstract:
Thesis (Ph.D.)--Boston University.
The Turing machine is the most studied universal model of computation. This thesis studies the question of whether there is a Turing machine that can compute reliably even when violations of its transition function occur independently of each other with some small probability. We prove the existence of a Turing machine that, with a polynomial overhead, can simulate any other Turing machine even when it is subject to faults of the above type, thereby answering a question that had been open for 25 years.
APA, Harvard, Vancouver, ISO, and other styles
38

Zenil, Hector. "Une approche expérimentale à la théorie algorithmique de la complexité." Phd thesis, Université des Sciences et Technologie de Lille - Lille I, 2011. http://tel.archives-ouvertes.fr/tel-00839374.

Full text
Abstract:
A constraining feature of Kolmogorov-Chaitin complexity (denoted here by K) is that it is not computable, because of the halting problem, which limits its range of application. Another criticism concerns the dependence of K on a particular language or a particular universal Turing machine, especially for rather short strings, for example strings shorter than the typical length of a programming-language compiler. In practice, one can approximate K(s) by compression methods, but the performance of these methods collapses for short strings: for short strings, approximating K(s) by compression does not work. This thesis presents an empirical approach to overcome this problem. We propose a "natural" method that allows a more stable definition of the Kolmogorov-Chaitin complexity K(s) via the algorithmic probability measure m(s). The idea is to run a universal machine on random programs in order to estimate experimentally the probability m(s) (the probability of producing s), and then to evaluate K(s) numerically as an alternative to compression-based methods. The method consists of: (a) running computing machines (Turing machines, cellular automata) systematically to produce strings, (b) observing the resulting probability distributions, and then (c) obtaining K(s) from m(s) by means of the Levin-Chaitin coding theorem.
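For reference, the Levin-Chaitin coding theorem that step (c) relies on can be stated in its standard form (not quoted from the thesis) as:

\[ m(s) \;=\; \sum_{p \,:\, U(p)=s} 2^{-|p|}, \qquad K(s) \;=\; -\log_2 m(s) + O(1), \]

where U is a fixed prefix-free universal Turing machine, |p| is the length of program p, and the O(1) constant depends only on U.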
APA, Harvard, Vancouver, ISO, and other styles
39

Masum, Hassan. "An exploration of turing machine based complexity." Dissertation (Mathematics), Carleton University, Ottawa, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
40

Müller, Markus. "Quantum Kolmogorov complexity and the quantum turing machine." [S.l.] : [s.n.], 2007. http://opus.kobv.de/tuberlin/volltexte/2007/1655.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Rendell, P. "Turing machine universality of the game of life." Thesis, University of the West of England, Bristol, 2014. http://eprints.uwe.ac.uk/22323/.

Full text
Abstract:
This project proves universal computation in the Game of Life cellular automaton by using a Turing machine construction. Existing proofs of universality in the Game of Life rely on a counter machine. These machines require complex encoding and decoding of the input and output, and the proof of universality for them, via the Church-Turing thesis, is that they can perform the equivalent of a Turing machine. A proof based directly on a Turing machine is much more accessible. The computational power available today allows powerful algorithms such as HashLife to calculate the evolution of cellular automata patterns sufficiently fast that an efficient universal Turing machine can be demonstrated in a conveniently short period of time. Such a universal Turing machine is presented here. It is a direct simulation of a Turing machine, and the input and output are easily interpreted. In order to achieve fully universal behaviour, an infinite storage medium is required. The storage medium used to represent the Turing machine tape is a pair of stacks: one stack represents the Turing tape to the left of the read/write head, and the other the Turing tape to the right. Collision-based construction techniques have been used to add stack cells to the ends of the stacks continuously. The continuous construction of the stacks is equivalent to the formatting of blank media. This project demonstrates that large areas of a cellular automaton can be formatted in real time to perform complex functions.
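The tape-as-two-stacks representation mentioned above can be illustrated with a short Python sketch (an illustration of the data structure only, not of Rendell's Game of Life construction; the class name is hypothetical):

class TwoStackTape:
    """Illustrative helper, not from the thesis: a Turing tape held as two stacks.
    `left` is everything left of the head (top of stack = cell immediately to the
    left); `right` holds the head cell and everything to its right (top = head cell)."""

    def __init__(self, content="", blank="_"):
        self.blank = blank
        self.left = []
        self.right = list(reversed(content)) or [blank]

    def read(self):
        return self.right[-1]

    def write(self, symbol):
        self.right[-1] = symbol

    def move_right(self):
        # the head cell migrates onto the left stack; extend with a blank if needed
        self.left.append(self.right.pop())
        if not self.right:
            self.right.append(self.blank)

    def move_left(self):
        # the cell to the left of the head migrates back onto the right stack
        if not self.left:
            self.left.append(self.blank)
        self.right.append(self.left.pop())

tape = TwoStackTape("101")
print(tape.read())   # '1'
tape.move_right()
print(tape.read())   # '0'
tape.write("1")
tape.move_left()
print(tape.read())   # '1'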
APA, Harvard, Vancouver, ISO, and other styles
42

Renuka, Shivaswaroop R. "Development of models of CNC machines - EMCO VMC100 and EMCO TURN120P in virtual NC." Ohio : Ohio University, 1996. http://www.ohiolink.edu/etd/view.cgi?ohiou1178046654.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Dubovský, Dávid. "Analýza rizik nástrojářské dílny." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2015. http://www.nusl.cz/ntk/nusl-232113.

Full text
Abstract:
This master's thesis deals with a hazard analysis of selected machinery according to current legislative documents in Slovakia, which were adopted from European Union directives. The thesis is focused on exploring legislation related to the safety of a toolroom workshop in the Czech Republic and Germany; for that reason, a search of the harmonized standards in these countries was carried out. Subsequently, hazards are identified according to the researched standards and the control system is assessed on the basis of its performance level. Finally, all obtained data are evaluated, precautionary measures are suggested, and, with economic factors taken into account, possibilities for eliminating the risks are proposed.
APA, Harvard, Vancouver, ISO, and other styles
44

Shah, Huma. "Deception-detection and machine intelligence in practical Turing tests." Thesis, University of Reading, 2010. http://centaur.reading.ac.uk/24768/.

Full text
Abstract:
Deception-detection is the crux of Turing's experiment to examine machine thinking conveyed through a capacity to respond with sustained and satisfactory answers to unrestricted questions put by a human interrogator. However, in the 60 years to the month since the publication of Computing Machinery and Intelligence, little agreement exists on a canonical format for Turing's textual game of imitation, deception and machine intelligence. This research raises, from the trapped mine of philosophical claims, counter-claims and rebuttals, Turing's own distinct five-minute question-answer imitation game, which he envisioned practicalised in two different ways: a) a two-participant, interrogator-witness viva voce; b) a three-participant comparison of a machine with a human, both questioned simultaneously by a human interrogator. Using the 18th Loebner Prize for Artificial Intelligence contest, and Colby et al.'s 1972 transcript-analysis paradigm, this research practicalised Turing's imitation game with over 400 human participants and 13 machines across three original experiments. Results show that, at the current state of technology, a deception rate of 8.33% was achieved by machines in 60 human-machine simultaneous comparison tests. Results also show that more than 1 in 3 reviewers succumbed to hidden-interlocutor misidentification after reading transcripts from experiment 2. Deception-detection is essential to uncover the increasing number of malfeasant programmes, such as CyberLover, developed to steal identities and financially defraud users in chatrooms across the Internet. Practicalising Turing's two tests can assist in understanding natural dialogue and mitigate the risk from cybercrime.
APA, Harvard, Vancouver, ISO, and other styles
45

Tantau, Till. "On structural similarities of finite automata and turing machine enumerability classes." [S.l.] : [s.n.], 2003. http://edocs.tu-berlin.de/diss/2003/tantau_till.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Pazienza, Giovanni Egidio. "Aspects of algorithms and dynamics of cellular paradigms." Doctoral thesis, Universitat Ramon Llull, 2008. http://hdl.handle.net/10803/9151.

Full text
Abstract:
Cellular paradigms, like Cellular Neural Networks (CNNs) and Cellular Automata (CA), are an excellent tool to perform computation, since they are equivalent to a universal Turing machine. The introduction of the Cellular Neural Network - Universal Machine (CNN-UM) allowed us to develop hardware whose computational core works according to the principles of cellular paradigms; such hardware has found application in a number of fields throughout the last decade. Nevertheless, there are still many open questions about how to define algorithms for a CNN-UM and how to study the dynamics of Cellular Automata. In this dissertation both problems are tackled: first, we prove that it is possible to bound the space of all algorithms of the CNN-UM and explore it through genetic techniques; second, we explain the fundamentals of the nonlinear perspective on CA (according to Chua's definition) and illustrate how this technique has allowed us to find novel results.
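As a minimal, self-contained example of a cellular paradigm (an elementary cellular automaton, not the CNN-UM hardware or Chua's nonlinear analysis discussed in the thesis), the following Python sketch evolves rule 110, a one-dimensional CA known to be Turing-universal:

def eca_step(cells, rule=110):
    # one synchronous update of an elementary cellular automaton with periodic
    # boundaries; each cell looks at its left neighbour, itself and its right neighbour
    n = len(cells)
    table = [(rule >> i) & 1 for i in range(8)]   # Wolfram rule coding
    return [table[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
            for i in range(n)]

state = [0] * 40 + [1]          # single live cell at the right edge
for _ in range(20):
    print("".join(".#"[c] for c in state))
    state = eca_step(state)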
APA, Harvard, Vancouver, ISO, and other styles
47

Wells, Andrew J. "The External Tape Hypothesis : a Turing machine based approach to cognitive computation." Thesis, London School of Economics and Political Science (University of London), 1994. http://etheses.lse.ac.uk/118/.

Full text
Abstract:
The symbol processing or "classical cognitivist" approach to mental computation suggests that the cognitive architecture operates rather like a digital computer. The components of the architecture are input, output and central systems. The input and output systems communicate with both the internal and external environments of the cognizer and transmit codes to and from the rule governed, central processing system which operates on structured representational expressions in the internal environment. The connectionist approach, by contrast, suggests that the cognitive architecture should be thought of as a network of interconnected neuron-like processing elements (nodes) which operates rather like a brain. Connectionism distinguishes input, output and central or "hidden" layers of nodes. Connectionists claim that internal processing consists not of the rule governed manipulation of structured symbolic expressions, but of the excitation and inhibition of activity and the alteration of connection strengths via message passing within and between layers of nodes in the network. A central claim of the thesis is that neither symbol processing nor connectionism provides an adequate characterization of the role of the external environment in cognitive computation. An alternative approach, called the External Tape Hypothesis (ETH), is developed which claims, on the basis of Turing's analysis of routine computation, that the Turing machine model can be used as the basis for a theory which includes the environment as an essential part of the cognitive architecture. The environment is thought of as the tape, and the brain as the control of a Turing machine. Finite state automata, Turing machines, and universal Turing machines are described, including details of Turing's original universal machine construction. A short account of relevant aspects of the history of digital computation is followed by a critique of the symbol processing approach as it is construed by influential proponents such as Allen Newell and Zenon Pylyshyn among others. The External Tape Hypothesis is then developed as an alternative theoretical basis. In the final chapter, the ETH is combined with the notion of a self-describing Turing machine to provide the basis for an account of thinking and the development of internal representations.
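The division the abstract draws between the control (the finite-state "brain") and the tape (the "environment") can be pictured with a minimal Python sketch in which the tape is an external dictionary handed to the machine. This is only an illustration of that division, not Wells's formal development, and the transition table (a unary incrementer) is a hypothetical example:

def run_turing_machine(delta, tape, state="q0", head=0, accept="halt", blank="_"):
    """Run a Turing machine given as a transition table
    delta[(state, symbol)] = (new_state, written_symbol, move in {-1, +1}).
    The tape is an external dict mapping positions to symbols (the 'environment');
    the loop below plays the role of the finite-state control (the 'brain')."""
    while state != accept:
        symbol = tape.get(head, blank)
        state, written, move = delta[(state, symbol)]
        tape[head] = written
        head += move
    return tape

# hypothetical example: append one '1' to a unary string
delta = {
    ("q0", "1"): ("q0", "1", +1),    # scan right over the 1s
    ("q0", "_"): ("halt", "1", +1),  # write a 1 on the first blank and stop
}
tape = {i: "1" for i in range(3)}    # unary "111"
print(run_turing_machine(delta, tape))  # {0: '1', 1: '1', 2: '1', 3: '1'}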
APA, Harvard, Vancouver, ISO, and other styles
48

Yedidia, Adam. "A relatively small turing machine whose behavior is independent of set theory." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/100680.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 79-80).
Since the definition of the Busy Beaver function by Radó in 1962, an interesting open question has been the smallest value of n for which BB(n) is independent of ZFC. Is this n approximately 10, or closer to 1,000,000, or is it unfathomably large? In this thesis, I show that it is at most 340,943 by presenting an explicit description of a 340,943-state Turing machine Z with 1 tape and a 2-symbol alphabet whose behavior cannot be proved in ZFC, assuming ZFC is consistent. The machine is based on work of Harvey Friedman on independent statements involving order-invariant graphs. In doing so, I give the first known upper bound on the highest provable Busy Beaver number in ZFC. I also present an explicit description of a 7,902-state Turing machine G that halts if and only if there is a counterexample to Goldbach's conjecture, and an explicit description of a 36,146-state Turing machine R that halts if and only if the Riemann hypothesis is false. In the process of creating G, R, and Z, I define a higher-level language, TMD, which is much more convenient than direct state manipulation, and explain in great detail the process of compiling this language down to a Turing machine description. TMD is a well-documented language that is optimized for parsimony over efficiency. This makes TMD a uniquely useful tool for creating small Turing machines that encode mathematical statements.
by Adam Yedidia
M. Eng.
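For reference, the Busy Beaver function discussed in the abstract is standardly defined (not quoted from the thesis) as:

\[ BB(n) \;=\; \max\{\, s(M) \;:\; M \text{ an } n\text{-state, 2-symbol Turing machine that halts on the blank tape} \,\}, \]

where s(M) is the number of steps M takes before halting; Radó's original Σ variant counts the 1s left on the tape instead. Both variants grow faster than any computable function, which is why a statement such as "BB(n) = k" can become unprovable in ZFC for large enough n.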
APA, Harvard, Vancouver, ISO, and other styles
49

Ахматов, Владислав Вікторович. "The evolution of artificial intelligence: from the Turing machine to Google DeepMind." Thesis, Київський національний університет технологій та дизайну, 2020. https://er.knutd.edu.ua/handle/123456789/15259.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Wang, Zhanchen. "Chatter analysis of machine tool systems in turning processes." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp05/NQ63715.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
