Dissertations / Theses on the topic 'Randomness'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Randomness.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Ghoudi, Kilani. "Multivariate randomness statistics." Thesis, University of Ottawa (Canada), 1993. http://dx.doi.org/10.20381/ruor-17165.

Full text
Abstract:
During the startup phase of a production process, while statistics on product quality are being collected, it is useful to establish that the process is under control. Small samples of sizes $n_i$, $i = 1, \dots, q$, are taken periodically over $q$ periods. We shall assume each measurement is multivariate. A process is under control, or on-target, if all the observations are deemed to be independent and identically distributed. Let $F_i$ denote the empirical distribution function of the $i$th sample, and let $\bar{F}$ denote the empirical distribution function of all observations. Following Lehmann (1951), we propose statistics of the form $\sum_{i=1}^{q}\int_{-\infty}^{\infty}\left[F_i(s)-\bar{F}(s)\right]^2 \, d\bar{F}(s)$. The asymptotics of nonparametric $q$-sample Cramér-von Mises statistics were studied in Kiefer (1959); the emphasis there, however, is on the case where $n_i \to \infty$ while $q$ stays fixed. Here we study the asymptotics of a family of randomness statistics, including the above, in the quality-control situation (i.e., $q \to \infty$ while the $n_i$ stay fixed). Such statistics can be used in many situations; in fact, one can use randomness statistics in any situation where the problem amounts to a test of homoscedasticity or homogeneity of a collection of observations. We give two such applications. First, we show how such statistics can be used in nonparametric regression. Second, we illustrate the application to retrospective quality control.
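For intuition, here is a minimal numerical sketch (an illustration, not code from the thesis) of the univariate version of the statistic above: each of the q periods contributes a small sample, each period's empirical distribution function F_i is compared with the pooled empirical distribution function F-bar, and the integral with respect to dF-bar is approximated by an average over the pooled observations.

```python
# Illustrative sketch of the q-sample Cramer-von Mises type randomness statistic.
import numpy as np

def randomness_statistic(samples):
    """samples: list of 1-D arrays, one per period."""
    pooled = np.concatenate(samples)

    def ecdf(data, points):
        # fraction of 'data' less than or equal to each point
        return np.searchsorted(np.sort(data), points, side="right") / len(data)

    F_bar = ecdf(pooled, pooled)  # pooled ECDF evaluated at every pooled point
    total = 0.0
    for s in samples:
        F_i = ecdf(s, pooled)
        # integral of (F_i - F_bar)^2 with respect to dF_bar, approximated as
        # an average over the pooled observations
        total += np.mean((F_i - F_bar) ** 2)
    return total

rng = np.random.default_rng(0)
on_target = [rng.normal(size=5) for _ in range(40)]  # i.i.d. periods, q = 40
print(randomness_statistic(on_target))
```

Large values of the statistic indicate that the periods are not exchangeable, i.e. that the process has drifted off-target.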
APA, Harvard, Vancouver, ISO, and other styles
2

Justamante, David. "Randomness from space." Thesis, Monterey, California: Naval Postgraduate School, 2017. http://hdl.handle.net/10945/52996.

Full text
Abstract:
Approved for public release; distribution is unlimited
Includes supplementary material
Reissued 30 May 2017 with correction to degree on title page.
Randomness is at the heart of today's computing. There are two categories of methods to generate random numbers: pseudorandom number generation (PRNG) methods and true random number generation (TRNG) methods. While PRNGs operate orders of magnitude faster than TRNGs, the strength of PRNGs lies in their initial seed, and TRNGs can be used to generate such a seed. This thesis focuses on studying the feasibility of using the next-generation Naval Postgraduate School Femto Satellite (NPSFS) as a TRNG. The hardware for the next generation will come from the Intel Quark D2000 along with its onboard BMC150 6-axis eCompass. We simulated 3-dimensional motion to see if any raw data from the BMC150 could be used as an entropy source for random number generation. We studied various "schemes" for selecting and outputting specific data bits to determine whether more entropy and a higher bitrate could be reached. Data collected in this thesis suggests that the BMC150 contains certain bits that could be considered good sources of entropy. Various schemes further utilized these bits to yield a strong entropy source with a higher bitrate. We propose that the NPSFS be studied further to find other sources of entropy. We also propose that a prototype be sent into space for experimental verification of these results.
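As a rough illustration of the kind of bit-selection scheme described above (a hypothetical sketch, not the thesis's actual scheme or the BMC150 data format), one can keep only the low-order bits of each raw sensor word and estimate the Shannon entropy of the resulting bit stream:

```python
# Hypothetical bit-selection scheme: keep the noisiest low-order bits of raw
# sensor words and estimate the Shannon entropy of the resulting bit stream.
import math
from collections import Counter

def lsb_stream(samples, keep_bits=2):
    """Concatenate the keep_bits least significant bits of each raw sample."""
    bits = []
    for s in samples:
        for k in range(keep_bits):
            bits.append((s >> k) & 1)
    return bits

def shannon_entropy_per_bit(bits):
    counts = Counter(bits)
    n = len(bits)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

raw = [513, 498, 507, 521, 502, 519, 500, 511]  # made-up accelerometer words
bits = lsb_stream(raw)
print(shannon_entropy_per_bit(bits))  # values close to 1.0 suggest a usable source
```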
Lieutenant, United States Navy
APA, Harvard, Vancouver, ISO, and other styles
3

Yu, Ru Qi. "Mechanisms of randomness cognition." Thesis, University of British Columbia, 2017. http://hdl.handle.net/2429/62682.

Full text
Abstract:
The environment is inherently noisy, containing both regularities and randomness. The challenge for the cognitive system is therefore to detect signals in noise. This extraction of regularities forms the basis of many learning processes, such as conditioning and language acquisition. However, people often hold erroneous beliefs about randomness. One pervasive bias in people's conception of randomness is that they expect random sequences to exhibit more alternation than random devices typically produce (the over-alternation bias). To explain the causes of this bias, in this thesis I examined the cognitive and neural mechanisms of randomness perception. In six experiments, I found that the over-alternation bias was present regardless of feature dimensions, sensory modalities, and probing methods (Experiment 1); that alternations in a binary sequence are harder to encode and are under-represented compared with repetitions (Experiments 2-5); and that hippocampal neurogenesis is a critical neural mechanism for the detection of alternating patterns but not for repeating patterns (Experiment 6). These findings provide new insights into the mechanisms of randomness cognition; specifically, we revealed that different mechanisms are involved in representing alternating patterns versus repeating patterns.
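For reference, the baseline against which the over-alternation bias is measured is easy to check numerically: in a truly random binary sequence, consecutive items alternate only about half of the time (a small illustration, not an experiment from the thesis).

```python
# Alternation rate of a fair-coin sequence: the benchmark people tend to overestimate.
import random

def alternation_rate(seq):
    return sum(a != b for a, b in zip(seq, seq[1:])) / (len(seq) - 1)

random.seed(1)
coin_flips = [random.randint(0, 1) for _ in range(10_000)]
print(round(alternation_rate(coin_flips), 3))  # about 0.5
```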
Arts, Faculty of
Psychology, Department of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
4

Bourdoncle, Boris. "Quantifying randomness from Bell nonlocality." Doctoral thesis, Universitat Politècnica de Catalunya, 2019. http://hdl.handle.net/10803/666591.

Full text
Abstract:
The twentieth century was marked by two scientific revolutions. On the one hand, quantum mechanics questioned our understanding of nature and physics. On the other hand came the realisation that information could be treated as a mathematical quantity. Together they brought forward the age of information. A conceptual leap took place in the 1980s, which consisted in treating information in a quantum way as well. The idea that the intuitive notion of information could be governed by the counter-intuitive laws of quantum mechanics proved extremely fruitful, both from fundamental and applied points of view. The notion of randomness plays a central role in that respect. Indeed, the laws of quantum physics are probabilistic: that contrasts with thousands of years of physical theories that aimed to derive deterministic laws of nature. This, in turn, provides us with sources of random numbers, a crucial resource for information protocols. The fact that quantum theory only describes probabilistic behaviours was for some time regarded as a form of incompleteness. But nonlocality, in the sense of Bell, showed that this was not the case: the laws of quantum physics are inherently random, i.e., the randomness they imply cannot be traced back to a lack of knowledge. This observation has practical consequences: the outputs of a nonlocal physical process are necessarily unpredictable. Moreover, the random character of these outputs does not depend on the physical system, but only on its nonlocal character. For that reason, nonlocality-based randomness is certified in a device-independent manner. In this thesis, we quantify nonlocality-based randomness in various frameworks. In the first scenario, we quantify randomness without relying on the quantum formalism. We consider a nonlocal process and assume that it has a specific causal structure that is only due to how it evolves with time. We provide trade-offs between nonlocality and randomness for the various causal structures that we consider. Nonlocality-based randomness is usually defined in a theoretical framework. In the second scenario, we take a practical approach and ask how much randomness can be certified in a practical situation, where only partial information can be gained from an experiment. We describe a method to optimise how much randomness can be certified in such a situation. Trade-offs between nonlocality and randomness are usually studied in the bipartite case, as two agents are the minimal requirement for defining nonlocality. In the third scenario, we quantify how much randomness can be certified for a tripartite process. Though nonlocality-based randomness is device-independent, the process from which randomness is certified is actually realised with a physical state. In the fourth scenario, we ask what physical requirements should be imposed on the physical state for maximal randomness to be certified, and more specifically, how entangled the underlying state should be. We show that maximal randomness can be certified from any level of entanglement.
APA, Harvard, Vancouver, ISO, and other styles
5

Elias, Joran. "Randomness In Tree Ensemble Methods." The University of Montana, 2009. http://etd.lib.umt.edu/theses/available/etd-10092009-110301/.

Full text
Abstract:
Tree ensembles have proven to be a popular and powerful tool for predictive modeling tasks. The theory behind several of these methods (e.g. boosting) has received considerable attention. However, other tree ensemble techniques (e.g. bagging, random forests) have attracted limited theoretical treatment. Specifically, it has remained somewhat unclear why the simple act of randomizing the tree-growing algorithm should lead to such dramatic improvements in performance. It has been suggested that a specific type of tree ensemble acts by forming a locally adaptive distance metric [Lin and Jeon, 2006]. We generalize this claim to include all tree ensemble methods and argue that this insight can help to explain the exceptional performance of tree ensemble methods. Finally, we illustrate the use of tree ensemble methods for an ecological niche modeling example involving the presence of malaria vectors in Africa.
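A generic illustration of the phenomenon discussed above (a sketch, not the thesis's experiments): randomizing the tree-growing algorithm, as bagging and random forests do, typically improves out-of-sample accuracy over a single tree. The synthetic dataset and scikit-learn estimators here are stand-ins chosen for convenience.

```python
# Single decision tree versus a randomized tree ensemble on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
single_tree = DecisionTreeClassifier(random_state=0)
forest = RandomForestClassifier(n_estimators=200, random_state=0)  # bootstrap + random feature subsets

print(cross_val_score(single_tree, X, y, cv=5).mean())
print(cross_val_score(forest, X, y, cv=5).mean())  # usually noticeably higher
```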
APA, Harvard, Vancouver, ISO, and other styles
6

Vaikuntanathan, Vinod. "Distributed computing with imperfect randomness." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/34354.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005.
Includes bibliographical references (p. 41-43).
Randomness is a critical resource in many computational scenarios, enabling solutions where deterministic ones are elusive or even provably impossible. However, the randomized solutions to these tasks assume access to a pure source of unbiased, independent coins. Physical sources of randomness, on the other hand, are rarely unbiased and independent, although they do seem to exhibit some imperfect randomness. This gap in modeling calls into question the relevance of current randomized solutions to computational tasks. Indeed, there has been substantial investigation of this issue in complexity theory in the context of applications to efficient algorithms and cryptography. This work seeks to determine whether imperfect randomness, modeled appropriately, is "good enough" for distributed algorithms. Namely, can we do with imperfect randomness all that we can do with perfect randomness, and with comparable efficiency? We answer this question in the affirmative for the problem of Byzantine agreement. We construct protocols for Byzantine agreement in a variety of scenarios (synchronous or asynchronous networks, with or without private channels), in which the players have imperfect randomness. Our solutions are essentially as efficient as the best known randomized Byzantine agreement protocols, which traditionally assume that all the players have access to perfect randomness.
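A classic, much simpler illustration of extracting good coins from an imperfect source (von Neumann's trick for independent biased bits; the thesis handles far more general weak sources): read the bits in pairs, output the first bit when the pair is 01 or 10, and discard 00 and 11.

```python
# Von Neumann debiasing of an independent but biased bit source.
import random

def biased_source(p, n, seed=3):
    rng = random.Random(seed)
    return [1 if rng.random() < p else 0 for _ in range(n)]

def von_neumann_extract(bits):
    out = []
    for a, b in zip(bits[0::2], bits[1::2]):
        if a != b:          # keep 01 -> 0 and 10 -> 1; drop 00 and 11
            out.append(a)
    return out

raw = biased_source(p=0.8, n=20_000)
clean = von_neumann_extract(raw)
print(sum(raw) / len(raw), sum(clean) / len(clean))  # ~0.80 versus ~0.50
```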
by Vinod Vaikuntanathan.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
7

Mezher, Rawad. "Randomness for quantum information processing." Electronic Thesis or Diss., Sorbonne université, 2019. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2019SORUS244.pdf.

Full text
Abstract:
This thesis is focused on the generation and understanding of particular kinds of quantum randomness. Randomness is useful for many tasks in physics and information processing, from randomized benchmarking to black hole physics, as well as for demonstrating a so-called quantum speedup, and many other applications. On the one hand we explore how to generate a particular form of random evolution known as a t-design. On the other we show how this can also give instances of quantum speedup, where classical computers cannot simulate the randomness efficiently. We also show that this is still possible in noisy, realistic settings. More specifically, this thesis is centered around three main topics. The first of these is the generation of epsilon-approximate unitary t-designs. In this direction, we first show that non-adaptive, fixed measurements on a graph state composed of poly(n,t,log(1/epsilon)) qubits, and with a regular structure (that of a brickwork state), effectively give rise to a random unitary ensemble which is an epsilon-approximate t-design. This work is presented in Chapter 3. Before this work, it was known that non-adaptive fixed XY measurements on a graph state give rise to unitary t-designs; however, the graph states used there were of complicated structure and were therefore not natural candidates for measurement-based quantum computing (MBQC), and the circuits to make them were complicated. The novelty in our work is showing that t-designs can be generated by fixed, non-adaptive measurements on graph states whose underlying graphs are regular 2D lattices. These graph states are universal resources for MBQC. Therefore, our result allows the natural integration of unitary t-designs, which provide a notion of quantum pseudorandomness that is very useful in quantum algorithms, into quantum algorithms running in MBQC. Moreover, in the circuit picture this construction for t-designs may be viewed as a constant-depth quantum circuit, albeit with a polynomial number of ancillas. We then provide new constructions of epsilon-approximate unitary t-designs, both in the circuit model and in MBQC, which are based on a relaxation of technical requirements in previous constructions. These constructions are found in Chapters 4 and 5.
APA, Harvard, Vancouver, ISO, and other styles
8

Morphett, Anthony William. "Degrees of computability and randomness." Thesis, University of Leeds, 2009. http://etheses.whiterose.ac.uk/11291/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Spiegel, Christoph. "Additive structures and randomness in combinatorics." Doctoral thesis, Universitat Politècnica de Catalunya, 2020. http://hdl.handle.net/10803/669327.

Full text
Abstract:
Arithmetic Combinatorics, Combinatorial Number Theory, Structural Additive Theory and Additive Number Theory are just some of the terms used to describe the vast field that sits at the intersection of Number Theory and Combinatorics, and which will be the focus of this thesis. Its contents are divided into two main parts, each containing several thematically related results. The first part deals with the question of under what circumstances solutions to arbitrary linear systems of equations usually occur in combinatorial structures. The properties we will be interested in studying in this part relate to the solutions of linear systems of equations. A first question one might ask concerns the point at which sets of a given size will typically contain a solution. We will establish a threshold and also study the distribution of the number of solutions at that threshold, showing that it converges to a Poisson distribution in certain cases. Next, van der Waerden's Theorem, stating that every finite coloring of the integers contains monochromatic arithmetic progressions of arbitrary length, is considered by some to be the first result in Ramsey Theory. Rado generalized van der Waerden's result by characterizing those linear systems whose solutions satisfy a similar property, and Szemerédi strengthened it to a statement concerning density rather than colorings. We will turn our attention towards versions of Rado's and Szemerédi's Theorems in random sets, extending previous work of Friedgut, Rödl, Ruciński and Schacht in the case of the former and of Conlon, Gowers and Schacht for the latter to include a larger variety of systems and solutions. Lastly, Chvátal and Erdős suggested studying Maker-Breaker games. These games have deep connections to the theory of random structures, and we will build on work of Bednarska and Łuczak to establish the threshold for how much a large variety of games need to be biased in favor of the second player. These include games in which the first player wants to occupy a solution to some given linear system, generalizing the van der Waerden games introduced by Beck. The second part deals with the extremal behavior of sets with interesting additive properties. In particular, we will be interested in bounds or structural descriptions for sets exhibiting some restrictions with regard to either their representation function or their sumset. First, we will consider Sidon sets, that is, sets of integers with pairwise distinct differences. We will study a generalization of Sidon sets proposed very recently by Kohayakawa, Lee, Moreira and Rödl, where the pairwise differences are not just distinct, but in fact far apart by a certain measure. We will obtain strong lower bounds for such infinite sets using an approach of Cilleruelo. As a consequence of these bounds, we will also obtain the best current lower bound for Sidon sets in randomly generated infinite sets of integers of high density. Next, one of the central results at the intersection of Combinatorics and Number Theory is the Freiman-Ruzsa Theorem, stating that any finite set of integers of given doubling can be efficiently covered by a generalized arithmetic progression. In the case of particularly small doubling, more precise structural descriptions exist. We will first study results going beyond Freiman's well-known 3k-4 Theorem in the integers. We will then see an application of these results to sets of small doubling in finite cyclic groups.
Lastly, we will turn our attention towards sets with near-constant representation functions. Erdős and Fuchs established that representation functions of arbitrary sets of integers cannot be too close to constant. We will first extend the result of Erdős and Fuchs to ordered representation functions. We will then address a related question of Sárközy and Sós regarding weighted representation functions.
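For readers unfamiliar with the objects in the second part, a quick illustration of the Sidon property mentioned above (not from the thesis): a set of integers is Sidon when all pairwise differences are distinct, and a greedy construction already produces the familiar Mian-Chowla sequence.

```python
# Sidon sets: all pairwise differences are distinct.
from itertools import combinations

def is_sidon(s):
    s = sorted(s)
    diffs = [b - a for a, b in combinations(s, 2)]
    return len(diffs) == len(set(diffs))

def greedy_sidon(limit):
    """Mian-Chowla style greedy construction of a Sidon set inside [1, limit]."""
    s = []
    for n in range(1, limit + 1):
        if is_sidon(s + [n]):
            s.append(n)
    return s

print(is_sidon({1, 2, 5, 11}))   # True
print(is_sidon({1, 2, 3, 5}))    # False: 2 - 1 = 3 - 2
print(greedy_sidon(50))          # [1, 2, 4, 8, 13, 21, 31, 45]
```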
APA, Harvard, Vancouver, ISO, and other styles
10

Wong, Erick Bryce. "Structure and randomness in arithmetic settings." Thesis, University of British Columbia, 2012. http://hdl.handle.net/2429/42887.

Full text
Abstract:
We study questions in three arithmetic settings, each of which carries aspects of random-like behaviour. In the setting of arithmetic functions, we establish mild conditions under which the tuple of multiplicative functions [f₁, f₂, …, f_d ], evaluated at d consecutive integers n+1, …, n+d, closely approximates points in R^d for a positive proportion of n; we obtain a further generalization which allows these functions to be composed with various arithmetic progressions. Secondly, we examine the eigenvalues of random integer matrices, showing that most matrices have no rational eigenvalues; we also identify the precise distributions of both real and rational eigenvalues in the 2 × 2 case. Finally, we consider the set S(k) of numbers represented by the quadratic form x² + ky², showing that it contains infinitely many strings of five consecutive integers under many choices of k; we also characterize exactly which numbers can appear as the difference of two consecutive values in S(k).
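An exploratory sketch of the third setting (a brute-force illustration, not the thesis's method): enumerate the set S(k) of integers represented by x² + ky² up to a bound and look for runs of consecutive represented numbers. For k = 1 only runs of length three can occur, since no integer congruent to 3 mod 4 is a sum of two squares; the thesis establishes infinitely many runs of length five for many other choices of k.

```python
# Enumerate S(k) = {x^2 + k*y^2} up to a bound and search for consecutive runs.
def represented(k, limit):
    s = set()
    x = 0
    while x * x <= limit:
        y = 0
        while x * x + k * y * y <= limit:
            s.add(x * x + k * y * y)
            y += 1
        x += 1
    return s

def runs(s, length, limit):
    return [n for n in range(limit) if all(n + i in s for i in range(length))]

S1 = represented(1, 200)      # sums of two squares
print(runs(S1, 3, 200)[:5])   # [0, 8, 16, 72, 80]
```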
APA, Harvard, Vancouver, ISO, and other styles
11

Johnston-Wilder, Peter. "Learners’ shifting perceptions of randomness." Thesis, Open University, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.424680.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Roberts, Barnaby. "Structure and randomness in extremal combinatorics." Thesis, London School of Economics and Political Science (University of London), 2017. http://etheses.lse.ac.uk/3592/.

Full text
Abstract:
In this thesis we prove several results in extremal combinatorics from areas including Ramsey theory, random graphs and graph saturation. We give a random graph analogue of the classical Andrásfai, Erdős and Sós theorem, showing that subgraphs of sparse random graphs typically behave in some ways like dense graphs. In graph saturation we explore a ‘partite’ version of the standard graph saturation question, determining the minimum number of edges in H-saturated graphs that in some way resemble H themselves. We determine these values for K4, paths, and stars and determine the order of magnitude for all graphs. In Ramsey theory we give a construction from a modified random graph to solve a question of Conlon, determining the order of magnitude of the size-Ramsey numbers of powers of paths. We show that these numbers are linear. Using models from statistical physics we study the expected size of random matchings and independent sets in d-regular graphs. From this we give a new proof of a result of Kahn determining which d-regular graphs have the most independent sets. We also give the equivalent result for matchings, which was previously unknown, and use this to prove the Asymptotic Upper Matching Conjecture of Friedland, Krop, Lundow and Markström. Using these methods we give an alternative proof of Shearer's upper bound on off-diagonal Ramsey numbers.
APA, Harvard, Vancouver, ISO, and other styles
13

Kopparty, Swastik. "Algebraic methods in randomness and pseudorandomness." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/62425.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 183-188).
Algebra and randomness come together rather nicely in computation. A central example of this relationship in action is the Schwartz-Zippel lemma and its application to the fast randomized checking of polynomial identities. In this thesis, we further this relationship in two ways: (1) by compiling new algebraic techniques that are of potential computational interest, and (2) by demonstrating the relevance of these techniques by making progress on several questions in randomness and pseudorandomness. The technical ingredients we introduce include: multiplicity-enhanced versions of the Schwartz-Zippel lemma and the "polynomial method", extending their applicability to "higher-degree" polynomials; conditions for polynomials to have an unusually small number of roots; and conditions for polynomials to have an unusually structured set of roots, e.g., containing a large linear space. Our applications include: explicit constructions of randomness extractors with logarithmic seed and vanishing "entropy loss"; limit laws for first-order logic augmented with the parity quantifier on random graphs (extending the classical 0-1 law); and explicit dispersers for affine sources of imperfect randomness with sublinear entropy.
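The Schwartz-Zippel lemma mentioned above underlies the textbook randomized identity test, sketched here for orientation (an illustration, not code from the thesis): a nonzero polynomial of total degree d evaluated at a uniformly random point of a set S vanishes with probability at most d/|S|, so a few random evaluations distinguish identical from non-identical polynomials with high probability.

```python
# Randomized polynomial identity testing via random evaluation (Schwartz-Zippel).
import random

def probably_identical(p, q, n_vars, trials=20, field_size=10**9 + 7):
    for _ in range(trials):
        point = [random.randrange(field_size) for _ in range(n_vars)]
        if p(*point) % field_size != q(*point) % field_size:
            return False          # definitely different
    return True                   # identical with high probability

# (x + y)^2 versus x^2 + 2xy + y^2: accepted, as it should be.
print(probably_identical(lambda x, y: (x + y) ** 2,
                         lambda x, y: x * x + 2 * x * y + y * y, n_vars=2))
# (x + y)^2 versus x^2 + y^2: rejected almost surely.
print(probably_identical(lambda x, y: (x + y) ** 2,
                         lambda x, y: x * x + y * y, n_vars=2))
```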
by Swastik Kopparty.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
14

Coudron, Matthew Ryan. "Trading isolation for certifiable randomness expansion." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/84872.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references (page 41).
A source of random bits is an important resource in modern cryptography, algorithms and statistics. Can one ever be sure that a "random" source is truly random, or in the case of cryptography, secure against potential adversaries or eavesdroppers? Recently the study of non-local properties of entanglement has produced an interesting new perspective on this question, which we will refer to broadly as Certifiable Randomness Expansion (CRE). CRE refers generally to a process by which a source of information-theoretically certified randomness can be constructed based only on two simple assumptions: the prior existence of a short random seed and the ability to ensure that two or more black-box devices do not communicate (i.e. are non-signaling). In this work we make progress on a conjecture of [Col09] which proposes a method for indefinite certifiable randomness expansion using a growing number of devices (we actually prove a slight modification of the original conjecture in which we use the CHSH game as a subroutine rather than the GHZ game as originally proposed). The proof requires a technique not used before in the study of randomness expansion, and inspired by the tools developed in [RUV12]. The result also establishes the existence of a protocol for constant factor CRE using a finite number of devices (here the constant factor can be much greater than 1). While much better expansion rates (polynomial, and even exponential) have been achieved with only two devices, our analysis requires techniques not used before in the study of randomness expansion, and represents progress towards a protocol which is provably secure against a quantum eavesdropper who knows the input to the protocol.
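For background on the CHSH game used as a subroutine above, the following sketch (illustrative only, not code from the thesis) enumerates all deterministic classical strategies, recovering the classical winning bound of 3/4, and prints the optimal quantum value cos²(π/8) ≈ 0.854; the gap between the two is what makes Bell-based certification of randomness possible.

```python
# CHSH game: referee sends random bits x, y; players answer a, b and win iff a XOR b = x AND y.
from itertools import product
import math

best = 0.0
for a0, a1, b0, b1 in product([0, 1], repeat=4):   # each player's answer for each input
    wins = 0
    for x, y in product([0, 1], repeat=2):
        a = a0 if x == 0 else a1
        b = b0 if y == 0 else b1
        wins += ((a ^ b) == (x & y))
    best = max(best, wins / 4)

print(best)                        # 0.75, the classical bound
print(math.cos(math.pi / 8) ** 2)  # ~0.8536, the quantum (Tsirelson) value
```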
by Matthew Ryan Coudron.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
15

Melkebeek, Dieter van. "Randomness and completeness in computational complexity." New York : Springer, 2000. http://www.springerlink.com/openurl.asp?genre=issue&issn=0302-9743&volume=1950.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Shoup, Victor. "Removing randomness from computational number theory." Madison, Wis. : University of Wisconsin-Madison, Computer Sciences Dept, 1989. http://catalog.hathitrust.org/api/volumes/oclc/20839526.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Vermeeren, Stijn. "Notions and applications of algorithmic randomness." Thesis, University of Leeds, 2013. http://etheses.whiterose.ac.uk/4569/.

Full text
Abstract:
Algorithmic randomness uses computability theory to define notions of randomness for infinite objects such as infinite binary sequences. The different possible definitions lead to a hierarchy of randomness notions. In this thesis we study this hierarchy, focussing in particular on Martin-Löf randomness, computable randomness and related notions. Understanding the relative strength of the different notions is a main objective. We look at proving implications where they exist (Chapter 3), as well as separating notions when they are not equivalent (Chapter 4). We also apply our knowledge about randomness to solve several questions about provability in axiomatic theories like Peano arithmetic (Chapter 5).
APA, Harvard, Vancouver, ISO, and other styles
18

Eickmeyer, Kord. "Randomness in complexity theory and logics." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2011. http://dx.doi.org/10.18452/16364.

Full text
Abstract:
This thesis consists of two main parts whose common theme is the question of how powerful randomness as a computational resource is. In the first part we deal with random structures which possess -- with high probability -- properties that can be exploited by computer algorithms. We then give two new deterministic constructions for such structures: we derandomise a randomised reduction due to Alekhnovich and Razborov by constructing certain unbalanced bipartite expander graphs, and we give a reduction from a problem concerning bipartite graphs to the problem of computing the minmax-value in three-player games. In the second part we study the expressive power of various logics when they are enriched by random relation symbols. Our goal is to bridge techniques from descriptive complexity with the study of randomised complexity classes, and indeed we show that our randomised logics do capture complexity classes under study in complexity theory. Using strong results on the expressive power of first-order logic and the computational power of bounded-depth circuits, we give both positive and negative derandomisation results for our logics. On the negative side, we show that randomised first-order logic gains expressive power over standard first-order logic even on structures with a built-in addition relation. Furthermore, it is not contained in monadic second-order logic on ordered structures, nor in infinitary counting logic on arbitrary structures. On the positive side, we show that randomised first-order logic can be derandomised on structures with a unary vocabulary and is contained in monadic second-order logic on additive structures.
APA, Harvard, Vancouver, ISO, and other styles
19

Liu, Zi-Wen. "On quantum randomness and quantum resources." Thesis, Massachusetts Institute of Technology, 2018. https://hdl.handle.net/1721.1/122846.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2018
Cataloged from PDF version of thesis.
Includes bibliographical references.
This thesis consists of two independent parts. The first part is on entanglement, quantum randomness, and complexity beyond scrambling. More explicitly, we study the Rényi entanglement entropies of quantum designs. The results lay the mathematical foundation for studying, via entanglement, the hierarchy of complexities between scrambling and Haar randomness. The second part explores general aspects of quantum resource theory. We introduce three theories that do not rely on the specific resource: the theory of resource-destroying maps, the one-shot operational resource theory, and the resource theory of quantum channels.
by Zi-Wen Liu.
Ph. D.
Ph.D. Massachusetts Institute of Technology, Department of Physics
APA, Harvard, Vancouver, ISO, and other styles
20

Grilli, Jacopo. "Randomness and Criticality in Biological Interactions." Doctoral thesis, Università degli studi di Padova, 2015. http://hdl.handle.net/11577/3424011.

Full text
Abstract:
In this thesis we study, from a physics perspective, two problems related to biological interactions. In the first part of this thesis we consider ecological interactions, which shape ecosystems and determine their fate, and their relation to the stability of ecosystems. Using random matrix theory we are able to identify the key aspects, the order parameters, that determine the stability of large ecosystems. We then consider the problem of determining the persistence of a population living in a randomly fragmented landscape. Using techniques borrowed from random matrix theory applied to disordered systems, we are able to identify the key drivers of persistence. The second part of the thesis is devoted to the observation that many living systems seem to tune their interactions close to a critical point. We introduce a stochastic model, based on information theory, that predicts the critical point as a natural outcome of a process of evolution or adaptation, without fine-tuning of parameters.
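To make the random-matrix stability statement concrete, here is an illustrative numerical experiment (a generic May-style setup, not the specific models of the thesis): a large community matrix with independent random interactions of strength sigma and self-regulation -d on the diagonal is stable, i.e. all eigenvalues have negative real part, roughly when sigma·sqrt(S) < d.

```python
# May-style stability check for a random community matrix of S species.
import numpy as np

def is_stable(S, sigma, d, seed=0):
    rng = np.random.default_rng(seed)
    A = sigma * rng.standard_normal((S, S))   # random pairwise interactions
    np.fill_diagonal(A, -d)                   # self-regulation
    return np.max(np.linalg.eigvals(A).real) < 0

print(is_stable(S=250, sigma=0.05, d=1.0))  # sigma*sqrt(S) ~ 0.79 < 1: stable
print(is_stable(S=250, sigma=0.08, d=1.0))  # sigma*sqrt(S) ~ 1.26 > 1: unstable
```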
APA, Harvard, Vancouver, ISO, and other styles
21

Nouretdinov, Ilia. "Algorithmic theory of randomness and its applications." Thesis, Royal Holloway, University of London, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.406218.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Martzoukos, Spyros. "Walks on graphs : From randomness to determinism." Thesis, Queen Mary, University of London, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.510880.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Rute, Jason. "Topics in algorithmic randomness and computable analysis." Research Showcase @ CMU, 2013. http://repository.cmu.edu/dissertations/260.

Full text
Abstract:
This dissertation develops connections between algorithmic randomness and computable analysis. In the first part, it is shown that computable randomness can be defined robustly on all computable probability spaces, and that computable randomness is preserved by a.e. computable isomorphisms between spaces. Further applications are also given. In the second part, a number of almost-everywhere convergence theorems are studied using computable analysis and algorithmic randomness. These include various martingale convergence theorems and almost-everywhere differentiability theorems. General conditions are given for when the rate of convergence is computable and for when convergence takes place on the Schnorr random points. Examples are provided to show that these almost-everywhere convergence theorems characterize Schnorr randomness.
APA, Harvard, Vancouver, ISO, and other styles
24

Montanaro, Ashley. "Structure, randomness and complexity in quantum computation." Thesis, University of Bristol, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.443658.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Kilian, Joe. "Uses of randomness in algorithms and protocols." Thesis, Massachusetts Institute of Technology, 1989. http://hdl.handle.net/1721.1/60724.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Saias, Alain Isaac. "Randomness versus non-determinism in distributed computing." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/37022.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Rompel, John Taylor. "Techniques for computing with low-independence randomness." Thesis, Massachusetts Institute of Technology, 1990. http://hdl.handle.net/1721.1/33480.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1990.
Includes bibliographical references (p. 105-110).
by John Taylor Rompel.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
28

Yuen, Henry Ph D. Massachusetts Institute of Technology. "Quantum randomness expansion : upper and lower bounds." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/84856.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2013.
Title as it appears in Degrees awarded booklet, September 2013: Upper and lower bounds for quantum randomness expansion. Cataloged from PDF version of thesis.
Includes bibliographical references (pages 62-64).
A recent sequence of works, initially motivated by the study of the nonlocal properties of entanglement, demonstrate that a source of information-theoretically certified randomness can be constructed based only on two simple assumptions: the prior existence of a short random seed and the ability to ensure that two black-box devices do not communicate (i.e. are non-signaling). We call protocols achieving such certified amplification of a short random seed randomness amplifiers. We introduce a simple framework in which we initiate the systematic study of the possibilities and limitations of randomness amplifiers. Our main results include a new, improved analysis of a robust randomness amplifier with exponential expansion, as well as the first upper bounds on the maximum expansion achievable by a broad class of randomness amplifiers. In particular, we show that non-adaptive randomness amplifiers that are robust to noise cannot achieve more than doubly exponential expansion. We show that a wide class of protocols based on the use of the CHSH game can only lead to (singly) exponential expansion if adversarial devices are allowed the full power of non-signaling strategies. Our upper bound results apply to all known non-adaptive randomness amplifier constructions to date. Finally, we demonstrate, for all positive integers k, a protocol involving 2k non-signaling black-box quantum devices that achieves an amount of expansion that is a tower of exponentials of height k. This hints at the intriguing possibility of infinite randomness expansion.
by Henry Yuen.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
29

Mjörnman, Jesper, and Daniel Mastell. "Randomness as a Cause of Test Flakiness." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-177303.

Full text
Abstract:
With today's focus on Continuous Integration, test cases are used to ensure the software's reliability when integrating and developing code. Test cases that behave in a non-deterministic manner are known as flaky tests, and they threaten the software's reliability. Because of their non-deterministic nature, flaky tests can be troublesome to detect and correct. This causes companies to spend a great amount of resources on flaky tests, since they can reduce the quality of their products and services. The aim of this thesis was to develop a usable tool that can automatically detect flakiness in the Randomness category. This was done by initially locating and rerunning flaky tests found in public Git repositories, and then scanning the resulting pytest logs from the tests that manifested flaky behaviour for indicators of how flakiness manifests in the Randomness category. From these findings we determined tracing to be a viable option for detecting Randomness as a cause of flakiness. The findings were implemented in our proposed tool FlakyReporter, which reruns flaky tests to determine whether they pertain to the Randomness category. Our FlakyReporter tool was found to accurately categorise flaky tests into the Randomness category when tested against 25 different flaky tests. This indicates the viability of using tracing as a method of categorizing flakiness.
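A toy example of the failure mode targeted above (a hedged illustration, not the authors' FlakyReporter code): a pytest-style test whose verdict depends on an unseeded random draw passes on some reruns and fails on others, which is exactly the behaviour that rerunning and tracing are meant to expose.

```python
# A deliberately flaky test: its outcome depends on an unseeded random draw.
import random

def sample_is_small():
    return random.random() < 0.9

def test_sample_is_small():        # pytest-style test: fails on roughly 10% of runs
    assert sample_is_small()

# Rerunning the same code many times exposes the nondeterminism that detection
# tools look for in the resulting logs.
outcomes = [sample_is_small() for _ in range(100)]
print(f"passed {sum(outcomes)} of 100 reruns")
```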
APA, Harvard, Vancouver, ISO, and other styles
30

Strandberg, Alicia Graziosi. "A Nonparametric Test for Deviation from Randomness." Diss., Temple University Libraries, 2012. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/214767.

Full text
Abstract:
Statistics
Ph.D.
There are many existing tests used to determine whether a series consists of a random sample. Often these tests have restrictive distributional assumptions, size distortions, or low power for key useful alternative situations. The interest of this dissertation lies in developing an alternative nonparametric test to determine whether a series consists of a random sample. The proposed test detects deviations from randomness, without a priori distributional assumptions, when observations are not independent and identically distributed (i.i.d.), which is suitable for our motivating stock market index data. Departures from i.i.d. are tested by subdividing the data into subintervals and then using a conditional probability measure within intervals as a binomial test. This nonparametric test is designed to detect deviations of neighboring observations from randomness when the data set consists of time series observations. Simulation results confirm correct test size for varied distributions and good power for detecting alternative cases. The test is compared to a number of other popular methods and shown to be a competitive alternative. Although the proposed test may be applicable to multiple areas, this dissertation is mostly interested in applications to stock market and regression data. The proposed test is effectively illustrated with the three common stock market index data sets using a newly created transformation, and is shown to perform exceptionally well.
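For comparison, one of the existing randomness tests alluded to above is the classical Wald-Wolfowitz runs test; the sketch below (a standard textbook test, not the dissertation's new procedure) dichotomises the series about its median and compares the observed number of runs with its expectation under randomness.

```python
# Wald-Wolfowitz runs test with the usual normal approximation.
import math
import statistics

def runs_test_z(series):
    med = statistics.median(series)
    signs = [x > med for x in series if x != med]   # drop ties with the median
    n1 = sum(signs)
    n2 = len(signs) - n1
    runs = 1 + sum(a != b for a, b in zip(signs, signs[1:]))
    mean = 2 * n1 * n2 / (n1 + n2) + 1
    var = (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)) / ((n1 + n2) ** 2 * (n1 + n2 - 1))
    return (runs - mean) / math.sqrt(var)

# About 3.0: this series alternates too regularly to look like a random sample.
print(round(runs_test_z([3, 7, 1, 9, 4, 8, 2, 6, 5, 10, 0, 11]), 2))
```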
Temple University--Theses
APA, Harvard, Vancouver, ISO, and other styles
31

Kalyanasundaram, Subrahmanyam. "Turing machine algorithms and studies in quasi-randomness." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42808.

Full text
Abstract:
Randomness is an invaluable resource in theoretical computer science. However, pure random bits are hard to obtain. Quasi-randomness is a tool that has been widely used in eliminating or reducing the randomness of randomized algorithms. In this thesis, we study some aspects of quasi-randomness in graphs. Specifically, we provide an algorithm and a lower bound for two different kinds of regularity lemmas. Our algorithm for FK-regularity is derived using a spectral characterization of quasi-randomness. We use a similar spectral connection to answer an open question about quasi-random tournaments. We then provide a "Wowzer"-type lower bound (for the number of parts required) for the strong regularity lemma. Finally, we study the derandomization of complexity classes using Turing machine simulations. 1. Connections between quasi-randomness and graph spectra. Quasi-random (or pseudo-random) objects are deterministic objects that behave almost like truly random objects. These objects have been widely studied in various settings (graphs, hypergraphs, directed graphs, set systems, etc.). In many cases, quasi-randomness is very closely related to the spectral properties of the combinatorial object under study. In this thesis, we discover the spectral characterizations of quasi-randomness in two different cases to solve open problems. A deterministic algorithm for Frieze-Kannan regularity: The Frieze-Kannan regularity lemma asserts that any given graph of large enough size can be partitioned into a number of parts such that, across parts, the graph is quasi-random. It was unknown whether there was a deterministic algorithm that could produce a partition satisfying the conditions of the Frieze-Kannan regularity lemma in deterministic sub-cubic time. In this thesis, we answer this question by designing an O(n^ω) time algorithm for constructing such a partition, where ω is the exponent of fast matrix multiplication. Even cycles and quasi-random tournaments: Chung and Graham had provided several equivalent characterizations of quasi-randomness in tournaments. One of them concerns the number of "even" cycles, where even is defined in the following sense: a cycle is said to be even if, when walking along it, an even number of edges point in the wrong direction. Chung and Graham showed that if close to half of the 4-cycles in a tournament T are even, then T is quasi-random. They asked whether the same statement is true if, instead of 4-cycles, we consider k-cycles for an even integer k. We resolve this open question by showing that for every fixed even integer k ≥ 4, if close to half of the k-cycles in a tournament T are even, then T must be quasi-random. 2. A Wowzer-type lower bound for the strong regularity lemma. The regularity lemma of Szemerédi asserts that one can partition every graph into a bounded number of quasi-random bipartite graphs. Alon, Fischer, Krivelevich and Szegedy obtained a variant of the regularity lemma that allows one to have arbitrary control on this measure of quasi-randomness. However, their proof only guarantees a partition where the number of parts is given by the Wowzer function, which is the iterated version of the Tower function. We show here that a bound of this type is unavoidable by constructing a graph H with the property that, even if one wants only a very mild control on the quasi-randomness of a regular partition, any such partition of H must have a number of parts given by a Wowzer-type function. 3. How fast can we deterministically simulate nondeterminism? We study an approach towards derandomizing complexity classes using Turing machine simulations. We look at the problem of deterministically counting the exact number of accepting computation paths of a given nondeterministic Turing machine. We provide a deterministic algorithm which runs in time roughly O(sqrt(S)), where S is the size of the configuration graph. The best of the previously known methods required time linear in S. Our result implies a simulation of probabilistic time classes like PP, BPP and BQP in the same running time. This is an improvement over the currently best known simulation by van Melkebeek and Santhanam.
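The even-cycle characterization described in part 1 is easy to probe empirically. The following small experiment (not from the thesis) builds a uniformly random tournament and checks that close to half of its 4-cycles are even, where a 4-cycle is even if an even number of its edges point backwards when walking around it.

```python
# Fraction of even 4-cycles in a uniformly random tournament.
import random
from itertools import combinations

def random_tournament(n, seed=0):
    rng = random.Random(seed)
    beats = {}
    for u, v in combinations(range(n), 2):
        beats[(u, v)] = rng.random() < 0.5      # True means the edge u -> v, else v -> u
    def forward(u, v):
        return beats[(u, v)] if u < v else not beats[(v, u)]
    return forward

def even_cycle_fraction(n, forward):
    even = total = 0
    for a, b, c, d in combinations(range(n), 4):
        # the three distinct 4-cycles on this vertex set (up to rotation/reflection)
        for cycle in [(a, b, c, d), (a, b, d, c), (a, c, b, d)]:
            backwards = sum(not forward(cycle[i], cycle[(i + 1) % 4]) for i in range(4))
            even += (backwards % 2 == 0)
            total += 1
    return even / total

forward = random_tournament(40)
print(round(even_cycle_fraction(40, forward), 3))  # close to 0.5
```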
APA, Harvard, Vancouver, ISO, and other styles
32

El, Omer. "Avalanche Properties And Randomness Of The Twofish Cipher." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/12605571/index.pdf.

Full text
Abstract:
In this thesis, one finalist cipher of the Advanced Encryption Standard (AES) block cipher contest, Twofish, proposed by Schneier et al., is studied in order to assess the validity of the statement made by Arikan about the randomness of the cipher, which contradicts the National Institute of Standards and Technology (NIST)'s results. The strength of the cipher against cryptanalytic attacks is investigated by measuring its randomness according to the avalanche criterion. The avalanche criterion results are compared with those of the Statistical Test Suite of the NIST, and discrepancies in the second and third rounds are explained theoretically.
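For orientation, the avalanche criterion used above can be illustrated generically (this sketch is not the thesis's Twofish experiment; SHA-256 merely stands in for the cipher): flipping a single input bit should flip roughly half of the output bits.

```python
# Avalanche check: flip one input bit and count how many output bits change.
import hashlib

def output_bits(data: bytes) -> int:
    return int.from_bytes(hashlib.sha256(data).digest(), "big")   # 256 output bits

def avalanche_fraction(block: bytes, bit_index: int) -> float:
    flipped = bytearray(block)
    flipped[bit_index // 8] ^= 1 << (bit_index % 8)
    diff = output_bits(block) ^ output_bits(bytes(flipped))
    return bin(diff).count("1") / 256

block = b"sixteen byte msg"                    # a 128-bit input block
print(round(avalanche_fraction(block, 0), 3))  # expected to be near 0.5
```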
APA, Harvard, Vancouver, ISO, and other styles
33

Chachulski, Szymon (Szymon Kazimierz). "Trading structure for randomness in wireless opportunistic routing." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/40320.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (leaves 71-73).
Opportunistic routing is a recent technique that achieves high throughput in the face of lossy wireless links. The current opportunistic routing protocol, ExOR, ties the MAC with routing, imposing a strict schedule on routers' access to the medium. Although the scheduler delivers opportunistic gains, it eliminates the clean layering abstraction and misses some of the inherent features of the 802.11 MAC. In particular, it prevents spatial reuse and thus may underutilize the wireless medium. This thesis presents MORE, a MAC-independent opportunistic routing protocol. MORE randomly mixes packets before forwarding them. This randomness ensures that routers that hear the same transmission do not forward the same packets. Thus, MORE needs no special scheduler to coordinate routers and can run directly on top of 802.11. We analyze the theoretical gains provided by opportunistic routing and present the EOTX routing metric, which minimizes the number of opportunistic transmissions needed to deliver a packet to its destination. We implemented MORE in the Click modular router running on off-the-shelf PCs equipped with 802.11 (WiFi) wireless interfaces. Experimental results from a 20-node wireless testbed show that MORE's median unicast throughput is 20% higher than ExOR's, and the gains rise to 50% over ExOR when there is a chance of spatial reuse.
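A toy sketch of the random mixing idea at the core of MORE (illustrative only; MORE itself uses random linear coding over a larger finite field inside the Click router): each forwarded packet is a random combination of the packets a router has buffered, so two routers that overheard the same batch are unlikely to forward identical information.

```python
# Random XOR-mixing of buffered packets before forwarding.
import random

def random_mix(packets, rng):
    coeffs = [rng.randint(0, 1) for _ in packets]        # random GF(2) coefficients
    if not any(coeffs):
        coeffs[rng.randrange(len(coeffs))] = 1            # avoid the all-zero combination
    mixed = bytes(len(packets[0]))
    for c, p in zip(coeffs, packets):
        if c:
            mixed = bytes(a ^ b for a, b in zip(mixed, p))
    return coeffs, mixed

rng = random.Random(7)
batch = [b"packet-A", b"packet-B", b"packet-C"]           # equal-length payloads
coeffs, mixed = random_mix(batch, rng)
print(coeffs)   # the chosen GF(2) coefficients, e.g. [1, 0, 1]
```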
by Szymon Chachulski.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
34

Nieto-Silleras, Olmo. "Device-independent randomness generation from several Bell estimators." Doctoral thesis, Universite Libre de Bruxelles, 2018. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/271365.

Full text
Abstract:
The device-independent (DI) framework is a novel approach to quantum information science which exploits the nonlocality of quantum physics to certify the correct functioning of a quantum information processing task without relying on any assumption about the inner workings of the devices performing the task. This thesis focuses on the device-independent certification and generation of true randomness for cryptographic applications. The existence of such true randomness relies on a fundamental relation between the random character of quantum theory and its nonlocality, which arises in the context of Bell tests. Device-independent randomness generation (DIRG) and quantum key distribution (DIQKD) protocols usually evaluate the produced randomness (as measured by the conditional min-entropy) as a function of the violation of a given Bell inequality. However, the probabilities characterising the measurement outcomes of a Bell test are richer than the degree of violation of a single Bell inequality. In this work we show that a more accurate assessment of the randomness present in nonlocal correlations can be obtained if the value of several Bell expressions is simultaneously taken into account, or if the full set of probabilities characterising the behaviour of the device is considered. As a side result, we show that to every behaviour there corresponds an optimal Bell expression allowing one to certify the maximal amount of DI randomness present in the correlations. Based on these results, we introduce a family of protocols for DIRG, secure against classical side information, that rely on the estimation of an arbitrary number of Bell expressions, or even directly on the experimental frequencies of the measurement outcomes. The family of protocols we propose also allows for the evaluation of randomness from a subset of measurement settings, which can be advantageous when considering correlations for which some measurement settings result in more randomness than others. We provide numerical examples illustrating the advantage of this method for finite data, and show that asymptotically it results in an optimal generation of randomness from experimental data without having to assume beforehand that the devices violate a specific Bell inequality.
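For context only (a standard result quoted here, not a result of this thesis): the single-inequality approach described above is typified by the well-known bound that, from an observed CHSH value S, certifies a min-entropy per round of at least

    H_{\min} \;\ge\; 1 - \log_2\!\left(1 + \sqrt{2 - S^2/4}\right), \qquad 2 \le S \le 2\sqrt{2}.

The protocols introduced in this thesis instead bound the min-entropy from several Bell expressions, or from the full observed distribution, which can only certify as much or more randomness than any single such inequality.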
Doctorat en Sciences
APA, Harvard, Vancouver, ISO, and other styles
35

Anglès, d'Auriac Paul-Elliot. "Infinite Computations in Algorithmic Randomness and Reverse Mathematics." Thesis, Paris Est, 2019. http://www.theses.fr/2019PESC0061.

Full text
Abstract:
This thesis focuses on the contributions of infinite time computation to mathematical logic. Infinite time computation is a variant of the traditional definition of computation as a finite sequence of stages, each stage being defined from the previous ones, and finally reaching a halting state. In this thesis, we consider the case where the number of stages is not necessarily finite, but can continue along the ordinals, an extension of the integers. There exist several ways to implement this idea; we use three of them: higher recursion, infinite time Turing machines and α-recursion. Part of this work concerns the domain of reverse mathematics, and especially Hindman's theorem. Reverse mathematics is a program consisting of the study of theorems and axioms from the point of view of their "strength", and establishing a hierarchy on these. In particular, the question of which axioms are needed in a proof of a given statement is central. We study Hindman's theorem under this lens, a combinatorial result from Ramsey theory stating that for every partitioning of the integers into finitely many colors, there must exist an infinite set such that any sum of elements taken from it has a fixed color. In this thesis, we make some progress on the question of the minimal axiomatic system needed to prove this result, by showing that the existence of some intermediate combinatorial objects is provable in a weak system. Weihrauch reduction is a way to compare the strength of theorems that has been introduced in reverse mathematics recently. It sees theorems as problems to solve, and then compares their difficulties. This reduction is still less studied in this context; in particular, some of the most important principles of reverse mathematics are not yet well understood under it. One of these is the Arithmetical Transfinite Recursion principle, an axiomatic system with strong links to infinite time computations and especially higher recursion. We continue the study of this principle by showing its links with a particular type of axiom of choice, and use it to separate the dependent and independent versions of this choice. Yet another field of mathematical logic that benefits from computability theory is algorithmic randomness. It studies "random" reals, those that it would seem reasonable to think arise from a process picking a real uniformly in some interval. A way to study this is to consider, for a given real, the smallest algorithmic complexity of a null set containing it. This domain has proven very rich and has already been extended to certain types of infinite time computation, thereby modifying the complexity classes considered. However, it has been extended to infinite time Turing machines and α-recursion only recently, by Carl and Schlicht. In this thesis, we contribute to the study of the most natural randomness classes for ITTMs and α-recursion. We show that two important classes, Σ-randomness and ITTM-randomness, are not automatically different; in particular, their categorical equivalents are in fact the same classes.
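For reference, the statement of Hindman's theorem discussed above can be written as follows (standard formulation):

    \forall k \;\; \forall c : \mathbb{N} \to \{1, \dots, k\} \;\; \exists S \subseteq \mathbb{N} \text{ infinite} \;\; \exists i \;\; \forall F \subseteq S \; (0 < |F| < \infty) : \; c\Bigl(\sum_{n \in F} n\Bigr) = i,

that is, for every finite coloring of the integers there is an infinite set all of whose finite non-empty sums of distinct elements receive the same color.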
APA, Harvard, Vancouver, ISO, and other styles
36

Birch, Thomas. "Algorithmic randomness on computable metric spaces and hyperspaces." Master's thesis, University of Cape Town, 2012. http://hdl.handle.net/11427/22093.

Full text
Abstract:
In this text we shall be focusing on generalizing Martin-Löf randomness to computable metric spaces with arbitrary measure (for examples of this type of generalization see Gács [14] and Rojas and Hoyrup [15]). The aim of this generalization is to define algorithmic randomness on the hyperspace of non-empty compact subsets of a computable metric space, the study of which was first proposed by Barmpalias et al. [16] at the University of Florida in their work on the random closed subsets of the Cantor space. Much work has been done in the study of random sets, with authors such as Diamondstone and Kjos-Hanssen [17] continuing the Florida approach, whilst others such as Axon [18] and Cenzer and Broadhead [19] have been studying the use of capacities to define hyperspace measures for use in randomness tests. Lastly, in section 6.4 we shall be looking at the work done by Hertling and Weihrauch [13] on universal randomness tests in effective topological measure spaces and relate their results to randomness on computable metric measure spaces, and in particular to the randomness of compact sets in the hyperspace of non-empty compact subsets of computable metric spaces.
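The definition being generalized is the following (standard formulation, stated here for context): given a computable measure μ on a computable metric space X, a Martin-Löf test is a uniformly computably enumerable sequence of open sets (U_n) with

    \mu(U_n) \le 2^{-n} \quad \text{for every } n,

and a point x ∈ X is Martin-Löf random for μ if x ∉ ⋂_n U_n for every such test. The hyperspace setting applies this definition with X replaced by the space of non-empty compact subsets and μ by a suitable hyperspace measure.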
APA, Harvard, Vancouver, ISO, and other styles
37

Hellouin, de Menibus Benjamin. "Asymptotic behaviour of cellular automata : computation and randomness." Thesis, Aix-Marseille, 2014. http://www.theses.fr/2014AIXM4729/document.

Full text
Abstract:
The subject of this thesis is the study of self-organization in one-dimensional cellular automata. Cellular automata are a discrete dynamical system as well as a massively parallel model of computation, both these aspects influencing each other. Self-organisation is a phenomenon where an organised behaviour is observed asymptotically, regardless of the initial configuration. Typically, we consider that the initial point is sampled at random; that is, we consider a probability measure describing the distribution of the initial configurations, and we study its evolution under the action of the automaton, the asymptotic behaviour being described by the limit measure(s). Our work is two-sided. On the one hand, we characterise measures that can be reached as limit measures by cellular automata; this corresponds to the possible kinds of asymptotic behaviours that can arise in simulations. This approach is similar to several recent results characterising some parameters of dynamical systems by computability conditions, using tools from computable analysis. This result is also a description of the measure-theoretical computational power of cellular automata. On the other hand, we provide tools for the practical study of self-organization in restricted classes of cellular automata. We introduce a framework for cellular automata that can be seen as a set of interacting particles, in order to deduce properties concerning their asymptotic behaviour. Another ongoing research direction focuses on cellular automata that converge to the uniform measure for a wide class of initial measures (randomization phenomenon).
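The measure-level asymptotics described above can be explored empirically. The sketch below is purely illustrative (not code from the thesis): it iterates the elementary rule 90 automaton, a textbook example of the randomization phenomenon mentioned at the end of the abstract, from a biased Bernoulli initial configuration and tracks the empirical density of 1s as a crude finite-size proxy for the evolving measure.

    import random

    def step_rule90(config):
        # each cell becomes the XOR of its two neighbours (periodic boundary)
        n = len(config)
        return [config[(i - 1) % n] ^ config[(i + 1) % n] for i in range(n)]

    random.seed(1)
    config = [1 if random.random() < 0.1 else 0 for _ in range(20_000)]  # density near 0.1
    for t in range(1, 64):
        config = step_rule90(config)
        if t in (1, 3, 7, 15, 31, 63):
            print(t, sum(config) / len(config))   # drifts towards 1/2 at these times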
APA, Harvard, Vancouver, ISO, and other styles
38

Morris, Mary Beth. "Flow randomness and tip losses in transonic rotors." Thesis, This resource online, 1992. http://scholar.lib.vt.edu/theses/available/etd-07212009-040241/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Wallace, Kyle. "Understanding and Enriching Randomness Within Resource-Constrained Devices." W&M ScholarWorks, 2018. https://scholarworks.wm.edu/etd/1550153802.

Full text
Abstract:
Random Number Generators (RNG) find use throughout all applications of computing, from high-level statistical modeling all the way down to essential security primitives. A significant amount of prior work has investigated this space, as a poorly performing generator can have significant impacts on algorithms that rely on it. However, the recent explosive growth of the Internet of Things (IoT) has brought forth a class of devices for which common RNG algorithms may not provide an optimal solution. Furthermore, new hardware creates opportunities that have not yet been explored with these devices. In this dissertation, we present research fostering a deeper understanding and enrichment of the state of randomness within the context of resource-constrained devices. First, we present an exploratory study into methods of generating random numbers on devices with sensors. We perform a data collection study across 37 Android devices to determine how much random data is consumed, and which sensors are capable of producing sufficiently entropic data. We use the results of our analysis to create an experimental framework called SensoRNG, which serves as a prototype to test the efficacy of a sensor-based RNG. SensoRNG employs opportunistic collection of data from on-board sensors and applies a light-weight mixing algorithm to produce random numbers. We evaluate SensoRNG with the National Institute of Standards and Technology (NIST) statistical testing suite and demonstrate that a sensor-based RNG can provide high-quality random numbers with only little additional overhead. Second, we explore the design, implementation, and efficacy of a Collaborative and Distributed Entropy Transfer protocol (CADET), which explores moving random number generation from an individual task to a collaborative one. Through the sharing of excess random data, devices that are unable to meet their own needs can be aided by contributions from other devices. We implement and test a proof-of-concept version of CADET on a testbed of 49 Raspberry Pi 3B single-board computers, which have been underclocked to emulate resource-constrained devices. Through this, we evaluate and demonstrate the efficacy and baseline performance of remote entropy protocols of this type, as well as highlight remaining research questions and challenges. Finally, we design and implement a system called RightNoise, which automatically profiles the RNG activity of a device by using techniques adapted from language modeling. First, by performing offline analysis, RightNoise is able to mine and reconstruct, in the context of a resource-constrained device, the structure of different activities from raw RNG access logs. After recovering these patterns, the device is able to profile its own behavior in real time. We give a thorough evaluation of the algorithms used in RightNoise and show that, with only five instances of each activity type per log, RightNoise is able to reconstruct the full set of activities with over 90% accuracy. Furthermore, classification is very quick, with an average speed of 0.1 seconds per block. We finish this work by discussing real-world application scenarios for RightNoise.
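The general pattern of a sensor-based generator (opportunistically harvest noisy low-order sensor bits, then condition them) can be sketched as follows; read_sensor is a hypothetical stand-in for a real sensor API, and this is not SensoRNG's actual mixing algorithm:

    import hashlib
    import struct

    def lsb_bits(sample, n_bits=4):
        # keep only the noisiest, least significant bits of the raw float encoding
        raw = struct.unpack("<Q", struct.pack("<d", sample))[0]
        return raw & ((1 << n_bits) - 1)

    def random_block(read_sensor, samples_per_block=128):
        pool = 0
        for _ in range(samples_per_block):
            pool = (pool << 4) | lsb_bits(read_sensor())
        # a hash as a simple conditioning / mixing step
        return hashlib.sha256(pool.to_bytes((pool.bit_length() + 7) // 8 or 1, "big")).digest()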
APA, Harvard, Vancouver, ISO, and other styles
40

Avesani, Marco. "Practical and secure quantum randomness generation and communication." Doctoral thesis, Università degli studi di Padova, 2019. http://hdl.handle.net/11577/3423195.

Full text
Abstract:
Quantum mechanics has profoundly revolutionized the field of physics and our understanding of nature. Many effects predicted by quantum mechanics that have no classical analogue, such as wave-particle duality, the coherent superposition of quantum states, the uncertainty principle, entanglement, and non-locality, are in deep contrast with our common sense, and yet they have survived every experimental verification. Interestingly, when these peculiar quantum effects are studied within the framework of information theory, they provide advantages for tasks such as computation, communication, and cryptography. This thesis studies how quantum resources can be exploited to develop and implement practical protocols for secure communication and private randomness generation. In particular, the work focuses on those protocols that offer an optimal compromise between security and performance and that are realizable with current technology.
APA, Harvard, Vancouver, ISO, and other styles
41

Gallego, López Rodrigo. "Device-independent information protocols: measuring dimensionality, randomness and nonlocality." Doctoral thesis, Universitat Politècnica de Catalunya, 2013. http://hdl.handle.net/10803/108178.

Full text
Abstract:
The device-independent formalism is a set of tools to analyze experimental data and infer properties about systems, while avoiding almost any assumption about the functioning of the devices. It has found applications both in fundamental and applied physics: some examples are the characterization of quantum nonlocality and information protocols for secure cryptography or randomness generation. This thesis contains novel results on these topics and also new applications such as device-independent tests for dimensionality. After an introduction to the field, the thesis is divided into four parts. In the first we study device-independent tests for classical and quantum dimensionality. We investigate a scenario with a source and a measurement device. The goal is to infer, solely from the measurement statistics, the dimensionality required to describe the system. To this end, we exploit the concept of dimension witnesses. These are functions of the measurement statistics whose value allows one to bound the dimension. We also study the robustness of our tests in more realistic experimental situations, in which devices are affected by noise and losses. Lastly, we report on an experimental implementation of dimension witnesses. We conducted the experiment on photons manipulated in polarization and orbital angular momentum. This allowed us to generate ensembles of classical and quantum systems of dimension up to four. We then certified their dimension as well as their quantum nature by using dimension witnesses. The second part focuses on nonlocality. The local content is a nonlocality quantifier that represents the fraction of events that admit a local description. We focus on systems that exhibit, in that sense, maximal nonlocality. By exploiting the link between Kochen-Specker theorems and nonlocality, we derive a systematic recipe to construct maximally nonlocal correlations. We report on the experimental implementation of correlations with a high degree of nonlocality in comparison with all previous experiments on nonlocality. We also study maximally nonlocal correlations in the multipartite setting, and show that the so-called GHZ-state can be used to obtain correlations suitable for multipartite information protocols, such as secret-sharing. The third part studies nonlocality from an operational perspective. We study the set of operations that do not create nonlocality and characterize nonlocality as a resource theory. Our framework is consistent with the canonical definitions of nonlocality in the bipartite setting. However, we find that the well-established definition of multipartite nonlocality is inconsistent with the operational framework. We derive and analyze alternative definitions of multipartite nonlocality to recover consistency. Furthermore, the novel definitions of multipartite nonlocality allow us to analyze the validity of information principles to bound quantum correlations. We show that 'information causality' and 'non-trivial communication complexity' are insufficient to characterize the set of quantum correlations. In the fourth part we present the first quantum protocol attaining full randomness amplification. The protocol uses as input a source of imperfect random bits and produces fully random bits by exploiting nonlocality. Randomness amplification is impossible in the classical regime, and it was known to be possible with quantum systems only if the initial source was almost fully random.
Here, we prove that full randomness can indeed be certified using quantum non-locality under the minimal possible assumptions: the existence of a source of arbitrarily weak (but non-zero) randomness and the impossibility of instantaneous signaling. This implies that one is left with a strict dichotomic choice regarding randomness: either our world is fully deterministic or there exist events in nature that are fully random.
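The "arbitrarily weak (but non-zero)" source invoked above is usually modelled as a Santha-Vazirani source (a standard definition, added here for context): a sequence of bits x_1, x_2, ... such that, for every i and for any pre-existing side information e,

    \tfrac{1}{2} - \varepsilon \;\le\; P(x_i = 0 \mid x_1, \dots, x_{i-1}, e) \;\le\; \tfrac{1}{2} + \varepsilon, \qquad 0 \le \varepsilon < \tfrac{1}{2}.

Randomness amplification is the task of turning such a source into one with ε arbitrarily close to zero; as stated above, this is impossible with classical resources alone, which is what makes the nonlocality-based protocol significant.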
APA, Harvard, Vancouver, ISO, and other styles
42

Dhara, Chirag. "Intrinsic randomness in non-local theories: quantification and amplification." Doctoral thesis, Universitat Politècnica de Catalunya, 2013. http://hdl.handle.net/10803/128867.

Full text
Abstract:
Quantum mechanics was developed as a response to the inadequacy of classical physics in explaining certain physical phenomena. While it has proved immensely successful, it also presents several features that severely challenge our classicality based intuition. Randomness in quantum theory is one such and is the central theme of this dissertation. Randomness is a notion we have an intuitive grasp on since it appears to abound in nature. It afflicts weather systems and financial markets and is explicitly used in sport and gambling. It is used in a wide range of scientific applications such as the simulation of genetic drift, population dynamics and molecular motion in fluids. Randomness (or the lack of it) is also central to philosophical concerns such as the existence of free will and anthropocentric notions of ethics and morality. The conception of randomness has evolved dramatically along with physical theory. While all randomness in classical theory can be fully attributed to a lack of knowledge of the observer, quantum theory qualitatively departs by allowing the existence of objective or intrinsic randomness. It is now known that intrinsic randomness is a generic feature of hypothetical theories larger than quantum theory called the non-signalling theories. They are usually studied with regards to a potential future completion of quantum mechanics or from the perspective of recognizing new physical principles describing nature. While several aspects have been studied to date, there has been little work in globally characterizing and quantifying randomness in quantum and non-signalling theories and the relationship between them. This dissertation is an attempt to fill this gap. Beginning with the unavoidable assumption of a weak source of randomness in the universe, we characterize upper bounds on quantum and non-signalling randomness. We develop a simple symmetry argument that helps identify maximal randomness in quantum theory and demonstrate its use in several explicit examples. Furthermore, we show that maximal randomness is forbidden within general non-signalling theories and constitutes a quantitative departure from quantum theory. We next address (what was) an open question about randomness amplification. It is known that a single source of randomness cannot be amplified using classical resources alone. We show that using quantum resources on the other hand allows a full amplification of the weakest sources of randomness to maximal randomness even in the presence of supra-quantum adversaries. The significance of this result spans practical cryptographic scenarios as well as foundational concerns. It demonstrates that conditional on the smallest set of assumptions, the existence of the weakest randomness in the universe guarantees the existence of maximal randomness. The next question we address is the quantification of intrinsic randomness in non-signalling correlations. While this is intractable in general, we identify cases where this can be quantified. We find that in these cases all observed randomness is intrinsic even relaxing the measurement independence assumption. We finally turn to the study of the only known resource that allows generating certifiable intrinsic randomness in the laboratory, i.e. entanglement. We address noisy quantum systems and calculate their entanglement dynamics under decoherence. We identify exact results for several realistic noise models and provide tight bounds in some other cases.
We conclude by putting our results into perspective, pointing out some drawbacks and future avenues of work in addressing these concerns.
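The standard quantifier of intrinsic randomness used in this line of work (included here for context, not specific to this thesis) is the conditional min-entropy, defined through the adversary's guessing probability; for an outcome A and classical side information E,

    P_{\mathrm{guess}}(A \mid E) = \sum_{e} P(e) \max_{a} P(a \mid e), \qquad H_{\min}(A \mid E) = -\log_2 P_{\mathrm{guess}}(A \mid E),

so that maximal randomness for an m-outcome measurement corresponds to H_min = log_2 m, i.e. a guessing probability of 1/m.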
APA, Harvard, Vancouver, ISO, and other styles
43

Sadeh, Sadra [Verfasser], and Stefan [Akademischer Betreuer] Rotter. "Sensory processing in neocortical networks: randomness, specificity and learning." Freiburg : Universität, 2015. http://d-nb.info/1122593163/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Charalambous, Ismini. "Distortion, randomness and quantitative measurements using optical coherence tomography." Thesis, University of Kent, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.443780.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

NUNES, VIVIAN DE ARAUJO DORNELAS. "EFFECTS OF CONTACT NETWORK RANDOMNESS ON MULTIPLE OPINION DYNAMICS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2016. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=30466@1.

Full text
Abstract:
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
PROGRAMA DE SUPORTE À PÓS-GRADUAÇÃO DE INSTS. DE ENSINO
PROGRAMA DE EXCELENCIA ACADEMICA
People often face the challenge of choosing amongst different options with similar attractiveness, such as when choosing a parliamentary candidate, picking a movie or buying a product in the supermarket. In order to study the distribution of preferences in such situations, we can consider opinion dynamics (where different options are available as well as the undecided state) on networks. In this work, we use two different opinion dynamics: one involving the direct contagion from each site to its neighborhood (rule A) and another where the opinion of each site is defined by the local relative majority (rule B). The contact network topology can have an important effect on the final distribution of opinions. We use the Watts-Strogatz network and, in particular, we are interested in investigating the contribution of the network randomness p to the output of the dynamics. Depending on the structural properties of the network and the initial conditions, the final distribution can be: equipartition of preferences, consensus, or situations where indecision is relevant. The role of network randomness is nontrivial: for a small number of opinions, rules A and B (the latter with synchronous update) present an optimum value of p, where the predominance of a winning opinion is maximal. Moreover, for the plurality rule with asynchronous update, the increase of the number of shortcuts can even promote consensus situations. Furthermore, both dynamics coincide for small disorder of the network, but differ for larger disorder. We also observe that the number of initiators decreases the winning fraction in all types of dynamics and attenuates the local maximum that appears in the small-world region.
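A simplified version of plurality-rule dynamics on a Watts-Strogatz network can be sketched as follows (illustrative only, not the code used in the thesis; tie-breaking and update details are deliberately simplified, and the networkx package is required):

    import random
    from collections import Counter
    import networkx as nx

    def plurality_dynamics(n=1000, k=6, p=0.1, n_opinions=3, steps=50, seed=0):
        rng = random.Random(seed)
        g = nx.watts_strogatz_graph(n, k, p, seed=seed)
        opinion = {v: rng.randrange(n_opinions) for v in g}    # random initial opinions
        for _ in range(steps):                                 # synchronous update
            new = {}
            for v in g:
                counts = Counter(opinion[u] for u in g[v])
                top_count = counts.most_common(1)[0][1]
                top = [o for o, c in counts.items() if c == top_count]
                new[v] = top[0] if len(top) == 1 else opinion[v]   # keep own opinion on ties
            opinion = new
        return Counter(opinion.values()).most_common(1)[0][1] / n  # winning fraction

    for p in (0.0, 0.01, 0.1, 1.0):
        print(p, plurality_dynamics(p=p))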
APA, Harvard, Vancouver, ISO, and other styles
46

Verbeeck, Kenny. "Randomness as a generative principle in art and architecture." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/35124.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Architecture, 2006.
Includes bibliographical references (leaves [87]-[98]).
As designers have become more eloquent in the exploitation of the powerful yet generic calculating capabilities of the computer, contemporary architectural practice seems to have set its mind on creating a logic machine that designs from predetermined constraints. Generating form from mathematical formulae thus gives the design process a scientific twist that allows the design to present itself as the outcome to a rigorous and objective process. So far, several designer-computer relations have been explored. The common designer-computer models are often described as either pre-rational or post-rational. Yet another approach would be the irrational. The hypothesis is that the early design process is in need of the unexpected, rather than iron logic. This research investigated how the use of randomness as a generative principle could present the designer with a creative design environment. The analysis and reading of randomness in art and architecture production takes as examples works of art where the artist/designer saw uncertainty or unpredictability as an intricate part of the process. The selected works incorporate, mostly, an instigating and an interpreting party embedded in the making of the work.
The negotiations of boundaries between both parties determine the development of the work. Crucial to the selected works of art was the rendering of control or choice from one party to another - whether human, machine or nature - being used as a generative principle. Jackson Pollock serves as the analog example of a scattered computation: an indefinite number of calculations, of which each has a degree of randomness, that relate in a rhizomic manner. Pollock responds to each of these outcomes, allowing the painting to form from intentions rather than expectations. This looking and acting aspect of Pollock's approach is illustrated in the Jackson Pollock shape grammar. Ultimately the investigation of randomness in art is translated to architecture by comparing the Pollock approach in his drip paintings to Greg Lynn's digital design process in the Port Authority Gateway project. In the Pollock approach to digital design, agency is given to the tools at hand, yet at the same time, the sheer indefinite number of designer-system interactions allows the design to emerge out of that constructive dialogue in an intuitive manner.
by Kenny Verbeeck.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
47

Cappelleri, Vincenzo-Maria. "Randomness, Age, Work: Ingredients for Secure Distributed Hash Tables." Doctoral thesis, Università degli studi di Padova, 2017. http://hdl.handle.net/11577/3423231.

Full text
Abstract:
Distributed Hash Tables (DHTs) are a popular and natural choice when dealing with dynamic resource location and routing. DHTs basically provide two main functions: saving (key, value) records in a network environment and, given a key, finding the node responsible for it, optionally retrieving the associated value. However, all predominant DHT designs suffer from a number of security flaws that expose nodes and stored data to a number of malicious attacks, ranging from disrupting correct DHT routing to corrupting data or making it unavailable. Thus, even if DHTs are a standard layer for some mainstream systems (like BitTorrent or KAD clients), said vulnerabilities may prevent more security-aware systems from taking advantage of the ease of indexing and publishing on DHTs. Through the years a variety of solutions to the security flaws of DHTs have been proposed both by academia and practitioners, ranging from authentication via Central Authorities to social-network based ones. These solutions are often tailored to specific DHT implementations, or simply try to mitigate, without eliminating, hostile actions aimed at resources or nodes. Moreover, all these solutions often sport serious limitations or make strong assumptions on the underlying network. We present, after providing a useful abstract model of the DHT protocol and infrastructure, two new primitives. We extend a “standard” proof-of-work primitive, making of it also a “proof of age” primitive (informally, allowing a node to prove it is “sufficiently old”) and a “shared random seed” primitive (informally, producing a new, shared seed that was completely unpredictable in a “sufficiently remote” past). These primitives are then integrated into the basic DHT model, obtaining an “enhanced” DHT design resilient to many common attacks. This work also shows how to adapt a Block Chain scheme – a continuously growing list of records (or blocks) protected from alteration or forgery – to provide a possible infrastructure for our proposed secure design. Finally, a working proof-of-concept software implementing an “enhanced” Kademlia-based DHT is presented, together with some experimental results showing that, in practice, the performance overhead of the additional security layer is more than tolerable. Therefore this work provides a threefold contribution. It describes a general set of new primitives (adaptable to any DHT matching our basic model) achieving a secure DHT; it proposes an actionable design to attain said primitives; and it makes public a proof-of-concept implementation of a full “enhanced” DHT system, which a preliminary performance evaluation shows to be actually usable in practice.
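The familiar hash-preimage flavour of proof-of-work, on which the extended primitive builds, can be sketched as follows (a minimal illustration only; the thesis's proof-of-work / proof-of-age and shared-seed constructions are more involved, and the parameter names here are hypothetical):

    import hashlib
    import itertools

    def solve_pow(challenge, difficulty_bits=20):
        # find a nonce whose hash together with the challenge has difficulty_bits leading zero bits
        target = 1 << (256 - difficulty_bits)
        for nonce in itertools.count():
            digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce

    def verify_pow(challenge, nonce, difficulty_bits=20):
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

Verification is cheap while solving costs work that grows exponentially with difficulty_bits; the thesis extends a primitive of this kind into the “proof of age” and “shared random seed” primitives described above.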
APA, Harvard, Vancouver, ISO, and other styles
48

Kephart, David E. "Topology, morphisms, and randomness in the space of formal languages." [Tampa, Fla.] : University of South Florida, 2005. http://purl.fcla.edu/fcla/etd/SFE0001250.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Yilek, Scott Christopher. "Public-key encryption secure in the presence of randomness failures." Diss., [La Jolla] : University of California, San Diego, 2010. http://wwwlib.umi.com/cr/ucsd/fullcit?p3404895.

Full text
Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2010.
Title from first page of PDF file (viewed June 23, 2010). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (leaves 103-109).
APA, Harvard, Vancouver, ISO, and other styles
50

Cantinotti, Michael. "Can gamblers beat randomness? : an experimental study on sport betting." Master's thesis, Université Laval, 2002. http://proquest.umi.com/pqdweb?did=766575321&sid=17&Fmt=2&clientId=9268&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles