To see the other types of publications on this topic, follow the link: Coded Computation.

Dissertations / Theses on the topic 'Coded Computation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Coded Computation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Wang, Sinong. "Coded Computation for Speeding up Distributed Machine Learning." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1555336880521062.

2

Chen, Yiqi. "Computation of Initial State for Tail-Biting Trellis." Ohio University / OhioLINK, 2005. http://www.ohiolink.edu/etd/view.cgi?ohiou1125026574.

3

Chan, Siu Yan. "Efficient computation of weight enumerators and performance bounds for convolutionally coded systems in quasi-static fading channels /." View abstract or full-text, 2009. http://library.ust.hk/cgi/db/thesis.pl?ECED%202009%20CHANS.

4

Veluri, Subrahmanya Pavan Kumar. "Code Verification and Numerical Accuracy Assessment for Finite Volume CFD Codes." Diss., Virginia Tech, 2010. http://hdl.handle.net/10919/28715.

Abstract:
A detailed code verification study of an unstructured finite volume Computational Fluid Dynamics (CFD) code is performed. The Method of Manufactured Solutions is used to generate exact solutions for the Euler and Navier-Stokes equations to verify the correctness of the code through order of accuracy testing. The verification testing is performed on different mesh types which include triangular and quadrilateral elements in 2D and tetrahedral, prismatic, and hexahedral elements in 3D. The requirements of systematic mesh refinement are discussed, particularly with regard to unstructured meshes. Different code options verified include the baseline steady state governing equations, transport models, turbulence models, boundary conditions and unsteady flows. Coding mistakes, algorithm inconsistencies, and mesh quality sensitivities uncovered during the code verification are presented. In recent years, there has been significant work on the development of algorithms for the compressible Navier-Stokes equations on unstructured grids. One of the challenging tasks during the development of these algorithms is the formulation of consistent and accurate diffusion operators. The robustness and accuracy of diffusion operators depend on mesh quality. A survey of diffusion operators for compressible CFD solvers is conducted to understand different formulation procedures for diffusion fluxes. A patch-wise version of the Method of Manufactured Solutions is used to test the accuracy of selected diffusion operators. This testing of diffusion operators is limited to cell-centered finite volume methods which are formally second order accurate. These diffusion operators are tested and compared on different 2D mesh topologies to study the effect of mesh quality (stretching, aspect ratio, skewness, and curvature) on their numerical accuracy. Quantities examined include the numerical approximation errors and order of accuracy associated with face gradient reconstruction. From the analysis, defects in some of the numerical formulations are identified along with some robust and accurate diffusion operators.
Ph. D.
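
As an aside on the Method of Manufactured Solutions used above: the sketch below is a minimal one-dimensional illustration of the procedure (ours, not drawn from the thesis, which treats unstructured finite-volume CFD). A solution u(x) = sin(pi*x) is manufactured for -u'' = f, the implied source term is derived symbolically, and the observed order of accuracy of a second-order finite-difference scheme is checked under mesh refinement.

    # Minimal Method of Manufactured Solutions (MMS) sketch: verify the
    # observed order of accuracy of a 2nd-order scheme for -u'' = f on [0, 1].
    import numpy as np
    import sympy as sp

    x = sp.symbols("x")
    u_exact = sp.sin(sp.pi * x)        # manufactured solution (chosen freely)
    f_sym = -sp.diff(u_exact, x, 2)    # source term implied by -u'' = f
    u_fn = sp.lambdify(x, u_exact)
    f_fn = sp.lambdify(x, f_sym)

    def solve(n):
        """Second-order central differences for -u'' = f with Dirichlet BCs."""
        h = 1.0 / (n + 1)
        xs = np.linspace(h, 1.0 - h, n)
        A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
             - np.diag(np.ones(n - 1), -1)) / h**2
        b = f_fn(xs)
        b[0] += u_fn(0.0) / h**2       # boundary values taken from u_exact
        b[-1] += u_fn(1.0) / h**2
        return np.max(np.abs(np.linalg.solve(A, b) - u_fn(xs)))

    e_coarse, e_fine = solve(32), solve(64)
    print("observed order:", np.log2(e_coarse / e_fine))  # approaches 2.0
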
5

Ben, Hadj Fredj Abir. "Computations for the multiple access in wireless networks." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLT030.

Abstract:
Future generations of wireless networks pose many challenges for the research community. In particular, these networks must be able to respond, with a certain quality of service, to the demands of a large number of connected people and objects. This translates into significant capacity requirements. It is within this framework that non-orthogonal multiple access (NOMA) methods have been introduced. In this thesis, we studied and proposed a multiple access method based on the compute-and-forward technique and on lattice codes, considering different lattice constructions. We also proposed improvements to the lattice-based decoding algorithm of the sparse code multiple access (SCMA) method. In order to simplify the multi-stage decoders used here, we proposed simplified expressions of the LLRs as well as approximations. Finally, we studied Construction D of lattices using polar codes. This thesis was carried out in collaboration with the research center of Huawei France.
6

Zeng, Fanxuan. "Nonlinear codes: representation, constructions, minimum distance computation and decoding." Doctoral thesis, Universitat Autònoma de Barcelona, 2014. http://hdl.handle.net/10803/284241.

Abstract:
Coding theory deals with the design of error-correcting codes for the reliable transmission of information across noisy channels. An error-correcting code (or code) is a process which consists in expressing a sequence of elements over an alphabet in such a way that any introduced error can be detected and corrected (with limitations), and it is based on adding redundancy elements. This process includes encoding, transmitting and decoding the sequence of elements. Most of the codes in use are block codes, and most of them have a linear structure, which facilitates the process of encoding and decoding. In this dissertation, nonlinear error-correcting codes are studied. Although nonlinear codes do not have the same good properties for encoding and decoding as linear ones, they are of interest because some of the best codes are nonlinear. The first question that arises when we use nonlinear codes is their representation. Linear codes can be represented by using a generator or parity-check matrix. The best way to represent a nonlinear code is by using the kernel/coset representation, which allows us to represent it through some representative codewords instead of all codewords. In this dissertation, this representation is studied and efficient algorithms to compute the kernel and coset representatives from the list of codewords are given. In addition, properties such as equality, inclusion, intersection and union between nonlinear codes are given in terms of this representation. Also, some well known code constructions (extended, punctured, ...) are described by manipulating directly the kernel and coset representatives of the constituent nonlinear codes. In order to identify a code (linear or nonlinear), the length n, the number of codewords M and the minimum distance d are the most important parameters. The length n and size M are comparatively easy to compute. On the other hand, determining the minimum distance of a code is not so easy; as a matter of fact, it has been proven to be an NP-hard problem [37]. However, some algorithms have been developed to compute the minimum distance for linear codes using different approaches: Gröbner bases [7], tree structure [25], probabilistic algorithms [13, 23] and vector enumeration [39].
For nonlinear codes, except for some special families, no general algorithms have been developed to compute their minimum distance. Using the kernel/coset representation and the Brouwer-Zimmermann algorithm to compute the minimum distance for linear codes, new algorithms to compute the minimum distance for nonlinear codes are described. The hardest problem in the process of transmitting information is decoding. For linear codes, a general decoding algorithm is syndrome decoding. However, there is no general decoding method for nonlinear codes. Based on the kernel/coset representation and the minimum distance computation, new general algorithms to decode linear and nonlinear codes are proposed. For some linear codes (codes with a big codimension), the proposed algorithms perform better than the syndrome decoding algorithm. For nonlinear codes, this is the first general method for decoding, comparable to syndrome decoding for linear ones. Finally, most of these algorithms have been evaluated using the MAGMA software, and a new MAGMA package to deal with binary nonlinear codes has been developed, based on the results given in this dissertation.
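
To make concrete what this dissertation computes: the brute-force baseline for the minimum distance of a (possibly nonlinear) binary code is the pairwise scan sketched below in Python. This is our illustration of the baseline only; the kernel/coset-based algorithms of the dissertation are designed to beat exactly this O(M^2 n) cost.

    # Brute-force minimum distance of a (possibly nonlinear) binary block code.
    from itertools import combinations

    def hamming(u, v):
        return sum(a != b for a, b in zip(u, v))

    def minimum_distance(code):
        """code: list of equal-length binary tuples; O(M^2 n) pairwise scan."""
        return min(hamming(u, v) for u, v in combinations(code, 2))

    # Example: a small nonlinear code (not closed under componentwise addition).
    code = [(0, 0, 0, 0, 0), (1, 1, 1, 0, 0), (0, 1, 1, 1, 0), (1, 0, 1, 1, 1)]
    print(minimum_distance(code))  # -> 2
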
7

Rodal, Morten. "Scalability of seismic codes on computational clusters." Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2004. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9145.

8

Cusdin, P. A. "Automatic sensitivity code for computational fluid dynamics." Thesis, Queen's University Belfast, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.431586.

9

Hagen, Knut Imar. "Fault-tolerance for MPI Codes on Computational Clusters." Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2007. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-8728.

Abstract:

This thesis focuses on fault tolerance for MPI codes on computational clusters. When an application runs on a very large cluster with thousands of processors, it is likely that a process will crash due to a hardware or software failure. Fault tolerance is the ability of a system to respond gracefully to an unexpected hardware or software failure. A test application which is meant to run for several weeks on several nodes is used in this thesis. The application is a seismic MPI application, written in Fortran90, provided by Statoil, who wanted a fault-tolerant implementation. The original test application had no degree of fault tolerance: if one process or one node crashed, the entire application crashed as well. In this thesis, a collection of fault-tolerance techniques is analysed, including checkpointing, MPI error handlers, extending MPI, replication, fault detection, atomic clocks and multiple simultaneous failures. Several MPI implementations are described, such as MPICH1, MPICH2, LAM/MPI and Open MPI. Next, some fault-tolerant products developed at other universities are described, such as FT-MPI, FEMPI, MPICH-V including its five protocols, the fault-tolerant functionality of Open MPI, and MPI error handlers. A fault-tolerant simulator which simulates the application's behaviour is developed. The simulator uses two fault-tolerance methods: FT-MPI and MPI error handlers. Next, our test application is similarly made fault-tolerant with FT-MPI using three proposed approaches: MPI_Reduce(), MPI_Barrier(), and the final and current implementation, MPI Loop. Tests of the MPI Loop implementation are run on a small and a large cluster to verify the fault-tolerant behaviour. The seismic application survives a crash of n-2 nodes/processes: process number 0 must stay alive since it acts as an I/O server, and there must be at least one process left to compute data. Processes can also be restarted rather than left out, but the test application needs to be modified to support this.

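
One of the techniques surveyed above, MPI error handlers, can be sketched in a few lines. The fragment below is our illustration using Python's mpi4py rather than the thesis's Fortran90, and the recovery logic is only indicated by comments; it shows the standard mechanism of switching a communicator from abort-on-error to returning errors.

    # MPI error handler sketch: survive a communication failure instead of
    # letting the MPI runtime abort every process in the job.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    comm.Set_errhandler(MPI.ERRORS_RETURN)  # default is MPI.ERRORS_ARE_FATAL

    try:
        # A failed communication now raises MPI.Exception in Python
        # rather than terminating the whole application.
        comm.send(b"data", dest=(comm.rank + 1) % comm.size, tag=0)
        msg = comm.recv(source=(comm.rank - 1) % comm.size, tag=0)
    except MPI.Exception as err:
        # Recovery logic would go here: mark the peer as dead, redistribute
        # its work, or roll back to the last checkpoint.
        print(f"rank {comm.rank}: MPI error: {err.Get_error_string()}")
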
10

Bellini, Emanuele. "Computational techniques for nonlinear codes and Boolean functions." Doctoral thesis, Università degli studi di Trento, 2014. https://hdl.handle.net/11572/369066.

Abstract:
We present some upper bounds on the size of nonlinear codes and their restriction to systematic codes and linear codes. These bounds, which improve on a bound by Zinoviev, Litsyn and Laihonen, are independent of other classical known theoretical bounds. Among these, we mention the Griesmer bound for linear codes, of which we provide a partial generalization for the systematic case. Our experiments show that in some cases (the majority of cases for some q) our bounds provide the best value, compared to all other closed-formula upper bounds. We also present an algebraic method for computing the minimum weight of any nonlinear code. We show that for some particular codes, using a non-standard representation of the code, our method is faster than brute force. An application of this method makes it possible to compute the nonlinearity of a Boolean function, improving existing algebraic methods and reaching the same complexity as algorithms based on the fast Fourier transform.
11

Bellini, Emanuele. "Computational techniques for nonlinear codes and Boolean functions." Doctoral thesis, University of Trento, 2014. http://eprints-phd.biblio.unitn.it/1376/1/00-Thesis.pdf.

Abstract:
We present some upper bounds on the size of nonlinear codes and their restriction to systematic codes and linear codes. These bounds, which improve on a bound by Zinoviev, Litsyn and Laihonen, are independent of other classical known theoretical bounds. Among these, we mention the Griesmer bound for linear codes, of which we provide a partial generalization for the systematic case. Our experiments show that in some cases (the majority of cases for some q) our bounds provide the best value, compared to all other closed-formula upper bounds. We also present an algebraic method for computing the minimum weight of any nonlinear code. We show that for some particular codes, using a non-standard representation of the code, our method is faster than brute force. An application of this method makes it possible to compute the nonlinearity of a Boolean function, improving existing algebraic methods and reaching the same complexity as algorithms based on the fast Fourier transform.
12

O'Gorman, Joe. "Architectures for fault-tolerant quantum computation." Thesis, University of Oxford, 2017. http://ora.ox.ac.uk/objects/uuid:4219548d-798b-45f8-b376-91025bbe3ec4.

Abstract:
Quantum computing has enormous potential, but this can only be realised if quantum errors can be controlled sufficiently to allow quantum algorithms to be completed reliably. However, quantum-error-corrected logical quantum bits (qubits) which can be said to have achieved meaningful error suppression have not yet been demonstrated. This thesis reports research on several topics related to the challenge of designing fault-tolerant quantum computers. The first topic is a proposal for achieving large-scale error correction with the surface code in a silicon-donor-based quantum computing architecture. This proposal relaxes some of the stringent requirements in donor placement precision set by previous ideas, from the single-atom level to the order of 10 nm in some regimes. This is shown by means of numerical simulation of the surface code threshold. The second topic is the development of a method for benchmarking and assessing the performance of small error correcting codes in few-qubit systems, introducing a metric called 'integrity', closely linked to the trace distance, and a proposal for experiments to demonstrate various stepping stones on the way to 'strictly superior' quantum error correction. Most quantum error correcting codes, including the surface code, do not allow for fault-tolerant universal computation without the addition of extra gadgets. One method of achieving universality is through a process of distilling and then consuming high-quality 'magic states'. This process adds additional overhead to quantum computation over and above that incurred by the use of the base-level quantum error correction. The latter parts of this thesis report an investigation into how many physical qubits are needed in a 'magic state factory' within a surface code quantum computer and introduce a number of techniques to reduce the overhead of leading magic state techniques. It is found that universal quantum computing is achievable with ~16 million qubits if error rates across a device are kept below 10^-4. In addition, the thesis introduces improved methods of achieving magic state distillation for unconventional magic states that allow for logical small-angle rotations, and shows that this can be more efficient than synthesising these operations from the gates provided by traditional magic states.
13

Bhagat, Indu. "Code optimizations for narrow bitwidth architectures." Doctoral thesis, Universitat Politècnica de Catalunya, 2012. http://hdl.handle.net/10803/96190.

Abstract:
This thesis takes a HW/SW collaborative approach to tackle the problem of computational inefficiency in a holistic manner. The hardware is redesigned by restraining the datapath to merely 16-bit datawidth (integer datapath only) to provide an extremely simple, low-cost, low-complexity execution core which is best at executing the most common case efficiently. This redesign, referred to as the Narrow Bitwidth Architecture, is unique in that although the datapath is squeezed to 16 bits, it continues to offer the advantage of higher memory addressability like the contemporary wider-datapath architectures. Its interface to the outside (software) world is termed the Narrow ISA. The software is responsible for efficiently mapping the current stack of 64-bit applications onto the 16-bit hardware. However, this HW/SW approach introduces a non-negligible penalty in both dynamic code size and performance, even with a reasonably smart code translator that maps the 64-bit applications onto the 16-bit processor. The goal of this thesis is to design a software layer that harnesses the power of compiler optimizations to mitigate this performance penalty of the Narrow ISA. More specifically, this thesis focuses on compiler optimizations targeting the problem of how to compile a 64-bit program to a 16-bit datapath machine from the perspective of Minimum Required Computations (MRC). Given a program, the notion of MRC aims to infer how much computation is really required to generate the same (correct) output as the original program. Approaching perfect MRC is an intrinsically ambitious goal and it requires oracle predictions of program behavior. Towards this end, the thesis proposes three heuristic-based optimizations to closely infer the MRC. The perspective of MRC unfolds into a definition of productiveness: if a computation does not alter the storage location, it is non-productive and hence not necessary to be performed. In this research, the definition of productiveness has been applied to different granularities of the data flow as well as the control flow of the programs. Three profile-based code optimization techniques have been proposed: 1. Global Productiveness Propagation (GPP), which applies the concept of productiveness at the granularity of a function. 2. Local Productiveness Pruning (LPP), which applies the same concept at the much finer granularity of a single instruction. 3. Minimal Branch Computation (MBC), a profile-based code-reordering optimization technique which applies the principles of MRC to conditional branches. The primary aim of all these techniques is to reduce the dynamic code footprint of the Narrow ISA. The first two optimizations (GPP and LPP) speculatively prune the non-productive (useless) computations using profiles. Further, these two optimization techniques perform a backward traversal of the optimization regions to embed checks into the non-speculative slices, hence making them self-sufficient to detect mis-speculation dynamically. The MBC optimization is a use case of the broader concept of a lazy computation model. The idea behind MBC is to reorder the backslices containing narrow computations such that the minimal computations necessary to generate the same (correct) output are performed in the most frequent case; the rest of the computations are performed only when necessary.
With the proposed optimizations, it can be concluded that there do exist ways to smartly compile a 64-bit application to a 16-bit ISA such that the overheads are considerably reduced.
14

Mahalanobis, Abhijit, Richard Shilling, Robert Muise, and Mark Neifeld. "High-resolution imaging using a translating coded aperture." SPIE-SOC PHOTO-OPTICAL INSTRUMENTATION ENGINEERS, 2017. http://hdl.handle.net/10150/626004.

Abstract:
It is well known that a translating mask can optically encode low-resolution measurements from which higher resolution images can be computationally reconstructed. We experimentally demonstrate that this principle can be used to achieve a substantial increase in image resolution compared to the size of the focal plane array (FPA). Specifically, we describe a scalable architecture with a translating mask (also referred to as a coded aperture) that achieves eightfold resolution improvement (a 64:1 increase in the number of pixels compared to the number of focal plane detector elements). The imaging architecture is described in terms of general design parameters (such as field of view and angular resolution, dimensions of the mask, and the detector and FPA sizes), and some of the underlying design trades are discussed. Experiments conducted with different mask patterns and reconstruction algorithms illustrate how these parameters affect the resolution of the reconstructed image. Initial experimental results also demonstrate that the architecture can directly support task-specific information sensing for detection and tracking, and that moving objects can be reconstructed separately from the stationary background using motion priors. (C) 2017 Society of Photo-Optical Instrumentation Engineers (SPIE)
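
The measurement principle described in this abstract can be mimicked in one dimension. The toy Python sketch below is our construction, not the paper's optical system or reconstruction algorithms: it forms one low-resolution measurement per mask shift and recovers the high-resolution scene by least squares. All parameter names are ours.

    # 1D analogue of translating-coded-aperture superresolution.
    import numpy as np

    rng = np.random.default_rng(0)
    n, factor = 64, 8              # scene pixels, resolution-improvement factor
    m = n // factor                # detector elements per snapshot
    scene = rng.random(n)
    mask = rng.integers(0, 2, n)   # binary coded aperture

    rows = []
    for s in range(factor):        # one low-resolution snapshot per mask shift
        coded = np.roll(mask, s)   # translate the mask across the scene
        for d in range(m):         # each detector sums 'factor' coded pixels
            row = np.zeros(n)
            row[d * factor:(d + 1) * factor] = coded[d * factor:(d + 1) * factor]
            rows.append(row)

    A = np.vstack(rows)            # stacked measurement operator, (factor*m) x n
    y = A @ scene                  # all snapshots
    scene_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
    # Near-zero when the shifted code makes A well conditioned; a degenerate
    # mask would call for a regularized reconstruction instead.
    print("max reconstruction error:", np.abs(scene_hat - scene).max())
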
15

Holewinski, Justin A. "Automatic Code Generation for Stencil Computations on GPU Architectures." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1354545992.

16

Senske, Nicholas. "Fear of code : an approach to integrating computation with architectural design." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/33032.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Architecture, 2005.
Includes bibliographical references (leaves 84-85).
Computation has enormous creative potential for designers. Unfortunately, many factors, such as the considerable investment required to learn programming, have prevented it from entering into common use. By analyzing the barriers to working with computation and suggesting directions for future research and implementation, this thesis seeks to enable a more inclusive dialogue towards the eventual integration of computation into the architectural mainstream. To this end, I propose a more responsible relationship between architects and their software through a combination of interface improvements, code analysis and management tools, and the formation of open source communities for support and development.
by Nicholas Senske.
S.M.
17

Zaribafiyan, Arman. "Fast post processing algorithms for fault tolerant quantum computation using surface codes." Thesis, University of British Columbia, 2014. http://hdl.handle.net/2429/50569.

Abstract:
A local architecture of qubits in two-dimensional lattices is one of the candidate platforms for running fault-tolerant quantum computation with a high threshold. Topological quantum error correcting codes, such as surface codes, are used to make this architecture robust against various quantum error models. In this research we present a fast, concise, operationally inexpensive and highly parallelizable decoding algorithm for surface codes, without using concatenation, which has a threshold ranging from 8.6% to 10.5% depending on a parameter called OMSS. Thanks to the parallelization of the proposed algorithm, the time complexity scales logarithmically in the lattice size for small OMSS. This can be compared to the work of [Poulin-2010, Poulin-2010-IEEE], which has a threshold of 8% for the bit-flip error channel within the same complexity order.
Applied Science, Faculty of
Electrical and Computer Engineering, Department of
Graduate
18

Beedy, Jeremy. "Using computational fluid dynamics to improve rotorcraft aerodynamic codes." Thesis, University of Glasgow, 2006. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.519486.

19

Muscat, Martin. "Computational methods of design by analysis for pressure vessel components." Thesis, University of Strathclyde, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.248722.

20

Tuckett, David Kingsley. "Tailoring surface codes: Improvements in quantum error correction with biased noise." Thesis, The University of Sydney, 2020. https://hdl.handle.net/2123/22132.

Abstract:
For quantum computers to reach their full potential will require error correction. We study the surface code, one of the most promising quantum error correcting codes, in the context of predominantly dephasing (Z-biased) noise, as found in many quantum architectures. We find that the surface code is highly resilient to Y-biased noise, and tailor it to Z-biased noise, whilst retaining its practical features. We demonstrate ultrahigh thresholds for the tailored surface code: ~39% with a realistic bias of η = 100, and ~50% with pure Z noise, far exceeding known thresholds for the standard surface code: ~11% with pure Z noise, and ~19% with depolarizing noise. Furthermore, we provide strong evidence that the threshold of the tailored surface code tracks the hashing bound for all biases. We reveal the hidden structure of the tailored surface code with pure Z noise that is responsible for these ultrahigh thresholds. As a consequence, we prove that its threshold with pure Z noise is 50%, and we show that its distance to Z errors, and the number of failure modes, can be tuned by modifying its boundary. For codes with appropriately modified boundaries, the distance to Z errors is O(n) compared to O(√n) for square codes, where n is the number of physical qubits. We demonstrate that these characteristics yield a significant improvement in logical error rate with pure Z and Z-biased noise. Finally, we introduce an efficient approach to decoding that exploits code symmetries with respect to a given noise model, and extends readily to the fault-tolerant context, where measurements are unreliable. We use this approach to define a decoder for the tailored surface code with Z-biased noise. Although the decoder is suboptimal, we observe exceptionally high fault-tolerant thresholds of ~5% with bias η = 100 and exceeding 6% with pure Z noise. Our results open up many avenues of research and, together with recent developments in bias-preserving gates, highlight their direct relevance to experiment.
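
For reference, since the abstract compares thresholds to the hashing bound: the standard statement of that bound for a Pauli channel, quoted here from the general quantum information literature rather than from the thesis itself, is

    % Hashing bound for a Pauli channel with probabilities (p_I, p_X, p_Y, p_Z):
    \[
      R_{\mathrm{hash}} = 1 - H(\mathbf{p}),
      \qquad
      H(\mathbf{p}) = -\sum_{\sigma \in \{I, X, Y, Z\}} p_\sigma \log_2 p_\sigma ,
    \]
    % and the hashing threshold at a given bias is the total error probability
    % p = p_X + p_Y + p_Z at which R_hash drops to zero.
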
21

Thakur, Anuja. "Outside cell Interference Computations for Cellular Code Division Multiple Access." Ohio University / OhioLINK, 2003. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1071070073.

22

Shaw, Mackenzie Hooper. "Quantum Computation with Gottesman-Kitaev-Preskill Codes: Logical Gates, Measurements, and Analysis Techniques." Thesis, The University of Sydney, 2022. https://hdl.handle.net/2123/29663.

Abstract:
The Gottesman-Kitaev-Preskill (GKP) error-correcting code uses one or more bosonic modes to encode a finite-dimensional logical space, allowing a low-error logical qubit to be encoded in a small number of resonators. In this thesis, I propose new methods to implement logical gates and measurements with GKP codes and analyse their performance. The logical gate scheme uses the single-qubit Clifford frame to greatly reduce the number of gates needed to implement an algorithm without increasing the hardware requirements. The logical measurement scheme uses one ancilla mode to achieve a 0.1% logical error rate over a measurement time of 630 ns when the measurement efficiency is as low as 75%. Finally, I provide a subsystem decomposition which can be used to analyse GKP codes efficiently even as the Fock space distribution of the codestates goes to infinity.
23

Mora, Alessio. "Mobile Computation Offloading on Android: Evaluation of Sockets and Native Code Offloading Opportunities." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019.

Abstract:
In this thesis, ULOOF (User Level Online Offloading Framework), an original offloading framework dedicated to Android applications, is presented in detail. The ambition of this framework is to be lightweight, minimally intrusive and fully automated, able to enable any Android application, exactly as downloaded from the Google Play Store, to benefit from offloading, without any prior knowledge of the application itself. Currently, ULOOF manages this for most of the applications available on the store, and it yields energy savings that have reached 50% with ad-hoc applications. Although ULOOF has already been implemented, some issues have not yet been addressed. In this thesis, we evaluate the possibility of improving the existing framework by extending the set of offloadable tasks, in particular by considering tasks that involve the use of sockets and tasks that exploit native code.
24

Bergman, Harris. "Development of a parallel spectral element computational fluid dynamics code." Thesis, Georgia Institute of Technology, 1995. http://hdl.handle.net/1853/19082.

25

Tegerup, Alexander. "Simulation of a Bayard-Alpert ionization gauge with the PIC code Warp." Thesis, KTH, Numerisk analys, NA, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-220895.

Abstract:
At RISE Research Institutes of Sweden, there is an interest in computer simulations of the physics related to ionization gauges. The objective of this thesis is to find out if the open source code Warp can be used for simulating the physics of interest. In this thesis it is explained what an ionization gauge is, and the physics and the governing mathematical equations of the simulations are described. How those governing equations are solved and the algorithms used in Warp are also discussed in this thesis. The results of the simulations are presented in the thesis, and a discussion of which parts of Warp need to be further developed to successfully simulate the physics is carried out.
26

Hong, Sung Wook. "Uncertainty of microwave radiative transfer computations in rain." College Station, Tex.: Texas A&M University, 2006. http://hdl.handle.net/1969.1/ETD-TAMU-1733.

27

Maislin, Scott. "Cyclic Codes and Cyclic Lattices." Scholarship @ Claremont, 2017. http://scholarship.claremont.edu/cmc_theses/1552.

Abstract:
In this thesis, we review basic properties of linear codes and lattices with a certain focus on their interplay. In particular, we focus on the analogous constructions of cyclic codes and cyclic lattices. We start out with a brief overview of the basic theory and properties of linear codes. We then demonstrate the construction of cyclic codes and emphasize their importance in error-correcting coding theory. Next we survey properties of lattices, focusing on algorithmic lattice problems, exhibit the construction of cyclic lattices and discuss their applications in cryptography. We emphasize the similarity and common properties of the two cyclic constructions.
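
The cyclic-shift property at the heart of this interplay is easy to exhibit. The Python sketch below is ours (the thesis works in greater generality): it generates the length-7 binary cyclic code with generator polynomial g(x) = 1 + x + x^3, which yields the Hamming(7,4) code, and verifies closure under rotation of coordinates.

    # A code is cyclic if rotating any codeword gives another codeword.
    import numpy as np

    n, g = 7, np.array([1, 1, 0, 1])        # coefficients of 1 + x + x^3

    def encode(msg):
        """Polynomial product m(x) g(x) mod 2 (non-systematic encoding)."""
        return np.convolve(msg, g) % 2

    codewords = set()
    for m in range(2 ** (n - len(g) + 1)):  # all 2^4 message polynomials
        msg = np.array([(m >> i) & 1 for i in range(n - len(g) + 1)])
        codewords.add(tuple(encode(msg)))

    cyclic = all(tuple(np.roll(c, 1)) in codewords for c in codewords)
    print(len(codewords), "codewords; closed under cyclic shift:", cyclic)

The closure check succeeds because g(x) divides x^7 - 1 over GF(2), which is exactly the algebraic condition for a generator polynomial of a cyclic code.
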
28

BONIFETTO, ROBERTO. "Computational thermal-hydraulic modeling for nuclear fusion and fission applications." Doctoral thesis, Politecnico di Torino, 2014. http://hdl.handle.net/11583/2572946.

Abstract:
Both nuclear fusion and fission devices have to face the issue of the cooling of some critical components, in particular the cooling of the superconducting coils and of the core fuel assemblies, respectively. In order to investigate these issues during the design phase of new reactors, e.g. new tokamak experiments and the forthcoming Generation IV reactors, new dedicated models and reliable tools should be developed. In this work, two separate tools, for fusion and fission devices respectively, are developed relying on the same quasi-3D thermal-hydraulic model, i.e. the 1D advection flow of the coolant in a diffusive (solid) bundle along the channel axis and the 2D coupling between the channels. The already available 4C code for the analysis of thermal-hydraulic transients in the superconducting coils of fusion devices is first validated against experimental data, then benchmarked against other tools, and finally its predictive capabilities are checked by performing the simulations before the experiment. The 4C code is thus proved to be able to simulate transients spanning from very short to week-long timescales with the very same model. The code is then applied to the analysis of operational transients in the ITER TF coils. Whenever the minimum temperature margin requirements are not satisfied, different possible mitigation strategies are investigated. Concerning the fission devices, a new multi-physics simulation tool (FRENETIC) is developed for the quasi-3D analysis of a lead-cooled fast reactor core with the closed hexagonal fuel element configuration, as currently proposed within the framework of the European project LEADER. The code couples a neutronic and a thermal-hydraulic analysis of the full reactor core, each performed in separate modules. A first validation of the thermal-hydraulic module of this code is presented both in the single-assembly and, preliminarily, in the multi-assembly configuration. In the latter case, a benchmark against another qualified tool (RELAP5-3D©) is also carried out. Finally, the coupled code is tested on some simple transients in order to assess, at least qualitatively, the effectiveness of the coupling and the representativeness of the multi-physics results.
29

Mouilleron, Christophe. "Efficient computation with structured matrices and arithmetic expressions." Phd thesis, Ecole normale supérieure de lyon - ENS LYON, 2011. http://tel.archives-ouvertes.fr/tel-00688388.

Abstract:
Designing efficient code in practice for a given computation is a hard task. In this thesis, we tackle this issue in two different situations. The first part of the thesis introduces some algorithmic improvements in structured linear algebra. We first show how to extend an algorithm by Cardinal for inverting Cauchy-like matrices to the other common structures. This approach, which mainly relies on products of the type "structured matrix × matrix", leads to a theoretical speed-up of a factor up to 7 that we also observe in practice. Then, we extend some works on Toeplitz-like matrices and prove that, for any of the common structures, the product of an n×n structured matrix of displacement rank α by an n×α matrix can be computed in Õ(α^(ω-1)n). This leads to direct inversion algorithms in Õ(α^(ω-1)n) that do not rely on a reduction to the Toeplitz-like case. The second part of the thesis deals with the implementation of arithmetic expressions. This topic raises several issues, like finding the minimum number of operations, and maximizing the speed or the accuracy when using some finite-precision arithmetic. Making use of the inductive nature of arithmetic expressions enables the design of algorithms that help to answer such questions. We thus present a set of algorithms for generating evaluation schemes, counting them, and optimizing them according to one or several criteria. These algorithms are part of a library that we have developed and used, among other things, to decrease the running time of a code generator for a mathematical library, and to study optimality issues about the evaluation of a small-degree polynomial with scalar coefficients at a matrix point.
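
The "evaluation scheme" questions of the second part can be made concrete with the last example mentioned: evaluating a polynomial at a matrix point. The Python sketch below is ours, comparing Horner's scheme with naive term-by-term evaluation; schemes like these differ in operation count and numerical behaviour, which is what the thesis's generation and optimization algorithms explore (Paterson-Stockmeyer-type schemes, for instance, reduce the number of matrix products further).

    # Two evaluation schemes for the same cubic p(X) at a matrix point X = A.
    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.random((4, 4))
    c = [2.0, -1.0, 0.5, 3.0]      # p(X) = 2I - X + 0.5 X^2 + 3 X^3
    I = np.eye(4)

    def horner(coeffs, X):
        """Evaluate p(X) with deg(p) matrix products (Horner's scheme)."""
        acc = coeffs[-1] * np.eye(X.shape[0])
        for ck in reversed(coeffs[:-1]):
            acc = acc @ X + ck * np.eye(X.shape[0])
        return acc

    naive = c[0] * I + c[1] * A + c[2] * (A @ A) + c[3] * (A @ A @ A)
    print(np.allclose(horner(c, A), naive))   # True: same value, different scheme
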
30

Ingraham, Daniel. "Verification of a Computational Aeroacoustics Code Using External Verification Analysis (EVA)." University of Toledo / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1271271426.

31

Marchand, Renier Gustav. "The method of manufactured solutions for the verification of computational electromagnetic codes." Thesis, Stellenbosch : Stellenbosch University, 2013. http://hdl.handle.net/10019.1/80210.

Abstract:
Thesis (PhD)--Stellenbosch University, 2013.
ENGLISH ABSTRACT: In this work the Method of Manufactured Solutions (MMS) is introduced for the code verification of full-wave frequency dependent electromagnetic computational software. At first the method is sketched in the context of the verification and validation process and the need for proper code verification is highlighted. Subsequently, the MMS is investigated in its natural context: the Finite Element Method, specifically for the E-field Vector Wave Equation. The usefulness of the method to detect error in a computational code is demonstrated. The selection of Manufactured Solutions is discussed and it is demonstrated how it can be used to find the probable cause of bugs. Mutation testing is introduced and used to show the ability to detect errors present in code. The MMS is finally applied in a novel manner to a Method of Moments (MoM) code. The challenges of numerical integration associated with the application of the operator are discussed and correct integration is successfully demonstrated. Subsequently the MMS is demonstrated to be successfully applied to the MoM, and mutation testing is used to demonstrate the practical efficacy of the method. The application of the MMS to the MoM is the main contribution of this work.
32

Devulapalli, Venkata Lakshmi Narasimha. "Application of Huffman Data Compression Algorithm in Hashing Computation." TopSCHOLAR®, 2018. https://digitalcommons.wku.edu/theses/2614.

Abstract:
Cryptography is the art of protecting information by encrypting the original message into an unreadable format. A cryptographic hash function is a hash function which takes a text message of arbitrary length as input and converts that text into a fixed-length string of characters which is infeasible to invert. The values returned by the hash function are called the message digest or simply hash values. Because of their versatility, hash functions are used in many applications such as message authentication, digital signatures, and password hashing [Thomsen and Knudsen, 2005]. The purpose of this study is to apply the Huffman data compression algorithm to the SHA-1 hash function in cryptography. The Huffman data compression algorithm is an optimal prefix-code compression algorithm in which the frequencies of the letters are used to compress the data [Huffman, 1952]. An integrated approach is applied to achieve a new compressed hash function by integrating Huffman compressed codes into the core functionality of the hashing computation of the original hash function.
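
Under our reading of this abstract, the integrated approach amounts to Huffman-compressing the message and hashing the compressed bytes. The Python sketch below illustrates that flow; the function names and the compress-then-hash composition are our illustration, not the author's exact construction.

    # Huffman-compress a message, then hash the compressed bytes with SHA-1.
    import hashlib
    import heapq
    from collections import Counter

    def huffman_codes(text):
        """Build a Huffman code table {char: bitstring} from letter frequencies."""
        heap = [[w, i, {ch: ""}] for i, (ch, w) in enumerate(Counter(text).items())]
        heapq.heapify(heap)
        i = len(heap)
        while len(heap) > 1:
            lo, hi = heapq.heappop(heap), heapq.heappop(heap)
            merged = {**{c: "0" + b for c, b in lo[2].items()},
                      **{c: "1" + b for c, b in hi[2].items()}}
            heapq.heappush(heap, [lo[0] + hi[0], i, merged])
            i += 1
        return heap[0][2]

    def compressed_sha1(text):
        codes = huffman_codes(text)
        bits = "".join(codes[ch] for ch in text)
        bits += "0" * (-len(bits) % 8)              # pad to a byte boundary
        data = int(bits, 2).to_bytes(len(bits) // 8, "big")
        return hashlib.sha1(data).hexdigest()

    print(compressed_sha1("an integrated compression-then-hash example"))
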
33

Pippin, William E. Jr. "Optimizing Threads of Computation in Constraint Logic Programs." The Ohio State University, 2003. http://rave.ohiolink.edu/etdc/view?acc_num=osu1041551800.

34

Chou, Remi. "Information-theoretic security under computational, bandwidth, and randomization constraints." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/53837.

Abstract:
The objective of the proposed research is to develop and analyze coding schemes for information-theoretic security, which could bridge a gap between theory and practice. We focus on two fundamental models for information-theoretic security: secret-key generation for a source model and secure communication over the wire-tap channel. Many results for these models only provide existence of codes, and few attempts have been made to design practical schemes. The schemes we would like to propose should account for practical constraints. Specifically, we formulate the following constraints to avoid oversimplifying the problems. We should assume: (1) computationally bounded legitimate users, and not solely rely on proofs showing the existence of codes with exponential complexity in the block length; (2) a rate-limited public communication channel for the secret-key generation model, to account for bandwidth constraints; (3) a non-uniform and rate-limited source of randomness at the encoder for the wire-tap channel model, since a perfectly uniform and rate-unlimited source of randomness might be an expensive resource. Our work focuses on developing schemes for secret-key generation and the wire-tap channel that satisfy subsets of the aforementioned constraints.
35

Mansouri, Boroujeni Nima. "A High-Order High-Accuracy Geometric Block Interface Treatment for Computational Aeroacoustics Codes." University of Toledo / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1463099661.

36

Guo, Jian. "Multiphysics computational framework for MEMS based on code coupling and data exchange." Thesis, University of Leeds, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.531531.

37

Saied, Mariem. "Automatic code generation and optimization of multi-dimensional stencil computations on distributed-memory architectures." Thesis, Strasbourg, 2018. http://www.theses.fr/2018STRAD036/document.

Abstract:
In this work, we present Dido, an implicitly parallel domain-specific language (DSL) that captures high-level stencil abstractions and automatically generates high-performance parallel stencil code for distributed-memory architectures. The generated code uses ORWL as a communication and synchronization backend. We show that Dido achieves huge progress in terms of programmer productivity without sacrificing performance. Dido supports a wide range of stencil computations and real-world stencil-based applications. We show that the well-structured code generated by Dido lends itself to different possible optimizations, and study the performance of two of them. We also combine Dido's code generation technique with the polyhedral loop optimizer Pluto to increase data locality and improve intra-node data reuse. We present experiments that prove the efficiency and scalability of the generated code, which outperforms both ORWL and MPI hand-crafted implementations.
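
The kind of computation Dido's abstractions capture can be shown with the classic 5-point Jacobi stencil. The serial NumPy sketch below is ours and shows only the computational pattern; the code Dido generates wraps such a kernel with domain decomposition, halo exchange and ORWL- or MPI-based communication.

    # An iterated 5-point Jacobi stencil: the pattern a stencil DSL captures.
    import numpy as np

    def jacobi_step(u):
        """One 5-point stencil sweep over the interior of a 2D grid."""
        v = u.copy()
        v[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                u[1:-1, :-2] + u[1:-1, 2:])
        return v

    u = np.zeros((128, 128))
    u[0, :] = 1.0                  # a fixed boundary condition
    for _ in range(200):           # time loop; distributed versions exchange
        u = jacobi_step(u)         # halo rows between sweeps
    print("sample value at the center:", u[64, 64])
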
38

Lyle, Suzanne McLean. "Error Correcting Codes and the Human Genome." Digital Commons @ East Tennessee State University, 2010. https://dc.etsu.edu/etd/1689.

Abstract:
In this work, we study error correcting codes and generalize the concepts with a view toward a novel application in the study of DNA sequences. The author investigates the possibility that an error correcting linear code could be included in the human genome through application and research. The author finds that while it is an accepted hypothesis that some kind of error correcting code could reasonably be used in DNA, no one has actually been able to identify one. The author uses the application to illustrate how the subject of coding theory can provide a teaching enrichment activity for undergraduate mathematics.
39

Côte, Maxime. "Reconnaissance de codes correcteurs d'erreurs." Phd thesis, Ecole Polytechnique X, 2010. http://pastel.archives-ouvertes.fr/pastel-00006125.

Abstract:
During this thesis, I studied the reconstruction of error-correcting codes from a noisy observation. Among these codes, we chose to study convolutional codes and turbo-codes in particular. The transmission channel considered in this work is the binary symmetric channel. Building on the work of E. Filiol and J. Barbier, I developed an algorithm designed jointly with N. Sendrier. We created a new generic method for recognizing (n, k) convolutional codes (k inputs and n outputs). This method improves on the state of the art through the exclusive use of binary linear algebra operations in the algorithm. The implementation gives good results, both in running time and in noise tolerance, for all types of convolutional codes. The second part consists in developing a recognition method for turbo-codes. This method rests on the assumptions that we are able to recover the first convolutional code using our convolutional-code recognition method, and that the second convolutional code (following the interleaver) has a systematic generator matrix defined by P(D)/Q(D) (where P(D) and Q(D) are the polynomials of the convolutional encoder) with nonzero constant term. This last assumption, strong but realistic, allows us to build a method and an algorithm capable of recovering both the interleaver and the polynomials P(D) and Q(D) of the convolutional code. This algorithm is very fast but reaches its limits as the error rate grows. Moreover, our assumption makes the reconstruction of punctured turbo-codes impossible without modifying the algorithm.
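
To show the object being reconstructed (not the recognition algorithm itself): a rate-1/2 feed-forward convolutional encoder with the common generators (7, 5) in octal, sketched below in Python. The parameter names are ours.

    # Rate-1/2 convolutional encoder: one input bit in, two output bits out.
    def conv_encode(bits, g=(0b111, 0b101), K=3):
        state, out = 0, []
        for b in bits:
            state = ((state << 1) | b) & ((1 << K) - 1)   # shift register
            for gen in g:                                  # one output per generator
                out.append(bin(state & gen).count("1") % 2)
        return out

    print(conv_encode([1, 0, 1, 1]))   # -> [1, 1, 1, 0, 0, 0, 0, 1]
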
40

Wong, Brenden. "On the computation of the probability of undetected error for linear block codes on the Gilbert channel." Thesis, University of British Columbia, 1991. http://hdl.handle.net/2429/30119.

Abstract:
An important measure of the performance of error detecting codes is the probability of undetected error. Extensive study on the subject has yielded results which allow for the computation of the probability of undetected error for many codes on the binary symmetric channel (BSC). However, little is known about code performance in more complicated channel models. The Gilbert channel is a two-state, three-parameter model with memory which simulates the effects of burst noise. In this thesis, we investigate methods to compute the probability of undetected error of binary linear block codes on this channel. We examine an approach to approximate code performance based on the P(m,n) distribution, which is the probability of m errors in a block of n bits, and the weight distribution of the code. For the Gilbert channel, P(m,n) can in principle be calculated from the channel parameters. In practice however, existing methodologies suffer from rather excessive computational requirements, particularly when n is larger than one thousand or so. We have developed an efficient method to calculate P(m,n) for reasonable channel parameters. This allows the probability of undetected error for many codes to be readily estimated. For certain channel and code parameters, the approximation method described above may not be sufficiently accurate. Exact analytical results are difficult to obtain, however, because unlike the BSC, the probability of a particular error pattern on the Gilbert channel depends not just on the number of 1's in the pattern. Nevertheless, by appropriately exploiting certain symmetries present on the Gilbert channel, we can acquire some useful results. We have derived the probability of undetected error for the single parity check code. We have also obtained a formula for summing over a cyclic class of vectors and shown that reciprocal generator polynomials generate cyclic codes which have the same probability of undetected error on the Gilbert channel. The Monte Carlo simulation technique is often used when exact analysis is difficult. In a simulation study of CRC codes, we are able to observe several interesting qualitative results with just a reasonable amount of computational effort. We find that as on the BSC, on the Gilbert channel the probability of undetected error does not always increase with worsening channel conditions. Also, the CRC-CCITT code appears to maintain its superiority in terms of error detection performance over the CRC-ANSI code on the Gilbert channel, and perhaps most significantly, for some ranges of channel parameters, the probability of undetected error estimated using BSC results with the effective bit error rate can be quite inaccurate.
Applied Science, Faculty of
Electrical and Computer Engineering, Department of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
41

Clarke, Lauren Elizabeth. "Interfacing the CFD Code MFiX with the PETSc Linear Solver Library to Achieve Reduced Computation Times." Thesis, The University of North Dakota, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10813583.

Full text
Abstract:

A computational bottleneck during the solution of multiphase formulations of the incompressible Navier-Stokes equations is often the implicit solution of the pressure-correction equation that results from operator-splitting methods. Since density is a coefficient in the pressure-correction equation, large variations or discontinuities among the phase densities greatly increase the condition number of the pressure-correction matrix and impede the convergence of the iterative methods employed in its solution. To alleviate this shortcoming, the open-source multiphase code MFiX is interfaced with the linear solver library PETSc. Through an appropriate mapping of matrix and vector data structures between the two software packages, access to a suite of robust, scalable solver options in PETSc is obtained. Verification of the implementation of MFiX-PETSc is demonstrated through predictions that are identical to those obtained from MFiX's native solvers for a simple heat conduction case with a well-known solution. After verifying the framework, several cases were tested with MFiX-PETSc to analyze the performance of various solver and preconditioner combinations. For a low-Reynolds-number flow over a cylinder, applying right-side Block Jacobi preconditioning to the BiCGSTAB iterative solver in MFiX-PETSc was 28-40% faster than MFiX's native solver at the finest mesh resolution. Similarly, the left-side Block Jacobi preconditioner in MFiX-PETSc was 27-46% faster for the same fine meshing. Further assessments of these preconditioning options were then made for a fluidized bed problem involving different bed geometries, convergence tolerances, material densities, and inlet velocities. For a three-dimensional geometry with uniform meshing, native MFiX was faster than MFiX-PETSc for each simulation. The difference in speed was minimized when a low-density fluidization material (polypropylene) was used along with a higher-order discretization scheme. With these settings, MFiX-PETSc was only 2-6% slower than native MFiX when right-side Block Jacobi preconditioning was employed. The fluidized bed was then represented by a two-dimensional geometry with fine meshing towards the center. When this bed was filled with glass beads, right-side Block Jacobi was 28% faster than MFiX's native solver, which was the largest speedup encountered throughout this 2D case.
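The thesis couples Fortran-based MFiX to PETSc's C API; purely as an illustration of the solver configuration it describes (BiCGSTAB with right-side Block Jacobi preconditioning), an analogous setup through the petsc4py bindings might look like the sketch below, where the 1D Poisson-like test matrix is our own stand-in:

    import numpy as np
    from petsc4py import PETSc

    n = 100
    A = PETSc.Mat().createAIJ([n, n])           # sparse matrix in AIJ (CSR) format
    A.setUp()
    for i in range(n):                          # simple 1D Poisson-like stencil
        A.setValue(i, i, 2.0)
        if i > 0:
            A.setValue(i, i - 1, -1.0)
        if i < n - 1:
            A.setValue(i, i + 1, -1.0)
    A.assemble()

    b = A.createVecLeft(); b.set(1.0)           # right-hand side
    x = A.createVecRight()                      # solution vector

    ksp = PETSc.KSP().create()
    ksp.setOperators(A)
    ksp.setType(PETSc.KSP.Type.BCGS)            # BiCGSTAB iterative solver
    ksp.getPC().setType(PETSc.PC.Type.BJACOBI)  # Block Jacobi preconditioner
    ksp.setPCSide(PETSc.PC.Side.RIGHT)          # right-side preconditioning
    ksp.setTolerances(rtol=1e-8)
    ksp.solve(b, x)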

APA, Harvard, Vancouver, ISO, and other styles
42

Lienkaemper, Caitlin. "Toric Ideals, Polytopes, and Convex Neural Codes." Scholarship @ Claremont, 2017. http://scholarship.claremont.edu/hmc_theses/106.

Full text
Abstract:
How does the brain encode the spatial structure of the external world? A partial answer comes through place cells, hippocampal neurons which become associated to approximately convex regions of the world known as their place fields. When an organism is in the place field of some place cell, that cell will fire at an increased rate. A neural code describes the set of firing patterns observed in a set of neurons in terms of which subsets fire together and which do not. If the neurons the code describes are place cells, then the neural code gives some information about the relationships between the place fields; for instance, two place fields intersect if and only if their associated place cells fire together. Since place fields are convex, we are interested in determining which neural codes can be realized with convex sets and in finding convex sets which generate a given neural code when taken as place fields. To this end, we study algebraic invariants associated to neural codes, such as neural ideals and toric ideals. We work with a special class of convex codes, known as inductively pierced codes, and seek to identify these codes through the Gröbner bases of their toric ideals.
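As a toy illustration of how a neural code arises from place fields (our example, with 1D intervals standing in for convex regions; the field endpoints are hypothetical):

    fields = {1: (0.0, 3.0), 2: (2.0, 5.0), 3: (4.0, 7.0)}  # hypothetical place fields

    def codeword(x):
        """The set of cells whose place field contains position x."""
        return frozenset(i for i, (lo, hi) in fields.items() if lo <= x < hi)

    # Sample the track finely; the code is the set of firing patterns seen.
    code = {codeword(k * 0.01) for k in range(800)}
    print(sorted(map(sorted, code)))
    # [[], [1], [1, 2], [2], [2, 3], [3]] -- the pattern {1, 3} never
    # appears because fields 1 and 3 are disjoint, so those two cells
    # never fire together, exactly as the abstract describes.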
APA, Harvard, Vancouver, ISO, and other styles
43

Choudhary, Aniruddha. "Verification of Compressible and Incompressible Computational Fluid Dynamics Codes and Residual-based Mesh Adaptation." Diss., Virginia Tech, 2015. http://hdl.handle.net/10919/51169.

Full text
Abstract:
Code verification is the process of ensuring, to the degree possible, that there are no algorithm deficiencies and coding mistakes (bugs) in a scientific computing simulation. In this work, techniques are presented for performing code verification of boundary conditions commonly used in compressible and incompressible Computational Fluid Dynamics (CFD) codes. Using a compressible CFD code, this study assesses the subsonic inflow (isentropic and fixed-mass), subsonic outflow, supersonic outflow, no-slip wall (adiabatic and isothermal), and inviscid slip-wall boundary conditions. The use of simplified curved surfaces is proposed for easier generation of manufactured solutions during the verification of certain boundary conditions involving many constraints. To perform rigorous code verification, general grids with mixed cell types at the verified boundary are used. A novel approach is introduced to determine manufactured solutions for boundary condition verification when the velocity field is constrained to be divergence-free during the simulation in an incompressible CFD code. Order-of-accuracy testing using the Method of Manufactured Solutions (MMS) is employed here for code verification of the major components of an open-source multiphase flow code, MFIX. The presence of two-phase governing equations and a modified SIMPLE-based algorithm requiring divergence-free flows makes the selection of manufactured solutions more involved than for single-phase, compressible flows. Code verification is performed here on 2D and 3D, uniform and stretched meshes for incompressible, steady and unsteady, single-phase and two-phase flows using the two-fluid model of MFIX. In a CFD simulation, truncation error (TE) is the difference between the continuous governing equation and its discrete approximation. Since TE can be shown to be the local source term for the discretization error, TE is proposed as the criterion for determining which regions of the computational mesh should be refined or coarsened. For mesh modification, an error equidistribution strategy is employed to perform r-refinement (i.e., mesh node relocation). This technique is applied to 1D and 2D inviscid flow problems where the exact (i.e., analytic) solution is available. For mesh adaptation based upon TE, about an order of magnitude improvement in discretization error levels is observed when compared with the uniform mesh.
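The order-of-accuracy test at the heart of MMS-based code verification reduces to a simple computation. The sketch below uses hypothetical error values for a formally second-order scheme; it is generic, not specific to MFIX:

    import math

    def observed_order(e_coarse, e_fine, r):
        """Observed order p = ln(E_coarse / E_fine) / ln(r) for refinement factor r."""
        return math.log(e_coarse / e_fine) / math.log(r)

    # Hypothetical global discretization errors from three systematically
    # refined meshes, refinement factor r = 2:
    errors = [4.1e-3, 1.05e-3, 2.6e-4]
    for e_c, e_f in zip(errors, errors[1:]):
        print(f"observed order ~ {observed_order(e_c, e_f, 2.0):.2f}")
    # A formally second-order scheme should show p -> 2 as the mesh is refined;
    # a lower plateau signals an algorithm deficiency or coding mistake.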
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
44

Parks, Benjamin T. "Using the FLUENT computational fluid dynamics code to model the NACOK corrosion test." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/38249.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Nuclear Engineering, 2004.
Includes bibliographical references (p. 89-90).
As a part of advancing nuclear technology, computational fluid dynamics (CFD) analysis offers safer and lower-cost results relative to experimental work. Its use as a safety analysis tool is gaining much broader acceptance due to increasing experimental verification. FLUENT is a CFD code that offers extensive benchmarks and validations, and is widely accepted by the nuclear engineering community. The Modular Pebble Bed Reactor (MPBR) is among the advanced reactor designs proposed for future deployment. As such, it offers an excellent opportunity to illustrate the possible contributions of computational modeling to the reactor design process. Because the MPBR contains graphite structures and fuel elements, there is significant concern about graphite heating and chemical reactions during an air ingress accident. Some MPBR-relevant experimental safety assurances have been benchmarked using FLUENT. Currently, there is a planned experiment involving natural convection flow and graphite corrosion at the Forschungszentrum Jülich in Jülich, Germany. Thus far, only a preliminary test of this experiment has been performed. This test has been called the NACOK Corrosion Test, and this thesis presents a model of the test configuration.
A methodology is developed by which the flow, heat transfer, and corrosion effects of the test are efficiently modeled using the FLUENT software. An adequate grid resolution is determined that allows computationally efficient analysis. Steady-state and transient flow and heat transfer effects are modeled, and separate models contain steady-state and transient chemistry effects. The steady-state flow and heat transfer model was used for the grid refinement study; it was determined that a fully structured, 4,508-element grid was sufficient for analysis of this experiment. The transient flow and heat transfer model confirmed the steady-state analysis, producing similar results. An effort was made to couple a density-driven pressure-drop correlation to this model; however, because of the requirement of a general pressure-drop specification for the entire model and the temperature dependence of the correlation, an over-specification resulted that caused the solution to diverge. Because the ambient air that causes the buoyancy-induced pressure drop is not modeled, it was determined that specifying a general pressure drop for the entire model was a sufficient and relevant assumption.
The steady-state chemistry model was used to perform sensitivity studies by varying the flow rate, graphite temperature, and stoichiometry. Increasing the flow rate results in quicker graphite consumption, although the oxygen exits the system less depleted. Increasing the graphite temperature seems to have little effect on the chemistry of the model. Varying the stoichiometry indicated that more heat is released by CO2 production. Overall, it was determined that most of the graphite chemistry occurs in the reflector region of the model. A transient chemistry model was also created, but because mass transport effects were not modeled, the solution tended to steady-state operation, rather than eventual graphite cooling, which would be the expected result of this test in the laboratory setting.
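The observation that CO2 production releases more heat can be checked from standard enthalpies of formation; the following back-of-the-envelope sketch is ours, not part of the thesis:

    # Standard enthalpies of formation at 298 K, in kJ/mol.
    dHf = {"CO2": -393.5, "CO": -110.5, "O2": 0.0, "C": 0.0}

    dH_to_CO2 = dHf["CO2"] - (dHf["C"] + dHf["O2"])        # C + O2 -> CO2
    dH_to_CO = dHf["CO"] - (dHf["C"] + 0.5 * dHf["O2"])    # C + 1/2 O2 -> CO
    print(f"C + O2    -> CO2: {dH_to_CO2:+.1f} kJ/mol")    # -393.5 kJ/mol
    print(f"C + 1/2 O2 -> CO: {dH_to_CO:+.1f} kJ/mol")     # -110.5 kJ/mol
    # Full oxidation to CO2 releases roughly 3.5x the heat per mole of
    # graphite, consistent with the sensitivity study's observation.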
by Benjamin T. Parks.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
45

Hardman, Robert R. "Infrared imaging: a proposed validation technique for computational fluid dynamics codes used in STOVL applications." Thesis, This resource online, 1990. http://scholar.lib.vt.edu/theses/available/etd-05022009-040418/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Ingraham, Daniel. "External Verification Analysis: A Code-Independent Approach to Verifying Unsteady Partial Differential Equation Solvers." University of Toledo / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1430491745.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Rennó, Costa César. "The Hippocampus code : a computational study of the structure and function of the hippocampus." Doctoral thesis, Universitat Pompeu Fabra, 2012. http://hdl.handle.net/10803/94196.

Full text
Abstract:
There is no consensual understanding of what the activity of hippocampus neurons represents. While experiments with humans foster a dominant view of an episodic memory system, experiments with rodents promote its role as a spatial cognitive system. Although there is abundant evidence pointing to an overlap between these two theories, the dissociation is sustained by conflicting physiological data. This thesis proposes that the functional role of the hippocampus should be analyzed in terms of its structure and function rather than by the correlation of neuronal activity and behavioral performance. The identification of the hippocampus code, i.e. the set of computational principles underlying the input-output transformations of neural activity, might ultimately provide a unifying understanding of its role. In this thesis we present a theoretical model that quantitatively describes and interprets the selectivity of regions of the hippocampus to spatial and non-spatial variables observed in experiments with rats. The results suggest that the multiple aspects of memory expressed in human and rodent data are derived from similar principles. This approach suggests new principles for memory, pattern completion and plasticity. In addition, by creating a causal tie between the neural circuitry and behavior through a robotic control framework, we show that the conjunctive nature of neural selectivity observed in the hippocampus is needed for effective problem solving in real-world tasks such as foraging. Altogether, these results advance the concept that the hippocampal code is generic to the different aspects of memory highlighted in the literature.
APA, Harvard, Vancouver, ISO, and other styles
48

Rose, Isaac D. "Aerodynamic Modeling of an Unmanned Aerial Vehicle Using a Computational Fluid Dynamics Prediction Code." Ohio University / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1237481460.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Shin, Sangmook. "Reynolds-Averaged Navier-Stokes Computation of Tip Clearance Flow in a Compressor Cascade Using an Unstructured Grid." Diss., Virginia Tech, 2001. http://hdl.handle.net/10919/28947.

Full text
Abstract:
A three-dimensional unstructured incompressible RANS code has been developed using artificial compressibility and the Spalart-Allmaras eddy viscosity model. A node-based finite volume method is used in which all flow variables are defined at the vertices of tetrahedrons in an unstructured grid. The inviscid fluxes are computed using Roe's flux-difference splitting method, and higher-order accuracy is attained by data reconstruction based on Taylor series expansion. The Gauss theorem is used to formulate the necessary gradients. For time integration, an implicit scheme based on the linearized Euler backward method is used. A tetrahedral unstructured grid generation code has also been developed and applied to the tip clearance flow in a highly staggered cascade. Surface grids are first generated in the flow passage and blade tip by using several triangulation methods including Delaunay triangulation, the advancing front method, and the advancing layer method. Then the whole computational domain, including the tip gap region, is filled with prisms using the surface grids. Each prism is divided into three tetrahedrons. To accomplish this division in a consistent manner, a connectivity pattern is assigned to each triangle in the surface grids. A new algorithm is devised to assign the connectivity pattern without reference to the particular method of triangulation. This technique offers great flexibility in surface grid generation. The code has been validated by comparisons with available computational and experimental results for several test cases: inviscid flow around a NACA section, laminar and turbulent flow over a flat plate, turbulent flow through a double-circular-arc cascade, and laminar flow through a square duct with a 90° bend. For the laminar flat plate case, the velocity profile and skin friction coefficient are in excellent agreement with the Blasius solution. For the turbulent flat plate case, velocity profiles are in full agreement with the law of the wall up to a Reynolds number of 1.0E8; however, the skin friction coefficient is under-predicted by about 10% in comparison with the empirical formula. Blade loading for the two-dimensional circular arc cascade is also compared with experiments. The results obtained with the experimental inflow angle (51.5°) show some discrepancies at the trailing edge and severely under-predict the suction peak at the leading edge. These discrepancies are completely remedied if the inflow angle is increased to 53.5°. The code is also capable of predicting the secondary flow in the square duct with a 90° bend, and the velocity profiles are in good agreement with measurements and published Navier-Stokes computations. Finally, the code is applied to a linear cascade that has a GE rotor B section with tip clearance and a high stagger angle of 56.9°. The overall structure of the tip clearance flow is well predicted. Loss of loading due to tip leakage flow and reloading due to the tip leakage vortex are presented. On the end wall, the separation line of the tip leakage vortex and the reattachment line of the passage vortex are identified. The location of the tip leakage vortex in the passage agrees very well with oil flow visualization. A separation bubble on the blade tip is also predicted. Mean streamwise velocity contours and cross-sectional velocity vectors are compared with experimental results in the near wake, and good agreement is observed.
It is concluded that the Spalart-Allmaras turbulence model is adequate for this type of flow field except at locations where the tip leakage vortex of one blade interacts with the wake of a following blade. This situation may prevail for blades with longer span and/or in the far wake. Prediction of such an interaction presents a challenge to RANS computations. The effects of blade span on the flow structure have also been investigated. Two cascades with blades of aspect ratios of 0.5 and 1.0 are considered. By comparing pressure distributions on the blade, it is shown that the aspect ratio has strong effects on the loading distribution, although the tip gap height is very small (0.016 chord). A grid convergence study has been carried out with three different grids for pressure distributions and limiting streamlines on the end wall.
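Roe's flux-difference splitting, cited above for the inviscid fluxes, is easiest to see in one dimension. This scalar Burgers-equation sketch (ours, not the thesis's 3D incompressible formulation) shows the central flux plus upwind dissipation structure:

    # Roe flux for u_t + f(u)_x = 0 with f(u) = u^2 / 2 (Burgers' equation).
    def roe_flux(uL, uR):
        f = lambda u: 0.5 * u * u
        # Roe-averaged wave speed: (f(uR) - f(uL)) / (uR - uL), which for
        # Burgers is (uL + uR) / 2; use f'(u) = u when the states coincide.
        a_hat = 0.5 * (uL + uR) if uL != uR else uL
        # Central average plus upwind dissipation scaled by |a_hat|.
        return 0.5 * (f(uL) + f(uR)) - 0.5 * abs(a_hat) * (uR - uL)

    print(roe_flux(1.0, 0.0))  # 0.5: the interface flux upwinds from the left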
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
50

Nawaf, Sameer Obaid. "Effect of Ancilla Losses on Fault-Tolerant Quantum Error Correction in the [[7,1,3]] Steane Code." OpenSIUC, 2013. https://opensiuc.lib.siu.edu/theses/1333.

Full text
Abstract:
Fault-tolerant quantum error correction is a procedure designed so that if one of the gates in the procedure fails, the failure causes at most one error in the output qubits of the encoded block. A quantum computer is based on the idea of two-state quantum systems (qubits). However, most physical systems are constructed from subspaces of more than two levels. Imperfect control and environmental interactions in these systems lead to leakage faults. Leakage errors are errors that couple the states inside a code subspace to states outside the code subspace. One example of a leakage fault is a loss error. Because the fault-tolerant procedure is designed to deal with Pauli errors, it may be unable to recognize a leakage fault; in that case, a single leakage fault might disrupt the fault-tolerant technique. In this thesis we investigate the effect of ancilla losses on fault-tolerant quantum error correction in the [[7,1,3]] Steane code. We prove that both the Shor and Steane methods remain fault tolerant when loss errors occur.
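The [[7,1,3]] Steane code is a CSS code built from the classical [7,4,3] Hamming code, whose parity-check matrix flags both bit-flip (X) and phase-flip (Z) errors. The sketch below (our illustration of the classical backbone only, not the thesis's fault-tolerance analysis) shows how a single flip is located by its syndrome:

    import numpy as np

    # Hamming [7,4,3] parity-check matrix: column j is j+1 written in binary.
    H = np.array([[0, 0, 0, 1, 1, 1, 1],
                  [0, 1, 1, 0, 0, 1, 1],
                  [1, 0, 1, 0, 1, 0, 1]])

    def syndrome(e):
        """Syndrome of an error pattern e over GF(2)."""
        return H @ e % 2

    e = np.zeros(7, dtype=int)
    e[4] = 1                                 # flip qubit 5 (index 4)
    s = syndrome(e)
    # The syndrome read as a binary number names the flipped position.
    print(s, "-> error at position", int("".join(map(str, s)), 2))  # 5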
APA, Harvard, Vancouver, ISO, and other styles