Dissertations on the topic "Data structures (Computer science)"

To see the other types of publications on this topic, follow the link: Data structures (Computer science).

Consult the top 50 dissertations for your research on the topic "Data structures (Computer science)".

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the citation style you need (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the academic publication as a PDF and read the online annotation of the work, when the relevant parameters are available in the metadata.

Browse dissertations from a wide range of disciplines and assemble your bibliography correctly.

1

Obiedat, Mohammad. „Incrementally Sorted Lattice Data Structures“. Thesis, The George Washington University, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3732474.

Annotation:

Data structures are vital entities that strongly impact the efficiency of several software applications. Compactness, predictable memory access patterns, and good temporal and spatial locality of the structure's operations are increasingly becoming essential factors in the selection of a data structure for a specific application. In general, the less data we store and move the better for efficiency and power consumption, especially in infrastructure software and applications for hand-held devices like smartphones. In this dissertation, we extensively study a data structure called the lattice data structure (LDS) that is as compact and suitable for memory hierarchies as the array, yet with a rich structure that enables devising procedures with better time bounds.

To match the optimal O(log N) time complexity of the searching operations of other structures, we provide a hybrid searching algorithm: the lattice is searched with the basic searching algorithm when its degree of sortedness is at most 0.9h, and with the jump searching algorithm when the degree of sortedness exceeds 0.9h. A sorting procedure that can be used, during system idle time, to incrementally increase the degree of sortedness of the lattice is given. We also provide randomized and parallel searching algorithms that can be used instead of the usual jump-and-walk searching algorithms.

A lattice can be represented by a one-dimensional array, where each cell is represented by one array element. The worst-case time complexity of the basic LDS operations and the average time complexity of some of the order-statistic operations are better than the corresponding time complexities of most other data structures' operations. This makes the LDS a good choice for memory-constrained systems, for systems where power consumption is a critical issue, and for real-time systems. A potential application of the LDS is to use it as an index structure for in-memory databases.
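To make the dispatch rule concrete, here is a minimal Python sketch. The one-dimensional array representation is taken from the abstract; the generic linear and jump searches below merely stand in for the thesis's basic and jump lattice searches, which are not spelled out here.

```python
import math

def basic_search(cells, key):
    # Linear scan; stands in for the thesis's basic lattice search (assumption).
    for i, v in enumerate(cells):
        if v == key:
            return i
    return -1

def jump_search(cells, key):
    # Classic jump search over the sorted array representation (assumption).
    n = len(cells)
    step = max(1, math.isqrt(n))
    prev = 0
    while prev < n and cells[min(prev + step, n) - 1] < key:
        prev += step
    for i in range(prev, min(prev + step, n)):
        if cells[i] == key:
            return i
    return -1

def hybrid_search(cells, key, sortedness, h):
    # Dispatch on the degree of sortedness relative to the 0.9h threshold.
    if sortedness <= 0.9 * h:
        return basic_search(cells, key)
    return jump_search(cells, key)
```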

2

Kabiri, Chimeh Mozhgan. „Data structures for SIMD logic simulation“. Thesis, University of Glasgow, 2016. http://theses.gla.ac.uk/7521/.

Annotation:
Due to the growth of design size and complexity, design verification is an important aspect of the logic circuit development process. The purpose of verification is to validate that the design meets the system requirements and specification. This is done by either functional or formal verification. The most popular approach to functional verification is the use of simulation-based techniques. Using models to replicate the behaviour of an actual system is called simulation. In this thesis, a software/data structure architecture without explicit locks is proposed to accelerate logic gate circuit simulation. We call this system ZSIM. The ZSIM software architecture simulator targets low-cost SIMD multi-core machines. Its performance is evaluated on the Intel Xeon Phi and 2 other machines (Intel Xeon and AMD Opteron). The aim of these experiments is to:
• Verify that the data structure used allows SIMD acceleration, particularly on machines with gather instructions (section 5.3.1).
• Verify that, on sufficiently large circuits, substantial gains could be made from multicore parallelism (section 5.3.2).
• Show that a simulator using this approach out-performs an existing commercial simulator on a standard workstation (section 5.3.3).
• Show that the performance on a cheap Xeon Phi card is competitive with results reported elsewhere on much more expensive super-computers (section 5.3.5).
To evaluate ZSIM, two types of test circuits were used:
1. Circuits from the IWLS benchmark suite [1], which allow direct comparison with other published studies of parallel simulators.
2. Circuits generated by a parametrised circuit synthesizer. The synthesizer used an algorithm that has been shown to generate circuits that are statistically representative of real logic circuits. The synthesizer allowed testing of a range of very large circuits, larger than the ones for which it was possible to obtain open source files.
The experimental results show that with SIMD acceleration and multicore, ZSIM gained a peak parallelisation factor of 300 on the Intel Xeon Phi and 11 on the Intel Xeon. With only SIMD enabled, ZSIM achieved a maximum parallelisation gain of 10 on the Intel Xeon Phi and 4 on the Intel Xeon. Furthermore, it was shown that this software architecture simulator running on a SIMD machine is much faster than, and can handle much bigger circuits than, a widely used commercial simulator (Xilinx) running on a workstation. The performance achieved by ZSIM was also compared with similar pre-existing work on logic simulation targeting GPUs and supercomputers. It was shown that the ZSIM simulator running on a Xeon Phi machine gives comparable simulation performance to the IBM Blue Gene supercomputer at very much lower cost. The experimental results have shown that the Xeon Phi is competitive with simulation on GPUs and allows the handling of much larger circuits than have been reported for GPU simulation. When targeting the Xeon Phi architecture, the automatic cache management of the Xeon Phi handles and manages the on-chip local store without any explicit mention of the local store in the architecture of the simulator itself. However, when targeting GPUs, explicit cache management in the program increases the complexity of the software architecture. Furthermore, one of the strongest points of the ZSIM simulator is its portability: the same code was tested on both AMD and Xeon Phi machines. The same architecture that performs efficiently on the Xeon Phi was ported to a 64-core NUMA AMD Opteron.
To conclude, the two main achievements are restated as follows: the primary achievement of this work was showing that the ZSIM architecture is faster than previously published logic simulators on low-cost platforms. The secondary achievement was the development of a synthetic testing suite that went beyond the scale range that was previously publicly available, based on prior work showing that the synthesis technique is valid.
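As a rough illustration of why a flat, index-based netlist enables SIMD acceleration on machines with gather instructions, the NumPy sketch below evaluates a whole level of two-input NAND gates in one vectorized step. The array names and two-input layout are assumptions for illustration, not ZSIM's actual data structure.

```python
import numpy as np

# Wire values, plus index arrays describing one level of two-input NAND gates.
values = np.array([0, 1, 1, 0, 0, 0], dtype=np.uint8)
fanin_a = np.array([0, 1], dtype=np.int64)   # first input wire of each gate
fanin_b = np.array([1, 2], dtype=np.int64)   # second input wire of each gate
out_wire = np.array([4, 5], dtype=np.int64)  # output wire of each gate

# One vectorized step: gather both inputs, apply NAND, scatter the results.
values[out_wire] = 1 - (values[fanin_a] & values[fanin_b])
print(values)  # -> [0 1 1 0 1 0]
```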
3

Eastep, Jonathan M. (Jonathan Michael). „Smart data structures : an online machine learning approach to multicore data structures“. Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/65967.

Annotation:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student submitted PDF version of thesis.
Includes bibliographical references (p. 175-180).
As multicores become prevalent, the complexity of programming is skyrocketing. One major difficulty is efficiently orchestrating collaboration among threads through shared data structures. Unfortunately, choosing and hand-tuning data structure algorithms to get good performance across a variety of machines and inputs is a herculean task to add to the fundamental difficulty of getting a parallel program correct. To help mitigate these complexities, this work develops a new class of parallel data structures called Smart Data Structures that leverage online machine learning to adapt themselves automatically. We prototype and evaluate an open source library of Smart Data Structures for common parallel programming needs and demonstrate significant improvements over the best existing algorithms under a variety of conditions. Our results indicate that learning is a promising technique for balancing and adapting to complex, time-varying tradeoffs and achieving the best performance available.
by Jonathan M. Eastep.
Ph.D.
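The abstract gives no implementation details, but the core idea of a data structure tuning itself with online learning can be sketched as below. The epsilon-greedy learner and the "choose an implementation" knob are illustrative assumptions, not the thesis's actual reinforcement-learning design.

```python
import random

class SelfTuningStructure:
    # Toy "smart data structure": an epsilon-greedy learner picks which of
    # several interchangeable implementations to use, based on observed
    # per-operation latency.
    def __init__(self, impls, eps=0.1):
        self.impls = impls                 # candidate implementations
        self.avg = [0.0] * len(impls)      # running mean latency per impl
        self.n = [0] * len(impls)
        self.eps = eps

    def choose(self):
        if random.random() < self.eps:     # explore occasionally
            return random.randrange(len(self.impls))
        return min(range(len(self.impls)), key=lambda i: self.avg[i])

    def record(self, i, latency):
        # Fold the observed latency into implementation i's running mean.
        self.n[i] += 1
        self.avg[i] += (latency - self.avg[i]) / self.n[i]
```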
4

Butts, Robert O. „Heterogeneous construction of spatial data structures“. Thesis, University of Colorado at Denver, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=1588178.

Annotation:

Linear spatial trees are typically constructed in two discrete, consecutive stages: calculating location codes, and sorting the spatial data according to the codes. Additionally, a GPU R-tree construction algorithm exists which likewise consists of sorting the spatial data and calculating nodes' bounding boxes. Current GPUs are approximately three orders of magnitude faster than CPUs for perfectly vectorizable problems. However, the best known GPU sorting algorithms only achieve 10-20 times speedup over sequential CPU algorithms. Both calculating location codes and bounding boxes are perfectly vectorizable problems. We thus investigate the construction of linear quadtrees, R-trees, and linear k-d trees using the GPU for location code and bounding box calculation, and parallel CPU algorithms for sorting. In this endeavor, we show how existing GPU linear quadtree and R-tree construction algorithms may be modified to be heterogeneous, and we develop a novel linear k-d tree construction algorithm which uses an existing parallel CPU quicksort partition algorithm. We implement these heterogeneous construction algorithms, and we show that heterogeneous construction of spatial data structures can approach the speeds of homogeneous GPU algorithms, while freeing the GPU to be used for better vectorizable problems.
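The two construction stages can be made concrete with Morton (Z-order) location codes, one common choice of code. This pure-Python sketch computes the codes (the perfectly vectorizable stage the thesis assigns to the GPU) and then sorts them (the stage assigned to parallel CPU algorithms):

```python
def part1by1(x):
    # Spread the low 16 bits of x apart so a second coordinate can interleave.
    x &= 0xFFFF
    x = (x | (x << 8)) & 0x00FF00FF
    x = (x | (x << 4)) & 0x0F0F0F0F
    x = (x | (x << 2)) & 0x33333333
    x = (x | (x << 1)) & 0x55555555
    return x

def morton2d(x, y):
    # Location code: interleave the bits of the two coordinates.
    return part1by1(x) | (part1by1(y) << 1)

points = [(3, 5), (1, 2), (7, 0)]
codes = sorted(morton2d(x, y) for x, y in points)  # stage 2: sort the codes
```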

5

Toussaint, Richard. „Data structures and operations for geographical information“. Thesis, McGill University, 1995. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=23945.

Annotation:
The topic of this thesis revolves around the organization of geographical information in permanent memory. Our premise is that a recognized and fully documented direct-access storage technique called Multidimensional Paging, or Multipaging, would provide a well-balanced storage structure for this category of data. Since multipaging uses a multidimensional perspective on the information to allocate it to pages in secondary storage, spatial data, which is fundamentally multidimensional in nature, should offer a suitable profile.
First, we attempt to evaluate the efficiency of multipaging on static files and to suggest possible modifications to the standard algorithm to better serve spatial data.
Our solution to this problem consists of compressing the pages that overflow. Because geographical information is often a representation of occurrences in Nature, we hypothesize that Fractal Geometry, which serves to formalize a mathematical description of such elements, could provide the theoretical background to derive an efficient fractal-based compression algorithm. An appreciable improvement is obtained by compressing the pages of the multipaged administrative-regions data that exceed their capacity: $\alpha = 0.7272$ and $\pi = 1.0$.
The outcome of these experiments led us to elaborate a mixed system based on two relatively different approaches: multipaging and fractal-based data compression. The first part consisted of implementing the standard static multipaging algorithm using a relational database management system named Relix. The other approach was developed using the C programming language to accommodate some particularities of the multipaged spatial data. The preliminary results were encouraging and allowed us to establish the parameters for a more formal implementation. They also brought out the limits of the compression method in view of the intended usage of the data. (Abstract shortened by UMI.)
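The page-addressing idea behind multipaging can be sketched briefly: each axis of the multidimensional space is partitioned independently, and the tuple of per-axis partition indices names the page. The boundaries below are invented for illustration; this is only the addressing scheme, not the thesis's overflow handling.

```python
import bisect

# Hypothetical axis boundaries partitioning (longitude, latitude) space.
x_bounds = [-60.0, 0.0, 60.0]
y_bounds = [-30.0, 30.0]

def page_of(x, y):
    # The pair of per-axis partition indices addresses a secondary-storage page.
    return (bisect.bisect(x_bounds, x), bisect.bisect(y_bounds, y))

print(page_of(12.5, -45.0))  # -> (2, 0)
```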
6

Eid, Ashraf. „Efficient associative data structures for bitemporal databases“. Thesis, University of Ottawa (Canada), 2002. http://hdl.handle.net/10393/6226.

Annotation:
Most applications require storing multiple versions of data and involve a lot of temporal semantics in their schema. This requires maintenance and querying of temporal relations. A bitemporal DBMS will simplify the development and maintenance of such applications by moving temporal support from the application into the DBMS engine. The success of such bitemporal DBMSs relies mainly on the availability of high-performance indices that handle update and search operations efficiently. A successful associative data structure (index) is one that can efficiently partition the space of the attributes that are used within the keys. Temporal attributes have unique characteristics and should support now-relative intervals. These intervals grow as time grows, and thus we need an index that can handle attributes with variable values. The proposed bitemporal index partitions the bitemporal space into four subspaces according to the end value of the temporal intervals. This results in separating those keys that have variable intervals from those that have fixed interval(s). In this thesis we have used an off-the-shelf index that successfully indexes spatial attributes. But instead of representing the two temporal dimensions as a rectangle, we have represented them as 4-dimensional points. This results in better partitioning of each subtree's space and in better search performance.
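A minimal sketch of the representation described above, with a sentinel standing in for now-relative interval ends; the sentinel choice and field layout are assumptions:

```python
NOW = float("inf")  # sentinel for now-relative (still growing) interval ends

def to_point(valid_start, valid_end, tx_start, tx_end):
    # A bitemporal key as a 4-dimensional point rather than a rectangle.
    return (valid_start, valid_end, tx_start, tx_end)

def subspace(key):
    # Partition keys into 4 subspaces by which interval ends are variable,
    # separating growing intervals from fixed ones.
    _, valid_end, _, tx_end = key
    return (valid_end == NOW, tx_end == NOW)

k = to_point(10, NOW, 12, NOW)
print(subspace(k))  # -> (True, True): both intervals still growing
```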
7

Zhu, Yingchun 1968. „Optimizing parallel programs with dynamic data structures“. Thesis, McGill University, 2000. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=36745.

Annotation:
Distributed memory parallel architectures support a memory model where some memory accesses are local, and thus inexpensive, while other memory accesses are remote, and potentially quite expensive. In order to achieve efficiency on such architectures, we need to reduce remote accesses. This is particularly challenging for applications that use dynamic data structures.
In this thesis, I present two compiler techniques to reduce the overhead of remote memory accesses for applications based on dynamic data structures: locality techniques and communication optimizations. Locality techniques include a static locality analysis, which statically estimates when an indirect reference via a pointer can be safely assumed to be a local access, and dynamic locality checks, which consist of runtime tests to identify local accesses. Communication techniques include: (1) code movement to issue remote reads earlier and writes later; (2) code transformations to replace repeated/redundant remote accesses with one access; and (3) transformations to block or pipeline a group of remote requests together. Both locality and communication techniques have been implemented and incorporated into our EARTH-McCAT compiler framework, and a series of experiments have been conducted to evaluate these techniques. The experimental results show that we are able to achieve up to 26% performance improvement with each technique alone, and up to 29% performance improvement when both techniques are applied together.
8

Karras, Panagiotis. „Data structures and algorithms for data representation in constrained environments“. Thesis, Click to view the E-thesis via HKUTO, 2007. http://sunzi.lib.hku.hk/hkuto/record/B38897647.

9

Jain, Jhilmil Cross James H. „User experience design and experimental evaluation of extensible and dynamic viewers for data structures“. Auburn, Ala., 2007. http://repo.lib.auburn.edu/2006%20Fall/Dissertations/JAIN_JHILMIL_3.pdf.

10

Pǎtraşcu, Mihai. „Lower bound techniques for data structures“. Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/45866.

Annotation:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008.
Includes bibliographical references (p. 135-143).
We describe new techniques for proving lower bounds on data-structure problems, with the following broad consequences:
* the first Ω(lg n) lower bound for any dynamic problem, improving on a bound that had been standing since 1989;
* for static data structures, the first separation between linear and polynomial space. Specifically, for some problems that have constant query time when polynomial space is allowed, we can show Ω(lg n / lg lg n) bounds when the space is O(n · polylog n).
Using these techniques, we analyze a variety of central data-structure problems, and obtain improved lower bounds for the following:
* the partial-sums problem (a fundamental application of augmented binary search trees);
* the predecessor problem (which is equivalent to IP lookup in Internet routers);
* dynamic trees and dynamic connectivity;
* orthogonal range stabbing;
* orthogonal range counting, and orthogonal range reporting;
* the partial match problem (searching with wild-cards);
* (1 + ε)-approximate near neighbor on the hypercube;
* approximate nearest neighbor in the l∞ metric.
Our new techniques lead to surprisingly non-technical proofs. For several problems, we obtain simpler proofs for bounds that were already known.
by Mihai Pǎtraşcu.
Ph.D.
11

Ohashi, Darin. „Cache Oblivious Data Structures“. Thesis, University of Waterloo, 2001. http://hdl.handle.net/10012/1060.

Annotation:
This thesis discusses cache oblivious data structures. These are structures which have good caching characteristics without knowing Z, the size of the cache, or L, the length of a cache line. Since the structures do not require these details for good performance, they are portable across caching systems. Another advantage of such structures is that the caching results hold for every level of cache within a multilevel cache. Two simple data structures are studied: the array used for binary search, and the linear list. As well as being cache oblivious, the structures presented in this thesis are space efficient, requiring little additional storage. We begin the discussion with a layout for a search tree within an array. This layout allows Searches to be performed in O(log n) time and in O(log n/log L) (the optimal number) cache misses. An algorithm for building this layout from a sorted array in linear time is given. One use for this layout is a heap-like implementation of the priority queue. This structure allows Inserts, Heapifies and ExtractMaxes in O(log n) time and O(log n/log L) cache misses. A priority queue using this layout can be built from an unsorted array in linear time. Besides the n spaces required to hold the data, this structure uses a constant amount of additional storage. The cache oblivious linear list allows scans of the list taking Θ(n) time and incurring Θ(n/L) (the optimal number) cache misses. The running time of insertions and deletions is not constant, however it is sub-polynomial. This structure requires ε·n additional storage, where ε is any constant greater than zero.
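The abstract does not spell out the array layout; the recursive van Emde Boas-style blocking below is one layout in this family that achieves an O(log n/log L) miss bound, shown purely as an illustrative sketch rather than the thesis's exact construction:

```python
def veb_order(root, height):
    # Return BFS indices (root node = 1, children of i are 2i and 2i+1) of a
    # perfect binary tree, listed in van Emde Boas layout order: the top half
    # of the tree first, then each bottom subtree stored contiguously.
    if height == 1:
        return [root]
    top_h = height // 2
    bot_h = height - top_h
    order = veb_order(root, top_h)
    first = root << (top_h - 1)              # leftmost leaf of the top tree
    for leaf in range(first, first + (1 << (top_h - 1))):
        order += veb_order(2 * leaf, bot_h)
        order += veb_order(2 * leaf + 1, bot_h)
    return order

# Store the key of BFS node veb_order(1, h)[i] at array slot i.
print(veb_order(1, 3))  # -> [1, 2, 4, 5, 3, 6, 7]
```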
12

Curtis, Ronald Sanger. „Data structure complexity metrics“. Buffalo, N.Y. : Dept. of Computer Science, State University of New York at Buffalo, 1994. http://www.cse.buffalo.edu/tech%2Dreports/94%2D39.ps.Z.

13

Kuper, Lindsey. „Lattice-based data structures for deterministic parallel and distributed programming“. Thesis, Indiana University, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3726443.

Annotation:

Deterministic-by-construction parallel programming models guarantee that programs have the same observable behavior on every run, promising freedom from bugs caused by schedule nondeterminism. To make that guarantee, though, they must sharply restrict sharing of state between parallel tasks, usually either by disallowing sharing entirely or by restricting it to one type of data structure, such as single-assignment locations.

I show that lattice-based data structures, or LVars, are the foundation for a guaranteed-deterministic parallel programming model that allows a more general form of sharing. LVars allow multiple assignments that are inflationary with respect to a given lattice. They ensure determinism by allowing only inflationary writes and "threshold" reads that block until a lower bound is reached. After presenting the basic LVars model, I extend it to support event handlers, which enable an event-driven programming style, and non-blocking "freezing" reads, resulting in a quasi-deterministic model in which programs behave deterministically modulo exceptions.

I demonstrate the viability of the LVars model with LVish, a Haskell library that provides a collection of lattice-based data structures, a work-stealing scheduler, and a monad in which LVar computations run. LVish leverages Haskell's type system to index such computations with effect levels to ensure that only certain LVar effects can occur, hence statically enforcing determinism or quasi-determinism. I present two case studies of parallelizing existing programs using LVish: a k-CFA control flow analysis, and a bioinformatics application for comparing phylogenetic trees.

Finally, I show how LVar-style threshold reads apply to the setting of convergent replicated data types (CvRDTs), which specify the behavior of eventually consistent replicated objects in a distributed system. I extend the CvRDT model to support deterministic, strongly consistent threshold queries. The technique generalizes to any lattice, and hence any CvRDT, and allows deterministic observations to be made of replicated objects before the replicas' states converge.
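A minimal sketch of the LVar discipline described in this abstract, using sets under union as the lattice: writes are joins (inflationary), and reads block on a threshold. The Python class shape is an assumption for illustration, not LVish's Haskell API.

```python
import threading

class LVar:
    # Minimal LVar: states form a join-semilattice (here, sets under union).
    def __init__(self):
        self._state = set()
        self._cv = threading.Condition()

    def put(self, elems):
        with self._cv:
            self._state |= set(elems)   # inflationary write (least upper bound)
            self._cv.notify_all()

    def get(self, threshold):
        # Threshold read: block until the state reaches the threshold, and
        # reveal only the threshold itself, keeping observations deterministic.
        with self._cv:
            self._cv.wait_for(lambda: threshold <= self._state)
            return frozenset(threshold)

lv = LVar()
threading.Thread(target=lambda: lv.put({"a"})).start()
print(lv.get({"a"}))  # waits for "a" regardless of thread schedule
```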

14

Miner, Andrew S. „Data structures for the analysis of large structured Markov models“. W&M ScholarWorks, 2000. https://scholarworks.wm.edu/etd/1539623985.

Annotation:
High-level modeling formalisms are increasingly popular tools for studying complex systems. Given a high-level model, we can automatically verify certain system properties or compute performance measures about the system. In the general case, measures must be computed using discrete-event simulations. In certain cases, exact numerical analysis is possible by constructing and analyzing the underlying stochastic process of the system, which is a continuous-time Markov chain (CTMC) in our case. Unfortunately, the number of states in the underlying CTMC can be extremely large, even if the high-level model is "small". In this thesis, we develop data structures and techniques that can tolerate these large numbers of states.
First, we present a multi-level data structure for storing the set of reachable states of a model. We then introduce the concept of event "locality", which considers the components of the model that an event may affect. We show how a state generation algorithm using our multi-level structure can exploit event locality to reduce CPU requirements.
Then, we present a symbolic generation technique based on our multi-level structure and our concept of event locality, in which operations are applied to sets of states. The extremely compact data structure and efficient manipulation routines we present allow for the examination of much larger systems than was previously possible.
The transition rate matrix of the underlying CTMC can be represented with Kronecker algebra under certain conditions. However, the use of Kronecker algebra introduces several sources of CPU overhead during numerical solution. We present data structures, including our new data structure called matrix diagrams, that can reduce this CPU overhead. Using our techniques, we can compute measures for large systems in a fraction of the time required by current state-of-the-art techniques.
Finally, we present a technique for approximating stationary measures using aggregations of the underlying CTMC. Our technique utilizes exact knowledge of the underlying CTMC using our compact data structure for the reachable states and a Kronecker representation for the transition rates. We prove that the approximation is exact for models possessing a product-form solution.
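The storage advantage of the Kronecker representation can be seen in a toy example: the joint generator of two independent two-state components is assembled from small local matrices and never needs to be stored explicitly. The rate values below are invented for illustration.

```python
import numpy as np

# Local transition-rate matrices of two independent two-state components.
A = np.array([[-1.0, 1.0], [2.0, -2.0]])
B = np.array([[-3.0, 3.0], [1.0, -1.0]])

# The 4x4 joint-system generator is the Kronecker sum of the local ones;
# only A and B (2x2 each) ever need to be kept in memory.
Q = np.kron(A, np.eye(2)) + np.kron(np.eye(2), B)
print(Q.shape)  # -> (4, 4)
```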
15

Chen, Jiawen (Jiawen Kevin). „Efficient data structures for piecewise-smooth video processing“. Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/66003.

Annotation:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 95-102).
A number of useful image and video processing techniques, ranging from low level operations such as denoising and detail enhancement to higher level methods such as object manipulation and special effects, rely on piecewise-smooth functions computed from the input data. In this thesis, we present two computationally efficient data structures for representing piecewise-smooth visual information and demonstrate how they can dramatically simplify and accelerate a variety of video processing algorithms. We start by introducing the bilateral grid, an image representation that explicitly accounts for intensity edges. By interpreting brightness values as Euclidean coordinates, the bilateral grid enables simple expressions for edge-aware filters. Smooth functions defined on the bilateral grid are piecewise-smooth in image space. Within this framework, we derive efficient reinterpretations of a number of edge-aware filters commonly used in computational photography as operations on the bilateral grid, including the bilateral filter, edge-aware scattered data interpolation, and local histogram equalization. We also show how these techniques can be easily parallelized onto modern graphics hardware for real-time processing of high definition video. The second data structure we introduce is the video mesh, designed as a flexible central data structure for general-purpose video editing. It represents objects in a video sequence as 2.5D "paper cutouts" and allows interactive editing of moving objects and modeling of depth, which enables 3D effects and post-exposure camera control. In our representation, we assume that motion and depth are piecewise-smooth, and encode them sparsely as a set of points tracked over time. The video mesh is a triangulation over this point set and per-pixel information is obtained by interpolation. To handle occlusions and detailed object boundaries, we rely on the user to rotoscope the scene at a sparse set of frames using spline curves. We introduce an algorithm to robustly and automatically cut the mesh into local layers with proper occlusion topology, and propagate the splines to the remaining frames. Object boundaries are refined with per-pixel alpha mattes. At its core, the video mesh is a collection of texture-mapped triangles, which we can edit and render interactively using graphics hardware. We demonstrate the effectiveness of our representation with special effects such as 3D viewpoint changes, object insertion, depth-of-field manipulation, and 2D to 3D video conversion.
by Jiawen Chen.
Ph.D.
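The splat/slice idea behind the bilateral grid, where brightness becomes a third grid coordinate, can be sketched compactly. This toy omits the blur stage between splatting and slicing, and the grid resolutions and parameter names are invented for illustration.

```python
import numpy as np

def bilateral_grid_splat_slice(img, s_sigma=8, r_sigma=0.1):
    # Toy bilateral grid: splat (x, y, intensity) into a 3D grid, then slice
    # back out. A real pipeline blurs the grid between these two steps.
    h, w = img.shape  # img: grayscale, values in [0, 1]
    gh, gw, gr = h // s_sigma + 2, w // s_sigma + 2, int(1 / r_sigma) + 2
    grid = np.zeros((gh, gw, gr))
    wsum = np.zeros((gh, gw, gr))
    ys, xs = np.mgrid[0:h, 0:w]
    gy, gx, gz = ys // s_sigma, xs // s_sigma, (img / r_sigma).astype(int)
    np.add.at(grid, (gy, gx, gz), img)   # splat values
    np.add.at(wsum, (gy, gx, gz), 1.0)   # splat weights
    # ... blur grid and wsum here in a real implementation ...
    return grid[gy, gx, gz] / np.maximum(wsum[gy, gx, gz], 1e-8)  # slice

out = bilateral_grid_splat_slice(np.random.rand(64, 64))
```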
16

Goudjil, Amar. „Data structures, binary search trees : a study of random Weyl trees“. Thesis, McGill University, 1998. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=21559.

Annotation:
This thesis covers the study of a particular class of binary search trees, the Weyl trees formed by consecutive insertion of the numbers {θ}, {2θ}, {3θ}, ..., where θ is an irrational number from (0, 1), and {x} denotes the fractional part of x. Various properties of the structure of these trees are explored and a relationship with the continued fraction expansion of θ is shown. Among these properties, an approximation of the height H_n of a Weyl tree with n nodes is given when θ is chosen at random and uniformly on (0, 1). The main result of this work is that in probability, H_n ∼ (12/π²) log n log log n.
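The construction is easy to reproduce: insert {iθ} for i = 1..n into a plain binary search tree and measure the height. In this sketch θ is the golden ratio, chosen only for illustration.

```python
import math

def weyl_tree_height(theta, n):
    # Insert {theta}, {2*theta}, ..., {n*theta} into an unbalanced BST,
    # where nodes are [key, left, right], and track the tree height.
    root, height = None, 0
    for i in range(1, n + 1):
        key = (i * theta) % 1.0
        if root is None:
            root, height = [key, None, None], 1
            continue
        node, depth = root, 1
        while True:
            child = 1 if key < node[0] else 2
            if node[child] is None:
                node[child] = [key, None, None]
                height = max(height, depth + 1)
                break
            node, depth = node[child], depth + 1
    return height

print(weyl_tree_height((math.sqrt(5) - 1) / 2, 10_000))
```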
17

Costa, Andre. „Analytic modelling of agent-based network routing algorithms“. Title page, contents and abstract only, 2002. http://web4.library.adelaide.edu.au/theses/09PH/09phc8373.pdf.

18

Dabdoub, Sebastien Alberto. „Finding linearization violations in lock-free concurrent data structures“. Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/85413.

Annotation:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references (page 31).
Finding bugs in lock-free concurrent programs is hard. This is due in part to the difficulty of reasoning about the correctness of concurrent algorithms and the timing-sensitive nature of concurrent programs. One of the most widely used tools for reasoning about the correctness of concurrent algorithms is the linearization property. This thesis presents a tool for automatic dynamic checking of concurrent programs under the Total-Store-Order (TSO) memory model and a methodology for finding linearization violations automatically with the tool.
by Sebastien Alberto Dabdoub.
M. Eng.
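The linearization property itself can be illustrated with a deliberately naive checker: an execution is accepted if some sequential ordering of its completed operations reproduces every observed return value. This sketch ignores real-time ordering constraints and is exponential in the history length, so it is only a statement of the property, not the thesis's TSO-aware tool.

```python
from itertools import permutations

def linearizable(history, init, apply_op):
    # history: list of (op, arg, observed_return) triples from one execution.
    for order in permutations(history):
        state, ok = init, True
        for op, arg, ret in order:
            state, out = apply_op(state, op, arg)
            if out != ret:
                ok = False
                break
        if ok:
            return True
    return False

def counter_op(state, op, arg):
    # A tiny sequential spec: a counter with increment and read.
    if op == "inc":
        return state + 1, None
    return state, state  # "read" returns the current value

# Two threads each incremented once; a concurrent read observed 1.
print(linearizable([("inc", None, None), ("inc", None, None),
                    ("read", None, 1)], 0, counter_op))  # -> True
```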
19

Lodolini, Lucia. „The representation of symmetric patterns using the Quadtree data structure /“. Online version of thesis, 1988. http://hdl.handle.net/1850/8402.

20

Mullins, Robert W. „Separating representation for translation of shared data in a heterogeneous computing environment /“. This resource online, 1993. http://scholar.lib.vt.edu/theses/available/etd-04272010-020125/.

21

Lee, Ka-hing, and 李家興. „The dictionary problem: theory and practice“. Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1996. http://hub.hku.hk/bib/B31234963.

22

Zee, Karen K. „Verification of full functional correctness for imperative linked data structures“. Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/58078.

Annotation:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 207-222).
We present the verification of full functional correctness for a collection of imperative linked data structures implemented in Java. A key technique that makes this verification possible is a novel, integrated proof language that we have developed within the context of the Jahob program verification system. Our proof language allows us to embed proof commands directly within the program, making it possible to reason about the behavior of the program in its original context. It also allows us to effectively leverage Jahob's integrated reasoning system. Unlike conventional program verification systems that rely on a single monolithic prover, Jahob includes interfaces to a diverse collection of specialized automated reasoning systems (automated theorem provers, decision procedures, and program analyses) that work together to prove the verification conditions that the system automatically generates. Our proof language enables the developer to direct the efforts of these automated reasoning systems to successfully verify properties that the system is unable to verify without guidance. Our specifications characterize the behavior of the data structures in terms of their abstract state, resulting in verified interfaces that can be used to reason about the behavior of the data structures without revealing the underlying representation. The results demonstrate the effectiveness of our proof language and integrated reasoning approach, and provide valuable insight into the specification and verification of imperative linked data structures.
by Karen K. Zee.
Ph.D.
23

Niemi, Timo. „Conversion of flat files and hierarchical data bases“. Tampere, Finland : University of Tampere, 1985. http://catalog.hathitrust.org/api/volumes/oclc/15674199.html.

24

McCallen, Scott J. „Mining Dynamic Structures in Complex Networks“. Kent State University / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=kent1204154279.

25

Zhang, Xin Iris, and 張欣. „Fast mining of spatial co-location patterns“. Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B30462708.

26

Yang, Lei. „Querying Graph Structured Data“. Case Western Reserve University School of Graduate Studies / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=case1410434109.

27

Daoud, Amjad M. „Efficient data structures for information retrieval“. Diss., Virginia Tech, 1993. http://hdl.handle.net/10919/40031.

28

Brown, Patrick R. „A paging scheme for pointer-based quadtrees“. Thesis, This resource online, 1992. http://scholar.lib.vt.edu/theses/available/etd-10062009-020024/.

29

宋永健 and Wing-kin Sung. „Fast labeled tree comparison via better matching algorithms“. Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1998. http://hub.hku.hk/bib/B31239316.

30

黎少斌 and Shiao-bun Lai. „Trading off time for space for the string matching problem“. Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1996. http://hub.hku.hk/bib/B31214216.

31

Sung, Wing-kin. „Fast labeled tree comparison via better matching algorithms /“. Hong Kong : University of Hong Kong, 1998. http://sunzi.lib.hku.hk/hkuto/record.jsp?B20229999.

32

Lai, Shiao-bun. „Trading off time for space for the string matching problem /“. Hong Kong : University of Hong Kong, 1996. http://sunzi.lib.hku.hk/hkuto/record.jsp?B18061795.

33

Van, den Hooff Jelle (Jelle August). „Fast bug finding in lock-free data structures with CB-DPOR“. Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/92058.

Annotation:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2014.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 39-40).
This thesis describes CB-DPOR, an algorithm for quickly finding bugs in lock-free data structures. CB-DPOR is a combination of the CHESS and DPOR model checking algorithms. CB-DPOR performs similarly to the concurrently developed preemption-bounded BPOR algorithm. CODEX is a tool for finding bugs in lock-free data structures. CODEX implements CB-DPOR, and this thesis demonstrates how to use CODEX to find bugs. This thesis describes new bugs in open-source lock-free data structures, and compares the performance of CB-DPOR with the earlier model checking algorithms CHESS, DPOR, and PCT. CB-DPOR finds bugs one to two orders of magnitude faster than earlier algorithms.
by Jelle van den Hooff.
M. Eng.
34

Demurjian, Steven Arthur. „The multi-lingual database system : a paradigm and test-bed for the investigation of data-model transformations and data-model semantics /“. The Ohio State University, 1987. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487324944214237.

35

John, Ajita. „Linearly Ordered Concurrent Data Structures on Hypercubes“. Thesis, University of North Texas, 1992. https://digital.library.unt.edu/ark:/67531/metadc501197/.

Annotation:
This thesis presents a simple method for the concurrent manipulation of linearly ordered data structures on hypercubes. The method is based on the existence of a pruned binomial search tree rooted at any arbitrary node of the binary hypercube. The tree spans any arbitrary sequence of n consecutive nodes containing the root, using a fan-out of at most ⌈log₂ n⌉ and a depth of ⌈log₂ n⌉ + 1. Search trees spanning non-overlapping processor lists are formed using only local information, and can be used concurrently without contention problems. Thus, they can be used for performing broadcast and merge operations simultaneously on sets with non-uniform sizes. Extensions to generalized and faulty hypercubes and applications to image processing algorithms and for m-way search are discussed.
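The communication pattern of such a tree can be illustrated with the classic binomial broadcast over n consecutive ranks. Note that this shows only the fan-out and depth behaviour; the thesis's hypercube-specific pruning is not reproduced here.

```python
def broadcast_edges(root, n):
    # Edges of a binomial broadcast tree spanning ranks root .. root+n-1:
    # in round k, every rank that already holds the message forwards it to
    # the rank 2^k positions away. Depth is ceil(log2 n); edges total n-1.
    edges, span = [], 1
    while span < n:
        for src in range(root, root + span):
            dst = src + span
            if dst < root + n:
                edges.append((src, dst))
        span *= 2
    return edges

print(broadcast_edges(0, 6))
# -> [(0, 1), (0, 2), (1, 3), (0, 4), (1, 5)]
```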
36

Sheng, James Min. „Efficient geographic information systems : data structures, Boolean operations and concurrency control /“. Online version of thesis, 1990. http://hdl.handle.net/1850/10594.

37

Sundaravadivelu, Rathinasabapathy. „Interoperability between heterogeneous and distributed biodiversity data sources in structured data networks“. Thesis, Cardiff University, 2010. http://orca.cf.ac.uk/18086/.

Annotation:
The extensive capturing of biodiversity data, stored in heterogeneous information systems that are accessible on the internet across the globe, has created many interoperability problems. One is that the data providers are independent of one another and may run systems which were developed on different platforms at different times, using different software products, to respond to different information needs. A second arises from the data modelling used to convert real-world data into a computerised data structure, which is not governed by a universal standard. Most importantly, interoperation between these disparate data sources is needed to get accurate and useful information for further analysis and decision making.
The software representation of a universal or single data definition structure for depicting a biodiversity entity is ideal. But this is not necessarily possible when integrating data from independently developed systems. The different perspectives on the real-world entity taken by independent modelling teams will result in different terminologies, definitions and representations of attributes and operations for the same real-world entity.
The research in this thesis is concerned with designing and developing an interoperable, flexible framework that allows data integration between various distributed and heterogeneous biodiversity data sources that adopt XML standards for data communication. In particular, the problems of scope and representational heterogeneity among the various XML data schemas are addressed.
To demonstrate this research, a prototype system called BUFFIE (Biodiversity Users' Flexible Framework for Interoperability Experiments) was designed using a hybrid of object-oriented and functional design principles. This system accepts the query information from the user in a web form and designs an XML query. This request query is enriched and made more specific to data providers using the data provider information stored in a repository. These requests are sent to the different heterogeneous data resources across the internet using the HTTP protocol. The responses received are in varied XML formats, which are integrated using knowledge mapping rules defined in XSLT and XML. The XML mappings are derived from a biodiversity domain knowledgebase defined for schema mappings of different data exchange protocols. The integrated results are presented to users or client programs for further analysis.
The main results of this thesis are: (1) a framework model that allows interoperation between the heterogeneous data source systems; (2) enriched querying, which improves the accuracy of responses by finding the correct information existing among autonomous, distributed and heterogeneous data resources; and (3) a methodology that provides a foundation for extensibility, as any new network data standards in XML can be added to the existing protocols.
The presented approach shows that (1) semi-automated mapping and integration of datasets from heterogeneous and autonomous data providers is feasible, and (2) query enriching and integrating the data allows the querying and harvesting of useful data from various data providers for helpful analysis.
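The schema-mapping step can be pictured with a toy version of the knowledge mapping rules: responses from two providers with different tags are rewritten into one common schema. The provider names, tags, and the use of a Python dict in place of XSLT are all illustrative assumptions.

```python
import xml.etree.ElementTree as ET

# Hypothetical tag mappings from two providers' schemas to a common one.
MAPPINGS = {
    "providerA": {"sciName": "scientific_name", "loc": "locality"},
    "providerB": {"taxon": "scientific_name", "site": "locality"},
}

def to_common(provider, xml_text):
    # Rewrite a provider-specific record into the common schema.
    rec = ET.fromstring(xml_text)
    out = ET.Element("record")
    for child in rec:
        common = MAPPINGS[provider].get(child.tag)
        if common is not None:
            ET.SubElement(out, common).text = child.text
    return out

a = to_common("providerA",
              "<rec><sciName>Quercus robur</sciName><loc>Cardiff</loc></rec>")
print(ET.tostring(a).decode())
```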
38

Brisson, Erik. „Representation of d-dimensional geometric objects /“. Thesis, Connect to this title online; UW restricted, 1990. http://hdl.handle.net/1773/6903.

39

Mak, Vivian. „Algorithms for proximity problems in the presence of obstacles /“. Hong Kong : University of Hong Kong, 1999. http://sunzi.lib.hku.hk/hkuto/record.jsp?B21414944.

40

Kim, Deokhwan. „Verification of semantic commutativity conditions and inverse operations on linked data structures“. Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/68502.

Annotation:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 57-61).
We present a new technique for verifying commutativity conditions, which are logical formulas that characterize when operations commute. Because our technique reasons with the abstract state of verified linked data structure implementations, it can verify commuting operations that produce semantically equivalent (but not necessarily identical) data structure states in different execution orders. We have used this technique to verify sound and complete commutativity conditions for all pairs of operations on a collection of linked data structure implementations, including data structures that export a set interface (ListSet and HashSet) as well as data structures that export a map interface (AssociationList, HashTable, and ArrayList). This effort involved the specification and verification of 765 commutativity conditions. Many speculative parallel systems need to undo the effects of speculatively executed operations. Inverse operations, which undo these effects, are often more efficient than alternate approaches (such as saving and restoring data structure state). We present a new technique for verifying such inverse operations. We have specified and verified, for all of our linked data structure implementations, an inverse operation for every operation that changes the data structure state. Together, the commutativity conditions and inverse operations provide a key resource that language designers, developers of program analysis systems, and implementors of software systems can draw on to build languages, program analyses, and systems with strong correctness guarantees.
by Deokhwan Kim.
S.M.
41

Mak, Vivian, and 麥慧芸. „Algorithms for proximity problems in the presence of obstacles“. Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1999. http://hub.hku.hk/bib/B29822749.

42

Rao, Ananth K. „The DFS distributed file system : design and implementation“. Online version of thesis, 1989. http://hdl.handle.net/1850/10500.

43

Mehdawi, Nader. „Monitoring for Underdetermined Underground Structures during Excavation Using Limited Sensor Data“. Master's thesis, University of Central Florida, 2013. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5670.

Annotation:
A realistic field monitoring application to evaluate the close-proximity tunneling effects of a new tunnel on an existing tunnel is presented. A blind source separation (BSS)-based monitoring framework was developed using sensor data collected from the existing tunnel while the new tunnel was excavated. The developed monitoring framework is particularly useful for analyzing systems that are underdetermined because the sensor data are insufficient to establish explicit input force-output deformation relations. The analysis results show that the eigen-parameters obtained from the correlation matrix of raw sensor data can be used as excellent indicators to assess tunnel structural behavior during excavation, with powerful visualization capability for tunnel lining deformation. Since the presented methodology is data-driven and not limited to a specific sensor type, it can be employed in various proximity excavation monitoring applications.
M.S.
Masters
Civil, Environmental, and Construction Engineering
Engineering and Computer Science
Civil Engineering; Structures and Geotechnical Engineering
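The indicator computation described in this abstract reduces to a few lines: form the correlation matrix of the raw sensor channels and track its eigen-parameters over time. The synthetic data below stands in for real tunnel measurements.

```python
import numpy as np

X = np.random.randn(200, 8)           # 200 time samples from 8 sensors (toy)
C = np.corrcoef(X, rowvar=False)      # 8x8 sensor correlation matrix
eigvals, eigvecs = np.linalg.eigh(C)  # eigen-parameters used as indicators
print(eigvals)  # shifts in these values would flag structural change
```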
44

Robson, R. „Data views for a programming environment“. Thesis, McGill University, 1988. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=75754.

Annotation:
A data structure editor is presented for use in an integrated, fragment-based programming environment. This editor employs high resolution computer graphics to present the user with an iconic representation of the internal storage of a running program.
The editor allows the creation, modification, and deletion of data structures. These abilities allow the user to quickly sketch data structures with which to test incomplete program fragments, alleviating the need for driver routines.
To keep the user cognizant of events inside his program, a technique for automated display management is presented allowing the user to keep the most important objects in the viewport at all times. A history facility permits the user to see the former values of all variables.
Execution controls are provided allowing the user to control the scope and speed of execution, manipulate frames on the run-time stack, set breakpoints, and profile the executing algorithm.
45

Wong, Ka Chun. „Optimal expected-case planar point location /“. View abstract or full-text, 2005. http://library.ust.hk/cgi/db/thesis.pl?COMP%202005%20WONG.

46

Malamatos, Theocharis. „Expected-case planar point location /“. View Abstract or Full-Text, 2002. http://library.ust.hk/cgi/db/thesis.pl?COMP%202002%20MALAMA.

47

Lai, Ka-ying. „Solving multiparty private matching problems using Bloom-filters“. Click to view the E-thesis via HKUTO, 2006. http://sunzi.lib.hku.hk/hkuto/record/B37854847.

48

Lai, Ka-ying, and 黎家盈. „Solving multiparty private matching problems using Bloom-filters“. Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2006. http://hub.hku.hk/bib/B37854847.

49

Lau, Ching Hin. „An I/O-efficient data structure for querying XML with inherited attributes /“. View abstract or full-text, 2009. http://library.ust.hk/cgi/db/thesis.pl?CSED%202009%20LAU.

50

Su, Wei. „Motif Mining On Structured And Semi-structured Biological Data“. Case Western Reserve University School of Graduate Studies / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=case1365089538.

