Click this link to see other types of publications on this topic: Distributed transaction systems.

Doctoral dissertations on the topic "Distributed transaction systems"

Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles

Select a source type:

Consult the 50 best doctoral dissertations on the topic "Distributed transaction systems".

An "Add to bibliography" button is available next to every work in the list. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a ".pdf" file and read its abstract online, whenever the relevant details are available in the work's metadata.

Browse doctoral dissertations from many different disciplines and build your bibliography accordingly.

1

Xia, Yu. "Logical timestamps in distributed transaction processing systems". Thesis, Massachusetts Institute of Technology, 2018. https://hdl.handle.net/1721.1/122877.

Full text source
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 73-79).
Distributed transactions are transactions with remote data access. During operations on remote data servers they usually suffer from high network latency (compared to the internal overhead), which lengthens the entire transaction execution time. This increases the probability of conflicting with other transactions, causing high abort rates and, in turn, poor performance. In this work, we constructed Sundial, a distributed concurrency control algorithm that applies logical timestamps seamlessly with a caching protocol and works in a hybrid fashion, combining an optimistic approach with lock-based schemes. Sundial tackles the inefficiency problem in two ways. First, Sundial decides the order of transactions on the fly: transactions get their commit timestamps according to their data access traces. Each data item in the database carries logical leases maintained by the system, where a lease corresponds to a version of the item. At any logical time point, only a single transaction holds the lease for a particular data item, so the lease holder does not have to worry about concurrent writers: in the logical timeline, a writer needs to acquire a new lease that is disjoint from the holder's. This lease information is used to calculate the logical commit time for transactions. Second, Sundial has a novel caching scheme that works together with logical leases. The scheme allows a local data server to automatically cache data from remote servers while preserving data coherence. We benchmarked Sundial along with state-of-the-art distributed transactional concurrency control protocols. On YCSB, Sundial outperforms the second-best protocol by 57% under high data access contention. On TPC-C, Sundial has a 34% improvement over the state-of-the-art candidate. Our caching scheme delivers performance gains comparable to hand-optimized data replication; with high access skew, it speeds up the workload by up to 4.6x.
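As a rough illustration of how logical leases can order transactions without synchronized clocks, the sketch below gives each data item a write timestamp and a read-lease end, and derives a transaction's commit timestamp from the items it touched. All class and method names here are invented for this example; this is not Sundial's actual code.

```python
# Toy model of logical-lease timestamp assignment (illustrative only).
class DataItem:
    def __init__(self):
        self.wts = 0  # logical time the current version was written
        self.rts = 0  # lease end: latest logical time a reader may rely on it

class Transaction:
    def __init__(self):
        self.commit_ts_lower = 0  # lower bound on the commit timestamp
        self.read_set = []
        self.write_set = []

    def read(self, item):
        # A read constrains the commit timestamp to be at least the
        # write time of the version being read.
        self.commit_ts_lower = max(self.commit_ts_lower, item.wts)
        self.read_set.append(item)

    def write(self, item):
        # A write must logically happen after every lease on the old
        # version has expired, so the leases stay disjoint.
        self.commit_ts_lower = max(self.commit_ts_lower, item.rts + 1)
        self.write_set.append(item)

    def commit(self):
        ts = self.commit_ts_lower
        for item in self.read_set:
            # Extend the read lease so the read is valid at time ts.
            item.rts = max(item.rts, ts)
        for item in self.write_set:
            item.wts = ts
            item.rts = ts
        return ts
```

A reader and a later writer on the same item thus commit at disjoint logical times without ever blocking each other.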
"This work was supported (in part) by the U.S. National Science Foundation (CCF-1438955)"
by Yu Xia.
S.M.
S.M. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
APA, Harvard, Vancouver, ISO, and other styles
2

Xie, Wanxia. "Supporting Distributed Transaction Processing Over Mobile and Heterogeneous Platforms". Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/14073.

Full text source
Abstract:
Recent advances in pervasive computing and peer-to-peer computing have opened up vast opportunities for developing collaborative applications. To benefit from these emerging technologies, there is a need for investigating techniques and tools that will allow development and deployment of these applications on mobile and heterogeneous platforms. To meet these challenging tasks, we need to address the typical characteristics of mobile peer-to-peer systems such as frequent disconnections, frequent network partitions, and peer heterogeneity. This research focuses on developing the necessary models, techniques and algorithms that will enable us to build and deploy collaborative applications in Internet-enabled, mobile peer-to-peer environments. This dissertation proposes a multi-state transaction model and develops a quality-aware transaction processing framework to incorporate quality of service with transaction processing. It proposes adaptive ACID properties and develops a quality specification language to associate a quality level with transactions. In addition, this research develops a probabilistic concurrency control mechanism and a group-based transaction commit protocol for mobile peer-to-peer systems that greatly reduce blocking in transactions and improve the transaction commit ratio. To the best of our knowledge, this is the first attempt to systematically support disconnection-tolerant and partition-tolerant transaction processing. This dissertation also develops a scalable directory service called PeerDS to support the above framework. It addresses the scalability and dynamism of the directory service from two aspects: peer-to-peer and push-pull hybrid interfaces. It also addresses peer heterogeneity and develops a new technique for load balancing in the peer-to-peer system.
This technique comprises an improved routing algorithm for virtualized P2P overlay networks and a generalized Top-K server selection algorithm for load balancing, which could be optimized based on multiple factors such as proximity and cost. The proposed push-pull hybrid interfaces greatly reduce the overhead of directory servers caused by frequent queries from directory clients. In order to further improve the scalability of the push interface, this dissertation also studies and evaluates different filter indexing schemes through which the interests of each update could be calculated very efficiently. This dissertation was developed in conjunction with the middleware called System on Mobile Devices (SyD).
APA, Harvard, Vancouver, ISO, and other styles
3

Dixon, Eric Richard. "Developing distributed applications with distributed heterogeneous databases". Thesis, Virginia Tech, 1993. http://hdl.handle.net/10919/42748.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
4

Andersson, Joachim, and Byggnings Johan Lindbom. "Reducing the load on transaction-intensive systems through distributed caching". Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-186676.

Full text source
Abstract:
Scania is an international truck, bus and engine manufacturer with a sales and service organization in more than 100 countries all over the globe (Scania, 2011). In 2011 alone, Scania delivered over 80,000 vehicles, a 26% increase over the previous year. The company continues to deliver more trucks each year while expanding to other areas of the world, which means that data traffic is going to increase remarkably in the transaction-intensive fleet management system (FMS). This increases the need for a scalable system that adds more sources to handle these requests in parallel. Distributed caching is one technique that can solve this issue: it makes applications and systems more scalable, and it can be used to reduce load on the underlying data sources. The purpose of this thesis is to evaluate whether or not distributed caching is a suitable technical solution for Scania FMS. The aim of the study is to identify scenarios in FMS where a distributed cache solution could be of use, and to test the performance of two distributed cache products while simulating these scenarios. The results from the tests are then used to evaluate the distributed cache products and to compare distributed caching performance to a single database. The products evaluated in this thesis are Alachisoft NCache and Microsoft AppFabric. The results from the performance tests show that NCache outperforms AppFabric in all aspects. In conclusion, distributed caching has been demonstrated to be a viable option when scaling out the system.
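The load-reduction idea the thesis evaluates can be illustrated with a minimal cache-aside sketch: reads are served from the cache when possible and fall through to the data source only on a miss. The class and counter below are invented for illustration and bear no relation to the NCache or AppFabric APIs.

```python
# Minimal cache-aside pattern (illustrative; not a distributed implementation).
class CacheAside:
    def __init__(self, db_lookup):
        self.cache = {}           # stands in for a distributed cache node
        self.db_lookup = db_lookup
        self.db_hits = 0          # counts how often the database is touched

    def get(self, key):
        if key in self.cache:     # cache hit: database load avoided
            return self.cache[key]
        self.db_hits += 1         # cache miss: fall through to the database
        value = self.db_lookup(key)
        self.cache[key] = value
        return value

    def invalidate(self, key):
        # Call on writes so stale values are not served from the cache.
        self.cache.pop(key, None)
```

Repeated reads of a hot key then cost one database lookup instead of many, which is the effect measured against the single-database baseline.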
APA, Harvard, Vancouver, ISO, and other styles
5

Hirve, Sachin. "On the Fault-tolerance and High Performance of Replicated Transactional Systems". Diss., Virginia Tech, 2015. http://hdl.handle.net/10919/56668.

Full text source
Abstract:
With the technological developments of the last few decades, there has been a notable shift in the way business and consumer transactions are conducted. These transactions are usually triggered over the internet, and transactional systems working in the background ensure that they are processed. The majority of these transactions nowadays fall into the Online Transaction Processing (OLTP) category, where low latency is a preferred characteristic. In addition to low latency, OLTP transaction systems also require high service continuity and dependability. Replication is a common technique that makes services dependable and therefore helps in providing reliability, availability and fault-tolerance. Deferred Update Replication (DUR) and Deferred Execution Replication (DER) are the two well-known transaction execution models for replicated transactional systems. Under DUR, a transaction is executed locally at one node before a global certification is invoked to resolve conflicts against other transactions running on remote nodes. DER, on the other hand, postpones transaction execution until agreement on a common order of transaction requests is reached. Both DUR and DER require a distributed ordering layer, which ensures a total order of transactions even in case of faults. In today's distributed transactional systems, performance is of paramount importance: any loss in performance, e.g., increased latency due to slow processing of client requests, may entail loss of revenue for businesses. The DUR model is a good candidate for transaction processing when conflicts among transactions are rare, but it can be detrimental for high-conflict workload profiles. The DER model is an attractive choice because its behaviour is independent of the characteristics of the workload, but trivial realizations of the model ultimately do not offer a good performance increase margin.
Indeed, transactions are executed sequentially, and the total order layer can be a serious bottleneck for latency and scalability. This dissertation proposes novel solutions and system optimizations to enhance the overall performance of replicated transactional systems. The first result presented is HiperTM, a DER-based transaction replication solution that alleviates the costs of the total order layer via speculative execution techniques. HiperTM exploits the time between the broadcast of a client request and the finalization of the order for that request to speculatively execute the request, so as to overlap replica coordination with transaction execution. HiperTM proposes two main components: OS-Paxos, a novel total order layer that delivers requests early and optimistically according to a tentative order, which is then either confirmed or rejected by a final total order; and SCC, a lightweight speculative concurrency control protocol that exploits the optimistic delivery of OS-Paxos to execute transactions in a speculative fashion. SCC still processes write transactions serially in order to minimize code instrumentation overheads, but it is able to parallelize the execution of read-only transactions thanks to its built-in object multiversioning scheme. The second contribution of this dissertation is X-DUR, a novel transaction replication system that addresses the high cost of local and remote aborts under high contention on shared objects in DUR-based approaches, which adversely affects performance. Exploiting knowledge of the client's transaction locality, X-DUR incorporates the benefits of the state machine approach to scale up the distributed performance of DUR systems. As its third contribution, this dissertation proposes Archie, a DER-based replicated transactional system that improves on HiperTM in two aspects.
First, Archie includes a highly optimized total order layer that combines optimistic delivery and batching, allowing a large amount of work to be anticipated before the total order is finalized. Second, its concurrency control is able to process transactions speculatively and with a higher degree of parallelism, although the order of the speculative commits still follows the order defined by the optimistic delivery. Both HiperTM and Archie perform well up to a certain number of nodes in the system, beyond which their performance is impacted by the limitations of a single-leader total-order layer. This motivates the design of Caesar, the fourth contribution of this dissertation, which is a transactional system based on a novel multi-leader partial order protocol. Caesar enforces a partial order on the execution of transactions according to their conflicts, letting non-conflicting transactions proceed in parallel without any synchronization during execution (e.g., no locks). As the last contribution, this dissertation presents Dexter, a replication framework that exploits the commonly observed phenomenon that not all read-only workloads require up-to-date data. It harnesses the application-specific freshness and content-based constraints of read-only transactions to achieve high scalability. Dexter services read-only requests according to the freshness guarantees specified by the application and routes the read-only workload accordingly in the system to achieve high performance and low latency. As a result, the Dexter framework also alleviates the interference between read-only and read-write requests, thereby helping to improve the performance of read-write request execution as well.
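The overlap between ordering and execution that HiperTM exploits can be sketched as follows: work is done speculatively in the tentative order, and each result is reused only where the final order confirms the speculation. The function and its list-based orders are illustrative simplifications, not the actual protocol.

```python
# Sketch of optimistic-delivery speculation (illustrative only).
def process(tentative_order, final_order, execute):
    """Execute transactions speculatively in the tentative order, then
    reuse each speculative result only if the final order confirms it."""
    # Speculative phase: runs while consensus on the final order completes.
    speculative = {txn: execute(txn) for txn in tentative_order}
    results = []
    for i, txn in enumerate(final_order):
        if i < len(tentative_order) and tentative_order[i] == txn:
            results.append(speculative[txn])  # speculation confirmed: reuse work
        else:
            results.append(execute(txn))      # order differed: re-execute
    return results
```

When the tentative and final orders agree (the common case under a stable leader), all execution cost is hidden behind the ordering latency.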
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
6

Chan, Kinson, and 陳傑信. "Distributed software transactional memory with clock validation on clusters". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2013. http://hub.hku.hk/bib/B5053404X.

Full text source
Abstract:
Within a decade, multicore processors emerged and revolutionised the world of computing. Nowadays, even a low-end computer comes with a multi-core processor and is capable of running multiple threads simultaneously. It has become impossible to extract the full computation power of such a computer with a single-threaded program. Meanwhile, writing multi-threaded software is daunting to many programmers, as the threads share data and involve complicated synchronisation techniques such as locks and condition variables. Software transactional memory is a promising alternative model in which programmers simply need to understand transactional consistency and segment code into transactions. Programming becomes exciting again, without the races, deadlocks and other issues that are common in lock-based paradigms. To pursue high throughput, performance-oriented computers have several multicore processors each. A processor's cache is not directly accessible by the cores of other processors, leading to non-uniform latency when threads share data. These computers no longer behave like classical symmetric multiprocessor computers. Although old programs continue to work, they do not necessarily benefit from the added cores and caches. Most software transactional memory implementations fall into this category: they rely on a centralised, shared meta-variable (such as a logical clock) in order to provide single-lock atomicity. On a computer with two or more multicore processors, this single shared meta-variable gets regularly updated by different processors, leading to a tremendous amount of cache contention; much time is spent on inter-processor cache invalidations rather than useful computation. Moreover, as computers with four or more processors are exponentially complex and expensive, people would prefer to solve sophisticated problems with several smaller computers whenever possible.
Supporting software transactional consistency across multiple computers is a rarely explored research area. Although there are similar, mature research topics such as distributed shared memory and distributed relational databases, these have remarkably different characteristics, so most of their implementation techniques and tricks are not applicable to the new kind of system. There are several existing distributed software transactional memory systems, but we feel there is much room for improvement. One crucial area is the conflict detection mechanism. Some of these systems make use of broadcast messages to commit transactions, which is certainly not scalable for large clusters. Others use directories to direct messages to the relevant nodes only, but they also keep per-node visible reader lists for invalidation; updating a shared reader list involves cache invalidations on processors, so reading shared data on such systems is more expensive compared to conventional low-cost invisible-reader validation systems. In this research, we aim to build a distributed software transactional memory system with distributed clock validation for conflict detection. As preparation, we first investigate issues such as concurrency control and conflict detection in single-node systems. Finally, we combine these techniques with a tailor-made cache coherence protocol that is differentiated from typical distributed shared memory.
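A single-node version of the clock-validation idea (in the style of TL2-like invisible-reader validation, which the thesis then distributes) can be sketched as follows. All names are invented, and the single global counter shown here is exactly the contention hot spot the abstract describes.

```python
# Invisible-reader clock validation, single-node sketch (illustrative only).
import itertools

class Aborted(Exception):
    """Raised when validation detects a conflicting concurrent commit."""

_clock = itertools.count(1)   # global version clock: the shared meta-variable

class TVar:
    def __init__(self, value):
        self.value, self.version = value, 0

class Txn:
    def __init__(self):
        self.rv = 0               # read version sampled at transaction start
        self.reads, self.writes = [], {}

    def begin(self):
        self.rv = next(_clock)

    def read(self, var):
        if var in self.writes:    # read-your-own-writes
            return self.writes[var]
        value = var.value
        if var.version > self.rv: # invisible validation: no shared reader list
            raise Aborted()
        self.reads.append(var)
        return value

    def write(self, var, value):
        self.writes[var] = value  # buffered until commit

    def commit(self):
        wv = next(_clock)         # each commit bumps the shared clock
        for var in self.reads:    # revalidate the read set at commit time
            if var.version > self.rv:
                raise Aborted()
        for var, value in self.writes.items():
            var.value, var.version = value, wv
```

Readers never write shared metadata, so reads stay cheap; the cost moves to the clock increments, which is why a distributed clock is attractive on clusters.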
Computer Science
Doctoral
Doctor of Philosophy
APA, Harvard, Vancouver, ISO, and other styles
7

Mena, Eduardo, and Arantza Illarramendi. "Ontology-based query processing for global information systems". Boston [u.a.]: Kluwer Acad. Publ, 2001. http://www.loc.gov/catdir/enhancements/fy0813/2001029621-d.html.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
8

Wu, Jiang. "CHECKPOINTING AND RECOVERY IN DISTRIBUTED AND DATABASE SYSTEMS". UKnowledge, 2011. http://uknowledge.uky.edu/cs_etds/2.

Full text source
Abstract:
A transaction-consistent global checkpoint of a database records a state of the database which reflects the effects of only completed transactions, and not the results of any partially executed transactions. This thesis establishes the necessary and sufficient conditions for a checkpoint of a data item (or the checkpoints of a set of data items) to be part of a transaction-consistent global checkpoint of the database. This result is useful for constructing transaction-consistent global checkpoints incrementally from the checkpoints of individual data items: by applying the condition, we can start from any useful checkpoint of any data item and then incrementally add checkpoints of other data items until we obtain a transaction-consistent global checkpoint of the database. The result can also help in designing non-intrusive checkpointing protocols for database systems. Based on the intuition gained from the development of the necessary and sufficient conditions, we also developed a non-intrusive, low-overhead checkpointing protocol for distributed database systems. Checkpointing and rollback recovery are also established techniques for achieving fault-tolerance in distributed systems. Communication-induced checkpointing algorithms allow processes involved in a distributed computation to take checkpoints independently, while at the same time forcing processes to take additional checkpoints so that each checkpoint can be part of a consistent global checkpoint. This thesis develops a low-overhead communication-induced checkpointing protocol and presents a performance evaluation of the protocol.
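The core requirement of transaction consistency can be illustrated with a toy check for "no transaction partially reflected" across a set of per-item checkpoints. The data representation below is invented and much simpler than the thesis's formal condition (which also accounts for dependencies between transactions).

```python
# Toy transaction-consistency check over per-item checkpoints (illustrative).
def is_transaction_consistent(checkpoints, writes_of):
    """checkpoints: {item: set of txn ids whose writes that item's
                     checkpoint reflects}
       writes_of:   {txn id: set of items the transaction wrote}
       Returns False iff some transaction's effects are only partially
       reflected across the checkpointed items."""
    for txn, items in writes_of.items():
        reflected = {item: txn in checkpoints[item]
                     for item in items if item in checkpoints}
        # Partially reflected: some checkpointed items include the
        # transaction's writes while others omit them.
        if any(reflected.values()) and not all(reflected.values()):
            return False
    return True
```

An incremental construction would add one item's checkpoint at a time, keeping only combinations for which this predicate stays true.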
APA, Harvard, Vancouver, ISO, and other styles
9

Blackshaw, Bruce Philip. "Migration of legacy OLTP architectures to distributed systems". Thesis, Queensland University of Technology, 1997. https://eprints.qut.edu.au/36839/1/36839_Blackshaw_1997.pdf.

Full text source
Abstract:
Mincom, a successful Australian software company, markets an enterprise product known as the Mincom Information Management System, or MIMS. MIMS is an integrated suite of modules covering materials, maintenance, financials, and human resources management. MIMS is an on-line transaction processing (OLTP) system, meaning it has special requirements in the areas of performance and scalability. MIMS consists of approximately 16,000,000 lines of code, most of which is written in COBOL. Its basic architecture is 3-tier client/server, utilising a database layer, an application logic layer, and a Graphical User Interface (GUI). While this architecture has proved successful, Mincom is looking to gradually evolve MIMS into a distributed architecture, with CORBA as the target distributed framework. The development of an enterprise distributed system is fraught with difficulties: key technical problems are not yet solved, and Mincom cannot afford the risk and cost involved in rewriting MIMS completely. The only viable approach is to gradually evolve MIMS into the desired architecture using a hybrid system that allows clients to access existing and new functionality. This thesis addresses the design and development of distributed systems, and the evolution of existing legacy systems into this architecture. It details the current MIMS architecture and explains some of its shortcomings. The desirable characteristics of a new system based on a distributed architecture such as CORBA are outlined. A case is established for a gradual migration of the current system via a hybrid system rather than a complete rewrite. Two experimental systems designed to investigate the proposed new architecture are discussed. The conclusion reached from the first, known as Genesis, is that the maturity of CORBA for enterprise development is not yet sufficient; 12-18 months are estimated to be required for the appropriate level of maturity to be reached.
The second system, EGEN, demonstrates how workflow can be integrated into a distributed system. An event-based workflow architecture is demonstrated, and it is explained how a workflow event server can be used to provide workflow services across a hybrid system. EGEN also demonstrates how a middleware gateway can be used to allow CORBA clients access to the functionality of the existing MIMS system. Finally, a proposed migration strategy for moving MIMS to a distributed architecture based on CORBA is outlined. While developed specifically for MIMS, this strategy is broadly applicable to the migration of any large 3-tier client/server system to a distributed architecture.
APA, Harvard, Vancouver, ISO, and other styles
10

Rocha, Tarcisio da. "Serviços de transação abertos para ambientes dinamicos". [s.n.], 2008. http://repositorio.unicamp.br/jspui/handle/REPOSIP/276015.

Full text source
Abstract:
Advisor: Maria Beatriz Felgar de Toledo
Thesis (doctorate) - Universidade Estadual de Campinas, Instituto de Computação
Summary (translated from the Portuguese): Transaction processing techniques have been of great importance for preserving correctness in several areas of computing. Thanks to functions such as guaranteeing data consistency, failure recovery, and concurrency control, transactions are considered appropriate building blocks for structuring reliable systems. However, developing transaction-support techniques for dynamic environments can be a complex task. The first obstacle is the dynamism itself: resource availability can vary unexpectedly, with two direct effects: high transaction abort rates and long delays in the execution of transactional tasks. The second obstacle is the growing flexibilization of the transaction concept, since the transactional requirements demanded by current applications are becoming more varied, going beyond the properties traditionally defined for a transaction. In this context, this thesis addresses the feasibility of open transaction services, that is, services whose structure and behaviour can be configured by application programmers as a means of meeting requirements specific to their application domains. As part of this study, a model was proposed that abstracts architectural elements such as jumpers, slots, and demultiplexers, which can be used to specify configuration points in transaction services. This model is implemented as a layer on top of an existing component model, so that transaction-service developers can rely on these open elements in addition to those offered by traditional component-based approaches. To confirm the benefits in usability, flexibility, and extensibility, this thesis presents two open transaction services specified on the basis of the proposed model.
The first service is part of an adaptable transaction platform for mobile computing environments. The second service is part of a system that provides dynamic adaptation of transaction commit protocols. According to the tests performed, the approach presented in this thesis gave these services the ability to meet the requirements of applications from different domains.
Abstract: Transaction processing techniques are considered important solutions for preserving correctness in several fields of computing. Due to functions such as failure recovery and concurrency control, transactions are considered appropriate building blocks for structuring reliable systems. Despite their advantages, developing transaction systems for dynamic environments is not an easy task. The first problem is dynamism: resource availability can vary unexpectedly, which can cause high transaction abort rates and significant delays in transaction operations. The second problem is the flexibilization of the transaction concept: transactional requirements are becoming more diversified and extrapolate the bounds of the traditional transactional properties. In this context, this thesis examines the practicability of open transaction services that can be configured by application programmers to meet specific requirements of different application domains. The thesis includes a model that abstracts architectural elements (slots, jumpers and demultiplexers) that can be used to specify configuration points in transaction services. To confirm its benefits in usability, flexibility and extensibility, the thesis presents two open transaction services that were specified based on the proposed model. The first service is part of an adaptable transaction platform for mobile computing environments. The second service is part of a system that provides dynamic adaptation of commit protocols. According to the tests performed, the approach presented in this thesis gives these services the capacity to meet the requirements of applications in different domains.
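The idea of a configuration point ("slot") that an application programmer wires ("jumper") to a component of their choice can be sketched as follows. The class, the slot name, and the two toy commit protocols are all invented for illustration; the thesis's model sits on top of a real component framework.

```python
# Sketch of an open transaction service with a pluggable commit-protocol slot.
class TransactionService:
    def __init__(self):
        self.commit_slot = None           # configuration point ("slot")

    def plug(self, protocol):
        # The "jumper": wire a concrete component into the slot.
        self.commit_slot = protocol

    def commit(self, txn):
        if self.commit_slot is None:
            raise RuntimeError("no commit protocol configured")
        return self.commit_slot(txn)

# Two interchangeable commit protocols for different application domains.
def two_phase_commit(txn):
    return f"2PC:{txn}"                   # e.g., multi-resource setting

def one_phase_commit(txn):
    return f"1PC:{txn}"                   # e.g., single-resource mobile setting
```

Re-plugging the slot at run time is what enables the dynamic adaptation of commit protocols described for the second service.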
Doctorate
Distributed Systems
Doctor of Computer Science
APA, Harvard, Vancouver, ISO, and other styles
11

Kim, Junwhan. "Scheduling Memory Transactions in Distributed Systems". Diss., Virginia Tech, 2013. http://hdl.handle.net/10919/24768.

Full text source
Abstract:
Distributed transactional memory (DTM) is an emerging, alternative concurrency control model that promises to alleviate the difficulties of lock-based distributed synchronization. In DTM, transactional conflicts are traditionally resolved by a contention manager. A complementary approach for handling conflicts is through a transactional scheduler, which orders transactional requests to avoid or minimize conflicts. We present a suite of transactional schedulers: Bi-interval, Commutative Requests First (CRF), Reactive Transactional Scheduler (RTS), Dependency-Aware Transactional Scheduler (DATS), Scheduling-based Parallel Nesting (SPN), Cluster-based Transactional Scheduler (CTS), and Locality-aware Transactional Scheduler (LTS). The schedulers consider Herlihy and Sun's dataflow execution model, where transactions are immobile and objects are migrated to invoking transactions, relying on directory-based cache-coherence protocols to locate and move objects. Within this execution model, the proposed schedulers target different DTM models. Bi-interval considers the single-object-copy DTM model, and categorizes concurrent requests into read and write intervals to maximize the concurrency of read transactions. This allows an object to be simultaneously sent to read transactions, improving transactional makespan. We show that Bi-interval improves the makespan competitive ratio of DTM without such a scheduler to O(log(N)) for the worst case and O(log(N - k)) for the average case, for N nodes and k read transactions. Our implementation reveals that Bi-interval enhances transactional throughput over the no-scheduler case by as much as 1.71x, on average. CRF considers multi-versioned DTM. Traditional multi-versioned TM models use multiple object versions to guarantee commits of read transactions, but limit concurrency of write transactions.
CRF relies on the notion of commutative transactions, i.e., those that ensure consistency of the shared data-set even when they are validated and committed concurrently. CRF detects conflicts between commutative and non-commutative write transactions and then schedules them according to the execution state, enhancing the concurrency of write transactions. Our implementation shows that transactional throughput is improved by up to 5x over a state-of-the-art competitor (DecentSTM). RTS and DATS consider transactional nesting in DTM, and focus on the closed and open nesting models, respectively. RTS determines whether a conflicting outer transaction must be aborted or enqueued according to the level of contention. If a transaction is enqueued, its closed-nested transactions do not have to retrieve objects again, resulting in reduced communication delays. DATS's goal is to boost the throughput of open-nested transactions by reducing the overhead of running expensive compensating actions and acquiring/releasing abstract locks when the outer transaction aborts. The contribution of DATS is twofold. First, it allows commutable outer transactions to be validated concurrently and allows non-commutable outer transactions -- depending on their inner transactions -- to be committed before others without dependencies. Implementations reveal effectiveness: RTS and DATS improve throughput (over the no-scheduler case), by as much as 1.88x and 2.2x, respectively. SPN considers parallel nested transactions in DTM. The idea of parallel nesting is to execute the inner transactions that access different objects concurrently, and execute the inner transactions that access the same objects serially, increasing performance. However, the parallel nesting model may be ineffective if all inner transactions access the same object due to the additional overheads needed to identify both types of inner transactions. 
SPN avoids this overhead and allows inner transactions to request objects and to execute them in parallel. Implementations reveal that SPN outperforms non-parallel nesting (i.e., closed nesting) by up to 3.5x and 4.5x on a micro-benchmark (bank) and the TPC-C transactional benchmark, respectively. CTS considers the replicated DTM model: object replicas are distributed across clusters of nodes, where clusters are determined based on inter-node distance, to maximize locality and fault-tolerance, and to minimize memory usage and communication overhead. CTS enqueues transactions that are aborted due to early validation over clusters and assigns their backoff times, reducing communication overhead. Implementation reveals that CTS improves throughput over competitor replicated DTM solutions including GenRSTM and DecentSTM by as much as 1.64x, on average. LTS considers the genuine partial replicated DTM model. In this model, LTS exploits locality by: 1) employing a transaction scheduler, which enables/disables object ownership changes depending on workload fluctuations, and 2) splitting hot-spot objects into multiple replicas for reducing contention. Our implementation reveals that LTS outperforms state-of-the-art competitors (Score and CTS) by up to 2.6x on micro-benchmarks (Linked List and Skip List) and by up to 2.2x on TPC-C.
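Bi-interval's grouping of queued requests for an object into read intervals (served together) and write intervals (served one at a time) can be sketched as follows. This is a drastic simplification for illustration only: the real scheduler operates in the dataflow model and is what yields the makespan bounds above.

```python
# Read/write interval grouping in the spirit of Bi-interval (illustrative).
def schedule(requests):
    """requests: list of ('r' or 'w', txn_id) pairs for one object.
    Returns batches: readers are grouped so the object can be sent to
    them simultaneously; each writer gets the object exclusively."""
    batches, read_batch = [], []
    for kind, txn in requests:
        if kind == 'r':
            read_batch.append(txn)           # readers share the object copy
        else:
            if read_batch:
                batches.append(read_batch)   # flush the read interval
                read_batch = []
            batches.append([txn])            # write interval of size one
    if read_batch:
        batches.append(read_batch)
    return batches
```

Grouping consecutive readers into one batch is what lets read transactions proceed concurrently and shortens the overall makespan.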
Ph. D.
Style APA, Harvard, Vancouver, ISO itp.
12

Hood, Kendric A. "Improving Cryptocurrency Blockchain Security and Availability: Adaptive Security and Partitioning". Kent State University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=kent1595038779436782.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
13

Turcu, Alexandru. "On Improving Distributed Transactional Memory through Nesting, Partitioning and Ordering". Diss., Virginia Tech, 2015. http://hdl.handle.net/10919/51593.

Pełny tekst źródła
Streszczenie:
Distributed Transactional Memory (DTM) is an emerging, alternative concurrency control model that aims to overcome the challenges of distributed-lock based synchronization. DTM employs transactions in order to guarantee consistency in a concurrent execution. When two or more transactions conflict, all but one need to be delayed or rolled back. Transactional Memory supports code composability by nesting transactions. Nesting, however, can be used as a strategy to improve performance. The closed nesting model enables partial rollback by allowing a sub-transaction to abort without aborting its parent, thus reducing the amount of work that needs to be retried. In the open nesting model, sub-transactions can commit to the shared state independently of their parents. This reduces isolation and increases concurrency. Our first main contribution in this dissertation is two extensions to the existing Transactional Forwarding Algorithm (TFA). Our extensions are N-TFA and TFA-ON, and support closed nesting and open nesting, respectively. We additionally extend the existing SCORe algorithm with support for open nesting (we call the result SCORe-ON). We implement these algorithms in a Java DTM framework and evaluate them. This represents the first study of transaction nesting in the context of DTM, and contributes the first DTM implementation which supports closed nesting or open nesting. Closed nesting through our N-TFA implementation proved insufficient for any significant throughput improvements. It ran on average 2% faster than flat nesting, while performance for individual tests varied between 42% slowdown and 84% speedup. The workloads that benefit most from closed nesting are characterized by short transactions, with between two and five sub-transactions. Open nesting, as exemplified by our TFA-ON and SCORe-ON implementations, showed promising results.
We determined performance improvement to be a trade-off between the overhead of additional commits and the fundamental conflict rate. For write-intensive, high-conflict workloads, open nesting may not be appropriate, and we observed a maximum speedup of 30%. On the other hand, for lower fundamental-conflict workloads, open nesting enabled speedups of up to 167% in our tests. In addition to the two nesting algorithms, we also develop Hyflow2, a high-performance DTM framework for the Java Virtual Machine, written in Scala. It has a clean Scala API and a compatibility Java API. Hyflow2 was on average two times faster than Hyflow on high-contention workloads, and up to 16 times faster in low-contention workloads. Our second main contribution for improving DTM performance is automated data partitioning. Modern transactional processing systems need to be fast and scalable, but this means many such systems settled for weak consistency models. It is however possible to achieve all of strong consistency, high scalability and high performance, by using fine-grained partitions and light-weight concurrency control that avoids superfluous synchronization and other overheads such as lock management. Independent transactions are one such mechanism, relying on good partitions and appropriately defined transactions. On the downside, it is not usually straightforward to determine optimal partitioning schemes, especially when dealing with non-trivial amounts of data. Our work attempts to solve this problem by automating the partitioning process, choosing the correct transactional primitive, and routing transactions appropriately. Our third main contribution is Alvin, a system for managing concurrently running transactions on a geographically replicated data-store. Alvin supports general-purpose transactions, and guarantees strong consistency criteria.
Through a novel partial order broadcast protocol, Alvin maximizes the parallelism of ordering and local transaction processing, resulting in low client-perceived latency. Alvin can process read-only transactions either locally or globally, according to the desired consistency criterion. Conflicting transactions are ordered across all sites. We built Alvin in the Go programming language. We conducted our evaluation study on Amazon EC2 infrastructure and compared against Paxos- and EPaxos-based state machine replication protocols. Our results reveal that Alvin provides significant speed-up for read-dominated TPC-C workloads: as much as 4.8x when compared to EPaxos on 7 datacenters, and up to 26% in write-intensive workloads. Our fourth and final contribution is M2Paxos, a multi-leader implementation of Generalized Consensus. Single leader-based consensus protocols are known to stop scaling once the leader reaches its saturation point. Ordering commands based on conflicts is appealing due to the potentially higher parallelism, but is imperfect due to the higher quorum sizes required for fast decisions and the need to compare commands and track their dependencies. M2Paxos on the other hand exploits fast decisions (i.e., delivery of a command in two communication delays) by leveraging a classic quorum size, matching a majority of nodes deployed. M2Paxos does not establish command dependencies based on conflicts, but it binds accessed objects to nodes, making sure commands operating on the same object will be ordered by the same node. Our evaluation study of M2Paxos (also built in Go) confirms the effectiveness of this approach, getting up to 7x improvements in performance over state-of-the-art consensus and generalized consensus algorithms.
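M2Paxos's key mechanism, binding each accessed object to a node so that all commands on that object are ordered by the same node, can be illustrated with a toy ownership map. The deterministic binding and class below are assumptions for illustration only, not the protocol itself:

```python
class OwnershipOrdering:
    """Toy model: each node keeps a local sequence counter and orders
    only the commands whose target object it owns."""
    def __init__(self, nodes):
        self.nodes = nodes
        self.seq = {n: 0 for n in nodes}

    def owner(self, obj):
        # Deterministic binding of an object to a node (illustrative).
        return self.nodes[sum(map(ord, obj)) % len(self.nodes)]

    def order(self, obj, command):
        n = self.owner(obj)
        self.seq[n] += 1              # that node's local sequence number
        return (n, self.seq[n], command)

ring = OwnershipOrdering(["n0", "n1", "n2"])
a1 = ring.order("acct-42", "debit")
a2 = ring.order("acct-42", "credit")
# Commands on the same object are sequenced by the same node:
assert a1[0] == a2[0] and a2[1] == a1[1] + 1
```

Because ordering is per-owner rather than per-conflict-graph, no dependency tracking between commands is needed, which is the point the abstract makes.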
Ph. D.
Style APA, Harvard, Vancouver, ISO itp.
14

Taleb, Nasser. "Transactions serialization in distributed multidatabase systems". Thesis, University of Salford, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.420459.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
15

Hedman, Surlien Peter. "Economic advantages of Blockchain technology VS Relational database : An study focusing on economic advantages with Blockchain technology and relational databases". Thesis, Blekinge Tekniska Högskola, Institutionen för industriell ekonomi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-17366.

Pełny tekst źródła
Streszczenie:
Many IT systems are not designed to be flexible and dynamic when they are created, resulting in old, complex systems that are hard to maintain. Systems usually build their functionality and capability on the data contained in their databases. The database underlies such systems, and when data do not correspond between different, synchronizing systems, debugging becomes a troublesome process, because systems are complex and the software architecture is not always easy to understand. Since systems grow more complex over time, becoming harder to debug and understand, there is a need for a system that decreases debugging costs and, in turn, yields better transaction costs. This study proposes a system based on blockchain technology to accomplish this. An ERP system based on blockchain with encrypted transactions was constructed to determine whether the proposed system can contribute to better transaction costs. A case study at multiple IT companies and a comparison to an existing ERP system module validated the system. A successful simulation showed that multiple parties could read and append data to an immutable storage system holding one truth of the data. By all counts, and with proven results, the constructed blockchain solution based on encrypted transactions for an ERP system can reduce debugging costs. It is also shown that a centralized database structure, where external and internal systems can get one truth of the data, decreases transaction costs. However, the decision makers in companies need to be convinced before the constructed system can be implemented. Another problem is that when the object type is modified, historical transactions cannot be changed in an immutable storage solution. Blockchain is still a new technology, and knowledge of the technology and the evolution of the system will determine whether the proposed software architecture results in better transaction costs.
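The immutable, append-only store that the abstract argues reduces debugging costs can be sketched as a minimal hash chain, where each record commits to its predecessor so tampering with history is detectable. This is a generic illustration, not the thesis's ERP implementation:

```python
import hashlib
import json

def append_block(chain, payload):
    """Append a record whose hash covers the payload and the previous hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"prev": prev, "payload": payload}, sort_keys=True)
    chain.append({"prev": prev, "payload": payload,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(chain):
    """Recompute every hash; any edit to past records breaks the chain."""
    for i, blk in enumerate(chain):
        prev = chain[i - 1]["hash"] if i else "0" * 64
        body = json.dumps({"prev": prev, "payload": blk["payload"]}, sort_keys=True)
        if blk["prev"] != prev or blk["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
    return True

log = []
append_block(log, {"txn": 1, "amount": 100})
append_block(log, {"txn": 2, "amount": -40})
assert verify(log)
log[0]["payload"]["amount"] = 999   # tampering with history...
assert not verify(log)              # ...is detected on verification
```

This is the property that gives synchronizing systems "one truth of the data": any party can re-verify the whole chain instead of debugging divergent copies.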
Style APA, Harvard, Vancouver, ISO itp.
16

Pang, Gene. "Scalable Transactions for Scalable Distributed Database Systems". Thesis, University of California, Berkeley, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3733329.

Pełny tekst źródła
Streszczenie:

With the advent of the Internet and Internet-connected devices, modern applications can experience very rapid growth of users from all parts of the world. A growing user base leads to greater usage and large data sizes, so scalable database systems capable of handling the great demands are critical for applications. With the emergence of cloud computing, a major movement in the industry, modern applications depend on distributed data stores for their scalable data management solutions. Many large-scale applications utilize NoSQL systems, such as distributed key-value stores, for their scalability and availability properties over traditional relational database systems. By simplifying the design and interface, NoSQL systems can provide high scalability and performance for large data sets and high volume workloads. However, to provide such benefits, NoSQL systems sacrifice traditional consistency models and support for transactions typically available in database systems. Without transaction semantics, it is harder for developers to reason about the correctness of the interactions with the data. Therefore, it is important to support transactions for distributed database systems without sacrificing scalability.

In this thesis, I present new techniques for scalable transactions for scalable database systems. Distributed data stores need scalable transactions to take advantage of cloud computing, and to meet the demands of modern applications. Traditional techniques for transactions may not be appropriate in a large, distributed environment, so in this thesis, I describe new techniques for distributed transactions, without having to sacrifice traditional semantics or scalability.

I discuss three facets to improving transaction scalability and support in distributed database systems. First, I describe a new transaction commit protocol that reduces the response times for distributed transactions. Second, I propose a new transaction programming model that allows developers to better deal with the unexpected behavior of distributed transactions. Lastly, I present a new scalable view maintenance algorithm for convergent join views. Together, the new techniques in this thesis contribute to providing scalable transactions for modern, distributed database systems.

Style APA, Harvard, Vancouver, ISO itp.
17

Vale, Tiago Marques do. "A modular distributed transactional memory framework". Master's thesis, Faculdade de Ciências e Tecnologia, 2012. http://hdl.handle.net/10362/8738.

Pełny tekst źródła
Streszczenie:
Dissertação para obtenção do Grau de Mestre em Engenharia Informática
The traditional lock-based concurrency control is complex and error-prone due to its low-level nature and composability challenges. Software transactional memory (STM), inherited from the database world, has risen as an exciting alternative, sparing the programmer from dealing explicitly with such low-level mechanisms. In real world scenarios, software is often faced with requirements such as high availability and scalability, and the solution usually consists of building a distributed system. Given the benefits of STM over traditional concurrency controls, Distributed Software Transactional Memory (DSTM) is now being investigated as an attractive alternative for distributed concurrency control. Our long-term objective is to transparently enable multithreaded applications to execute over a DSTM setting. In this work we intend to pave the way by defining a modular DSTM framework for the Java programming language. We extend an existing, efficient, STM framework with a new software layer to create a DSTM framework. This new layer interacts with the local STM using well-defined interfaces, and allows the implementation of different distributed memory models while providing a non-intrusive, familiar, programming model to applications, unlike any other DSTM framework. Using the proposed DSTM framework we have successfully, and easily, implemented a replicated STM which uses a Certification protocol to commit transactions. An evaluation using common STM benchmarks showcases the efficiency of the replicated STM, and its modularity enables us to provide insight on the relevance of different implementations of the Group Communication System required by the Certification scheme, with respect to performance under different workloads.
Fundação para a Ciência e Tecnologia - project (PTDC/EIA-EIA/113613/2009)
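A Certification protocol of the kind used above to commit transactions can be sketched with version numbers: a transaction commits only if nothing it read was overwritten by a concurrently committed transaction. The class and names below are illustrative assumptions, not the framework's API:

```python
class CertifyingStore:
    """Certification-style commit: a transaction records the version of
    every object it read; at commit time it passes certification only if
    none of those objects was written by a concurrent committer."""
    def __init__(self):
        self.versions = {}  # object -> committed version number

    def read_version(self, obj):
        return self.versions.get(obj, 0)

    def certify_and_commit(self, read_set, write_set):
        # read_set: {obj: version observed}; write_set: objects written
        if any(self.versions.get(o, 0) != v for o, v in read_set.items()):
            return False    # stale read: certification fails, abort
        for o in write_set:
            self.versions[o] = self.versions.get(o, 0) + 1
        return True

store = CertifyingStore()
snapshot = {"x": store.read_version("x")}        # T1 reads x
assert store.certify_and_commit({}, ["x"])       # T2 commits a write to x
assert not store.certify_and_commit(snapshot, ["y"])  # T1 now fails certification
```

In the replicated setting, the read/write sets would be disseminated through the Group Communication System so every replica certifies transactions in the same order.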
Style APA, Harvard, Vancouver, ISO itp.
18

Silva, João André Almeida e. "Partial replication in distributed software transactional memory". Master's thesis, Faculdade de Ciências e Tecnologia, 2013. http://hdl.handle.net/10362/10769.

Pełny tekst źródła
Streszczenie:
Dissertação para obtenção do Grau de Mestre em Engenharia Informática
Distributed software transactional memory (DSTM) is emerging as an interesting alternative for distributed concurrency control. Usually, DSTM systems resort to data distribution and full replication techniques in order to provide scalability and fault tolerance. Nevertheless, distribution does not provide support for fault tolerance and full replication limits the system’s total storage capacity. In this context, partial data replication rises as an intermediate solution that combines the best of the previous two while trying to mitigate their disadvantages. This strategy has been explored by the distributed databases research field, but has been little addressed in the context of transactional memory and, to the best of our knowledge, it has never before been incorporated into a DSTM system for a general-purpose programming language. Thus, we defend the claim that it is possible to combine both full and partial data replication in such systems. Accordingly, we developed a prototype of a DSTM system combining full and partial data replication for Java programs. We built on an existing DSTM framework and extended it with support for partial data replication. With the proposed framework, we implemented a partially replicated DSTM. We evaluated the proposed system using known benchmarks, and the evaluation showcases the existence of scenarios where partial data replication can be advantageous, e.g., in scenarios with small amounts of transactions modifying fully replicated data. The results of this thesis show that we were able to sustain our claim by implementing a prototype that effectively combines full and partial data replication in a DSTM system. The modularity of the presented framework allows the easy implementation of its various components, and it provides a non-intrusive interface to applications.
Fundação para a Ciência e Tecnologia - (FCT/MCTES) in the scope of the research project PTDC/EIA-EIA/113613/2009 (Synergy-VM)
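Combining full and partial data replication comes down to a placement policy: some objects are stored on every node, others only on a deterministic subset. A toy placement function, with an invented policy and names, purely for illustration:

```python
def replica_set(obj, nodes, fully_replicated, group_size=2):
    """Return the nodes holding a replica of obj: every node for
    fully-replicated objects, a deterministic subset otherwise."""
    if obj in fully_replicated:
        return list(nodes)                     # full replication
    start = sum(map(ord, obj)) % len(nodes)    # deterministic partial placement
    return [nodes[(start + i) % len(nodes)] for i in range(group_size)]

nodes = ["n0", "n1", "n2", "n3"]
assert replica_set("config", nodes, {"config"}) == nodes   # hot, shared object
partial = replica_set("order-17", nodes, {"config"})
assert len(partial) == 2 and set(partial) <= set(nodes)    # stored on 2 of 4 nodes
```

A transaction touching only partially replicated objects then needs to contact just those replica sets, which is where the advantageous scenarios mentioned in the abstract come from.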
Style APA, Harvard, Vancouver, ISO itp.
19

Niles, Duane Francis Jr. "Improving Performance of Highly-Programmable Concurrent Applications by Leveraging Parallel Nesting and Weaker Isolation Levels". Thesis, Virginia Tech, 2015. http://hdl.handle.net/10919/54557.

Pełny tekst źródła
Streszczenie:
The recent development of multi-core computer architectures has largely affected the creation of everyday applications, requiring the adoption of concurrent programming to significantly utilize the divided processing power of computers. Applications must be split into sections able to execute in parallel, without any of these sections conflicting with one another, thereby necessitating some form of synchronization to be declared. The most commonly used methodology is lock-based synchronization; although, to improve performance the most, developers must typically form complex, low-level implementations for large applications, which can easily create potential errors or hindrances. An abstraction from database systems, known as transactions, is a rising concurrency control design aimed to circumvent the challenges with programmability, composability, and scalability in lock-based synchronization. Transactions execute their operations speculatively and are capable of being restarted (or rolled back) when there exist conflicts between concurrent actions. As such issues can occur later in the lifespans of transactions, entire rollbacks are not that effective for performance. One particular method, known as nesting, was created to counter that drawback. Nesting is the act of enclosing transactions within other transactions, essentially dividing the work into pieces called sub-transactions. These sub-transactions can roll back without affecting the entire main transaction, although general nesting models only allow one sub-transaction to perform work at a time. The first main contribution in this thesis is SPCN, an algorithm that parallelizes nested transactions while automatically processing any potential conflicts that may arise, eliminating the burden of additional processing from the application developers. 
Two versions of SPCN exist: Strict, which enforces the sub-transactions' work to be made visible in a serialized order; and Relaxed, which allows sub-transactions to distribute their information immediately as they finish (therefore invalidation may occur after-the-fact and must be handled). Despite the additional logic required by SPCN, it outperforms traditional closed nesting by 1.78x at the lowest and 3.78x at the highest in the experiments run. Another method to alter transactional execution and boost performance is to relax the rules of visibility for parallel operations (known as their isolation). Depending on the application, correctness is not broken even if some transactions see external work that may later be undone due to a rollback, or if an object is written while another transaction is using an older instance of its data. With lock-based synchronization, developers would have to explicitly design their application with varying amounts of locks, and different lock organizations or hierarchies, to change the strictness of the execution. With transactional systems, the processing performed by the system itself can be set to utilize different rulings, which can change the performance of an application without requiring it to be largely redesigned. This notion leads to the second contribution in this thesis: AsR, or As-Serializable transactions. Serializability is the general form of isolation or strictness for transactions in many applications. In terms of execution, its definition is equivalent to only one transaction running at a time in a given system. Many transactional systems use their own internal form of locking to create Serializable executions, but it is typically too strict for many applications. AsR transactions allow the internal processing to be relaxed while additional meta-data is used external to the system, without requiring any interaction from the developer or any changes to the given application. 
AsR transactions offer multiple orders of magnitude more in throughput in highly-contentious scenarios, due to their capability to outlast traditional levels of isolation.
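Closed nesting's partial rollback, where a sub-transaction aborts and retries without undoing its parent's work, can be sketched with nested write buffers. This is a generic illustration of the model, not SPCN's code:

```python
class ConflictError(Exception):
    pass

def run_with_closed_nesting(parent_buffer, sub_txn, max_retries=3):
    """Run sub_txn against its own private write buffer. On conflict only
    that buffer is discarded and the sub-transaction retried; the parent's
    buffered work survives (closed nesting's partial rollback)."""
    for attempt in range(max_retries):
        sub_buffer = {}
        try:
            sub_txn(sub_buffer, attempt)
        except ConflictError:
            continue                          # roll back the sub-txn only
        parent_buffer.update(sub_buffer)      # merge into parent on success
        return attempt
    raise ConflictError("sub-transaction exhausted its retries")

parent = {"a": 1}                             # work already done by the parent

def flaky_sub(buf, attempt):
    if attempt < 2:
        raise ConflictError()                 # simulated conflict on early tries
    buf["b"] = 2

assert run_with_closed_nesting(parent, flaky_sub) == 2
assert parent == {"a": 1, "b": 2}             # parent was never rolled back
```

SPCN's contribution is to let several such sub-transactions run concurrently rather than one at a time, with Strict or Relaxed rules for when their buffers become visible.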
Master of Science
Style APA, Harvard, Vancouver, ISO itp.
20

Yahalom, Raphael. "Managing the order of transactions in widely-distributed data systems". Thesis, University of Cambridge, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.359877.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
21

Abouzeki, Jenane Hassib. "A tool for building distributed transactions system with XML". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ62180.pdf.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
22

Pu, Calton. "Replication and nested transactions in the Eden Distributed System /". Thesis, Connect to this title online; UW restricted, 1986. http://hdl.handle.net/1773/6881.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
23

Björk, Mårten, i Sofia Max. "ARTSY : A Reproduction Transaction System". Thesis, Linköping University, Department of Electrical Engineering, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1611.

Pełny tekst źródła
Streszczenie:

A Transaction Reproduction System (ARTSY) is a distributed system that enables secure transactions and reproductions of digital content over an insecure network. A field of application is reproductions of visual arts: A print workshop could for example use ARTSY to print a digital image that is located at a remote museum. The purpose of this master thesis project was to propose a specification for ARTSY and to show that it is technically feasible to implement it.

An analysis of the security threats in the ARTSY context was performed and a security model was developed. The security model was approved by a leading computer security expert. The security mechanisms that were chosen for the model were: Asymmetric cryptology, digital signatures, symmetric cryptology and a public key registry. A Software Requirements Specification was developed. It contains extra directives for image reproduction systems but it is possible to use it for an arbitrary type of reproduction system. A prototype of ARTSY was implemented using the Java programming language. The prototype uses XML to manage information and Java RMI to enable remote communication between its components. It was built as a platform independent system and it has been tested and proven to be operational on the Sun Solaris platform as well as the Win32 platform.

Style APA, Harvard, Vancouver, ISO itp.
24

Felgar, de Toledo Maria Beatriz. "A flexible approach to transaction management in a distributed cooperative system". Thesis, Lancaster University, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.359908.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
25

Trigonakis, Vasileios. "Design of a Distributed Transactional Memory for Many-core systems". Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-48339.

Pełny tekst źródła
Streszczenie:
The emergence of Multi/Many-core systems signified an increasing need for parallel programming. Transactional Memory (TM) is a promising programming paradigm for creating concurrent applications. To date, the design of Distributed TM (DTM) tailored for non-coherent many-core architectures is largely unexplored. This thesis addresses this topic by analysing, designing, and implementing a DTM system suitable for low latency message passing platforms. The resulting system, named SC-TM, the Single-Chip Cloud TM, is a fully decentralized and scalable DTM, implemented on Intel’s SCC processor; a 48-core ’concept vehicle’ created by Intel Labs as a platform for many-core software research. SC-TM is one of the first fully decentralized DTMs that guarantees starvation-freedom and the first to use an actual pluggable Contention Manager (CM) to ensure liveness. Finally, this thesis introduces three completely decentralized CMs: Offset-Greedy, a decentralized version of Greedy; Wholly, which relies on the number of completed transactions; and FairCM, which makes use of the effective transactional time. The evaluation showed that the latter outperformed the other two.
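A contention manager decides which of two conflicting transactions may proceed. The classic Greedy policy, which Offset-Greedy decentralizes, favors the transaction with the earlier start timestamp; a minimal sketch (illustrative structure, not SC-TM's code):

```python
def greedy_cm(attacker, victim):
    """Return the transaction allowed to proceed; the other waits or aborts.
    Greedy: the older transaction (smaller start timestamp) wins, so every
    transaction eventually becomes the oldest, which rules out starvation."""
    return attacker if attacker["start_ts"] < victim["start_ts"] else victim

t_old = {"id": "T1", "start_ts": 10}
t_new = {"id": "T2", "start_ts": 25}
assert greedy_cm(t_new, t_old)["id"] == "T1"   # older transaction wins
assert greedy_cm(t_old, t_new)["id"] == "T1"   # regardless of who attacks
```

The pluggable-CM design means this decision function can be swapped (e.g., for a completed-transaction count as in Wholly) without touching the rest of the DTM.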
Style APA, Harvard, Vancouver, ISO itp.
26

Saad, Ibrahim Mohamed Mohamed. "HyFlow: A High Performance Distributed Software Transactional Memory Framework". Thesis, Virginia Tech, 2011. http://hdl.handle.net/10919/32966.

Pełny tekst źródła
Streszczenie:
We present HyFlow - a distributed software transactional memory (D-STM) framework for distributed concurrency control. Lock-based concurrency control suffers from drawbacks including deadlocks, livelocks, and scalability and composability challenges. These problems are exacerbated in distributed systems, where their distributed versions (e.g., distributed deadlocks) are more complex to cope with. STM and D-STM are promising alternatives to lock-based and distributed lock-based concurrency control for centralized and distributed systems, respectively, that overcome these difficulties. HyFlow is a Java framework for D-STM, with pluggable support for directory lookup protocols, transactional synchronization and recovery mechanisms, contention management policies, cache coherence protocols, and network communication protocols. HyFlow exports a simple distributed programming model that excludes locks: using (Java 5) annotations, atomic sections are defined as transactions, in which reads and writes to shared, local and remote objects appear to take effect instantaneously. No changes are needed to the underlying virtual machine or compiler. We describe HyFlow's architecture and implementation, and report on experimental studies comparing HyFlow against competing models including Java remote method invocation (RMI) with mutual exclusion and read/write locks, distributed shared memory (DSM), and directory-based D-STM.
Master of Science
Style APA, Harvard, Vancouver, ISO itp.
27

Pandey, Utkarsh. "Optimizing Distributed Transactions: Speculative Client Execution, Certified Serializability, and High Performance Run-Time". Thesis, Virginia Tech, 2016. http://hdl.handle.net/10919/72867.

Pełny tekst źródła
Streszczenie:
On-line services already form an important part of modern life with an immense potential for growth. Most of these services are supported by transactional systems, which are backed by database management systems (DBMS) in many cases. Many on-line services use replication to ensure high-availability, fault tolerance and scalability. Replicated systems typically consist of different nodes running the service co-ordinated by a distributed algorithm which aims to drive all the nodes along the same sequence of states by providing a total order to their operations. Thus optimization of both local DBMS operations through concurrency control and the distributed algorithm driving replicated services can lead to enhancing the performance of the on-line services. Deferred Update Replication (DUR) is a well-known approach to design scalable replicated systems. In this method, the database is fully replicated on each distributed node. User threads perform transactions locally and optimistically before a total order is reached. DUR based systems find their best usage when remote transactions rarely conflict. Even in such scenarios, transactions may abort due to local contention on nodes. A generally adopted method to alleviate the local contention is to invoke a local certification phase to check if a transaction conflicts with other local transactions already completed. If so, the given transaction is aborted locally without burdening the ordering layer. However, this approach still results in many local aborts which significantly degrades the performance. The first main contribution of this thesis is PXDUR, a DUR based transactional system, which enhances the performance of DUR based systems by alleviating local contention and increasing the transaction commit rate. PXDUR alleviates local contention by allowing speculative forwarding of shared objects from locally committed transactions awaiting total order to running transactions. 
PXDUR allows transactions running in parallel to use speculative forwarding, thereby enabling the system to utilize the highly parallel multi-core platforms. PXDUR also enhances the performance by optimizing the transaction commit process. It allows the committing transactions to skip read-set validation when it is safe to do so. PXDUR achieves performance gains of an order of magnitude over closest competitors under favorable conditions. Transactions also form an important part of centralized DBMS, which tend to support multi-threaded access to utilize the highly parallel hardware platforms. The applications can be wrapped in transactions which can then access the DBMS as per the rules of concurrency control. This allows users to develop applications that can run on DBMSs without worrying about synchronization. Serializability is the de-facto standard form of isolation required by transactions for many applications. The existing methods employed by DBMSs to enforce serializability employ explicit fine-grained locking. The eager-locking based approach is pessimistic and can be too conservative for many applications. The locking approach can severely limit the performance of DBMSs, especially for scenarios with moderate to high contention. This leads to the second major contribution of this thesis: TSAsR, an adaptive transaction processing framework, which can be applied to DBMSs to improve performance. TSAsR allows the DBMS's internal synchronization to be more relaxed and enforces serializability through the processing of external meta-data in an optimistic manner. It does not require any changes in the application code and achieves orders of magnitude performance improvements for high and moderate contention cases. The replicated transaction processing systems require a distributed algorithm to keep the system consistent by ensuring that each node executes the same sequence of deterministic commands.
These algorithms generally employ State Machine Replication (SMR). Enhancing the performance of such algorithms is a potential way to increase the performance of distributed systems. However, developing new SMR algorithms is limited in production settings because of the huge verification cost involved in proving their correctness. There are frameworks that allow easy specification of SMR algorithms and subsequent verification. However, algorithms implemented in such frameworks give poor performance. This leads to the third major contribution of this thesis: Verified JPaxos, a JPaxos-based runtime system which can be integrated with an easy-to-verify I/O automaton based on the Multipaxos protocol. Multipaxos is specified in Higher Order Logic (HOL) for ease of verification, and the specification is used to generate executable code representing the Multipaxos state changes (an I/O automaton). The runtime drives the HOL-generated code and interacts with the service and network to create a fully functional replicated Multipaxos system. The runtime inherits its design from JPaxos along with some optimizations. It achieves significant improvement over a state-of-the-art SMR verification framework while still being comparable to the performance of non-verified systems.
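PXDUR's speculative forwarding, which lets running transactions read objects written by locally committed transactions still awaiting total order, can be sketched with two version maps. The class below is an illustrative assumption, not the actual system:

```python
class SpeculativeStore:
    """Toy DUR node: 'committed' holds totally ordered, final versions;
    'speculative' holds locally committed writes awaiting total order."""
    def __init__(self):
        self.committed = {}
        self.speculative = {}

    def read(self, obj):
        # Forward the freshest speculative version if one exists.
        if obj in self.speculative:
            return self.speculative[obj]
        return self.committed.get(obj)

    def local_commit(self, writes):
        self.speculative.update(writes)   # visible to later local transactions

    def final_commit(self, objs):
        for o in objs:                    # total order reached for these writes
            self.committed[o] = self.speculative.pop(o)

s = SpeculativeStore()
s.local_commit({"x": 41})
assert s.read("x") == 41                  # forwarded before total order arrives
s.final_commit(["x"])
assert s.read("x") == 41 and "x" not in s.speculative
```

Without forwarding, the later transaction would either block or abort on contention with the not-yet-ordered writer, which is the local-abort problem the abstract describes.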
Master of Science
Style APA, Harvard, Vancouver, ISO itp.
28

Sylla, Adja Ndeye. "Support intergiciel pour la conception et le déploiement adaptatifs fiables, application aux bâtiments intelligents". Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAM095/document.

Pełny tekst źródła
Streszczenie:
Dans le contexte de l’informatique pervasive et de l’internet des objets, les systèmes sont hétérogènes, distribués et adaptatifs (p. ex., systèmes de gestion des transports, bâtiments intelligents). La conception et le déploiement de ces systèmes sont rendus difficiles par leur nature hétérogène et distribuée mais aussi le risque de décisions d’adaptation conflictuelles et d’inconsistances à l’exécution. Les inconsistances sont causées par des pannes matérielles ou des erreurs de communication. Elles surviennent lorsque des actions correspondant aux décisions d’adaptation sont supposées être effectuées alors qu’elles ne le sont pas. Cette thèse propose un support intergiciel, appelé SICODAF, pour la conception et le déploiement de systèmes adaptatifs fiables. SICODAF combine une fiabilité comportementale (absence de décisions conflictuelles) au moyen de systèmes de transitions et une fiabilité d’exécution (absence d’inconsistances) à l’aide d’un intergiciel transactionnel. SICODAF est basé sur le calcul autonomique. Il permet de concevoir et de déployer un système adaptatif sous la forme d’une boucle autonomique qui est constituée d’une couche d’abstraction, d’un mécanisme d’exécution transactionnelle et d’un contrôleur. SICODAF supporte trois types de contrôleurs (basés sur des règles, sur la théorie du contrôle continu ou discret). Il permet également la reconfiguration d’une boucle, afin de gérer les changements d’objectifs qui surviennent dans le système considéré, et l’intégration d’un système de détection de pannes matérielles. Enfin, SICODAF permet la conception de boucles multiples pour des systèmes qui sont constitués de nombreuses entités ou qui requièrent des contrôleurs de types différents. Ces boucles peuvent être combinées en parallèle, coordonnées ou hiérarchiques. SICODAF a été mis en oeuvre à l’aide de l’intergiciel transactionnel LINC, de l’environnement d’abstraction PUTUTU et du langage Heptagon/BZR qui est basé sur des systèmes de transitions. SICODAF a été également évalué à l’aide de trois études de cas.
In the context of pervasive computing and the internet of things, systems are heterogeneous, distributed and adaptive (e.g., transport management systems, building automation). The design and the deployment of these systems are made difficult by their heterogeneous and distributed nature, but also by the risk of conflicting adaptation decisions and inconsistencies at runtime. Inconsistencies are caused by hardware failures or communication errors. They occur when actions corresponding to the adaptation decisions are assumed to be performed but are not done. This thesis proposes a middleware support, called SICODAF, for the design and the deployment of reliable adaptive systems. SICODAF combines behavioral reliability (absence of conflicting decisions) by means of transition systems and execution reliability (absence of inconsistencies) through a transactional middleware. SICODAF is based on autonomic computing. It allows designing and deploying an adaptive system in the form of an autonomic loop which consists of an abstraction layer, a transactional execution mechanism and a controller. SICODAF supports three types of controllers (based on rules, or on continuous or discrete control theory). SICODAF also allows for loop reconfiguration, to deal with changing objectives in the considered system, and for the integration of a hardware failure detection system. Finally, SICODAF allows for the design of multiple loops for systems that consist of a high number of entities or that require controllers of different types. These loops can be combined in parallel, coordinated or hierarchical fashion. SICODAF was implemented using the transactional middleware LINC, the abstraction environment PUTUTU and the language Heptagon/BZR, which is based on transition systems. SICODAF was also evaluated using three case studies.
APA, Harvard, Vancouver, ISO, etc. styles
29

Rahgozar, Maseud. "Controle de concurrence par gestion des evenements". Paris 6, 1987. http://www.theses.fr/1987PA066594.

Full text of the source
Abstract:
Presentation of a new approach to using semantic knowledge about transactions and data in a distributed database system, in order to optimize transaction execution and to simplify some of the problems related to data distribution and to concurrency between transactions.
APA, Harvard, Vancouver, ISO, etc. styles
30

Zhang, Bo. "Supporting Software Transactional Memory in Distributed Systems: Protocols for Cache-Coherence, Conflict Resolution and Replication". Diss., Virginia Tech, 2011. http://hdl.handle.net/10919/29571.

Full text of the source
Abstract:
Lock-based synchronization on multiprocessors is inherently non-scalable, non-composable, and error-prone. These problems are exacerbated in distributed systems due to an additional layer of complexity: multinode concurrency. Transactional memory (TM) is an emerging, alternative synchronization abstraction that promises to alleviate these difficulties. With the TM model, code that accesses shared memory objects is organized as transactions, which execute speculatively while logging changes. If transactional conflicts are detected, one of the conflicting transactions is aborted and re-executed, while the other is allowed to commit, yielding the illusion of atomicity. TM has been proposed for multiprocessors in software (STM), in hardware (HTM), and in combination (HyTM). This dissertation focuses on supporting the TM abstraction in distributed systems, i.e., distributed STM (or D-STM). We focus on three problem spaces: cache-coherence (CC), conflict resolution, and replication. We evaluate the performance of D-STM by measuring the competitive ratio of its makespan, i.e., the ratio of its makespan (the last completion time for a given set of transactions) to the makespan of an optimal off-line clairvoyant scheduler. We show that the performance of D-STM for metric-space networks is O(N^2) for N transactions requesting an object under the Greedy contention manager and an arbitrary CC protocol. To improve performance, we propose a class of location-aware CC protocols, called LAC protocols. We show that the combination of the Greedy manager and a LAC protocol yields an O(NlogN s) competitive ratio for s shared objects. We then formalize two classes of CC protocols: distributed queuing cache-coherence (DQCC) protocols and distributed priority queuing cache-coherence (DPQCC) protocols, both of which can be implemented using distributed queuing protocols.
We show that a DQCC protocol is O(NlogD)-competitive and a DPQCC protocol is O(log D_delta)-competitive for N dynamically generated transactions requesting an object, where D_delta is the normalized diameter of the underlying distributed queuing protocol. Additionally, we propose a novel CC protocol, called Relay, which reduces the total number of aborts to O(N) for N conflicting transactions requesting an object, a significant improvement over past CC protocols, which incur O(N^2) aborts. We also analyze Relay's dynamic competitive ratio in terms of the communication cost (for dynamically generated transactions), and show that it is O(log D_0), where D_0 is the normalized diameter of the underlying network spanning tree. To reduce unnecessary aborts and increase concurrency for D-STM based on globally-consistent contention management policies, we propose the distributed dependency-aware (DDA) conflict resolution model, which adopts different conflict resolution strategies based on transaction types. In the DDA model, read-only transactions never abort, as a set of versions is kept for each object. Each transaction keeps precedence relations based only on its local knowledge of precedence relations. We show that the DDA model 1) ensures that read-only transactions never abort, 2) ensures that every transaction eventually commits, 3) supports invisible reads, and 4) efficiently garbage-collects useless object versions. To establish competitive ratio bounds for contention managers in D-STM, we model the distributed transactional contention management problem as the traveling salesman problem (TSP). We prove that for D-STM, any online, work-conserving, deterministic contention manager provides an Omega(max[s, s^2/D]) competitive ratio in a network with normalized diameter D and s shared objects.
Compared with the Omega(s) competitive ratio for multiprocessor STM, the performance guarantee for D-STM degrades by a factor proportional to s/D. We present a randomized algorithm, called Randomized, with a competitive ratio O(sC log n log^2 n) for s objects shared by n transactions, with a maximum conflicting degree C. To break this lower bound, we present a randomized algorithm, Cutting, which needs partial information about transactions and an approximate TSP algorithm A with approximation ratio phi_A. We show that the average-case competitive ratio of Cutting is O(s phi_A log^2 m log^2 n), which is close to O(s). Single-copy (SC) D-STM keeps only one writable copy of each object, and thus cannot tolerate node failures. We propose a quorum-based replication (QR) D-STM model, which provides provable fault-tolerance without incurring high communication overhead compared with the SC model. The QR model stores object replicas in a tree quorum system, where two quorums intersect if one of them is a write quorum, and ensures the consistency among replicas at commit time. The communication cost of an operation in the QR model is proportional to the communication cost from the requesting node to its closest read or write quorum. In the presence of node failures, the QR model exhibits high availability and degrades gracefully as the number of failed nodes increases, at reasonably higher communication cost. We developed a prototype implementation of the dissertation's proposed solutions, including the DQCC and DPQCC protocols, the Relay protocol, and the DDA model, in the HyFlow Java D-STM framework. We experimentally evaluated these solutions against competitor solutions on a set of microbenchmarks (e.g., data structures including distributed linked list, binary search tree and red-black tree) and macrobenchmarks (e.g., distributed versions of the applications in the STAMP STM benchmark suite for multiprocessors).
Our experimental studies revealed that: 1) based on the same distributed queuing protocol (i.e., the Ballistic CC protocol), DPQCC yields better transactional throughput than DQCC, by a factor of 50%-100%, on a range of transactional workloads; 2) Relay outperforms competitor protocols (including Arrow, Ballistic and Home) by more than 200% as network size and contention increase, since it efficiently reduces the average number of aborts per transaction (to less than 0.5); and 3) the DDA model outperforms existing contention management policies (including the Greedy, Karma and Kindergarten managers) by up to 30%-40% in high-contention environments. For read/write-balanced workloads, the DDA model outperforms these contention management policies by 30%-60% on average; for read-dominated workloads, it outperforms them by over 200%.
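The Greedy contention manager referenced in this abstract resolves a conflict in favor of the older transaction. A minimal sketch of that rule (illustrative only; the class and function names are ours, not the dissertation's protocols):

```python
class Transaction:
    def __init__(self, ts):
        self.ts = ts            # start timestamp: smaller = older = higher priority
        self.aborted = False

def greedy_resolve(holder, requester):
    """On a conflict over one shared object, the older transaction
    (smaller timestamp) proceeds; the younger one aborts and retries."""
    winner, loser = (holder, requester) if holder.ts < requester.ts else (requester, holder)
    loser.aborted = True
    return winner
```

Because priorities are fixed timestamps, a transaction can lose at most to strictly older ones, which is what bounds the number of aborts in the analyses above.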
Ph. D.
APA, Harvard, Vancouver, ISO, etc. styles
31

Aivars, Sablis. "Benefits of transactive memory systems in large-scale development". Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-11703.

Full text of the source
Abstract:
Context. Large-scale software development projects are those consisting of a large number of teams, possibly spread across multiple locations, working on large and complex software tasks. That means that neither a team member individually nor an entire team holds all the knowledge about the software being developed, and teams have to communicate and coordinate their knowledge. Therefore, teams and team members in large-scale software development projects must acquire and manage expertise as one of the critical resources for high-quality work. Objectives. We aim at understanding whether software teams in different contexts develop transactive memory systems (TMS) and whether a well-developed TMS leads to performance benefits, as suggested by research conducted in other knowledge-intensive disciplines. Because multiple factors may influence the development of TMS, based on related TMS literature we also focus on task allocation strategies, task characteristics, and management decisions regarding project structure, team structure and team composition. Methods. We use data from two large-scale distributed development companies and 9 teams, including quantitative data collected through a survey and qualitative data from interviews, to measure transactive memory systems and their role in determining team performance. We measure teams' TMS with a latent variable model. Finally, we use focus group interviews to analyze different organizational practices with respect to team management, as a set of decisions based on two aspects: team structure and composition, and task allocation. Results. Data from the two companies and 9 teams are analyzed, and a positive influence of well-developed TMS on team performance is found. We found that in large-scale software development, teams need not only a well-developed internal TMS, but also a well-developed and effective external TMS.
Furthermore, we identified practices that help or hinder the development of TMS in large-scale projects. Conclusions. Our findings suggest that teams working in large-scale software development can achieve performance benefits if transactive memory practices within the team are supported with networking practices in the organization.
APA, Harvard, Vancouver, ISO, etc. styles
32

Zajac, Stephanie. "Exploring new boundaries in team cognition: Integrating knowledge in distributed teams". Master's thesis, University of Central Florida, 2014. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/6390.

Full text of the source
Abstract:
Distributed teams continue to emerge in response to the complex organizational environments brought about by globalization, technological advancements, and the shift toward a knowledge-based economy. These teams are comprised of members who hold the disparate knowledge necessary to take on cognitively demanding tasks. However, knowledge coordination between team members who are not co-located is a significant challenge, often resulting in process loss and decrements to the effectiveness of team-level knowledge structures. The current effort explores the configuration dimension of distributed teams, specifically how subgroup formation based on geographic location may impact the effectiveness of a team's transactive memory system and subsequent team process. In addition, the role of task cohesion as a buffer against negative intergroup interaction is explored.
M.S.
Masters
Psychology
Sciences
Industrial Organizational Psychology
APA, Harvard, Vancouver, ISO, etc. styles
33

Poudel, Pavan. "Tools and Techniques for Efficient Transactions". Kent State University / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=kent1630591700589561.

Full text of the source
APA, Harvard, Vancouver, ISO, etc. styles
34

Poudel, Pavan. "Tools and Techniques for Efficient Transactions". Kent State University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=kent1630591700589561.

Full text of the source
APA, Harvard, Vancouver, ISO, etc. styles
35

Balachandran, Nandu. "Utilization of Distributed Generation in Power System Peak Hour Load Shedding Reduction". ScholarWorks@UNO, 2016. http://scholarworks.uno.edu/td/2124.

Full text of the source
Abstract:
An approach to utilize Distributed Generation (DG) to minimize total load shedding by analyzing the power system within a transactive energy framework is proposed. An algorithm is developed to optimize the power system in forward and spot markets, maximizing an electric utility's profit by optimizing the purchase of power from DG. The proposed algorithm is a multi-objective optimization whose main objective is to maximize the utility's profit by minimizing the overall cost of production, load shedding, and purchase of power from distributed generators. This work also proposes a method to price power in forward and spot markets using existing LMP techniques. Transactive accounting has been performed to quantify consumer payments in both markets. The algorithm is tested on two test systems: a 6-bus system and a modified IEEE 14-bus system. The results show that by investing in DG, the utility benefits from increased profit, reduced load shedding, and improved transmission line loading.
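As a toy illustration of the cost trade-off above (not the thesis's multi-objective market formulation), a merit-order dispatch can fill demand from the cheapest sources, whether utility generation or DG offers, and shed any remainder at a penalty; all prices and capacities here are made-up numbers:

```python
def dispatch(demand, units, shed_penalty):
    """Greedy merit-order dispatch: serve demand from the cheapest
    sources first; any unmet demand is shed at a per-MW penalty.
    `units` is a list of (marginal_cost, capacity_mw) pairs.
    Returns (total_cost, shed_mw)."""
    cost, remaining = 0.0, demand
    for price, cap in sorted(units):
        take = min(cap, remaining)
        cost += take * price
        remaining -= take
        if remaining <= 0:
            break
    return cost + remaining * shed_penalty, remaining
```

With a high enough shedding penalty, buying from a more expensive DG offer always beats shedding load, which mirrors the load-shedding reduction the abstract reports.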
APA, Harvard, Vancouver, ISO, etc. styles
36

Xiong, Fanfan. "Resource Efficient Parallel VLDB with Customizable Degree of Redundancy". Diss., Temple University Libraries, 2009. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/43445.

Full text of the source
Abstract:
Computer and Information Science
Ph.D.
This thesis focuses on the practical use of very large scale relational databases. It leverages two recent breakthroughs in parallel and distributed computing: a) synchronous transaction replication technologies by Justin Y. Shi and Suntain Song; and b) the Stateless Parallel Processing principle pioneered by Justin Y. Shi. These breakthroughs enable scalable performance and reliability of database service using multiple redundant shared-nothing database servers. This thesis presents a Functional Horizontal Partitioning method with customizable degree of redundancy to address practical very large scale database application problems. The prototype VLDB implementation is designed for transparent, non-intrusive deployments. The prototype system supports Microsoft SQL Server databases. Computational experiments are conducted using an industry-standard benchmark (TPC-E).
Temple University--Theses
APA, Harvard, Vancouver, ISO, etc. styles
37

Poti, Allison Tamara S. "Building a multi-tier enterprise system utilizing visual Basic, MTS, ASP, and MS SQL". Virtual Press, 2001. http://liblink.bsu.edu/uhtbin/catkey/1221293.

Full text of the source
Abstract:
Multi-tier enterprise systems consist of more than two distributed tiers. The design of multi-tier systems is considerably more involved than that of two-tier systems. Not all systems should be designed as multi-tier, but if the decision to build a multi-tier system is made, there are benefits to this type of system design. CSCources is a system that tracks computer science course information. The requirements of this system indicate that it should be a multi-tier system. The system has three tiers: client, business and data. Microsoft tools are used, such as Visual Basic (VB), which was used to build the client tier that physically resides on the client machine. VB was also used to create the business tier. This tier consists of the business layer and the data layer. The business layer contains most of the business logic for the system. The data layer communicates with the data tier. Microsoft SQL Server (MS SQL) is used for the data store. The database contains several tables and stored procedures. The stored procedures are used to add, edit, update and delete records in the database. Microsoft Transaction Server (MTS) is used to control modifications to the database. The transaction and security features available in the MTS environment are used. The business tier and data tier may or may not reside on the same physical computer or server. Active Server Pages (ASP) were built that access the business tier to retrieve the information needed for display on a web page. The cost of designing a distributed system, building a distributed system, upgrades to the system, and error handling are examined.
Ball State University
Muncie, IN 47306
Department of Computer Science
APA, Harvard, Vancouver, ISO, etc. styles
38

Danes, Ferencz Adriana. "Service transactionnel souple pour systèmes répartis à objets persistants". Grenoble 1, 1996. https://theses.hal.science/tel-00004984.

Full text of the source
Abstract:
The Guide Transactional Service (STG), defined in this thesis, was implemented on a system platform (Guide) providing persistent, distributed objects as well as a development environment for distributed applications. The STG is based on an extension of the nested transaction model that allows several forms of parallelism within transactions (for example, between a parent transaction and its child transactions) and outside transactions. Moreover, the model makes it possible to use atomic objects within transactions, but also in a non-transactional context. While the concurrency model mainly aims at service flexibility, the failure atomicity model must take into account the constraints imposed by the underlying system. Failure atomicity for distributed nested transactions is based on the implementation of the representative concept and on an original termination protocol that needs only one phase to commit a distributed subtransaction.
The STG is composed of specialized classes that constitute its interface and relies on specific system mechanisms implemented in the kernel of the Guide system.
APA, Harvard, Vancouver, ISO, etc. styles
39

Zia, Muhammad Fahad. "On energy management optimization for microgrids enriched with renewable energy sources Microgrids energy management systems: a critical review on methods, solutions, and prospects, in Applied Energy 222, July 2018 Optimal operational planning of scalable DC microgrid with demand response, islanding, and battery degradation cost considerations, in Applied Energy 237, March 2019 Energy management system for an islanded microgrid with convex relaxation, in IEEE Transactions on Industry Applications 55, Nov.-Dec. 2019 Microgrid transactive energy: review, architectures, distributed ledger technologies, and market analysis, in IEEE Access, January 2020". Thesis, Brest, 2020. http://theses-scd.univ-brest.fr/2020/These-2020-SPI-Genie_electrique-ZIA_Muhammad_Fahad.pdf.

Full text of the source
Abstract:
The current electric power system is facing the challenges of environmental protection, increasing global electricity demand, high reliability requirements, cleanliness of energy, and planning restrictions. To evolve towards a green and smart electric power system, centralized generating facilities are now being transformed into smaller and more distributed generations. As a consequence, the concept of the microgrid emerges, where a microgrid can operate as a single controllable system and can be viewed as a cluster of loads and distributed energy resources, which may include many renewable energy sources and energy storage systems. The energy management of large numbers of distributed energy resources is needed for reliable operation of the microgrid system. Therefore, energy management is a fundamental part of microgrid operation for economical and sustainable development. In this regard, this thesis focuses on proposing energy management optimization models for the optimal operation of microgrid systems, including a proposed practical Li-ion battery degradation cost model. These energy management models combine, in their objective functions, the operating cost of distributed generators, the emission cost of conventional generation sources, maximum utilization of renewable energy sources, battery degradation cost, demand response incentives, and load shedding penalization cost, subject to microgrid component and physical network constraints. A comprehensive conceptual seven-layer model is also developed to provide standardized insights for implementing real transactive energy systems.
APA, Harvard, Vancouver, ISO, etc. styles
40

Belfkih, Abderrahmen. "Contraintes temporelles dans les bases de données de capteurs sans fil". Thesis, Le Havre, 2016. http://www.theses.fr/2016LEHA0014/document.

Full text of the source
Abstract:
In this thesis, we are interested in adding real-time constraints to the Wireless Sensor Network Database (WSNDB). Temporal consistency in WSNDB must be ensured by respecting transaction deadlines and data temporal validity, so that sensor data reflect the current state of the environment. However, transmission and/or reception delays in the data collection process can lead to violations of data temporal validity. A database solution is most appropriate; it should reconcile traditional database aspects with sensors and their environment. For this purpose, the sensors deployed in a WSN are considered as a table of a distributed database, to which transactions (queries, updates, etc.) are applied. Transactions in a WSNDB require modifications to take into account the continuous data stream and real-time aspects. Our contribution in this thesis focuses on three parts: (i) a comparative study of temporal properties between periodic data collection based on a remote database and a query processing approach with a WSNDB, (ii) the proposition of a real-time query processing model, and (iii) the implementation of a real-time WSNDB, based on the techniques described in the second contribution.
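The two temporal conditions this abstract revolves around, data validity and transaction deadlines, can be sketched as simple admission checks (function names and the notion of a single absolute validity interval are our illustrative assumptions, not the thesis's model):

```python
def value_is_valid(sample_time, now, avi):
    """A sensor reading is temporally valid while the current time lies
    within its absolute validity interval (avi) after sampling."""
    return now - sample_time <= avi

def admit(now, est_exec, deadline):
    """Admit a real-time transaction only if its estimated execution
    time, including expected collection delay, fits before its deadline."""
    return now + est_exec <= deadline
```

A transmission delay enlarges `est_exec` (or ages the sample), which is exactly how late collection ends up violating either check.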
APA, Harvard, Vancouver, ISO, etc. styles
41

Rustagi, Ram Prakash. "Studies in concurrency control for centralized, distributed and nested transaction systems". Thesis, 1998. http://localhost:8080/iit/handle/2074/2211.

Full text of the source
APA, Harvard, Vancouver, ISO, etc. styles
42

Rahm, Erhard. "A Framework for workload allocation in distributed transaction processing systems". 1992. https://ul.qucosa.de/id/qucosa%3A31970.

Full text of the source
Abstract:
Ever-increasing demands for high transaction rates, limitations of high-end processors, high availability, and modular growth considerations are all driving forces toward distributed architectures for transaction processing. However, a prerequisite to taking advantage of the capacity of a distributed transaction processing system is an effective strategy for workload allocation. The distribution of the workload should not only achieve load balancing, but also support an efficient transaction processing with a minimum of intersystem communication. To this end, adaptive schemes for transaction routing have to be employed that are highly responsive to workload fluctuations and configuration changes. Adaptive allocation schemes are also important for simplifying system administration, which is a major problem in distributed transaction processing systems. In this article we develop a taxonomic framework for workload allocation, in particular, transaction routing, in distributed transaction processing systems. This framework considers the influence of the underlying system architecture (e.g., shared nothing, shared disk) and transaction execution model as well as the major dependencies between workload, program, and data allocation. The main part of the framework covers structural (or architectural) and implementational alternatives for transaction routing to help identify key factors and basic tradeoffs in the design of appropriate allocation schemes. Finally, we show how existing schemes fit our taxonomy. The framework substantially facilitates a comparison of the different schemes and can guide the development of new, more effective protocols.
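One plausible concrete instance of the adaptive transaction routing discussed above is affinity-based routing with a load-balancing fallback (our illustration only; the affinity table, the load model, and the 0.8 threshold are assumptions, not taken from the article):

```python
def route(txn_class, load, affinity, threshold=0.8):
    """Affinity-based transaction routing: send a transaction to the node
    whose cached data best matches its class (minimizing intersystem
    communication), unless that node is overloaded, in which case fall
    back to the least-loaded node (load balancing).
    `load` maps node -> utilization in [0, 1]; `affinity` maps class -> node."""
    preferred = affinity.get(txn_class)
    if preferred is not None and load[preferred] < threshold:
        return preferred
    return min(load, key=load.get)
```

The threshold is where the tradeoff the article describes lives: a low value favors load balancing, a high value favors locality and fewer intersystem messages.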
APA, Harvard, Vancouver, ISO, etc. styles
43

Haghjoo, Mostafa S. "Transactional actors in cooperative information systems". Phd thesis, 1995. http://hdl.handle.net/1885/137457.

Full text of the source
Abstract:
Transaction management in advanced distributed information systems is a very important issue under research scrutiny, with many technical and open problems. Most research and development activities use conventional database technology to address this important issue. The transaction model presented in this thesis combines attractive properties of the actor model of computation with advanced database transaction concepts in an object-oriented environment, to address the transactional necessities of cooperative information systems. The novel notion of a transaction tree in our model includes subtransactions as well as a rich collection of decision making, chronological ordering, and communication and synchronization constructs for them. Advanced concepts such as blocking/non_blocking synchronization, vital and non_vital subtransactions, contingency transactions, temporal and value dependencies, and delegation are supported. Compensatable subtransactions are distinguished, and early commit is performed in order to release resources and facilitate cooperative as well as long-duration transactions. Automatic cancel procedures are provided to logically undo the effects of such commits if the global transaction fails. The complexity and semantics-orientation of advanced database applications is our main motivation to design and implement a high-level scripting language for the proposed transaction model. Database programming can gain in performance and problem-orientation if the semantic dependencies between transactions can be expressed directly. Simple and flexible mechanisms are provided for advanced users to query the databases, program their transactions accordingly, and accept weak forms of semantic coherence that allow for more concurrency.
The transaction model is grafted onto the concurrent object-oriented programming language Sather, developed at UC Berkeley, which has a nice high-level syntax, supports advanced object-oriented concepts, and aims at performance and reusability. We have augmented the language with distributed programming facilities and various types of message passing routines, as well as advanced transaction management constructs. The thesis is organized in three parts. The first part introduces the problem, reviews the state of the art, and presents the transaction model. The second part describes the scripting language and discusses implementation details. The third part presents the formal semantics of the transaction model using mathematical notation and concludes the thesis.
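The commit rule implied by vital/non-vital subtransactions and contingency transactions can be sketched as follows (a simplification of the model described above; the class and field names are ours):

```python
class Subtxn:
    def __init__(self, vital, ok, contingency=None):
        self.vital = vital              # the parent fails if a vital child fails
        self.ok = ok                    # did this subtransaction commit?
        self.contingency = contingency  # alternative subtransaction tried on failure

def parent_commits(children):
    """A parent transaction commits iff every vital subtransaction
    succeeded, either directly or through its contingency transaction;
    failures of non-vital subtransactions are tolerated."""
    for c in children:
        done = c.ok or (c.contingency is not None and c.contingency.ok)
        if c.vital and not done:
            return False
    return True
```

Compensation for early-committed children (the "automatic cancel procedures" above) would be triggered exactly when this predicate returns False.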
Styles: APA, Harvard, Vancouver, ISO, etc.
44

Chen, Hong-Ren, and 陳鴻仁. "A Study on Transaction Scheduling in Distributed Real-Time Database Systems". Thesis, 2002. http://ndltd.ncl.edu.tw/handle/60189547967791054074.

Full text source
Abstract:
Doctoral dissertation
National Tsing Hua University
Department of Computer Science
91 (ROC calendar, i.e., 2002)
In recent years, distributed real-time database systems (RTDBSs) have become increasingly important in many real-life applications, such as Internet stock trading, telecommunications, aircraft tracking, and automated control of medical patient monitoring. Such a system must not only preserve data consistency but also honor the time constraints associated with each real-time transaction. In general, a remote transaction accesses data located at remote sites. Traditional real-time schedulers, however, do not consider the impact of the communication delay incurred in transferring remote data, resulting in a high MissRatio for remote transactions. This dissertation proposes a new real-time scheduler, called FHR, to reduce the MissRatio and lessen the unnecessary waiting time that communication delay imposes on a remote transaction. Transactions in advanced real-time database applications are characterized by being long and complex, and the traditional flat transaction model cannot support all the requirements of these applications; nested transactions therefore play an important role. Most existing real-time schedulers, however, are built on flat transaction models. A new real-time scheduler for nested transaction models, called FHRN, is proposed to efficiently schedule real-time nested transactions in a distributed RTDBS. FHRN consists of (1) an FHRNp policy to schedule real-time nested transactions, and (2) a 2PL_HPN protocol to resolve the concurrent data-access problem among interleaved nested transactions.
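The abstract does not spell out FHR's internals, but the core idea it states, discounting a remote transaction's deadline by its expected communication delay so it is dispatched early enough to absorb network latency, can be sketched like this. All names and the ranking rule are illustrative assumptions, not the dissertation's actual algorithm.

```python
import heapq

# Illustrative sketch (not the actual FHR scheduler): rank transactions by
# deadline minus estimated remote communication delay, so remote
# transactions are dispatched earlier and miss fewer deadlines.

def dispatch_order(transactions):
    """transactions: iterable of (name, deadline, est_remote_delay)."""
    heap = [(deadline - delay, name) for name, deadline, delay in transactions]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]

# A remote transaction whose deadline is nominally later is still
# dispatched first once its 5-unit communication delay is accounted for:
# remote has effective deadline 12 - 5 = 7, local has 10 - 0 = 10.
order = dispatch_order([("local", 10.0, 0.0), ("remote", 12.0, 5.0)])
```

A scheduler that ignored the delay term would run `local` first and risk `remote` missing its deadline while waiting on the network.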
Styles: APA, Harvard, Vancouver, ISO, etc.
45

Gupta, Ramesh Kumar. "Commit Processing In Distributed On-Line And Real-Time Transaction Processing Systems". Thesis, 1997. https://etd.iisc.ac.in/handle/2005/1856.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
46

Gupta, Ramesh Kumar. "Commit Processing In Distributed On-Line And Real-Time Transaction Processing Systems". Thesis, 1997. http://etd.iisc.ernet.in/handle/2005/1856.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
47

(11132985), Thamir Qadah. "High-performant, Replicated, Queue-oriented Transaction Processing Systems on Modern Computing Infrastructures". Thesis, 2021.

Find full text source
Abstract:
With the shifting landscape of computing hardware architectures and the emergence of new computing environments (e.g., large main-memory systems, hundreds of CPUs, distributed and virtualized cloud-based resources), state-of-the-art designs of transaction processing systems that rely on conventional wisdom suffer from lost performance optimization opportunities. This dissertation challenges conventional wisdom to rethink the design and implementation of transaction processing systems for modern computing environments.

We start by tackling the vertical hardware scaling challenge, and propose a deterministic approach to transaction processing on emerging multi-socket, many-core, shared-memory architectures to harness their unprecedented available parallelism. Our proposed priority-based, queue-oriented transaction processing architecture eliminates the transaction contention footprint and uses speculative execution to improve the throughput of centralized deterministic transaction processing systems. We build QueCC and demonstrate up to two orders of magnitude better performance over the state-of-the-art.

We further tackle the horizontal scaling challenge and propose a distributed queue-oriented transaction processing engine that relies on queue-oriented communication to eliminate the traditional overhead of commitment protocols for multi-partition transactions. We build Q-Store, and demonstrate up to 22x improvement in system throughput over the state-of-the-art deterministic transaction processing systems.

Finally, we propose a generalized framework for designing distributed and replicated deterministic transaction processing systems. We introduce the concept of speculative replication to hide the latency overhead of replication. We prototype the speculative replication protocol in QR-Store and perform an extensive experimental evaluation using standard benchmarks. We show that QR-Store can achieve a throughput of 1.9 million replicated transactions per second in under 200 milliseconds, with a replication overhead of 8%-25% compared to non-replicated configurations.
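The queue-oriented, deterministic execution style these abstracts describe can be illustrated with a rough two-phase sketch: a planning phase deterministically assigns each transaction's operations to per-partition queues in a fixed global priority order, and an execution phase drains each queue in order, so no locks or commit-time coordination are needed within a partition. The data model and function names below are invented for the example and are not Q-Store's or QueCC's API.

```python
from collections import defaultdict

# Rough sketch of queue-oriented deterministic execution (illustrative
# names, not the dissertation's API): plan a batch into per-partition
# queues by priority, then execute each queue in order, lock-free.

def plan(batch, partition_of):
    """batch: list of (txn_id, [(key, op)]) in global priority order."""
    queues = defaultdict(list)
    for txn_id, ops in batch:                 # priority order fixed up front
        for key, op in ops:
            queues[partition_of(key)].append((txn_id, key, op))
    return queues

def execute(queues, store):
    for part in sorted(queues):               # each partition independently
        for txn_id, key, op in queues[part]:  # in queue order, no locks
            store[key] = op(store.get(key))

store = {"a": 1, "b": 2}
queues = plan([("t1", [("a", lambda v: v + 10)]),
               ("t2", [("a", lambda v: v * 2), ("b", lambda v: v - 1)])],
              partition_of=lambda k: hash(k) % 2)
execute(queues, store)
# t1 then t2 applied deterministically: a = (1 + 10) * 2 = 22, b = 2 - 1 = 1
```

Because the plan fixes the order before execution, every replica that receives the same batch produces the same state, which is what lets such designs cut down commitment-protocol and replication overhead.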
Styles: APA, Harvard, Vancouver, ISO, etc.
48

Renaud, Karen Vera. "A comparative study of transaction management services in multidatabase heterogeneous systems". Diss., 1996. http://hdl.handle.net/10500/17612.

Full text source
Abstract:
Multidatabases are an actively researched and relatively new area in which many aspects are not yet fully understood, and transaction management in multidatabase systems still poses many unresolved problems. The problem areas this dissertation addresses are the classification of multidatabase systems, global concurrency control, correctness criteria in a multidatabase environment, global deadlock detection, atomic commitment, and crash recovery. A core group of research addressing these problems was identified and studied. The dissertation contributes to multidatabase transaction management by introducing an alternative classification method for such multiple-database systems, assessing existing research into transaction management schemes, and, based on this assessment, proposing a transaction processing model founded on the optimal properties of transaction management identified during the course of this research.
Computing
M. Sc. (Computer Science)
Styles: APA, Harvard, Vancouver, ISO, etc.
49

Blackburn, Stephen. "Persistent store interface : a foundation for scalable persistent system design". Phd thesis, 1998. http://hdl.handle.net/1885/145415.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
50

Alagarsamy, K. "Some Theoretical Contributions To The Mutual Exclusion Problem". Thesis, 1997. https://etd.iisc.ac.in/handle/2005/1833.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.