
Theses on the topic "Transaction systems (Computer systems)"


Listed below are the top 50 dissertations (master's and doctoral theses) on the research topic "Transaction systems (Computer systems)".


1

Yoo, Richard M. "Adaptive transaction scheduling for transactional memory systems". Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/22587.

Full text
Abstract:
Thesis (M. S.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2008.
Committee Chair: Lee, Hsien-Hsin; Committee Member: Blough, Douglas; Committee Member: Yalamanchili, Sudhakar.
2

Prabhu, Nitin Kumar Vijay. "Transaction processing in Mobile Database System". Diss., UMK access, 2006.

Find the full text
Abstract:
Thesis (Ph. D.)--School of Computing and Engineering. University of Missouri--Kansas City, 2006.
"A dissertation in computer science and informatics and telecommunications and computer networking." Advisor: Vijay Kumar. Typescript. Vita. Title from "catalog record" of the print edition Description based on contents viewed Nov. 9, 2007. Includes bibliographical references (leaves 152-157). Online version of the print edition.
3

Chen, Jianwen, University of Western Sydney, College of Science, Technology and Environment, and School of Computing and Information Technology. "Data and knowledge transaction in mobile environments". THESIS_CSTE_CIT_Chen_J.xml, 2004. http://handle.uws.edu.au:8081/1959.7/806.

Full text
Abstract:
Advances in wireless networking technology have engendered a new paradigm of computing, called mobile computing, in which users carrying portable devices have access to a shared infrastructure independent of their physical location. Mobile computing has matured rapidly as a field of computer science. In mobile computing environments, the mobility and disconnection of portable computing devices introduce many new challenging problems that have never been encountered in conventional computer networks. New research issues combine different areas of computer science: networking, operating systems, data and knowledge management, and databases. This thesis studies data and knowledge transaction in mobile environments. To study transaction processing at a fundamental and theoretical level in mobile environments, a range of classical notions and protocols of transaction processing are rechecked and redefined in this thesis, and these form the foundation for studying transaction processing in mobile environments. A criterion for mobile serial history is given and two new concurrency theorems are proved in mobile environments. In addition to data transaction, this thesis explores knowledge transaction in mobile environments. To this end, it presents and formalizes a knowledge transaction language and model for use in mobile computing environments. The thesis further formalizes a framework/model for a mobile logic programming multi-agent system, which can be used to study knowledge transaction in multi-agent systems in mobile environments and is a very early effort towards a formal study of knowledge bases and intelligent agents in mobile environments. This work provides a foundation for the formal specification and development of real-world mobile software systems, in the same way as traditional software systems have developed.
Doctor of Philosophy (PhD) (Science)
4

Xia, Yu S. M. Massachusetts Institute of Technology. "Logical timestamps in distributed transaction processing systems". Thesis, Massachusetts Institute of Technology, 2018. https://hdl.handle.net/1721.1/122877.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 73-79).
Distributed transactions are transactions that access remote data. They usually suffer from high network latency (compared to the internal overhead) during data operations on remote data servers, which lengthens the entire transaction execution time. This increases the probability of conflicting with other transactions, causing high abort rates and, in turn, poor performance. In this work, we constructed Sundial, a distributed concurrency control algorithm that applies logical timestamps seamlessly with a cache protocol, and works in a hybrid fashion where an optimistic approach is combined with lock-based schemes. Sundial tackles the inefficiency problem in two ways. Firstly, Sundial decides the order of transactions on the fly. Transactions get their commit timestamps according to their data access traces. Each data item in the database has logical leases maintained by the system. A lease corresponds to a version of the item. At any logical time point, only a single transaction holds the lease for any particular data item. Therefore, lease holders do not have to worry about someone else writing to the item, because in the logical timeline the data writer needs to acquire a new lease which is disjoint from the holder's. This lease information is used to calculate the logical commit time for transactions. Secondly, Sundial has a novel caching scheme that works together with logical leases. The scheme allows the local data server to automatically cache data from the remote server while preserving data coherence. We benchmarked Sundial along with state-of-the-art distributed transactional concurrency control protocols. On YCSB, Sundial outperforms the second-best protocol by 57% under high data access contention. On TPC-C, Sundial has a 34% improvement over the state-of-the-art candidate. Our caching scheme yields performance gains comparable to hand-optimized data replication. With high access skew, it speeds up the workload by up to 4.6x.
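The logical-lease mechanism this abstract describes can be sketched in a few lines. The following is a toy illustration only, not Sundial's actual implementation: the names `Tuple`, `wts`, and `rts` are assumptions, and the real protocol distributes this state across servers and supports lease renewal on validation failure.

```python
class Tuple:
    """A data item carrying a logical lease [wts, rts] for its current version."""
    def __init__(self, value):
        self.value = value
        self.wts = 0    # logical time at which this version was written
        self.rts = 0    # logical time through which this version is leased

class Transaction:
    def __init__(self):
        self.commit_ts = 0    # lower bound on the logical commit timestamp
        self.reads = []       # (tuple, wts observed at read time)
        self.writes = {}      # tuple -> new value

    def read(self, t):
        self.commit_ts = max(self.commit_ts, t.wts)   # must commit after the write
        self.reads.append((t, t.wts))
        return t.value

    def write(self, t, value):
        # a writer needs a fresh lease disjoint from the current holder's
        self.commit_ts = max(self.commit_ts, t.rts + 1)
        self.writes[t] = value

    def commit(self):
        for t, wts in self.reads:
            if t.wts != wts:                    # version replaced since our read
                return False                    # (real Sundial can often renew instead)
            t.rts = max(t.rts, self.commit_ts)  # extend the read lease
        for t, value in self.writes.items():
            t.value = value
            t.wts = t.rts = self.commit_ts      # start a new, disjoint lease
        return True

a, b = Tuple(10), Tuple(20)
txn = Transaction()
txn.write(b, txn.read(a) + 1)                   # commit_ts >= a.wts and > b.rts
print(txn.commit(), b.value, (b.wts, b.rts))    # True 11 (1, 1)
```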
"This work was supported (in part) by the U.S. National Science Foundation (CCF-1438955)"
by Yu Xia.
S.M.
S.M. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
5

Tang, Rong. "Transaction management in peer-to-peer multidatabase systems". Thesis, University of Ottawa (Canada), 2005. http://hdl.handle.net/10393/27055.

Full text
Abstract:
Peer-to-peer multidatabase systems (P2P MDBSs) are dynamic networks of peers that lack any global schema, central administrative authority, data integration, global access to multiple databases, or permanent participation of databases. Global and local transactions are supported in P2P MDBSs. A global transaction generates descendent transactions when it is propagated to other peers over acquaintances in a P2P MDBS. Descendent transactions are translations of the original global transaction based on mappings between attributes in two acquainted peers. We present a serializability theory for transactions in P2P MDBSs, and then propose a concurrency control protocol that ensures the global serializability of global histories by controlling consistency over each single acquaintance. The correctness of the concurrency control protocol is proved using the developed theory.
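The serializability theory referred to here builds on the classical characterization that a history is conflict-serializable iff its conflict graph is acyclic. Below is a minimal sketch of that test, illustrative only; the thesis extends the theory to descendent transactions propagated over acquaintances.

```python
# Build a conflict graph from a history of (txn, op, item) triples and
# test it for cycles with a depth-first search.

def conflict_graph(history):
    """history: ordered list of (txn, op, item) with op in {'r', 'w'}."""
    edges = set()
    for i, (ti, oi, xi) in enumerate(history):
        for tj, oj, xj in history[i + 1:]:
            if ti != tj and xi == xj and 'w' in (oi, oj):
                edges.add((ti, tj))      # earlier conflicting op -> later txn
    return edges

def is_serializable(history):
    edges = conflict_graph(history)
    nodes = {t for t, _, _ in history}
    adj = {n: [v for u, v in edges if u == n] for n in nodes}
    WHITE, GREY, BLACK = 0, 1, 2
    color = dict.fromkeys(nodes, WHITE)

    def has_cycle(u):
        color[u] = GREY
        for v in adj[u]:
            if color[v] == GREY or (color[v] == WHITE and has_cycle(v)):
                return True
        color[u] = BLACK
        return False

    return not any(color[n] == WHITE and has_cycle(n) for n in nodes)

# e.g. r1(x) w2(x) w1(x) is not conflict-serializable:
assert not is_serializable([(1, 'r', 'x'), (2, 'w', 'x'), (1, 'w', 'x')])
```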
6

Gao, Shen. "Transaction logging and recovery on phase-change memory". HKBU Institutional Repository, 2013. http://repository.hkbu.edu.hk/etd_ra/1549.

Full text
7

Chen, Jianwen. "Data and knowledge transaction in mobile environments". Thesis, View thesis, 2004. http://handle.uws.edu.au:8081/1959.7/806.

Full text
Abstract:
Advances in wireless networking technology have engendered a new paradigm of computing, called mobile computing, in which users carrying portable devices have access to a shared infrastructure independent of their physical location. Mobile computing has matured rapidly as a field of computer science. In mobile computing environments, the mobility and disconnection of portable computing devices introduce many new challenging problems that have never been encountered in conventional computer networks. New research issues combine different areas of computer science: networking, operating systems, data and knowledge management, and databases. This thesis studies data and knowledge transaction in mobile environments. To study transaction processing at a fundamental and theoretical level in mobile environments, a range of classical notions and protocols of transaction processing are rechecked and redefined in this thesis, and these form the foundation for studying transaction processing in mobile environments. A criterion for mobile serial history is given and two new concurrency theorems are proved in mobile environments. In addition to data transaction, this thesis explores knowledge transaction in mobile environments. To this end, it presents and formalizes a knowledge transaction language and model for use in mobile computing environments. The thesis further formalizes a framework/model for a mobile logic programming multi-agent system, which can be used to study knowledge transaction in multi-agent systems in mobile environments and is a very early effort towards a formal study of knowledge bases and intelligent agents in mobile environments. This work provides a foundation for the formal specification and development of real-world mobile software systems, in the same way as traditional software systems have developed.
8

Sleat, Philip M. "A static, transaction based design methodology for hard real-time systems". Thesis, City, University of London, 1991. http://openaccess.city.ac.uk/17414/.

Full text
Abstract:
This thesis is concerned with the design and implementation stages of the development lifecycle of a class of systems known as hard real-time systems. Many of the existing methodologies are appropriate for meeting the functional requirements of this class of systems. However, it is proposed that these methodologies are not entirely appropriate for meeting the non-functional requirement of deadlines for work within these real-time systems. After discussing the concept of real-time systems and their characteristic requirements, this thesis proposes the use of a general transaction model of execution for the implementation of the system. Whereas traditional methodologies consider the system from the flow of data or control in the system, we consider the system from the viewpoint of the role of each shared data entity. A control dependency is implied between otherwise independent processes that make use of a shared data entity; our viewpoint is known as the data dependency viewpoint. This implied control dependency between independent processes, necessary to preserve the consistency of the entity in the face of concurrent access, is ignored during the design stages of other methodologies. In considering the role of each data entity, it is possible to generate other viewpoints, such as the dataflow through the processes, automatically. This, however, is not considered in this work. This thesis describes a staged methodology for taking the requirements specification for a system and generating a design and implementation for that system. The methodology is intended to be more than a set of vague guidelines for implementation; a more rigid approach to the design and implementation stages is sought. The methodology begins by decomposing the system into more manageable units of processing. These units are known as tasks, with a very low degree of coupling and a high degree of cohesion. Following the system decomposition, the data dependency viewpoint is constructed; a descriptive notation and a CASE tool support this viewpoint. From this viewpoint, implementation issues such as generating control flow, task and data allocation, and hard real-time scheduling are addressed. A complete runtime environment to support the transaction model is described. This environment is hierarchical and can be adapted to many distributed implementations. Finally, the stages of the methodology are applied to a large example, a Ship Control System. Starting with a specification of the requirements, the methodology is applied to generate a design and implementation of the system.
9

Dwyer, Barry. "Automatic design of batch processing systems". Title page, abstract, table of contents and introduction only, 1999. http://web4.library.adelaide.edu.au/theses/09PH/09phd993.pdf.

Full text
10

Gin, Andrew. "Building a Secure Short Duration Transaction Network". Thesis, University of Canterbury. Computer Science and Software Engineering, 2007. http://hdl.handle.net/10092/1188.

Full text
Abstract:
The objective of this project was to design and test a secure IP-based architecture suitable for short duration transactions. This included the development of a prototype test-bed in which various operating scenarios (such as cryptographic options, various IP-based architectures and fault tolerance) were demonstrated. A solution based on SIP secured with TLS was tested on two IP-based architectures. Total time, CPU time and heap usage were measured for each architecture and encryption scheme to examine the viability of such a solution. The results showed that the proposed solution stack was able to complete transactions in reasonable time and was able to recover from transaction processor failure. This research has demonstrated a possible architecture and protocol stack suitable for IP-based transaction networks. The benefits of an IP-based transaction network include reduced operating costs for network providers and clients, as shared IP infrastructure is used instead of maintaining separate IP and X.25 networks.
11

Momin, Kaleem A. "A transaction execution model for mobile computing environments". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0017/MQ54939.pdf.

Full text
12

Bushager, Aisha Fouad. "Smart card systems : managing risks and modelling security protocols using SystemC and Transaction Level Modelling". Thesis, University of Southampton, 2011. https://eprints.soton.ac.uk/300444/.

Full text
Abstract:
Smart cards are examples of advanced chip technology. They allow information transfer between the card holder and the system over secure networks, but they contain sensitive data related to both the card holder and the system that has to be kept private and confidential. The aim of the research is to conduct a risk management programme on the smart card systems that are employed in e-business systems, suggest the best safeguards to be applied to better secure the smart card systems depending on the services and applications the smart card serves, and produce a simulation tool using a high-level programming language to be able to test the robustness of the proposed solutions. The study's contributions are producing a Risk Analysis Guide specifically on smart card systems to support managerial decision making, modelling the current and proposed smart card systems including modelling the possible attacks using Unified Modelling Language (UML) diagrams, and developing an executable model using SystemC and Transaction Level Modelling (TLM) extensions, which is a new way of modelling and testing smart card system security. The security objectives have to be considered during the early stages of systems development and design; an executable model gives the designer the advantage of identifying vulnerabilities at an early stage, and therefore enhances system security. The developed model is used to examine the effectiveness of a number of authentication mechanisms with different probabilities of failure. A number of probable attacks on the current security protocol are modelled to identify vulnerabilities. The executable model shows that the smart card system's security protocols and transactions need further improvement to withstand different types of security attacks.
13

Chen, Jianwen. "Data and knowledge transaction in mobile environments". View thesis, 2004. http://library.uws.edu.au/adt-NUWS/public/adt-NUWS20050729.131812/index.html.

Full text
Abstract:
Thesis (Ph.D.) -- University of Western Sydney, 2004.
A thesis submitted in fulfillment of the requirements for the degree of Doctor of Philosophy (Science) - Computing and Information Technology. Includes bibliography.
14

Madiraju, Sugandhi. "An Agent Based Transaction Manager for Multidatabase Systems". Digital Archive @ GSU, 2006. http://digitalarchive.gsu.edu/cs_theses/36.

Full text
Abstract:
A multidatabase system (MDBS) is a facility that allows users to access data located in multiple autonomous database management systems (DBMSs) at different sites. To ensure global atomicity for multidatabase transactions, a reliable global atomic commitment protocol is a possible solution. In this protocol a centralized transaction manager (TM) receives global transactions and submits subtransactions to the appropriate sites via AGENTS. An AGENT is a component of the MDBS that runs on each site; after receiving subtransactions from the transaction manager, AGENTS perform the transactions and send the results back to the TM. We have presented a unique proof of concept, a Java application for an agent-based transaction manager that preserves global atomicity. It provides a user-friendly interface through which the reliable atomic commitment protocol for global transaction execution in a multidatabase environment can be visualized. We demonstrated how the protocol works with three different test case scenarios. This is useful for further research in this area, where the atomicity of transactions can be verified for protocol correctness.
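The commitment pattern described here, a centralized TM collecting votes from per-site AGENTS, can be sketched as an all-or-nothing voting round. This is a hedged toy model in Python rather than the thesis's Java application; class and method names are illustrative.

```python
class Agent:
    """Site-local component that tentatively executes a subtransaction."""
    def __init__(self, site):
        self.site = site

    def prepare(self, subtxn):
        try:
            subtxn()         # execute against the local DBMS (simulated here)
            return True      # vote COMMIT
        except Exception:
            return False     # vote ABORT

    def commit(self):
        print(f"{self.site}: commit")

    def abort(self):
        print(f"{self.site}: abort")

class TransactionManager:
    """Centralized TM: global atomicity via an all-or-nothing vote."""
    def run_global(self, work):      # work: {site: subtransaction callable}
        agents = {site: Agent(site) for site in work}
        votes = {s: agents[s].prepare(fn) for s, fn in work.items()}
        if all(votes.values()):
            for a in agents.values():
                a.commit()           # phase 2: every site voted yes
            return True
        for a in agents.values():
            a.abort()                # one 'no' vote aborts everywhere
        return False

def deposit():  pass                                       # succeeds at site A
def withdraw(): raise RuntimeError("insufficient funds")   # fails at site B

tm = TransactionManager()
assert tm.run_global({"siteA": deposit, "siteB": withdraw}) is False
```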
15

Chan, Yew Meng. "Processing mobile read-only transactions in broadcast environments with group consistency /". access full-text access abstract and table of contents, 2005. http://libweb.cityu.edu.hk/cgi-bin/ezdb/thesis.pl?mphil-cs-b19887504a.pdf.

Full text
Abstract:
Thesis (M.Phil.)--City University of Hong Kong, 2005.
"Submitted to Department of Computer Science in partial fulfillment of the requirements for the degree of Master of Philosophy" Includes bibliographical references (leaves 98-102)
16

Chan, Kinson, and 陳傑信. "Distributed software transactional memory with clock validation on clusters". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2013. http://hub.hku.hk/bib/B5053404X.

Full text
Abstract:
Within a decade, multicore processors emerged and revolutionised the world of computing. Nowadays, even a low-end computer comes with a multi-core processor and is capable of running multiple threads simultaneously. It has become impossible to extract the best computation power from a computer with a single-threaded program. Meanwhile, writing multi-threaded software is daunting to a lot of programmers, as the threads share data and involve complicated synchronisation techniques such as locks and conditions. Software transactional memory is a promising alternative model in which programmers simply need to understand transactional consistency and segment code into transactions. Programming becomes exciting again, without races, deadlocks and other issues that are common in lock-based paradigms. To pursue high throughput, performance-oriented computers have several multicore processors each. A processor's cache is not directly accessible by the cores in other processors, leading to non-uniform latency when the threads share data. These computers no longer behave like the classical symmetric multiprocessor computers. Although old programs continue to work, they do not necessarily benefit from the added cores and caches. Most software transactional memory implementations fall into this category. They rely on a centralised and shared meta-variable (like a logical clock) in order to provide single-lock atomicity. On a computer with two or more multicore processors, the single shared meta-variable gets regularly updated by different processors. This leads to a tremendous amount of cache contention. Much time is spent on inter-processor cache invalidations rather than useful computation. Nevertheless, as computers with four processors or more are exponentially complex and expensive, people would prefer to solve sophisticated problems with several smaller computers whenever possible. Supporting software transactional consistency across multiple computers is a rarely explored research area. Although there are similar mature research topics such as distributed shared memory and distributed relational databases, they have remarkably different characteristics, so most of the implementation techniques and tricks are not applicable to the new system. There are several existing distributed software transactional memory systems, but we feel there is much room for improvement. One crucial area is the conflict detection mechanism. Some of these systems make use of broadcast messages to commit transactions, which are certainly not scalable for large-scale clusters. Others use directories to direct messages to the relevant nodes only, but they also keep visible reader lists for invalidation per node. Updating a shared reader list involves cache invalidations on processors. Reading shared data on such systems is more expensive compared to the conventional low-cost invisible-reader validation systems. In this research, we aim to build a distributed software transactional memory system with distributed clock validation for conflict detection. As preparation, we first investigate issues such as concurrency control and conflict detection in single-node systems. Finally, we combine the techniques with a tailor-made cache coherence protocol that is differentiated from typical distributed shared memory.
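The centralised logical clock that this abstract identifies as a contention bottleneck is easiest to see in a TL2-style sketch. The toy model below is an assumption-laden illustration, not the thesis's system: every commit bumps one shared counter, which is exactly the cache line all processors then fight over.

```python
import threading

class Conflict(Exception):
    """Raised when validation detects a concurrent conflicting commit."""

class STM:
    def __init__(self):
        self.clock = 0                        # the shared logical clock
        self.commit_lock = threading.Lock()   # simplified commit-time lock
        self.mem = {}                         # addr -> (value, write_version)

    def run(self, fn):
        """Execute fn(read, write) as a transaction, retrying on conflict."""
        while True:
            rv = self.clock                   # snapshot the clock at txn start
            reads, writes = {}, {}

            def read(addr):
                if addr in writes:            # read-your-own-writes
                    return writes[addr]
                value, ver = self.mem.get(addr, (0, 0))
                if ver > rv:                  # written after our snapshot
                    raise Conflict()
                reads[addr] = ver             # invisible reader: no shared list
                return value

            def write(addr, value):
                writes[addr] = value

            try:
                result = fn(read, write)
                with self.commit_lock:        # validate and commit atomically
                    for addr, ver in reads.items():
                        if self.mem.get(addr, (0, 0))[1] != ver:
                            raise Conflict()  # read-set validation failed
                    self.clock += 1           # every commit bumps the clock:
                    for addr, value in writes.items():   # the contended line
                        self.mem[addr] = (value, self.clock)
                return result
            except Conflict:
                continue                      # abort and retry

stm = STM()
stm.run(lambda read, write: write("x", 1))
stm.run(lambda read, write: write("x", read("x") + 41))
print(stm.mem["x"][0])                        # -> 42
```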
Published or final version. Doctor of Philosophy, Computer Science.
17

King, Stuart. "Optimizations and applications of Trie-Tree based frequent pattern mining". Diss., Connect to online resource - MSU authorized users, 2006.

Find the full text
Abstract:
Thesis (M. S.)--Michigan State University. Dept. of Computer Science and Engineering, 2006.
Title from PDF t.p. (viewed on June 19, 2009). Includes bibliographical references (p. 79-80). Also issued in print.
18

Mena, Eduardo, and Arantza Illarramendi. "Ontology-based query processing for global information systems /". Boston [u.a.] : Kluwer Acad. Publ, 2001. http://www.loc.gov/catdir/enhancements/fy0813/2001029621-d.html.

Full text
19

Xing, Xuemin. "Ad-hoc recovery in workflow systems : formal model and a prototype system /". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0035/MQ62442.pdf.

Full text
20

Wang, Mingzhong. "ARTS : agent-oriented robust transactional system /". Connect to thesis, 2009. http://repository.unimelb.edu.au/10187/6778.

Full text
Abstract:
Internet computing enables the construction of large-scale and complex applications by aggregating and sharing computational, data and other resources across institutional boundaries. The agent model can address the ever-increasing challenges of scalability and complexity, driven by the prevalence of Internet computing, by its intrinsic properties of autonomy and reactivity, which support the flexible management of application execution in distributed, open, and dynamic environments. However, the non-deterministic behaviour of autonomous agents leads to a lack of control, which complicates exception management in the system, thus threatening the robustness and reliability of the system, because improperly handled exceptions may cause unexpected system failure and crashes.
In this dissertation, we investigate and develop mechanisms to integrate intrinsic support for concurrency control, exception handling, recoverability, and robustness into multi-agent systems. The research covers agent specification, planning and scheduling, execution, and overall coordination, in order to reduce the impact of environmental uncertainty. Simulation results confirm that our model can improve the robustness and performance of the system, while relieving developers from dealing with the low level complexity of exception handling.
A survey, along with a taxonomy, of existing proposals and approaches for building robust multi-agent systems is provided first. In addition, the merits and limitations of each category are highlighted.
Next, we introduce the ARTS (Agent-Oriented Robust Transactional System) platform, which allows agent developers to compose recursively-defined, atomically-handled tasks to specify scoped and hierarchically-organized exception-handling plans for a given goal. ARTS then supports automatic selection, execution, and monitoring of appropriate plans in a systematic way, for both normal and recovery executions. Moreover, we propose multiple-step backtracking, which extends the existing step-by-step plan reversal, to serve as the default exception handling and recovery mechanism in ARTS. This mechanism utilizes previous planning results in determining the response to a failure, and allows a substitutable path to start prior to, or in parallel with, the compensation process, thus allowing an agent to achieve its goals more directly and efficiently. ARTS helps developers to focus on high-level business logic and frees them from the low-level complexity of exception management.
One of the reasons for the occurrence of exceptions in a multi-agent system is that agents are unable to adhere to their commitments. We propose two scheduling algorithms for minimising such exceptions when commitments are unreliable. The first scheduling algorithm is trust-based scheduling, which incorporates the concept of trust, that is, the probability that an agent will comply with its commitments, along with the constraints of system budget and deadline, to improve the predictability and stability of the schedule. Trust-based scheduling supports the runtime adaptation and evolvement of the schedule by interleaving the processes of evaluation, scheduling, execution, and monitoring in the life cycle of a plan. The second scheduling algorithm is commitment-based scheduling, which focuses on the interaction and coordination protocol among agents, and augments agents with the ability to reason about and manipulate their commitments. Commitment-based scheduling supports the refactoring and parallel execution of commitments to maximize the system's overall robustness and performance. While the first scheduling algorithm needs to be performed by a central coordinator, the second algorithm is designed to be distributed and embedded into the individual agent.
Finally, we discuss the integration of our approaches into Internet-based applications, to build flexible but robust systems. Specifically, we discuss the designs of an adaptive business process management system and of robust scientific workflow scheduling.
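The trust-based scheduling idea sketched above, weighing an agent's probability of honouring a commitment against budget and deadline constraints, can be illustrated with a simple greedy selection. This is a hypothetical sketch; the field names and the greedy policy are assumptions, not the thesis's algorithm.

```python
from dataclasses import dataclass

@dataclass
class Commitment:
    agent: str
    cost: float
    duration: float
    trust: float          # P(agent complies), learned from past behaviour

def trust_based_schedule(candidates, budget, deadline):
    """Greedily keep commitments, best trust-per-cost first, within limits."""
    chosen, spent, elapsed = [], 0.0, 0.0
    for c in sorted(candidates, key=lambda c: c.trust / c.cost, reverse=True):
        if spent + c.cost <= budget and elapsed + c.duration <= deadline:
            chosen.append(c)
            spent += c.cost
            elapsed += c.duration     # sequential plan for simplicity
    return chosen

plan = trust_based_schedule(
    [Commitment("a1", cost=3, duration=2, trust=0.9),
     Commitment("a2", cost=5, duration=4, trust=0.6),
     Commitment("a3", cost=2, duration=1, trust=0.95)],
    budget=8, deadline=5)
print([c.agent for c in plan])        # -> ['a3', 'a1']
```

In the thesis's terms, re-running this selection as executions succeed or fail would correspond to interleaving evaluation, scheduling, execution, and monitoring in the plan's life cycle.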
21

Naik, Aniket Dilip. "Efficient Conditional Synchronization for Transactional Memory Based System". Thesis, Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/10517.

Full text
Abstract:
Multi-threaded applications are needed to realize the full potential of new chip-multi-threaded machines. Such applications are very difficult to program and orchestrate correctly, and transactional memory has been proposed as a way of alleviating some of the programming difficulties. However, transactional memory can be applied directly only to critical sections, while conditional synchronization remains difficult to implement correctly and efficiently. This dissertation describes EasySync, a simple and inexpensive extension to transactional memory that allows arbitrary conditional synchronization to be expressed in a simple and composable way. Transactional memory eliminates the need to use locks and provides composability for critical sections: atomicity of a transaction is guaranteed regardless of how other code is written. EasySync provides the same benefits for conditional synchronization: it eliminates the need to use condition variables, and it guarantees wakeup of the waiting transaction when the real condition it is waiting for is satisfied, regardless of whether other code correctly signals that change. EasySync also allows transactional memory systems to efficiently provide lock-free and condition-variable-free conditional critical regions, and even more advanced synchronization primitives such as guarded execution with arbitrary conditional or guard code. Because EasySync informs the hardware that a thread is waiting, it allows simple and effective optimizations, such as stopping the execution of a thread until there is a change in the condition it is waiting for. Like transactional memory, EasySync is backward compatible with existing code, which we confirm by running unmodified Splash-2 applications linked with an EasySync-based synchronization library. We also rewrite some of the synchronization in three Splash-2 applications to take advantage of better code readability and to replace spin-waiting with its more efficient EasySync equivalents. Our experimental evaluation shows that EasySync successfully eliminates processor activity while waiting, reducing the number of executed instructions by 8.6% on average in a 16-processor CMP. We also show that these savings increase with the number of processors, and also for applications written for transactional memory systems. Finally, EasySync imposes virtually no performance overheads, and can in fact improve performance.
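EasySync itself is a hardware extension, but the programming model it enables, atomically evaluating an arbitrary guard and sleeping until the guarded data changes, can be approximated in software. The following is only a rough analogy built on a condition variable; the real system avoids this locking entirely.

```python
import threading

class GuardedRegion:
    """Run a body atomically once an arbitrary guard over shared state holds."""
    def __init__(self):
        self._changed = threading.Condition()
        self.state = {}

    def atomic(self, guard, body):
        with self._changed:
            while not guard(self.state):    # instead of spin-waiting,
                self._changed.wait()        # sleep until a writer signals
            result = body(self.state)
            self._changed.notify_all()      # wake waiters whose guard may now hold
            return result

q = GuardedRegion()
q.state["items"] = []

def producer():
    q.atomic(lambda s: True, lambda s: s["items"].append(42))

def consumer():
    return q.atomic(lambda s: len(s["items"]) > 0,
                    lambda s: s["items"].pop())

threading.Timer(0.1, producer).start()
print(consumer())                           # blocks briefly, then prints 42
```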
22

Reid, Elizabeth G. "Design and evaluation of a benchmark for main memory transaction processing systems". Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/53162.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009.
Includes bibliographical references (p. 63).
We designed a diverse collection of benchmarks for Main Memory Database Systems (MMDBs) to validate and compare entries in a programming contest. Each entrant to the contest programmed an indexing system optimized for multicore multithread execution. The contest framework provided an API for the contestants, and benchmarked their submissions. This thesis describes the test goals, the API, and the test environment. It documents the website used by the contestants, describes the general nature of the tests run on each submission, and summarizes the results for each submission that was able to complete the tests.
by Elizabeth G. Reid.
M.Eng.
23

Zhu, Huang. "A transaction model for environmental resource dependent Cyber-Physical Systems". Diss., Kansas State University, 2014. http://hdl.handle.net/2097/18122.

Full text
Abstract:
Doctor of Philosophy
Department of Computing and Information Sciences
Gurdip Singh
Cyber-Physical Systems (CPSs) represent the next-generation systems characterized by strong coupling of computing, sensing, communication, and control technologies. They have the potential to transform our world with more intelligent and efficient systems, such as Smart Home, Intelligent Transportation System, Energy-Aware Building, Smart Power Grid, and Surgical Robot. A CPS is composed of a computational and a physical subsystem. The computational subsystem monitors, coordinates and controls operations of the physical subsystem to create desired physical effects, while the physical subsystem performs physical operations and gives feedback to the computational subsystem. This dissertation contributes to the research of CPSs by proposing a new transaction model for Environmental Resource Dependent Cyber-Physical Systems (ERDCPSs). The physical operations of this type of CPS rely on environmental resources, and they are commonly seen in areas such as transportation and manufacturing. For example, an autonomous car views road segments as resources to make movements, and a warehouse robot views storage spaces as resources to fetch and place goods. The operating environment of such CPSs, the CPS Network, contains multiple CPS entities that share common environmental resources and interact with each other through usages of these resources. We model the physical operations of an ERDCPS as a set of transactions of different types that achieve different goals, where each transaction consists of a sequence of actions. A transaction or an action may require environmental resources for its operations, and the usage of an environmental resource is precise in both time and space. Moreover, a successful execution of a transaction or an action requires exclusive access to certain resources. Transactions from different CPS entities of a CPS Network constitute a schedule. Since environmental resources are shared, transactions in the schedule may have conflicts in using these resources. A schedule must remain consistent to avoid unexpected consequences caused by resource usage conflicts between transactions. A two-phase commit algorithm is proposed to process transactions. In the pre-commit phase, a transaction is scheduled by reserving usage times of required resources, and potential conflicts are detected and resolved using different strategies, such as Win-Lose, Win-Win, and Transaction Preemption. Two general algorithms are presented to process transactions in the pre-commit phase, for both centralized and distributed resource management environments. In the commit phase, a transaction is executed using the reserved resources. An exception occurs when the real-time resource usage differs from what has been predicted. By doing internal and external checks before a scheduled transaction is executed, exceptions can be detected and handled properly. A simulation platform (CPSNET) is developed to simulate the transaction model. The simulation platform simulates a CPS Network, where different CPS entities coordinate resource usages of their transactions through a Communication Network. Depending on the resource management environment, a Resource Server may exist in the CPS Network to manage resource usages of all CPS entities. The simulation platform is highly configurable, and configuration of the simulation environment, CPS entities and the two-phase commit algorithm is supported. Moreover, various statistical information and operation logs are provided to monitor and evaluate the platform itself and the transaction model. Seven groups of simulation experiments were carried out to verify the simulation platform and the transaction model. Simulation results show that the platform is capable of simulating a large load of CPS entities and transactions, and entities and components perform their functions correctly with respect to the processing of transactions. The two-phase commit algorithm is evaluated, and the results show that, compared with traditional cases where no conflict resolution is applied or a conflicting transaction is directly aborted, the proposed conflict resolution strategies improve schedule productivity by allowing more transactions to be executed, and scheduling throughput by maintaining a higher concurrency level.
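The pre-commit phase described above amounts to reserving time windows on shared resources and rejecting overlapping reservations. Below is a minimal sketch under assumed names, showing only the simplest Win-Lose resolution (the existing reservation wins and the newcomer is rejected).

```python
from dataclasses import dataclass

@dataclass
class Reservation:
    txn: str
    resource: str          # e.g. a road segment or a storage slot
    start: float
    end: float

class ResourceServer:
    def __init__(self):
        self.booked = []   # accepted reservations

    def _conflicts(self, r):
        return [b for b in self.booked
                if b.resource == r.resource
                and r.start < b.end and b.start < r.end]   # intervals overlap

    def pre_commit(self, reservations):
        """Phase 1: accept a transaction only if all its reservations fit."""
        if any(self._conflicts(r) for r in reservations):
            return False                 # Win-Lose: the existing holder wins
        self.booked.extend(reservations)
        return True

server = ResourceServer()
car_a = [Reservation("A", "segment-7", 0.0, 5.0)]
car_b = [Reservation("B", "segment-7", 3.0, 8.0)]    # overlaps with A
assert server.pre_commit(car_a) and not server.pre_commit(car_b)
```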
24

Chan, Kinson, and 陳傑信. "An adaptive software transactional memory support for multi-core programming". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2009. http://hub.hku.hk/bib/B43278759.

Full text
25

Chan, Kinson. "An adaptive software transactional memory support for multi-core programming". Click to view the E-thesis via HKUTO, 2009. http://sunzi.lib.hku.hk/hkuto/record/B43278759.

Full text
26

Avdic, Adnan, and Albin Ekholm. "Anomaly Detection in an e-Transaction System using Data Driven Machine Learning Models : An unsupervised learning approach in time-series data". Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-18421.

Full text
Abstract:
Background: Detecting anomalies in time-series data is a task that can be done with the help of data-driven machine learning models. This thesis investigates if, and how well, different machine learning models with an unsupervised approach can detect anomalies in the e-Transaction system Ericsson Wallet Platform. The anomalies in our domain context are delays in the system. Objectives: The objective of this thesis work is to compare four different machine learning models in order to find the most relevant one. The best performing models are decided by the evaluation metric F1-score. An intersection of the best models is also evaluated in order to decrease the number of false positives and make the model more precise. Methods: A relevant time-series data sample with 10-minute-interval data points from the Ericsson Wallet Platform was investigated. A number of steps were taken, such as data handling, pre-processing, normalization, training and evaluation. Two relevant features were trained separately as one-dimensional data sets. The two features that are relevant for finding delays in the system, and which were used in this thesis, are Mean wait (ms) and Mean * N, where N is the number of calls to the system. The evaluation metrics used are true positives, true negatives, false positives, false negatives, accuracy, precision, recall, F1-score and Jaccard index. The Jaccard index is a metric that reveals how similar the detections of the algorithms are. Since the detection is binary, each data point in the time-series data is classified. Results: The results reveal the two best performing models with regard to the F1-score. The intersection evaluation reveals if, and how well, a combination of the two best performing models can reduce the number of false positives. Conclusions: The conclusion of this work is that some algorithms perform better than others. It is a proof of concept that such classification algorithms can separate normal from non-normal behavior in the domain of the Ericsson Wallet Platform.
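The evaluation described here, scoring binary detectors with the F1-score, comparing them with the Jaccard index, and intersecting the two best models to cut false positives, reduces to a few lines. A small self-contained sketch with made-up labels:

```python
def f1(y_true, y_pred):
    tp = sum(t and p for t, p in zip(y_true, y_pred))
    fp = sum(not t and p for t, p in zip(y_true, y_pred))
    fn = sum(t and not p for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

def jaccard(a, b):
    """How similar two detectors' flags are: |A & B| / |A | B|."""
    inter = sum(x and y for x, y in zip(a, b))
    union = sum(x or y for x, y in zip(a, b))
    return inter / union if union else 1.0

y_true  = [0, 0, 1, 1, 0, 1, 0, 0]   # ground-truth delay labels (made up)
model_a = [0, 1, 1, 1, 0, 1, 0, 0]   # one false positive
model_b = [0, 0, 1, 1, 0, 0, 1, 0]   # one miss, one false positive
both    = [a and b for a, b in zip(model_a, model_b)]   # flag only if both agree

print(f1(y_true, model_a), f1(y_true, model_b), f1(y_true, both))
print(jaccard(model_a, model_b))
```

On this toy data the intersection removes every false positive at the cost of some recall, which is exactly the trade-off the thesis evaluates.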
27

Pang, Gene. "Scalable Transactions for Scalable Distributed Database Systems". Thesis, University of California, Berkeley, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3733329.

Full text
Abstract:

With the advent of the Internet and Internet-connected devices, modern applications can experience very rapid growth of users from all parts of the world. A growing user base leads to greater usage and large data sizes, so scalable database systems capable of handling the great demands are critical for applications. With the emergence of cloud computing, a major movement in the industry, modern applications depend on distributed data stores for their scalable data management solutions. Many large-scale applications utilize NoSQL systems, such as distributed key-value stores, for their scalability and availability properties over traditional relational database systems. By simplifying the design and interface, NoSQL systems can provide high scalability and performance for large data sets and high volume workloads. However, to provide such benefits, NoSQL systems sacrifice traditional consistency models and support for transactions typically available in database systems. Without transaction semantics, it is harder for developers to reason about the correctness of the interactions with the data. Therefore, it is important to support transactions for distributed database systems without sacrificing scalability.

In this thesis, I present new techniques for scalable transactions for scalable database systems. Distributed data stores need scalable transactions to take advantage of cloud computing, and to meet the demands of modern applications. Traditional techniques for transactions may not be appropriate in a large, distributed environment, so in this thesis, I describe new techniques for distributed transactions, without having to sacrifice traditional semantics or scalability.

I discuss three facets to improving transaction scalability and support in distributed database systems. First, I describe a new transaction commit protocol that reduces the response times for distributed transactions. Second, I propose a new transaction programming model that allows developers to better deal with the unexpected behavior of distributed transactions. Lastly, I present a new scalable view maintenance algorithm for convergent join views. Together, the new techniques in this thesis contribute to providing scalable transactions for modern, distributed database systems.

28

Flodin, Anton. "Leerec : A scalable product recommendation engine suitable for transaction data". Thesis, Mittuniversitetet, Avdelningen för informationssystem och -teknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-33941.

Full text
Abstract:
We are currently living in the Internet of Things (IoT) era, which involves devices that are connected to the Internet and communicate with each other. Each year the number of devices increases rapidly, which results in rapid growth of the data that is generated. This large amount of data is sometimes termed Big Data, and it is generated from different sources, such as logs of user behavior. These log files can be collected and analyzed in different ways, for example to create product recommendations. Product recommendations have been around since the late 90s, when the amount of data collected was not at the same level as it is today. The aim of this thesis has been to investigate methods to process data and create product recommendations, and to see how well they are adapted for Big Data. This was accomplished through three theory studies: how to process user events, how to make the product recommendation algorithm called collaborative filtering scalable, and how to convert implicit feedback to explicit feedback (ratings). The result is a recommendation engine with Apache Spark as the data processing system, which has three functions: reading multiple log files and concatenating log files for each month, parsing the log files of user events to create explicit ratings from the transactions, and creating four types of recommendations. The NoSQL database MongoDB was chosen as the database to store the different types of product recommendations that were created. To be able to get the recommendations from the recommendation engine and the database, a REST API was implemented which can be used by any third party. What can be concluded from the results of this thesis work is that the system that was implemented is partially scalable: Apache Spark was scalable both for concatenating files, parsing and creating ratings, and for creating the recommendations using the ALS method, but MongoDB was shown not to be scalable when managing more than 100 concurrent requests. Future work involves making the recommendation engine distributed in a multi-node cluster to utilize the parallelization of Apache Spark, and considering other NoSQL databases that might be more scalable than MongoDB.
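The core training step of such an engine, collaborative filtering via ALS on Spark, looks roughly like the sketch below. The column names and parameter values are assumptions for illustration; the thesis's actual schema and tuning may differ.

```python
from pyspark.sql import SparkSession
from pyspark.ml.recommendation import ALS

spark = SparkSession.builder.appName("leerec-sketch").getOrCreate()

# Explicit ratings assumed to be already parsed from user-event logs:
# (user, product, rating derived from transactions).
ratings = spark.createDataFrame(
    [(0, 10, 4.0), (0, 11, 1.0), (1, 10, 5.0), (1, 12, 3.0), (2, 12, 4.0)],
    ["userId", "productId", "rating"])

als = ALS(userCol="userId", itemCol="productId", ratingCol="rating",
          rank=8, maxIter=10, regParam=0.1,
          coldStartStrategy="drop")   # skip users/items unseen in training
model = als.fit(ratings)

# Top-3 product recommendations per user; in the thesis's design these would
# be stored in MongoDB and served through a REST API.
model.recommendForAllUsers(3).show(truncate=False)
```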
29

Brodin, Anette. "Applying DB-transaction semantics to agent interactions". Thesis, University of Skövde, Department of Computer Science, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-724.

Full text
Abstract:

Both the artificial intelligence (AI) and database (DB) communities would benefit from incorporating features from each other's areas. From a DB view, the handling of complexity in today's information systems can be aided by incorporating AI features, and database environments could gain in flexibility by entrusting some of their functionality to agent systems. Seen from the AI view, agent systems could gain in robustness by being based on DB systems.

By applying the semantics of database transactions to interactions between agents in a multi-agent system, and analysing the consequences, this project endeavours to cross the borders of the two research areas. In the project, states where the drop-out of some agent is severe to the task fulfilment of the system have been identified, and examined after applying transaction semantics to the agent interactions. An existing system for multi-agent applications, JADE, has been examined in order to investigate how problem situations are handled in practice. The result of this work shows the feasibility of contemplating both types of systems as very similar, but modelled and viewed in different ways.

30

Helal, Mohammad Rahat. "Efficient Isolation Enabled Role-Based Access Control for Database Systems". University of Toledo / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1501627843916302.

Full text
31

Dixon, Eric Richard. "Developing distributed applications with distributed heterogenous databases". Thesis, Virginia Tech, 1993. http://hdl.handle.net/10919/42748.

Full text
32

Xie, Wanxia. "Supporting Distributed Transaction Processing Over Mobile and Heterogeneous Platforms". Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/14073.

Full text
Abstract:
Recent advances in pervasive computing and peer-to-peer computing have opened up vast opportunities for developing collaborative applications. To benefit from these emerging technologies, there is a need for investigating techniques and tools that will allow development and deployment of these applications on mobile and heterogeneous platforms. To meet these challenging tasks, we need to address the typical characteristics of mobile peer-to-peer systems such as frequent disconnections, frequent network partitions, and peer heterogeneity. This research focuses on developing the necessary models, techniques and algorithms that will enable us to build and deploy collaborative applications in Internet-enabled, mobile peer-to-peer environments. This dissertation proposes a multi-state transaction model and develops a quality-aware transaction processing framework to incorporate quality of service with transaction processing. It proposes adaptive ACID properties and develops a quality specification language to associate a quality level with transactions. In addition, this research develops a probabilistic concurrency control mechanism and a group-based transaction commit protocol for mobile peer-to-peer systems that greatly reduce blocking in transactions and improve the transaction commit ratio. To the best of our knowledge, this is the first attempt to systematically support disconnection-tolerant and partition-tolerant transaction processing. This dissertation also develops a scalable directory service called PeerDS to support the above framework. It addresses the scalability and dynamism of the directory service from two aspects: peer-to-peer and push-pull hybrid interfaces. It also addresses peer heterogeneity and develops a new technique for load balancing in the peer-to-peer system. This technique comprises an improved routing algorithm for virtualized P2P overlay networks and a generalized Top-K server selection algorithm for load balancing, which could be optimized based on multiple factors such as proximity and cost. The proposed push-pull hybrid interfaces greatly reduce the overhead of directory servers caused by frequent queries from directory clients. In order to further improve the scalability of the push interface, this dissertation also studies and evaluates different filter indexing schemes, through which the interests of each update can be calculated very efficiently. This dissertation was developed in conjunction with the middleware called System on Mobile Devices (SyD).
33

Shang, Pengju. "Research in high performance and low power computer systems for data-intensive environment". Doctoral diss., University of Central Florida, 2011. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5033.

Full text
Abstract:
The evolution of computer science and engineering is always motivated by the requirements for better performance, power efficiency, security, user interface (UI), etc. {CM02} The first two factors are potential tradeoffs: better performance usually requires better hardware, e.g., CPUs with a larger number of transistors and disks with higher rotation speed; however, the increasing number of transistors on a single die or chip reveals super-linear growth in CPU power consumption {FAA08a}, and a change in disk rotation speed has a quadratic effect on disk power consumption {GSK03}. We propose three new systematic approaches, shown in Figure 1.1 (Research Work Overview): Transactional RAID, Data-Affinity-Aware data placement (DAFA), and modeless power management, to tackle the performance problem in database systems and large-scale clusters or cloud platforms, and the power management problem in chip multiprocessors, respectively.
The first design, Transactional RAID (TRAID), is motivated by the fact that in recent years more storage system applications have employed transaction processing techniques to ensure data integrity and consistency. In transaction processing systems (TPS), the log is a kind of redundancy that ensures the transaction ACID (atomicity, consistency, isolation, durability) properties and data recoverability. Furthermore, highly reliable storage systems, such as redundant arrays of inexpensive disks (RAID), are widely used as the underlying storage for databases to guarantee system reliability and availability with high I/O performance. However, databases and storage systems tend to implement their fault-tolerance mechanisms independently, from their own perspectives {GR93, Tho05}, leading to potentially high overhead. We observe the overlapping redundancies between the TPS and RAID systems, and propose a novel reliable storage architecture called Transactional RAID (TRAID). TRAID deduplicates this overlap by logging only one compact version (XOR results) of the recovery references for the updated data. It minimizes the amount of log content as well as the log-flushing overhead, thereby boosting overall transaction processing performance. At the same time, TRAID guarantees comparable RAID reliability and the same recovery correctness and ACID semantics as traditional transaction processing systems.
On the other hand, the emerging myriad data-intensive applications place a demand on high-performance computing resources with massive storage. Academia and industry pioneers have been developing big-data parallel computing frameworks and large-scale distributed file systems (DFS), widely used to facilitate high-performance runs of data-intensive applications such as bio-informatics {Sch09}, astronomy {RSG10}, and high-energy physics {LGC06}. Our recent work {SMW10} reported that data distribution in a DFS can significantly affect the efficiency of data processing and hence the overall application performance, especially for applications with sophisticated access patterns. For example, Yahoo's Hadoop {refg} clusters employ a random data placement strategy for load balance and simplicity {reff}. This allows MapReduce {DG08} programs to access all the data (without distinguishing interest locality) at full parallelism. Our work focuses on Hadoop systems. We observed that data distribution is one of the most important factors affecting parallel programming performance; however, the default Hadoop adopts a random data distribution strategy that does not consider the data semantics, specifically data affinity. We propose a Data-Affinity-Aware (DAFA) data placement scheme to address this problem. DAFA builds a history data access graph to exploit the data affinity. According to the data affinity, DAFA re-organizes data to maximize the parallelism of the affinitive data, subject to the overall load balance. This enables DAFA to realize the maximum number of map tasks with data locality.
Besides system performance, power consumption is another important concern of current computer systems. In the U.S. alone, the energy used by servers which could be saved comes to 3.17 million tons of carbon dioxide, or 580,678 cars {Kar09}. However, the goals of high performance and low energy consumption are at odds with each other. An ideal power management strategy should be able to respond dynamically to changes (linear, nonlinear, or non-model) in workloads and system configuration without violating the performance requirement. We propose a novel power management scheme called MAR (modeless, adaptive, rule-based) for multiprocessor systems to minimize CPU power consumption under performance constraints. By using richer feedback factors, e.g. the I/O wait, MAR is able to accurately describe the relationships among core frequencies, performance, and power consumption. We adopt a modeless control model to reduce the complexity of system modeling. MAR is designed for CMP (Chip Multi-Processor) systems, employing multi-input/multi-output (MIMO) theory and per-core DVFS (Dynamic Voltage and Frequency Scaling).
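The redundancy overlap TRAID exploits, logging one XOR record instead of separate old and new images, can be shown with a toy undo/redo log. This is purely illustrative and assumes equal-length block images; the real system integrates the idea with RAID parity and recovery.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings (assumed: fixed block size)."""
    return bytes(x ^ y for x, y in zip(a, b))

class TraidLog:
    def __init__(self):
        self.entries = {}                    # block_id -> compact XOR record

    def log_update(self, block_id, old, new):
        self.entries[block_id] = xor_bytes(old, new)   # one record, not two

    def undo(self, block_id, new):
        return xor_bytes(new, self.entries[block_id])  # recover the old image

    def redo(self, block_id, old):
        return xor_bytes(old, self.entries[block_id])  # recover the new image

log = TraidLog()
old, new = b"balance=100", b"balance=250"
log.log_update(7, old, new)
assert log.undo(7, new) == old and log.redo(7, old) == new
```

Because the XOR record plus either image reconstructs the other, the log stores roughly half the data a conventional old/new-image log would, which is the source of the reduced log-flushing overhead claimed above.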
ID: 030423445; System requirements: World Wide Web browser and PDF reader; Mode of access: World Wide Web; Thesis (Ph.D.)--University of Central Florida, 2011; Includes bibliographical references (p. 119-128).
Ph.D., Electrical Engineering and Computer Science
34

Yan, Cong S. M. Massachusetts Institute of Technology. "Exploiting fine-grain parallelism in transactional database systems". Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/101592.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 47-50).
Current database engines designed for conventional multicore systems exploit a fraction of the parallelism available in transactional workloads. Specifically, database engines only exploit inter-transaction parallelism: they use speculation to concurrently execute multiple, potentially-conflicting database transactions while maintaining atomicity and isolation. However, they do not exploit intra-transaction parallelism: each transaction is executed sequentially on a single thread. While fine-grain intra-transaction parallelism is often abundant, it is too costly to exploit in conventional multicores. Software would need to implement fine-grain speculative execution and scheduling, introducing prohibitive overheads that would negate the benefits of additional intra-transaction parallelism. In this thesis, we leverage novel hardware support to design and implement a database engine that effectively exploits both inter- and intra-transaction parallelism. Specifically, we use Swarm, a new parallel architecture that exploits fine-grained and ordered parallelism. Swarm executes tasks speculatively and out of order, but commits them in order. Integrated hardware task queueing and speculation mechanisms allow Swarm to speculate thousands of tasks ahead of the earliest active task and reduce task management overheads. We modify Silo, a state-of-the-art in-memory database engine, to leverage Swarm's features. The resulting database engine, which we call SwarmDB, has several key benefits over Silo: it eliminates software concurrency control, reducing overheads; it efficiently executes tasks within a database transaction in parallel; it reduces conflicts; and it reduces the amount of work that needs to be discarded and re-executed on each conflict. We evaluate SwarmDB on simulated Swarm systems of up to 64 cores. At 64 cores, SwarmDB outperforms Silo by 6.7x on TPC-C and 6.9x on TPC-E, and achieves near-linear scalability.
by Cong Yan.
S.M.
Gli stili APA, Harvard, Vancouver, ISO e altri
35

Goehring, David (David G.). "Decibel : transactional branched versioning for relational data systems". Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/106022.

Testo completo
Abstract (sommario):
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 151-159).
As scientific endeavors and data analysis become increasingly collaborative, there is a need for data management systems that natively support the versioning or branching of datasets to enable concurrent analysis, cleaning, integration, manipulation, or curation of data across teams of individuals. Common practice for sharing and collaborating on datasets involves creating or storing multiple copies of the dataset, one for each stage of analysis, with no provenance information tracking the relationships between these datasets. This results not only in wasted storage, but also makes it challenging to track and integrate modifications made by different users to the same dataset. Transaction management (ACID) for such systems requires additional tools to efficiently handle concurrent changes and ensure transactional consistency of the version graph (concurrent versioned commits, branches, and merges, as well as changes to records). Furthermore, a new conflict model is required to describe how versioned operations can interfere with each other while still remaining serializable. Decibel is a new relational storage system with built-in version control and transaction management designed to address these shortcomings. Decibel's natural versioning primitives can also be leveraged to implement versioned transactions. A thorough evaluation of three versioned storage engine designs, focused on efficient query processing with minimal storage overhead and carried out with an exhaustive benchmark developed for this purpose, suggests that Decibel vastly outperforms, and enables more cross-version analysis and functionality than, existing techniques and DVCS software like git. Read-only and historical cross-version query transactions are non-blocking and all proceed in parallel with minimal overhead. The benchmark also supports analyzing the performance of versioned databases with transactional support, and enables rigorous testing and evaluation of future versioned storage engine designs.
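A minimal sketch of the branched-versioning interface such a system exposes (hypothetical API; Decibel stores deltas and bitmaps rather than full snapshots, and its conflict model is richer than this simple three-way merge):

    class VersionedDB:
        """Toy branched versioning: one full snapshot per commit."""
        def __init__(self):
            self.snaps = {0: {}}           # commit id -> snapshot
            self.heads = {"master": 0}     # branch -> head commit id
            self.forks = {}                # branch -> commit it forked from
            self.n = 1

        def commit(self, branch, **updates):
            snap = dict(self.snaps[self.heads[branch]])
            snap.update(updates)
            self.snaps[self.n] = snap
            self.heads[branch] = self.n
            self.n += 1

        def branch(self, src, new):
            self.heads[new] = self.heads[src]
            self.forks[new] = self.heads[src]

        def merge(self, child, into):
            base = self.snaps[self.forks[child]]
            a = self.snaps[self.heads[child]]
            b = self.snaps[self.heads[into]]
            # Three-way merge: conflict if both sides changed the same key.
            conflicts = [k for k in set(a) | set(b)
                         if a.get(k) != base.get(k)
                         and b.get(k) != base.get(k)
                         and a.get(k) != b.get(k)]
            if conflicts:
                raise ValueError(f"merge conflicts on {conflicts}")
            merged = dict(b)
            merged.update({k: v for k, v in a.items() if v != base.get(k)})
            self.commit(into, **merged)

    db = VersionedDB()
    db.commit("master", x=1)
    db.branch("master", "clean")
    db.commit("clean", x=2)
    db.commit("master", y=3)
    db.merge("clean", into="master")        # no conflict: x vs y
    print(db.snaps[db.heads["master"]])     # {'x': 2, 'y': 3}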
by David Goehring.
M. Eng.
Gli stili APA, Harvard, Vancouver, ISO e altri
36

Yahalom, Raphael. "Managing the order of transactions in widely-distributed data systems". Thesis, University of Cambridge, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.359877.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
37

Åberg, Ludvig. "Parallelism within queue application". Thesis, Mittuniversitetet, Avdelningen för informationssystem och -teknologi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-31575.

Testo completo
Abstract (sommario):
The aim of this thesis was to modify an existing order queue application that was unable to execute the orders in a queue in parallel, which could lead to a bad user experience due to increased queue delay. The thesis proposes two queue structures that allow parallel execution within a queue; one of the two is selected for implementation in the modified order queue application. The implementation was carried out in Java EE and used technologies such as JPQL. Some parts of the order queue application had to be modified to handle the new queue structure: new attributes that define the dependencies of the orders are used to find a suitable parent for each order in the queue. The queue structure was visualized, making it possible to watch the execution in real time, and a test server was implemented to test the queue structure. This resulted in a working prototype able to handle dependencies and parallel orders. The modified order queue application was performance-measured and compared to the original order queue application; the measurements showed that the modified application performed better in terms of execution time below a certain number of queues. Future work includes optimizing the methods and queries in the implementation to increase performance and handling parallelism within the orders.
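One simple realization of parallel execution within a queue (a sketch with hypothetical order names, assuming an acyclic dependency graph; the thesis instead assigns each order a parent via dependency attributes) is to run the queue in waves of orders whose dependencies have already completed:

    from concurrent.futures import ThreadPoolExecutor

    # Hypothetical orders; each declares the orders it depends on. The
    # graph must be acyclic, or the loop below would never finish.
    orders = {
        "create-vm":   [],
        "attach-disk": ["create-vm"],
        "attach-net":  ["create-vm"],
        "boot":        ["attach-disk", "attach-net"],
    }

    def execute(name):
        print("executing", name)
        return name

    done = set()
    with ThreadPoolExecutor() as pool:
        while len(done) < len(orders):
            # One "wave": every pending order whose dependencies are done.
            ready = [o for o, deps in orders.items()
                     if o not in done and all(d in done for d in deps)]
            done.update(pool.map(execute, ready))

Here "attach-disk" and "attach-net" run in parallel, while "boot" waits for both, which is the queue-delay reduction the thesis is after.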
Gli stili APA, Harvard, Vancouver, ISO e altri
38

Zhao, Haiquan. "Measurement and resource allocation problems in data streaming systems". Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/34785.

Testo completo
Abstract (sommario):
In a data streaming system, each component consumes one or several streams of data on the fly and produces one or several streams of data for other components. The entire Internet can be viewed as a giant data streaming system; other examples include real-time exploratory data mining and high-performance transaction processing. In this thesis we study several measurement and resource allocation optimization problems in data streaming systems. Measuring quantities associated with one or several data streams is often challenging because the sheer volume of data makes it impractical to store the streams in memory or ship them across the network. A data streaming algorithm processes a long stream of data in one pass using a small working memory (called a sketch); estimation queries can then be answered from one or more such sketches. An important task is to analyze the performance guarantees of such algorithms. In this thesis we describe a tail bound problem that often occurs and present a technique for solving it using majorization and convex ordering theories. We present two algorithms that utilize our technique: the first stores a large array of counters in DRAM while achieving the update speed of SRAM; the second detects global icebergs across distributed data streams. Resource allocation decisions are important for the performance of a data streaming system. The processing graph of a data streaming system forms a fork-and-join network. The underlying data processing tasks consist of a rich set of semantics that include synchronous and asynchronous data fork and data join. The different types of semantics and processing requirements introduce complex interdependence between the various data streams within the network. We study the distributed resource allocation problem in such systems with the goal of achieving the maximum total utility of output streams. For networks with only synchronous fork and join semantics, we present several decentralized iterative algorithms using primal- and dual-based optimization techniques. For general networks with both synchronous and asynchronous fork and join semantics, we present a novel modeling framework to formulate the resource allocation problem, and a shadow-queue-based decentralized iterative algorithm to solve it. We show that all the algorithms guarantee optimality and demonstrate through simulation that they adapt quickly to dynamically changing environments.
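As an example of the kind of sketch the abstract refers to, here is a standard Count-Min sketch (a classic data streaming algorithm, not one of the thesis's specific contributions): it processes a stream in one pass with a small working memory and answers approximate frequency queries that may overestimate but never underestimate:

    import random

    class CountMin:
        """One-pass stream summary: depth hash rows of width counters.
        Queries overestimate (hash collisions) but never underestimate."""
        def __init__(self, width=1024, depth=4, seed=7):
            rng = random.Random(seed)
            self.width = width
            self.salts = [rng.getrandbits(32) for _ in range(depth)]
            self.rows = [[0] * width for _ in range(depth)]

        def _cells(self, item):
            for row, salt in zip(self.rows, self.salts):
                yield row, hash((salt, item)) % self.width

        def update(self, item, count=1):
            for row, i in self._cells(item):
                row[i] += count

        def query(self, item):
            return min(row[i] for row, i in self._cells(item))

    sketch = CountMin()
    for _ in range(1000):
        sketch.update("heavy-hitter")
    sketch.update("rare")
    print(sketch.query("heavy-hitter"), sketch.query("rare"))  # ~1000, >=1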
Gli stili APA, Harvard, Vancouver, ISO e altri
39

Wu, Jiang. "CHECKPOINTING AND RECOVERY IN DISTRIBUTED AND DATABASE SYSTEMS". UKnowledge, 2011. http://uknowledge.uky.edu/cs_etds/2.

Testo completo
Abstract (sommario):
A transaction-consistent global checkpoint of a database records a state of the database which reflects the effect of only completed transactions and not the results of any partially executed transactions. This thesis establishes the necessary and sufficient conditions for a checkpoint of a data item (or the checkpoints of a set of data items) to be part of a transaction-consistent global checkpoint of the database. This result is useful for constructing transaction-consistent global checkpoints incrementally from the checkpoints of individual data items of a database: by applying the condition, we can start from any useful checkpoint of any data item and then incrementally add checkpoints of other data items until we get a transaction-consistent global checkpoint of the database. This result can also help in designing non-intrusive checkpointing protocols for database systems. Based on the intuition gained from the development of the necessary and sufficient conditions, we also developed a non-intrusive low-overhead checkpointing protocol for distributed database systems. Checkpointing and rollback recovery are also established techniques for achieving fault tolerance in distributed systems. Communication-induced checkpointing algorithms allow processes involved in a distributed computation to take checkpoints independently, while at the same time forcing processes to take additional checkpoints so that each checkpoint is part of a consistent global checkpoint. This thesis develops a low-overhead communication-induced checkpointing protocol and presents a performance evaluation of the protocol.
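The all-or-nothing property behind a transaction-consistent global checkpoint can be stated compactly; the simplified check below (a sketch, not the thesis's full necessary-and-sufficient condition, which is stated over checkpoint orderings) rejects any global checkpoint in which some transaction's writes are only partially reflected:

    def transaction_consistent(included, transactions):
        """included[item]: txn ids whose writes the item's checkpoint
        reflects; transactions[txn]: items the txn wrote."""
        for txn, items in transactions.items():
            reflected = [txn in included[item] for item in items]
            if any(reflected) and not all(reflected):
                return False    # txn partially reflected: not consistent
        return True

    txns = {"T1": {"x", "y"}, "T2": {"y"}}
    good = {"x": {"T1"}, "y": {"T1", "T2"}}
    bad  = {"x": {"T1"}, "y": {"T2"}}        # y's checkpoint misses T1
    print(transaction_consistent(good, txns))   # True
    print(transaction_consistent(bad, txns))    # False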
Gli stili APA, Harvard, Vancouver, ISO e altri
40

Rocha, Tarcisio da. "Serviços de transação abertos para ambientes dinamicos". [s.n.], 2008. http://repositorio.unicamp.br/jspui/handle/REPOSIP/276015.

Testo completo
Abstract (sommario):
Advisor: Maria Beatriz Felgar de Toledo
Thesis (doctorate) - Universidade Estadual de Campinas, Instituto de Computação
Made available in DSpace on 2018-08-13T03:59:50Z (GMT). No. of bitstreams: 1; Rocha_Tarcisioda_D.pdf: 1796192 bytes, checksum: 4b25ccccc2fa363f13a02764136f5208 (MD5). Previous issue date: 2008.
Abstract: Transaction processing techniques are considered important solutions for preserving correctness in several fields of computing. Because of functions such as data consistency, failure recovery and concurrency control, transactions are considered appropriate building blocks for structuring reliable systems. Despite these advantages, developing transaction support for dynamic environments is not an easy task. The first obstacle is the dynamism itself: resource availability can vary unexpectedly, which can cause high transaction abort rates and significant delays in transactional tasks. The second obstacle is the increasing flexibilization of the transaction concept: the transactional requirements demanded by current applications are becoming more diversified, going beyond the traditionally defined transactional properties. In this context, this thesis addresses the practicability of open transaction services, that is, services whose structure and behavior can be configured by application programmers as a means of meeting the specific requirements of their application domains. As part of this study, a model is proposed that abstracts architectural elements such as jumpers, slots and demultiplexers, which can be used to specify configuration points in transaction services; the model is implemented as a layer on top of an existing component model, so that transaction service developers can rely on these open elements in addition to those provided by traditional component-based approaches. To confirm its benefits in usability, flexibility and extensibility, this thesis presents two open transaction services that were specified with the proposed model: the first is part of an adaptable transaction platform for mobile computing environments; the second is part of a system that provides dynamic adaptation of transaction commit protocols. According to the tests performed, the approach presented in this thesis gives these services the capacity to meet the requirements of applications from different domains.
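A minimal sketch of the configuration-point idea (hypothetical names throughout): the transaction service exposes a slot where the application plugs a commit protocol, and a demultiplexer routes each commit to a protocol suited to the situation:

    class Participant:
        def prepare(self):
            return True                     # vote yes
        def do_commit(self):
            print("committed")
        def abort(self):
            print("aborted")

    class TwoPhaseCommit:
        def commit(self, participants):
            if all(p.prepare() for p in participants):   # phase 1: voting
                for p in participants:
                    p.do_commit()                        # phase 2: commit
                return True
            for p in participants:
                p.abort()
            return False

    class OnePhaseCommit:
        def commit(self, participants):
            participants[0].do_commit()     # single participant: no voting
            return True

    class TransactionService:
        """The commit protocol is a pluggable 'slot'; the demultiplexer
        below routes each commit to a protocol fitting the situation."""
        def __init__(self, protocol=None):
            self.protocol_slot = protocol or TwoPhaseCommit()

        def commit(self, participants):
            if len(participants) == 1:      # demultiplex: cheap case
                return OnePhaseCommit().commit(participants)
            return self.protocol_slot.commit(participants)

    service = TransactionService()
    service.commit([Participant(), Participant()])   # runs two-phase commit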
Doctorate
Distributed Systems
Doctor of Computer Science
Gli stili APA, Harvard, Vancouver, ISO e altri
41

Pant, Shristi. "A SECURE ONLINE PAYMENT SYSTEM". UKnowledge, 2011. http://uknowledge.uky.edu/cs_etds/1.

Testo completo
Abstract (sommario):
An online payment system allows a customer to make a payment to an online merchant or a service provider. Payment gateways, a channel between customers and payment processors, use various security tools to secure a customer's payment information, usually debit or credit card information, during an online payment. However, the security provided by a payment gateway cannot completely protect a customer's payment information when a merchant also has the ability to obtain the payment information in some form. Furthermore, not all merchants provide a secure payment environment to their customers, and not all adhere to a standard payment policy even when they have one. Consequently, a customer's payment information is exposed to the risk of being compromised or misused by merchants, or stolen by hackers and spammers. In this thesis we propose a new approach to payment systems in which a customer's payment information cannot be obtained by a merchant: a customer sends his payment information directly to a payment gateway, and the payment gateway, upon verifying the transaction, sends the payment to the appropriate merchant. We use the Pedersen commitment scheme along with dual signatures to securely transfer funds to a merchant and protect a customer's payment information from any Internet vulnerabilities.
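The commitment primitive mentioned above can be illustrated with a toy Pedersen commitment (tiny parameters chosen for readability; real deployments use large primes and an h whose discrete logarithm base g is unknown, and the thesis combines this with dual signatures, which the sketch omits):

    import secrets

    # Tiny demo group: q = 11 divides p - 1 = 22; g = 2 and h = 3 both
    # generate the order-11 subgroup of Z_23*.
    p, q, g, h = 23, 11, 2, 3

    def commit(m, r):
        # C = g^m * h^r mod p: r hides m (hiding); opening C to a
        # different m would require solving a discrete log (binding).
        return (pow(g, m, p) * pow(h, r, p)) % p

    def verify(c, m, r):
        return c == commit(m, r)

    amount = 7                       # payment value kept hidden in transit
    r = secrets.randbelow(q)         # blinding factor
    c = commit(amount, r)            # what the gateway would receive
    print(verify(c, amount, r))      # True: correct opening
    print(verify(c, amount + 1, r))  # False: cannot open to another value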
Gli stili APA, Harvard, Vancouver, ISO e altri
42

Blackshaw, Bruce Philip. "Migration of legacy OLTP architectures to distributed systems". Thesis, Queensland University of Technology, 1997. https://eprints.qut.edu.au/36839/1/36839_Blackshaw_1997.pdf.

Testo completo
Abstract (sommario):
Mincom, a successful Australian software company, markets an enterprise product known as the Mincom Information Management System, or MIMS. MIMS is an integrated suite of modules covering materials, maintenance, financials, and human resources management. MIMS is an on-line transaction processing (OLTP) system, meaning it has special requirements in the areas of performance and scalability. MIMS consists of approximately 16 000 000 lines of code, most of which is written in COBOL. Its basic architecture is 3-tier client/server, utilising a database layer, application logic layer, and a Graphical User Interface (GUI). While this architecture has proved successful, Mincom is looking to gradually evolve MIMS into a distributed architecture. CORBA is the target distributed framework. The development of an enterprise distributed system is fraught with difficulties. Key technical problems are not yet solved, and Mincom cannot afford the risk and cost involved in rewriting MIMS completely. The only viable approach is to gradually evolve MIMS into the desired architecture using a hybrid system that allows clients to access existing and new functionality. This thesis addresses the design and development of distributed systems, and the evolution of existing legacy systems into this architecture. It details the current MIMS architecture, and explains some of its shortcomings. The desirable characteristics of a new system based on a distributed architecture such as CORBA are outlined. A case is established for a gradual migration of the current system via a hybrid system rather than a complete rewrite. Two experimental systems designed to investigate the proposed new architecture are discussed. The conclusion reached from the first, known as Genesis, is that the maturity of CORBA for enterprise development is not sufficient: 12-18 months are estimated to be required for the appropriate level of maturity to be reached. The second system, EGEN, demonstrates how workflow can be integrated into a distributed system. An event-based workflow architecture is demonstrated, and it is explained how a workflow event server can be used to provide workflow services across a hybrid system. EGEN also demonstrates how a middleware gateway can be used to allow CORBA clients access to the functionality of the existing MIMS system. Finally, a proposed migration strategy for moving MIMS to a distributed architecture based on CORBA is outlined. While developed specifically for MIMS, this strategy is broadly applicable to the migration of any large 3-tier client/server system to a distributed architecture.
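The hybrid-system idea can be sketched as a gateway facade (hypothetical names and transaction codes; not Mincom's actual design): new clients call the gateway, which delegates to the legacy transaction layer until each function is rewritten natively:

    class LegacyMIMS:
        """Stand-in for the existing COBOL transaction layer."""
        def call(self, txn_code, payload):
            return {"txn": txn_code, "status": "OK", "echo": payload}

    class WorkOrderGateway:
        """Facade for new clients; delegates until functions go native."""
        def __init__(self, legacy):
            self.legacy = legacy

        def create_work_order(self, asset_id):
            # Not yet migrated: forward to the legacy system.
            return self.legacy.call("WO-CREATE", {"asset": asset_id})

        def close_work_order(self, wo_id):
            # Already rewritten natively in the new architecture.
            return {"wo": wo_id, "status": "CLOSED"}

    gateway = WorkOrderGateway(LegacyMIMS())
    print(gateway.create_work_order("PUMP-17"))   # served by legacy code
    print(gateway.close_work_order(42))           # served by new code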
Gli stili APA, Harvard, Vancouver, ISO e altri
43

Viana, Giovanni Bogéa 1981. "Agentes no gerenciamento de transações moveis". [s.n.], 2006. http://repositorio.unicamp.br/jspui/handle/REPOSIP/276294.

Testo completo
Abstract (sommario):
Advisor: Maria Beatriz Felgar de Toledo
Dissertation (master's) - Universidade Estadual de Campinas, Instituto de Computação
Made available in DSpace on 2018-08-07T04:30:16Z (GMT). No. of bitstreams: 1; Viana_GiovanniBogea_M.pdf: 694479 bytes, checksum: 8f1fc7bfaffc85403fba40175dcebd16 (MD5). Previous issue date: 2006.
Abstract: This dissertation presents a transaction model for mobile computing environments that takes their dynamism and interactivity into account. To deal with dynamism, applications, transaction managers and participating objects are executed as agents that can move as commanded by the application. The responsibilities for adapting to the dynamism of the environment are divided between the applications and the underlying system: the system monitors the environment and sends notifications to applications about variations in it, while the applications decide on the policies for adapting to changes. To deal with interactivity, the operations of a transaction may be submitted step by step, and the user can adopt the most suitable strategies according to application requirements and changes in the environment.
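A minimal sketch of the monitoring/policy split described above (hypothetical API): the system publishes environment notifications, and each application agent applies its own adaptation policy:

    class Environment:
        """System side: monitors conditions and notifies subscribers."""
        def __init__(self):
            self.subscribers = []
        def subscribe(self, agent):
            self.subscribers.append(agent)
        def report(self, bandwidth_kbps):
            for agent in self.subscribers:
                agent.notify(bandwidth_kbps)

    class TransactionAgent:
        """Application side: owns the adaptation policy."""
        def __init__(self):
            self.mode = "interactive"       # submit operations step by step
        def notify(self, bandwidth_kbps):
            # Policy chosen by the application, not by the system:
            self.mode = "batched" if bandwidth_kbps < 64 else "interactive"

    env, agent = Environment(), TransactionAgent()
    env.subscribe(agent)
    env.report(32)                          # weak link detected
    print(agent.mode)                       # batched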
Master's
Master of Computer Science
Gli stili APA, Harvard, Vancouver, ISO e altri
44

Schricker, Marc. "Extract of reasons which could determine the decision to change from an EDI to a XML transaction processing system". Thesis, University of Skövde, School of Humanities and Informatics, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-932.

Testo completo
Abstract (sommario):

EDI and XML are the basic communication standards used in B2B e-commerce, and they are of special interest for transaction processing systems. From the specific attributes that EDI and XML provide, some common and some clearly different features can be derived. In addition, considerations from the business organisation domain introduce further issues that determine the use of EDI or XML.

In this study the particular interest is in finding a set of such reasons, rather than looking solely at the performance of the two techniques. The study surveys the impact of business processes, the observed business environment settings, and the choice of standards, with special reference to the area under scrutiny.

Gli stili APA, Harvard, Vancouver, ISO e altri
45

Almeida, Fábio Renato de [UNESP]. "Gerenciamento de transação e mecanismo de serialização baseado em Snapshot". Universidade Estadual Paulista (UNESP), 2014. http://hdl.handle.net/11449/122161.

Testo completo
Abstract (sommario):
Made available in DSpace on 2015-04-09T12:28:25Z (GMT). No. of bitstreams: 0. Previous issue date: 2014-02-28. Bitstream added on 2015-04-09T12:47:36Z: No. of bitstreams: 1; 000811822.pdf: 1282272 bytes, checksum: ffbcb6d3dc96adfefe2d6b8418c1e323 (MD5).
Among the various isolation levels under which a transaction can execute, Snapshot stands out for working on an isolated view of the database. A transaction under Snapshot isolation never blocks and is never blocked when it requests a read operation, thus allowing a higher level of concurrency than an execution under a lock-based isolation. However, Snapshot is not immune to all the problems that arise from concurrency, and therefore offers no serializability guarantee. Two strategies are commonly employed to obtain such a guarantee. In the first, Snapshot itself is used, but a strategic change in the application and the database, or even the addition of an extra software component, is employed to obtain only serializable histories. The other strategy, explored in recent years, has been the design of algorithms based on the Snapshot protocol but adapted to prevent its anomalies and therefore ensure serializability. The first strategy has the advantage of exploiting the benefits of Snapshot, especially monitoring only the elements written by the transaction; however, part of the responsibility for dealing with concurrency issues is transferred from the Database Management System (DBMS) to the application. In turn, the second strategy leaves the DBMS solely responsible for concurrency control, but the algorithms presented so far in this category also require monitoring the elements the transaction reads. In this work we develop a technique in which the benefits of Snapshot are retained and the serializability guarantee is achieved without adapting application code or introducing an extra software layer. The proposed technique is ...
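For concreteness, a minimal snapshot-isolation engine with first-committer-wins write-set checking looks roughly like the sketch below (illustrative only; note that plain SI still allows anomalies such as write skew, which is exactly why serializability mechanisms like the one studied in this thesis are needed on top):

    class SnapshotDB:
        def __init__(self):
            self.versions = {}     # key -> list of (commit_ts, value)
            self.ts = 0

        def begin(self):
            return {"start": self.ts, "writes": {}}

        def read(self, txn, key):
            if key in txn["writes"]:
                return txn["writes"][key]
            # Latest version visible at the transaction's snapshot;
            # readers never block.
            for cts, val in reversed(self.versions.get(key, [])):
                if cts <= txn["start"]:
                    return val
            return None

        def write(self, txn, key, value):
            txn["writes"][key] = value

        def commit(self, txn):
            # First-committer-wins: abort if any written key has a
            # version committed after this transaction's snapshot.
            for key in txn["writes"]:
                for cts, _ in self.versions.get(key, []):
                    if cts > txn["start"]:
                        return False
            self.ts += 1
            for key, val in txn["writes"].items():
                self.versions.setdefault(key, []).append((self.ts, val))
            return True

    db = SnapshotDB()
    t1, t2 = db.begin(), db.begin()
    db.write(t1, "x", 1)
    db.write(t2, "x", 2)
    print(db.commit(t1))   # True
    print(db.commit(t2))   # False: write-write conflict under SI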
Gli stili APA, Harvard, Vancouver, ISO e altri
46

Almeida, Fábio Renato de. "Gerenciamento de transação e mecanismo de serialização baseado em Snapshot /". São José do Rio Preto, 2014. http://hdl.handle.net/11449/122161.

Testo completo
Abstract (sommario):
Advisor: Carlos Roberto Valêncio
Committee: Elaine Parros Machado de Sousa
Committee: Rogéria Cristiane Gratão de Souza
Abstract: Among the various isolation levels under which a transaction can execute, Snapshot stands out for working on an isolated view of the database. A transaction under Snapshot isolation never blocks and is never blocked when it requests a read operation, thus allowing a higher level of concurrency than an execution under a lock-based isolation. However, Snapshot is not immune to all the problems that arise from concurrency, and therefore offers no serializability guarantee. Two strategies are commonly employed to obtain such a guarantee. In the first, Snapshot itself is used, but a strategic change in the application and the database, or even the addition of an extra software component, is employed to obtain only serializable histories. The other strategy, explored in recent years, has been the design of algorithms based on the Snapshot protocol but adapted to prevent its anomalies and therefore ensure serializability. The first strategy has the advantage of exploiting the benefits of Snapshot, especially monitoring only the elements written by the transaction; however, part of the responsibility for dealing with concurrency issues is transferred from the Database Management System (DBMS) to the application. In turn, the second strategy leaves the DBMS solely responsible for concurrency control, but the algorithms presented so far in this category also require monitoring the elements the transaction reads. In this work we develop a technique in which the benefits of Snapshot are retained and the serializability guarantee is achieved without adapting application code or introducing an extra software layer. The proposed technique is ...
Master's
Gli stili APA, Harvard, Vancouver, ISO e altri
47

Hamilton, Howard Gregory. "An Examination of Service Level Agreement Attributes that Influence Cloud Computing Adoption". NSUWorks, 2015. http://nsuworks.nova.edu/gscis_etd/53.

Testo completo
Abstract (sommario):
Cloud computing is perceived as the technological innovation that will transform future investments in information technology. As cloud services become more ubiquitous, public and private enterprises still grapple with concerns about cloud computing. One such concern is about service level agreements (SLAs) and their appropriateness. While the benefits of using cloud services are well defined, the debate about the challenges that may inhibit the seamless adoption of these services still continues. SLAs are seen as an instrument to help foster adoption. However, cloud computing SLAs are alleged to be ineffective, meaningless, and costly to administer, which could impact the widespread acceptance of cloud computing. This research was based on transaction cost economics theory, with a focus on uncertainty, asset specificity and transaction cost. SLA uncertainty and SLA asset specificity were introduced by this research and used to determine the technical and non-technical attributes of cloud computing SLAs. A conceptual model built on the concept of transaction cost economics was used to highlight the theoretical framework for this research. This study applied a mixed-methods sequential exploratory research design to determine the SLA attributes that influence the adoption of cloud computing. The research was conducted in two phases. First, interviews with 10 cloud computing experts were conducted to identify and confirm key SLA attributes; these attributes were then used as the main thematic areas for this study. In the second phase, the output from phase one was used as input to the development of an instrument which was administered to 97 businesses to determine their perspectives on the cloud computing SLA attributes identified in the first phase. Partial least squares structural equation modelling was used to test the statistical significance of the hypotheses and to validate the theoretical basis of this study. Qualitative and quantitative analyses were performed on the data to establish a set of attributes considered SLA imperatives for cloud computing adoption.
Gli stili APA, Harvard, Vancouver, ISO e altri
48

Chandler, Shawn Aaron. "Global Time-Independent Agent-Based Simulation for Transactive Energy System Dispatch and Schedule Forecasting". PDXScholar, 2015. https://pdxscholar.library.pdx.edu/open_access_etds/2212.

Testo completo
Abstract (sommario):
Electricity service providers (ESP) worldwide have increased their interest in the use of electrical distribution, transmission, generation, storage, and responsive load resources as integrated systems, commonly referred to as the "smart grid." Their interest is driven by widespread goals to improve the operations, management and control of large-scale power systems. In this thesis I present research into a novel agent-based simulation (ABS) approach for exploring smart grid system (SGS) dispatch, schedule forecasting and resource coordination. I model an electrical grid and its assets as an adaptive ABS, assigning an agent construct to every SGS resource, including demand response, energy storage, and distributed generation assets. Importantly, real time is represented as an environment variable within the simulation, such that each resource is characterized temporally by multiple agents that reside in different times; the simulation contains at least as many agents per resource as there are time intervals being investigated. These agents may communicate with each other during the simulation, but only agents assigned to represent the same unique resource may exchange information between time periods. Thus, confined within each time interval, each resource agent may also interact with other resource agents. As with any agent-based model, the agents may also interact with the environment, which in this case contains forecasted environment, load and price information specific to each time interval. The resulting model is a time-independent global approach capable of: (1) capturing time-variant local grid conditions and distribution grid load-balancing constraints; (2) capturing time-variant resource availability and price constraints; and (3) simulating efficient unit-commitment real-time dispatches and schedule forecasts considering time-variant forecasted transactive market prices. This thesis details the need for such a system, discusses the form of the ABS, and analyzes the predictive behavior of the model through a critical lens by applying the resulting proof-of-concept simulation to a set of comprehensive validation scenarios. The analysis demonstrates ABS to be an effective tool for real-time dispatch and SGS schedule forecasting as applied to research, short-term economic operations planning and transactive systems alike. The model is shown to converge on economic opportunities regardless of the price or load-forecast shape and to correctly perform least-cost dispatch and schedule-forecasting functionality.
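A toy rendering of the time-independent agent layout (hypothetical numbers and a deliberately naive policy): a storage resource is represented by one agent per interval, and agents of the same resource pass state, here the charge level, between time periods, as the abstract describes:

    prices = [20, 60, 30, 80]          # forecasted price per interval

    class StorageIntervalAgent:
        """One agent per (resource, time interval) pair."""
        def __init__(self, t, price):
            self.t, self.price = t, price

        def decide(self, charge):
            # Greedy toy policy: buy cheap energy, sell expensive energy.
            if self.price >= 50 and charge > 0:
                return charge - 1, "discharge"
            if self.price < 30 and charge < 2:
                return charge + 1, "charge"
            return charge, "idle"

    agents = [StorageIntervalAgent(t, p) for t, p in enumerate(prices)]
    charge, plan = 1, []
    for agent in agents:               # same-resource agents exchange state
        charge, action = agent.decide(charge)
        plan.append((agent.t, action))
    print(plan)   # [(0, 'charge'), (1, 'discharge'), (2, 'idle'), (3, 'discharge')]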
Gli stili APA, Harvard, Vancouver, ISO e altri
49

Leung, Philip, e Daniel Svensson. "SecuRES: Secure Resource Sharing System : AN INVESTIGATION INTO USE OF PUBLIC LEDGER TECHNOLOGY TO CREATE DECENTRALIZED DIGITAL RESOURCE-SHARING SYSTEMS". Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-187348.

Testo completo
Abstract (sommario):
The project aims to solve the problems of non-repudiation, integrity and confidentiality of data when digitally exchanging sensitive resources between parties that need to be able to trust each other without the need for a trusted third party. This is done within the framework of answering to what extent digital resources can be shared securely in a decentralized public-ledger-based system compared to trust-based alternatives. A survey of existing resource-sharing solutions shows an abundance of third-party trust-based systems, but also an interest in public ledger solutions such as the Storj network, which uses this technology but focuses on storage rather than sharing. The proposed solution, called SecuRES, is a communication protocol based on public ledger technology which acts similarly to Bitcoin. A prototype based on the protocol has been implemented which demonstrates the ability to share encrypted files with one or several recipients through a decentralized public-ledger-based network. It was concluded that the SecuRES solution could do away with the requirement of trust in third parties for all but some optional operations using external authentication services, while maintaining data integrity to a degree similar to or greater than trust-based solutions, and offering the additional benefits of non-repudiation, high confidentiality and high transparency, since the source code and protocol documentation can be made openly available without endangering the system. Further research is needed to investigate whether the system can scale up for widespread adoption while maintaining security and reasonable performance requirements.
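The integrity property that a public ledger provides can be sketched with a toy hash chain (illustration only; SecuRES additionally needs digital signatures for non-repudiation and encryption for confidentiality, which are omitted here):

    import hashlib, json

    def block_hash(block):
        return hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()).hexdigest()

    class Ledger:
        def __init__(self):
            self.chain = [{"prev": "0" * 64, "record": "genesis"}]

        def append(self, record):
            self.chain.append({"prev": block_hash(self.chain[-1]),
                               "record": record})

        def valid(self):
            # Each block pins its predecessor's hash, so editing any
            # earlier record breaks every link after it.
            return all(b["prev"] == block_hash(a)
                       for a, b in zip(self.chain, self.chain[1:]))

    ledger = Ledger()
    ledger.append({"share": "report.pdf", "to": "alice"})
    ledger.append({"share": "data.csv", "to": "bob"})
    print(ledger.valid())                     # True
    ledger.chain[1]["record"] = {"share": "report.pdf", "to": "mallory"}
    print(ledger.valid())                     # False: tampering detected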
Gli stili APA, Harvard, Vancouver, ISO e altri
50

Kendric, Hood A. "Improving Cryptocurrency Blockchain Security and Availability: Adaptive Security and Partitioning". Kent State University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=kent1595038779436782.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri