Dissertations / Theses on the topic 'Transaction systems (Computer systems)'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Transaction systems (Computer systems).'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Yoo, Richard M. "Adaptive transaction scheduling for transactional memory systems." Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/22587.
Committee Chair: Lee, Hsien-Hsin; Committee Member: Blough, Douglas; Committee Member: Yalamanchili, Sudhakar.
Prabhu, Nitin Kumar Vijay. "Transaction processing in Mobile Database System." Diss., UMK access, 2006.
"A dissertation in computer science and informatics and telecommunications and computer networking." Advisor: Vijay Kumar. Typescript. Vita. Title from catalog record of the print edition; description based on contents viewed Nov. 9, 2007. Includes bibliographical references (leaves 152-157). Online version of the print edition.
Chen, Jianwen, University of Western Sydney, of Science Technology and Environment College, and School of Computing and Information Technology. "Data and knowledge transaction in mobile environments." THESIS_CSTE_CIT_Chen_J.xml, 2004. http://handle.uws.edu.au:8081/1959.7/806.
Doctor of Philosophy (PhD) (Science)
Xia, Yu S. M. Massachusetts Institute of Technology. "Logical timestamps in distributed transaction processing systems." Thesis, Massachusetts Institute of Technology, 2018. https://hdl.handle.net/1721.1/122877.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 73-79).
Distributed transactions are transactions that access data on remote servers. They typically suffer from high network latency (compared with local processing overhead) during operations on remote data servers, which lengthens the total transaction execution time. This increases the probability of conflicting with other transactions, causing high abort rates and, in turn, poor performance. In this work, we constructed Sundial, a distributed concurrency control algorithm that applies logical timestamps seamlessly with a cache protocol, and works in a hybrid fashion where an optimistic approach is combined with lock-based schemes. Sundial tackles the inefficiency problem in two ways. First, Sundial decides the order of transactions on the fly: transactions get their commit timestamps according to their data access traces. Each data item in the database has logical leases maintained by the system, and a lease corresponds to a version of the item. At any logical time point, only a single transaction holds the lease for any particular data item, so lease holders need not worry about concurrent writers: in the logical timeline, a writer must acquire a new lease that is disjoint from the holder's. This lease information is used to calculate the logical commit time for transactions. Second, Sundial has a novel caching scheme that works together with logical leases. The scheme allows a local data server to automatically cache data from remote servers while preserving data coherence. We benchmarked Sundial along with state-of-the-art distributed transactional concurrency control protocols. On YCSB, Sundial outperforms the second-best protocol by 57% under high data access contention. On TPC-C, Sundial has a 34% improvement over the state-of-the-art candidate. Our caching scheme has performance gains comparable with hand-optimized data replication; with high access skew, it speeds up the workload by up to 4.6x.
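The logical-lease idea in the abstract above can be sketched in a few lines. This is an illustrative toy, not Sundial's actual implementation: the class names, fields, and single-node setting are simplifications. Each tuple carries a lease [wts, rts]; a read raises the transaction's commit timestamp to at least the version's write time, and a write pushes it past the current read lease, so the commit timestamp is computed from the access trace rather than assigned up front.

```python
# Toy sketch of logical leases (illustrative names, not Sundial's code).
class Tuple:
    def __init__(self, value):
        self.value = value
        self.wts = 0   # logical time the current version was written
        self.rts = 0   # logical time up to which this version is leased for reads

class Transaction:
    def __init__(self, db):
        self.db = db              # db: dict mapping key -> Tuple
        self.commit_ts = 0        # computed on the fly from the access trace
        self.read_set = {}        # key -> wts observed when we read it
        self.write_set = {}       # key -> new value

    def read(self, key):
        t = self.db[key]
        self.read_set[key] = t.wts
        # reading a version is only legal at or after its write time
        self.commit_ts = max(self.commit_ts, t.wts)
        return t.value

    def write(self, key, value):
        t = self.db[key]
        # a new version's lease must begin after the current read lease ends
        self.commit_ts = max(self.commit_ts, t.rts + 1)
        self.write_set[key] = value

    def commit(self):
        # validate reads: the version we saw must still be current,
        # and its read lease is extended to cover our commit timestamp
        for key, wts in self.read_set.items():
            t = self.db[key]
            if t.wts != wts:
                return False      # version overwritten concurrently -> abort
            t.rts = max(t.rts, self.commit_ts)
        # install writes as new versions leased exactly at commit_ts
        for key, value in self.write_set.items():
            t = self.db[key]
            t.value = value
            t.wts = t.rts = self.commit_ts
        return True
```

A transaction that reads `x` and writes `y` therefore commits at a logical time inside `x`'s lease and after `y`'s old lease, without blocking other lease holders.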
"This work was supported (in part) by the U.S. National Science Foundation (CCF-1438955)"
by Yu Xia.
S.M.
S.M. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
Tang, Rong. "Transaction management in peer-to-peer multidatabase systems." Thesis, University of Ottawa (Canada), 2005. http://hdl.handle.net/10393/27055.
Gao, Shen. "Transaction logging and recovery on phase-change memory." HKBU Institutional Repository, 2013. http://repository.hkbu.edu.hk/etd_ra/1549.
Chen, Jianwen. "Data and knowledge transaction in mobile environments." Thesis, View thesis, 2004. http://handle.uws.edu.au:8081/1959.7/806.
Sleat, Philip M. "A static, transaction based design methodology for hard real-time systems." Thesis, City, University of London, 1991. http://openaccess.city.ac.uk/17414/.
Dwyer, Barry. "Automatic design of batch processing systems." Title page, abstract, table of contents and introduction only, 1999. http://web4.library.adelaide.edu.au/theses/09PH/09phd993.pdf.
Gin, Andrew. "Building a Secure Short Duration Transaction Network." Thesis, University of Canterbury. Computer Science and Software Engineering, 2007. http://hdl.handle.net/10092/1188.
Momin, Kaleem A. "A transaction execution model for mobile computing environments." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0017/MQ54939.pdf.
Bushager, Aisha Fouad. "Smart card systems : managing risks and modelling security protocols using SystemC and Transaction Level Modelling." Thesis, University of Southampton, 2011. https://eprints.soton.ac.uk/300444/.
Chen, Jianwen. "Data and knowledge transaction in mobile environments." View thesis, 2004. http://library.uws.edu.au/adt-NUWS/public/adt-NUWS20050729.131812/index.html.
A thesis submitted in fulfillment of the requirements for the degree of Doctor of Philosophy (Science) - Computing and Information Technology. Includes bibliography.
Madiraju, Sugandhi. "An Agent Based Transaction Manager for Multidatabase Systems." Digital Archive @ GSU, 2006. http://digitalarchive.gsu.edu/cs_theses/36.
Chan, Yew Meng. "Processing mobile read-only transactions in broadcast environments with group consistency." Access full text, abstract and table of contents, 2005. http://libweb.cityu.edu.hk/cgi-bin/ezdb/thesis.pl?mphil-cs-b19887504a.pdf.
"Submitted to Department of Computer Science in partial fulfillment of the requirements for the degree of Master of Philosophy." Includes bibliographical references (leaves 98-102).
Chan, Kinson, and 陳傑信. "Distributed software transactional memory with clock validation on clusters." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2013. http://hub.hku.hk/bib/B5053404X.
published_or_final_version
Computer Science
Doctoral
Doctor of Philosophy
King, Stuart. "Optimizations and applications of Trie-Tree based frequent pattern mining." Diss., Connect to online resource - MSU authorized users, 2006.
Title from PDF t.p. (viewed on June 19, 2009). Includes bibliographical references (p. 79-80). Also issued in print.
Mena, Eduardo, and Arantza Illarramendi. "Ontology-based query processing for global information systems." Boston [u.a.] : Kluwer Acad. Publ, 2001. http://www.loc.gov/catdir/enhancements/fy0813/2001029621-d.html.
Xing, Xuemin. "Ad-hoc recovery in workflow systems : formal model and a prototype system." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0035/MQ62442.pdf.
Wang, Mingzhong. "ARTS : agent-oriented robust transactional system." Connect to thesis, 2009. http://repository.unimelb.edu.au/10187/6778.
In this dissertation, we investigate and develop mechanisms to integrate intrinsic support for concurrency control, exception handling, recoverability, and robustness into multi-agent systems. The research covers agent specification, planning and scheduling, execution, and overall coordination, in order to reduce the impact of environmental uncertainty. Simulation results confirm that our model can improve the robustness and performance of the system, while relieving developers from dealing with the low-level complexity of exception handling.
A survey, along with a taxonomy, of existing proposals and approaches for building robust multi-agent systems is provided first. In addition, the merits and limitations of each category are highlighted.
Next, we introduce the ARTS (Agent-Oriented Robust Transactional System) platform, which allows agent developers to compose recursively-defined, atomically-handled tasks to specify scoped and hierarchically-organized exception-handling plans for a given goal. ARTS then supports automatic selection, execution, and monitoring of appropriate plans in a systematic way, for both normal and recovery executions. Moreover, we propose multiple-step backtracking, which extends the existing step-by-step plan reversal, to serve as the default exception handling and recovery mechanism in ARTS. This mechanism utilizes previous planning results in determining the response to a failure, and allows a substitute path to start before, or in parallel with, the compensation process, thus allowing an agent to achieve its goals more directly and efficiently. ARTS helps developers to focus on high-level business logic and frees them from the low-level complexity of exception management.
One of the reasons for the occurrence of exceptions in a multi-agent system is that agents are unable to adhere to their commitments. We propose two scheduling algorithms for minimising such exceptions when commitments are unreliable. The first scheduling algorithm is trust-based scheduling, which incorporates the concept of trust, that is, the probability that an agent will comply with its commitments, along with the constraints of system budget and deadline, to improve the predictability and stability of the schedule. Trust-based scheduling supports the runtime adaptation and evolvement of the schedule by interleaving the processes of evaluation, scheduling, execution, and monitoring in the life cycle of a plan. The second scheduling algorithm is commitment-based scheduling, which focuses on the interaction and coordination protocol among agents, and augments agents with the ability to reason about and manipulate their commitments. Commitment-based scheduling supports the refactoring and parallel execution of commitments to maximize the system's overall robustness and performance. While the first scheduling algorithm needs to be performed by a central coordinator, the second algorithm is designed to be distributed and embedded into the individual agent.
Finally, we discuss the integration of our approaches into Internet-based applications, to build flexible but robust systems. Specifically, we discuss the designs of an adaptive business process management system and of robust scientific workflow scheduling.
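The trust-based scheduling idea described above can be illustrated with a toy greedy assignment. This is a hypothetical sketch under stated assumptions, not the dissertation's algorithm: trust is modeled as a single probability per agent, and tasks are assigned sequentially to the most trusted agent that still fits the budget and deadline constraints.

```python
# Illustrative trust-based task assignment (names and data model are assumptions).
def trust_based_schedule(tasks, agents, budget, deadline):
    """tasks: list of task names.
    agents: dict of agent name -> (trust, cost, duration), where trust is the
    estimated probability the agent honours its commitment.
    Returns task -> agent, or None if some task cannot be placed."""
    schedule, spent, elapsed = {}, 0.0, 0.0
    for task in tasks:
        feasible = [
            (trust, cost, dur, name)
            for name, (trust, cost, dur) in agents.items()
            if spent + cost <= budget and elapsed + dur <= deadline
        ]
        if not feasible:
            return None              # constraints cannot be met for this task
        trust, cost, dur, name = max(feasible)   # highest trust wins
        schedule[task] = name
        spent += cost                # consume the system budget
        elapsed += dur               # tasks assumed to run sequentially
    return schedule
```

In the thesis's richer setting, evaluation, scheduling, execution, and monitoring are interleaved at runtime; this sketch only shows the core trade-off of trust against budget and deadline.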
Naik, Aniket Dilip. "Efficient Conditional Synchronization for Transactional Memory Based System." Thesis, Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/10517.
Reid, Elizabeth G. "Design and evaluation of a benchmark for main memory transaction processing systems." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/53162.
Includes bibliographical references (p. 63).
We designed a diverse collection of benchmarks for Main Memory Database Systems (MMDBs) to validate and compare entries in a programming contest. Each entrant to the contest programmed an indexing system optimized for multicore multithread execution. The contest framework provided an API for the contestants, and benchmarked their submissions. This thesis describes the test goals, the API, and the test environment. It documents the website used by the contestants, describes the general nature of the tests run on each submission, and summarizes the results for each submission that was able to complete the tests.
by Elizabeth G. Reid.
M.Eng.
Zhu, Huang. "A transaction model for environmental resource dependent Cyber-Physical Systems." Diss., Kansas State University, 2014. http://hdl.handle.net/2097/18122.
Department of Computing and Information Sciences
Gurdip Singh
Cyber-Physical Systems (CPSs) represent next-generation systems characterized by a strong coupling of computing, sensing, communication, and control technologies. They have the potential to transform our world with more intelligent and efficient systems, such as Smart Home, Intelligent Transportation System, Energy-Aware Building, Smart Power Grid, and Surgical Robot. A CPS is composed of a computational and a physical subsystem. The computational subsystem monitors, coordinates and controls operations of the physical subsystem to create desired physical effects, while the physical subsystem performs physical operations and gives feedback to the computational subsystem. This dissertation contributes to the research of CPSs by proposing a new transaction model for Environmental Resource Dependent Cyber-Physical Systems (ERDCPSs). The physical operations of this type of CPS rely on environmental resources, and such systems are commonly seen in areas such as transportation and manufacturing. For example, an autonomous car views road segments as resources to make movements and a warehouse robot views storage spaces as resources to fetch and place goods. The operating environment of such CPSs, the CPS Network, contains multiple CPS entities that share common environmental resources and interact with each other through usages of these resources. We model the physical operations of an ERDCPS as a set of transactions of different types that achieve different goals, and each transaction consists of a sequence of actions. A transaction or an action may require environmental resources for its operations, and the usage of an environmental resource is precise in both time and space. Moreover, a successful execution of a transaction or an action requires exclusive access to certain resources. Transactions from different CPS entities of a CPS Network constitute a schedule. Since environmental resources are shared, transactions in the schedule may have conflicts in using these resources.
A schedule must remain consistent to avoid unexpected consequences caused by resource usage conflicts between transactions. A two-phase commit algorithm is proposed to process transactions. In the pre-commit phase, a transaction is scheduled by reserving usage times of required resources, and potential conflicts are detected and resolved using different strategies, such as Win-Lose, Win-Win, and Transaction Preemption. Two general algorithms are presented to process transactions in the pre-commit phase for both centralized and distributed resource management environments. In the commit phase, a transaction is executed using the reserved resources. An exception occurs when the real-time resource usage differs from what has been predicted. By performing internal and external checks before a scheduled transaction is executed, exceptions can be detected and handled properly. A simulation platform (CPSNET) is developed to simulate the transaction model. The simulation platform simulates a CPS Network, where different CPS entities coordinate the resource usages of their transactions through a Communication Network. Depending on the resource management environment, a Resource Server may exist in the CPS Network to manage the resource usages of all CPS entities. The simulation platform is highly configurable, supporting configuration of the simulation environment, the CPS entities, and the two-phase commit algorithm. Moreover, various statistical information and operation logs are provided to monitor and evaluate the platform itself and the transaction model. Seven groups of simulation experiments were carried out to verify the simulation platform and the transaction model. Simulation results show that the platform is capable of simulating a large load of CPS entities and transactions, and that entities and components perform their functions correctly with respect to the processing of transactions.
The two-phase commit algorithm is evaluated, and the results show that, compared with traditional cases where no conflict resolving is applied or a conflicting transaction is directly aborted, the proposed conflict resolving strategies improve the schedule productivity by allowing more transactions to be executed and the scheduling throughput by maintaining a higher concurrency level.
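The pre-commit phase described above, where transactions reserve time-windows on shared resources, can be sketched as interval reservation with all-or-nothing semantics. This is a minimal illustrative sketch, not the dissertation's full protocol: class and method names are assumptions, and only the simplest Win-Lose strategy (the newcomer loses on conflict) is shown.

```python
# Toy pre-commit reservation for shared environmental resources.
class ResourceServer:
    def __init__(self):
        self.reservations = {}   # resource -> list of (start, end, txn_id)

    def _conflicts(self, resource, start, end):
        # two half-open intervals [start, end) overlap unless one ends
        # before the other begins
        return [r for r in self.reservations.get(resource, [])
                if not (end <= r[0] or start >= r[1])]

    def pre_commit(self, txn_id, requests):
        """requests: list of (resource, start, end) usage windows.
        Reserve all windows or none; on conflict the newcomer loses (Win-Lose)."""
        for resource, start, end in requests:
            if self._conflicts(resource, start, end):
                return False     # conflict detected; caller must resolve or abort
        for resource, start, end in requests:
            self.reservations.setdefault(resource, []).append((start, end, txn_id))
        return True
```

An autonomous car, for example, would pre-commit a usage window on each road segment along its route; the commit phase then executes against the reserved windows, with checks for deviations from the predicted usage.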
Chan, Kinson, and 陳傑信. "An adaptive software transactional memory support for multi-core programming." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2009. http://hub.hku.hk/bib/B43278759.
Chan, Kinson. "An adaptive software transactional memory support for multi-core programming." Click to view the E-thesis via HKUTO, 2009. http://sunzi.lib.hku.hk/hkuto/record/B43278759.
Avdic, Adnan, and Albin Ekholm. "Anomaly Detection in an e-Transaction System using Data Driven Machine Learning Models : An unsupervised learning approach in time-series data." Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-18421.
Pang, Gene. "Scalable Transactions for Scalable Distributed Database Systems." Thesis, University of California, Berkeley, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3733329.
With the advent of the Internet and Internet-connected devices, modern applications can experience very rapid growth of users from all parts of the world. A growing user base leads to greater usage and large data sizes, so scalable database systems capable of handling the great demands are critical for applications. With the emergence of cloud computing, a major movement in the industry, modern applications depend on distributed data stores for their scalable data management solutions. Many large-scale applications utilize NoSQL systems, such as distributed key-value stores, for their scalability and availability properties over traditional relational database systems. By simplifying the design and interface, NoSQL systems can provide high scalability and performance for large data sets and high volume workloads. However, to provide such benefits, NoSQL systems sacrifice traditional consistency models and support for transactions typically available in database systems. Without transaction semantics, it is harder for developers to reason about the correctness of the interactions with the data. Therefore, it is important to support transactions for distributed database systems without sacrificing scalability.
In this thesis, I present new techniques for scalable transactions for scalable database systems. Distributed data stores need scalable transactions to take advantage of cloud computing, and to meet the demands of modern applications. Traditional techniques for transactions may not be appropriate in a large, distributed environment, so in this thesis, I describe new techniques for distributed transactions, without having to sacrifice traditional semantics or scalability.
I discuss three facets to improving transaction scalability and support in distributed database systems. First, I describe a new transaction commit protocol that reduces the response times for distributed transactions. Second, I propose a new transaction programming model that allows developers to better deal with the unexpected behavior of distributed transactions. Lastly, I present a new scalable view maintenance algorithm for convergent join views. Together, the new techniques in this thesis contribute to providing scalable transactions for modern, distributed database systems.
Flodin, Anton. "Leerec : A scalable product recommendation engine suitable for transaction data." Thesis, Mittuniversitetet, Avdelningen för informationssystem och -teknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-33941.
Brodin, Anette. "Applying DB-transaction semantics to agent interactions." Thesis, University of Skövde, Department of Computer Science, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-724.
Both the artificial intelligence (AI) and database (DB) communities can benefit from incorporating features from each other's areas. From a DB view, handling the complexity of today's information systems can be aided by incorporating AI features, and database environments could gain in flexibility by entrusting some of their functionality to agent systems. From the AI view, agent systems could gain in robustness by being based on DB systems.
By applying database-transaction semantics to interactions between agents in a multiagent system, and analysing the consequences, this project endeavours to cross the borders of the two research areas. In the project, states where the drop-out of some agent is severe to the task fulfilment of the system have been identified, and examined after applying transaction semantics to the agent interactions. An existing system for multiagent applications, JADE, has been examined in order to investigate how problem situations are handled in practice. The results from this work show that it is feasible to view both types of systems as very similar, but modelled and viewed in different ways.
Helal, Mohammad Rahat. "Efficient Isolation Enabled Role-Based Access Control for Database Systems." University of Toledo / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1501627843916302.
Dixon, Eric Richard. "Developing distributed applications with distributed heterogenous databases." Thesis, Virginia Tech, 1993. http://hdl.handle.net/10919/42748.
Xie, Wanxia. "Supporting Distributed Transaction Processing Over Mobile and Heterogeneous Platforms." Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/14073.
Shang, Pengju. "Research in high performance and low power computer systems for data-intensive environment." Doctoral diss., University of Central Florida, 2011. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5033.
ID: 030423445; System requirements: World Wide Web browser and PDF reader.; Mode of access: World Wide Web.; Thesis (Ph.D.)--University of Central Florida, 2011.; Includes bibliographical references (p. 119-128).
Ph.D.
Doctorate
Electrical Engineering and Computer Science
Engineering and Computer Science
Computer Science
Yan, Cong S. M. Massachusetts Institute of Technology. "Exploiting fine-grain parallelism in transactional database systems." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/101592.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 47-50).
Current database engines designed for conventional multicore systems exploit a fraction of the parallelism available in transactional workloads. Specifically, database engines only exploit inter-transaction parallelism: they use speculation to concurrently execute multiple, potentially-conflicting database transactions while maintaining atomicity and isolation. However, they do not exploit intra-transaction parallelism: each transaction is executed sequentially on a single thread. While fine-grain intra-transaction parallelism is often abundant, it is too costly to exploit in conventional multicores. Software would need to implement fine-grain speculative execution and scheduling, introducing prohibitive overheads that would negate the benefits of additional intra-transaction parallelism. In this thesis, we leverage novel hardware support to design and implement a database engine that effectively exploits both inter- and intra-transaction parallelism. Specifically, we use Swarm, a new parallel architecture that exploits fine-grained and ordered parallelism. Swarm executes tasks speculatively and out of order, but commits them in order. Integrated hardware task queueing and speculation mechanisms allow Swarm to speculate thousands of tasks ahead of the earliest active task and reduce task management overheads. We modify Silo, a state-of-the-art in-memory database engine, to leverage Swarm's features. The resulting database engine, which we call SwarmDB, has several key benefits over Silo: it eliminates software concurrency control, reducing overheads; it efficiently executes tasks within a database transaction in parallel; it reduces conflicts; and it reduces the amount of work that needs to be discarded and re-executed on each conflict. We evaluate SwarmDB on simulated Swarm systems of up to 64 cores. At 64 cores, SwarmDB outperforms Silo by 6.7x on TPC-C and 6.9x on TPC-E, and achieves near-linear scalability.
by Cong Yan.
S.M.
Goehring, David (David G. ). "Decibel : transactional branched versioning for relational data systems." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/106022.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 151-159).
As scientific endeavors and data analysis become increasingly collaborative, there is a need for data management systems that natively support the versioning or branching of datasets to enable concurrent analysis, cleaning, integration, manipulation, or curation of data across teams of individuals. Common practice for sharing and collaborating on datasets involves creating or storing multiple copies of the dataset, one for each stage of analysis, with no provenance information tracking the relationships between these datasets. This results not only in wasted storage, but also makes it challenging to track and integrate modifications made by different users to the same dataset. Transaction management (ACID) for such systems requires additional tools to efficiently handle concurrent changes and ensure transactional consistency of the version graph (concurrent versioned commits, branches, and merges as well as changes to records). Furthermore, a new conflict model is required to describe how versioned operations can interfere with each other while still remaining serializable. Decibel is a new relational storage system with built-in version control and transaction management designed to address these shortcomings. Decibel's natural versioning primitives can also be leveraged to implement versioned transactions. A thorough evaluation of three versioned storage engine designs, focused on efficient query processing with minimal storage overhead and driven by the development of an exhaustive benchmark, suggests that Decibel vastly outperforms, and enables more cross-version analysis and functionality than, existing techniques and DVCS software like git. Read-only and historical cross-version query transactions are non-blocking and all proceed in parallel with minimal overhead. The benchmark also supports analyzing the performance of versioned databases with transactional support, and enables rigorous testing and evaluation of future versioned storage engine designs.
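The branched-versioning model described above can be illustrated with a toy version graph. This is a hypothetical sketch inspired by the abstract, not Decibel's storage engine: each commit records a snapshot of its parent's records plus the updates, and a branch is simply a named pointer to a head commit, so concurrent branches evolve independently from a shared ancestor.

```python
# Toy branched versioning over records (illustrative, not Decibel's design).
import itertools

class VersionedTable:
    _ids = itertools.count(1)

    def __init__(self):
        # commit id -> (parent commit id, records at that commit)
        self.commits = {0: (None, {})}
        self.branches = {'master': 0}      # branch name -> head commit id

    def snapshot(self, branch):
        """Records visible at the head of a branch."""
        return dict(self.commits[self.branches[branch]][1])

    def commit(self, branch, updates):
        """Apply updates on top of the branch head as a new commit."""
        parent = self.branches[branch]
        records = self.snapshot(branch)
        records.update(updates)
        cid = next(self._ids)
        self.commits[cid] = (parent, records)
        self.branches[branch] = cid
        return cid

    def branch(self, name, from_branch):
        """Fork a new branch sharing the source branch's head commit."""
        self.branches[name] = self.branches[from_branch]
```

Because each commit keeps a full snapshot, this toy trades storage for simplicity; the storage engine designs evaluated in the thesis are precisely about avoiding that copy-per-version cost while keeping cross-version queries fast.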
by David Goehring.
M. Eng.
Yahalom, Raphael. "Managing the order of transactions in widely-distributed data systems." Thesis, University of Cambridge, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.359877.
Åberg, Ludvig. "Parallelism within queue application." Thesis, Mittuniversitetet, Avdelningen för informationssystem och -teknologi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-31575.
Zhao, Haiquan. "Measurement and resource allocation problems in data streaming systems." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/34785.
Wu, Jiang. "CHECKPOINTING AND RECOVERY IN DISTRIBUTED AND DATABASE SYSTEMS." UKnowledge, 2011. http://uknowledge.uky.edu/cs_etds/2.
Rocha, Tarcisio da. "Serviços de transação abertos para ambientes dinamicos [Open transaction services for dynamic environments]." [s.n.], 2008. http://repositorio.unicamp.br/jspui/handle/REPOSIP/276015.
Thesis (doctorate) - Universidade Estadual de Campinas, Instituto de Computação
Made available in DSpace on 2018-08-13 (GMT). Previous issue date: 2008.
Resumo: Transaction processing techniques have been of great importance for preserving correctness in several areas of computing. Because of functions such as guaranteeing data consistency, failure recovery, and concurrency control, transactions are considered appropriate building blocks for structuring reliable systems. However, developing transaction support for dynamic environments can be a complex task. The first obstacle is the dynamism itself: resource availability can vary unexpectedly, which can have two direct effects: high transaction abort rates and long delays in the execution of transactional tasks. The second obstacle is the increasing flexibilization of the transaction concept, since the transactional requirements demanded by current applications are becoming more varied, going beyond the properties traditionally defined for a transaction. In this context, this thesis addresses the viability of open transaction services, that is, services whose structure and behavior can be configured by application programmers as a means of meeting requirements specific to their application domains. As part of this study, a model was proposed that abstracts architectural elements such as jumpers, slots, and demultiplexers, which can be used to specify configuration points in transaction services. This model is implemented as a layer on top of an existing component model, so developers of transaction services can rely on these open elements in addition to those made available by traditional component-based approaches. To confirm the benefits in usability, flexibility, and extensibility, this thesis presents two open transaction services that were specified based on the proposed model.
The first service is part of an adaptable transaction platform for mobile computing environments. The second service is part of a system that provides dynamic adaptation of transaction commit protocols. According to the tests performed, the approach presented in this thesis gave these services the capacity to meet the requirements of applications from different domains.
Abstract: Transaction processing techniques are considered important solutions for preserving correctness in several fields of computing. Due to functions such as failure recovery and concurrency control, transactions are considered appropriate building blocks for structuring reliable systems. Despite their advantages, developing transaction systems for dynamic environments is not an easy task. The first problem is the dynamism: resource availability can vary unexpectedly, which can cause the following side effects: high transaction abort rates and relevant delays of transaction operations. The second problem is the flexibilization of the transaction concept: transactional requirements are becoming more diversified and extrapolate the bounds of the traditional transactional properties. In this context, this thesis approaches the practicability of open transaction services that can be configured by application programmers to attend to specific requirements of different application domains. This thesis includes a model that abstracts architectural elements (slots, jumpers and demultiplexers) that can be used for specifying configuration points in transaction services. To confirm its benefits on usability, flexibility and extension, this thesis presents two open transaction services that were specified based on the proposed model. The first service is part of an adaptable transaction platform for mobile computing environments. The second service is part of a system that provides dynamic adaptation of commit protocols. According to the tests performed, the approach presented in this thesis is able to give these services the capacity of attending the requirements of applications in different domains.
Doctorate
Distributed Systems
Doctor of Computer Science
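The second service above adapts commit protocols dynamically; the baseline such protocols extend is the classic two-phase commit. A minimal sketch of that baseline follows (all class and function names are illustrative, not taken from the thesis):

```python
# Minimal two-phase commit sketch: a coordinator polls every participant
# in a prepare phase, then commits only if all voted yes.
# All names here are illustrative, not from the thesis.

class Participant:
    def __init__(self, will_commit=True):
        self.will_commit = will_commit
        self.state = "init"

    def prepare(self):
        # Vote yes (and hold resources) or vote no (and abort locally).
        self.state = "prepared" if self.will_commit else "aborted"
        return self.will_commit

    def commit(self):
        self.state = "committed"

    def abort(self):
        self.state = "aborted"

def two_phase_commit(participants):
    # Phase 1: collect votes; any "no" vote dooms the transaction.
    if all(p.prepare() for p in participants):
        for p in participants:   # Phase 2: everyone commits.
            p.commit()
        return "committed"
    for p in participants:       # Phase 2: everyone aborts.
        p.abort()
    return "aborted"

print(two_phase_commit([Participant(), Participant()]))                    # committed
print(two_phase_commit([Participant(), Participant(will_commit=False)]))   # aborted
```

A service that adapts the protocol at run time would swap in variants of `two_phase_commit` (for instance, presumed-abort or one-phase optimizations) behind the same interface.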
Pant, Shristi. "A SECURE ONLINE PAYMENT SYSTEM." UKnowledge, 2011. http://uknowledge.uky.edu/cs_etds/1.
Full text
Blackshaw, Bruce Philip. "Migration of legacy OLTP architectures to distributed systems." Thesis, Queensland University of Technology, 1997. https://eprints.qut.edu.au/36839/1/36839_Blackshaw_1997.pdf.
Full text
Viana, Giovanni Bogéa, 1981. "Agentes no gerenciamento de transações moveis." [s.n.], 2006. http://repositorio.unicamp.br/jspui/handle/REPOSIP/276294.
Full text
Master's dissertation - Universidade Estadual de Campinas, Instituto de Computação
Resumo: This dissertation presents a transaction model for mobile computing environments that takes into account their dynamism and interactivity. To deal with dynamism, applications, the transaction manager and the objects participating in a transaction are executed as agents and can move at the application's discretion. Responsibility for adapting to the environment's dynamism is divided between the applications and the supporting system: the system monitors the environment and notifies applications of changes in it, and the applications decide on the policies for adapting to those changes. To deal with interactivity, the operations of a transaction can be submitted step by step, and the user can adopt the most suitable strategies according to the application's needs and changes in the environment.
Abstract: This dissertation presents a transaction model for mobile computing environments that takes into account their dynamism and interactivity. To deal with dynamism, applications, transaction managers and participating objects are executed as agents which can move as commanded by the application. The responsibilities for adaptation are divided between the applications and the underlying system. The system monitors the environment and sends notifications to applications about variations in the environment; applications decide on policies to adapt to changes. To deal with interactivity, the operations of one transaction may be submitted step by step, and the user is able to adopt the best strategies considering application requirements and changes in the environment.
Master's
Master of Computer Science
Schricker, Marc. "Extract of reasons which could determine the decision to change from an EDI to a XML transaction processing system." Thesis, University of Skövde, School of Humanities and Informatics, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-932.
Full text
EDI and XML are the basic communication standards used in B2B e-commerce, and in transaction processing systems in particular. From the specific attributes that EDI and XML provide, some common and some clearly different features can be derived. Beyond these, considerations from the business organisation domain introduce additional issues that determine whether EDI or XML is used.
This study is particularly interested in identifying a set of reasons that goes beyond a simple comparison of the performance of the two techniques. It surveys the impact of business processes, the prevailing business environment, and the choice of standards, with special reference to the area under scrutiny.
Almeida, Fábio Renato de [UNESP]. "Gerenciamento de transação e mecanismo de serialização baseado em Snapshot." Universidade Estadual Paulista (UNESP), 2014. http://hdl.handle.net/11449/122161.
Full text
Among the various isolation levels under which a transaction can execute, Snapshot stands out for working on an isolated view of the database. A transaction under Snapshot isolation never blocks and is never blocked when it requests a read operation, thus allowing greater concurrency than an execution under a lock-based isolation. However, Snapshot is not immune to all the problems arising from concurrency and therefore does not guarantee serializability. Two strategies are commonly employed to obtain that guarantee. In the first, Snapshot itself is used, but a strategic change in the application and the database, or even the inclusion of an extra software component, is employed to obtain only serializable histories. The other strategy, explored in recent years, has been to build algorithms based on the Snapshot protocol but adapted to prevent its anomalies and thus guarantee serializability. The first strategy has the advantage of exploiting the benefits of Snapshot, especially in monitoring only the elements written by the transaction; however, part of the responsibility for dealing with concurrency problems is transferred from the Database Management System (DBMS) to the application. The second strategy, in turn, leaves the DBMS solely responsible for concurrency control, but the algorithms presented so far in this category have also required monitoring of the elements read. This work develops a technique in which the benefits of Snapshot are retained and serializability is guaranteed without adapting the application code or introducing an extra software layer. The proposed technique is ...
Among the various isolation levels under which a transaction can execute, Snapshot stands out because it works on an isolated view of the database. A transaction under Snapshot isolation never blocks and is never blocked when requesting a read operation, thus allowing a higher level of concurrency than an execution under a lock-based isolation. However, Snapshot is not immune to all the problems that arise from concurrency, and therefore offers no serializability guarantee. Two strategies are commonly employed to obtain such assurance. In the first, Snapshot itself is used, but a strategic change in the application and database, or even the addition of an extra software component, is employed to obtain only serializable histories. Another strategy, explored in recent years, has been to code algorithms based on the Snapshot protocol but adapted to prevent the anomalies arising from it, and therefore to ensure serializability. The first strategy has the advantage of exploiting the benefits of Snapshot, especially with regard to monitoring only the elements written by the transaction. However, part of the responsibility for dealing with concurrency issues is transferred from the Database Management System (DBMS) to the application. In turn, the second strategy leaves only the DBMS responsible for concurrency control, but the algorithms presented so far in this category also require monitoring of the elements that the transaction reads. In this work we develop a technique in which the benefits of Snapshot are retained and a serializability guarantee is achieved without the need to adapt application code or add an extra software layer. The proposed technique is implemented in a prototype of a DBMS that has temporal features and was built to demonstrate the applicability of the technique in systems that employ the object-oriented model. However, the ...
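The anomalies referred to above include write skew: two transactions read overlapping data from their snapshots but write disjoint elements, so Snapshot's first-committer-wins check lets both commit, and an invariant spanning the two elements can break. A minimal sketch (hypothetical accounts and amounts, not the dissertation's prototype):

```python
# Write-skew sketch under snapshot isolation: each transaction checks the
# invariant x + y >= 0 against its own snapshot, then withdraws from a
# different account. The write sets are disjoint, so both transactions
# commit -- and the invariant breaks. Purely illustrative.

db = {"x": 50, "y": 50}

def withdraw(snapshot, account, amount):
    # Decide using the (possibly stale) snapshot, as Snapshot isolation allows.
    if snapshot["x"] + snapshot["y"] - amount >= 0:
        return {account: snapshot[account] - amount}  # the write set
    return {}                                         # abort: nothing written

snap = dict(db)                 # both transactions start from the same snapshot
w1 = withdraw(snap, "x", 80)    # sees x + y = 100, so 80 looks safe
w2 = withdraw(snap, "y", 80)    # likewise
assert w1.keys() != w2.keys()   # disjoint write sets: first-committer-wins passes
db.update(w1)
db.update(w2)
print(db, "invariant holds:", db["x"] + db["y"] >= 0)
# Both withdrawals commit, leaving x = -30 and y = -30, so x + y < 0.
```

Serializable variants of Snapshot detect exactly this read-write dependency pattern, which is why they must track reads as well as writes.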
Almeida, Fábio Renato de. "Gerenciamento de transação e mecanismo de serialização baseado em Snapshot /." São José do Rio Preto, 2014. http://hdl.handle.net/11449/122161.
Full text
Committee: Elaine Parros Machado de Sousa
Committee: Rogéria Cristiane Gratão de Souza
Master's
Hamilton, Howard Gregory. "An Examination of Service Level Agreement Attributes that Influence Cloud Computing Adoption." NSUWorks, 2015. http://nsuworks.nova.edu/gscis_etd/53.
Full text
Chandler, Shawn Aaron. "Global Time-Independent Agent-Based Simulation for Transactive Energy System Dispatch and Schedule Forecasting." PDXScholar, 2015. https://pdxscholar.library.pdx.edu/open_access_etds/2212.
Full text
Leung, Philip, and Daniel Svensson. "SecuRES: Secure Resource Sharing System : AN INVESTIGATION INTO USE OF PUBLIC LEDGER TECHNOLOGY TO CREATE DECENTRALIZED DIGITAL RESOURCE-SHARING SYSTEMS." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-187348.
Full text
The project aims to solve the problems of non-repudiation, integrity and confidentiality when sharing sensitive data between parties that need to trust each other without the involvement of a trusted third party. This is discussed in order to answer to what extent digital resources can be shared securely in a decentralized system based on public ledgers, compared with existing trust-based alternatives. A survey of current resource-sharing solutions shows that many trust-based systems exist, along with a growing number of solutions based on public ledgers. One interesting solution highlighted is Storj, which uses such technology but focuses on resource storage rather than sharing. The project's proposed solution, called SecuRES, is a communication protocol based on a public ledger similar to Bitcoin. A prototype based on the protocol was developed, showing that it is possible to share encrypted files with one or more recipients through a decentralized network based on public ledgers. The conclusion is that SecuRES manages without trusted third parties for sharing resources, while certain operations can be made more user-friendly through external authentication services. The solution itself guarantees data integrity and brings further advantages such as non-repudiation, confidentiality and high transparency, since the source code and protocol documentation can be made freely readable without endangering the system. Further research is needed to investigate whether the system can be scaled up for general use while still maintaining security and performance requirements.
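The tamper evidence that a public ledger provides rests on hash chaining: each entry embeds the hash of its predecessor, so altering any earlier record invalidates every later one. A minimal sketch, with illustrative payloads and no relation to SecuRES's actual protocol or wire format:

```python
# Hash-chained ledger sketch: each block stores the hash of the previous
# block, so tampering with any earlier entry is detectable on re-verification.
# Illustrative only; SecuRES's actual protocol is not reproduced here.
import hashlib
import json

GENESIS = "0" * 64

def add_block(chain, payload):
    prev = chain[-1]["hash"] if chain else GENESIS
    body = {"prev": prev, "payload": payload}
    # Hash a canonical (sorted-key) JSON encoding of the block body.
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)

def verify(chain):
    prev = GENESIS
    for block in chain:
        body = {"prev": block["prev"], "payload": block["payload"]}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["prev"] != prev or block["hash"] != digest:
            return False  # broken link or altered contents
        prev = block["hash"]
    return True

ledger = []
add_block(ledger, "share file.bin with Alice")
add_block(ledger, "revoke Bob")
assert verify(ledger)
ledger[0]["payload"] = "share file.bin with Mallory"  # tamper with history
assert not verify(ledger)
```

A decentralized system replicates such a chain across many nodes, so an attacker would have to rewrite the history on a majority of them to go unnoticed.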
Kendric, Hood A. "Improving Cryptocurrency Blockchain Security and Availability: Adaptive Security and Partitioning." Kent State University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=kent1595038779436782.
Full text