To see the other types of publications on this topic, follow the link: Distributed systems.

Dissertations / Theses on the topic 'Distributed systems'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Distributed systems.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Meth, Halli Elaine. "DecaFS: A Modular Distributed File System to Facilitate Distributed Systems Education." DigitalCommons@CalPoly, 2014. https://digitalcommons.calpoly.edu/theses/1206.

Full text
Abstract:
Data quantity, speed requirements, reliability constraints, and other factors encourage industry developers to build distributed systems and use distributed services. Software engineers are therefore exposed to distributed systems and services daily in the workplace. However, distributed computing is hard to teach in Computer Science courses due to the complexity distribution brings to all problem spaces. This presents a gap in education where students may not fully understand the challenges introduced with distributed systems. Teaching students distributed concepts would help better prepare them for industry development work. DecaFS, Distributed Educational Component Adaptable File System, is a modular distributed file system designed for educational use. The goal of the system is to teach distributed computing concepts to undergraduate and graduate level students by allowing them to develop small, digestible portions of the system. The system is broken up into layers, and each layer is broken up into modules so that students can build or modify different components in small, assignment-sized portions. Students can replace modules or entire layers by following the DecaFS APIs and recompiling the system. This allows the behavior of the DFS (Distributed File System) to change based on student implementation, while providing base functionality for students to work from. Our implementation includes a code base of core DecaFS Modules that students can work from and basic implementations of non-core DecaFS Modules. Our basic non-core modules can be modified to implement more complex distribution techniques without modifying core modules. We have shown the feasibility of developing a modular DFS, while adhering to requirements such as configurable sizes (file, stripe, chunk) and support of multiple data replication strategies.
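As a rough illustration of the kind of swappable module interface the abstract describes, here is a minimal Python sketch; the names (DistributionStrategy, RoundRobinStriping) and the placement policy are invented for the example and are not the actual DecaFS API.

```python
# Illustrative sketch only: a toy "replaceable module" in the spirit of a
# layer/module design. Students could swap RoundRobinStriping for another policy.
from abc import ABC, abstractmethod
from typing import List


class DistributionStrategy(ABC):
    """A swappable non-core module: decides where each chunk of a stripe goes."""

    @abstractmethod
    def place_chunk(self, file_id: int, chunk_index: int, nodes: List[str]) -> List[str]:
        """Return the nodes that should hold (replicas of) this chunk."""


class RoundRobinStriping(DistributionStrategy):
    """Baseline implementation a student could replace with a smarter policy."""

    def __init__(self, replicas: int = 2):
        self.replicas = replicas

    def place_chunk(self, file_id: int, chunk_index: int, nodes: List[str]) -> List[str]:
        # Stripe chunks across nodes; replicate on the next nodes in the ring.
        start = (file_id + chunk_index) % len(nodes)
        return [nodes[(start + r) % len(nodes)] for r in range(self.replicas)]


if __name__ == "__main__":
    strategy: DistributionStrategy = RoundRobinStriping(replicas=2)
    nodes = ["node-a", "node-b", "node-c", "node-d"]
    for chunk in range(4):
        print(chunk, strategy.place_chunk(file_id=7, chunk_index=chunk, nodes=nodes))
```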
APA, Harvard, Vancouver, ISO, and other styles
2

Perez, Hector Benitez. "Smart distributed systems." Thesis, University of Sheffield, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.310944.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Shanneb, Abdelsalam. "Distributed systems prototyping." Dissertation, Carleton University, Ottawa, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Rutherford, Matthew J. "Adequate system-level testing of distributed systems." Diss., 2006. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3219039.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Merritt, John W. "Distributed file systems in an authentication system." Thesis, Kansas State University, 1986. http://hdl.handle.net/2097/9938.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Bennett, John K. "Distributed Smalltalk: inheritance and reactiveness in distributed systems." Thesis, 1988. http://hdl.handle.net/1773/6923.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Conroy, Tracy Michael, and Winslow Hurlburt Buxton. "Secure distributed file systems." Monterey, Calif.: Naval Postgraduate School; Springfield, Va.: National Technical Information Service, 1994. http://handle.dtic.mil/100.2/ADA285525.

Full text
Abstract:
Thesis (M.S. in Information Technology Management), Naval Postgraduate School, September 1994. Thesis advisor(s): Roger Stemp, Kishore Sengupta. Bibliography: p. 106-108. Also available online.
APA, Harvard, Vancouver, ISO, and other styles
8

Satzger, Benjamin. "Self-healing distributed systems." 2008. http://d-nb.info/993914381/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Zhou, Wanlei. "Building reliable distributed systems." Deakin University, School of Computing and Mathematics, 2001. http://tux.lib.deakin.edu.au./adt-VDU/public/adt-VDU20051017.160921.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Mansouri-Samani, Masoud. "Monitoring of distributed systems." Thesis, Imperial College London, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.362441.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Alkarouri, Muhammad Abdulmuneim. "Distributed decision support systems." Thesis, University of Sheffield, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.555644.

Full text
Abstract:
Decision support systems are a class of computer based systems that assist in some or all levels of decision making within an organisation. Recently, the growth of data captured that is useful or even critical to the successful running or conclusion of projects in science and industry has been remarkable. Thus, the development of decision support systems that are scalable in terms of the size of data processed, the number of stakeholders, and their geographical span has become of the essence. This thesis identifies the issues in developing distributed decision support systems. Building on that, an architectural style for the development of scalable and extensible software systems is introduced. Subsequently, a framework for the design of distributed decision support systems is developed. This new architectural style is the Resource Oriented Services Architecture (ROSA). It builds on Representational State Transfer (REST), an architectural style that describes the venerable design of the world wide web. An architectural design based on REST revolves around resources, representations, and hyperlinks. What it lacks is a standardised way to represent computations as resources in a scalable and extensible manner. For systems that cannot be adequately described as a web of documents, this is a shortcoming. ROSA overcomes this by defining a means of representing executable resources in a manner that is consistent with the statelessness and cacheability constraints of REST. The resulting architecture enables the scalability of the system. Additionally, desirable features such as dynamic discovery of resources, extensibility and loose coupling are attained. To illustrate this framework, two new learning algorithms are introduced and implemented as services. The first is a data structure suitable for proximity queries over large datasets of low intrinsic dimension. The other uses a random projection to carry out novelty detection over high dimensional datasets.
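For illustration only, here is a tiny Python sketch of the general idea of exposing a computation as a stateless, cacheable resource addressed purely by its inputs; the /mean URI and the use of lru_cache as a stand-in for an HTTP cache are assumptions of the example, not part of ROSA.

```python
# Invented example: an "executable resource" whose result depends only on the
# request URI, so repeated requests can be answered from a cache.
from functools import lru_cache


@lru_cache(maxsize=None)           # stands in for an HTTP cache
def get(resource_uri: str) -> str:
    path, _, query = resource_uri.partition("?")
    params = dict(p.split("=") for p in query.split("&"))
    assert path == "/mean"          # hypothetical computation resource
    values = [float(v) for v in params["values"].split(",")]
    return str(sum(values) / len(values))


uri = "/mean?values=1,2,3,4"
print(get(uri))    # computed once...
print(get(uri))    # ...then served from the cache because the URI is identical
```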
APA, Harvard, Vancouver, ISO, and other styles
12

Beacham, Nigel Adrian. "Distributed performance support systems." Thesis, Teesside University, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.410848.

Full text
Abstract:
The focus of this thesis is on electronic performance support and, in particular, systems which enable performance support facilities to be distributed across networks. A review of the literature was undertaken, followed by a discussion of the rationale for using a distributed performance support system (DPSS) and an evaluation of a number of tools and facilities which can be used during the development and implementation of this type of system. This analysis enabled a model of a distributed performance support system to be developed which illustrates the potential relationships between the embedded support tools and components. Two different types of distributed performance support system are then given. These reflect a number of different architectures such a system can take within academic institutions in order to facilitate teaching and learning. Based upon one of these types of DPSS, an important part of the work described in this thesis has been an investigation into the use of computer-based learning facilities and how these facilities can be effectively used by integrating them within a DPSS. The investigation centred on the use of the BYZANTIUM marginal costing package: a computer-based learning package used within the School of Business and Management, at the University of Teesside, to teach undergraduates the accounting technique of marginal costing. In order to investigate the potential and impact of embedding the marginal costing package within a distributed performance support system within the context of an academic institution, a number of experimental case studies were implemented and evaluated within the School of Business and Management, at the University of Teesside. Each of these case studies relates to the theme of `learning support environments'. The case studies have served to establish a set of principles and guidelines for the creation of distributed performance support systems within an academic setting. In the final part of this thesis the results of the evaluation studies are presented and discussed. This is followed by some concluding remarks and some suggestions for future work.
APA, Harvard, Vancouver, ISO, and other styles
13

Conroy, Tracy Michael, and Winslow Hurlburt Buxton. "Secure distributed file systems." Thesis, Monterey, California. Naval Postgraduate School, 1994. http://hdl.handle.net/10945/30928.

Full text
Abstract:
Secure information distribution is a strategic capability as significant as weapons systems and tactics to military operations. The Department of Defense has recognized the importance of establishing and maintaining secure distributed file systems. This thesis explores the practicality and applicability of one such system, the Trusted Ficus File System (TRUFFLES), in the DoD infrastructure. Integrated into this research are discussions of Privacy Enhanced Mail (PEM), which is currently an integral part of the TRUFFLES implementation. This thesis concludes with a discussion of the actual installation of a PEM reference implementation, and future requirements for the TRUFFLES installation at the Naval Postgraduate School.
APA, Harvard, Vancouver, ISO, and other styles
14

Fadel, Etimad A. "Distributed systems management service." Thesis, De Montfort University, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.438881.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Md, Mizanur Rahman. "Scanline-based distributed systems." Thesis, London South Bank University, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.570875.

Full text
Abstract:
High-performance computer graphics is becoming more demanding due to the increasing use of large 3D scenes. Visibility determination is one of the fundamental computational problems in computer graphics. In the past couple of decades, many researchers have developed a variety of visibility determination algorithms, such as visibility scanline algorithms, distributed ray tracing and output sensitive visibility algorithms, to determine the hidden parts of a complex 3D scene. If hidden surface determination can be solved in a cost-effective way, the overall system performance will be improved dramatically. If not, it may become a bottleneck in terms of performance. One of the main aims of this dissertation is to give a solution to the rendering problem using a scanline-based distributed system to increase rendering power. The system uses the processing power of idle processors over the network to balance workload and reduce communication overhead. The system does not use any extra resources (memory or graphics processing unit (GPU) chips) to handle large data sets. Using a network of workstations, the proposed system could be more powerful than a GPU and binary swap compositing for high-performance graphics. Visibility computation, clipping and transformation are the basic functions required for rendering a static scene. The visibility computations exhibit a quadratic growth rate. If the number of objects or the depth of the scene increases, the computational requirement can easily exceed the performance of a single processor. Polygon clipping has a growth rate of N log N in the worst case, where N is the total number of edges in the scene. The time requirement for transformations grows linearly with the input size. This dissertation presents the requirements of real-time rendering of 3D scenes and a few approaches, including parallel techniques, to speed up the rendering process. All existing visibility scanline algorithms, including the new one proposed within this thesis, are presented and discussed with respect to their potential for use in the context of distributed systems for the visualisation of large data sets. The proposed scanline algorithm has many advantages, including using real values obtained by the intersection of a 3D scene with the plane of the scanline as input, and, unlike the Z-tree scanline algorithm, a running time that does not depend on the resolution. The Z-tree, Warnock, Priority Queue and proposed algorithms have been implemented in C/C++ and Java programs to show how they compute visibility in a 2D scene. Design issues of all scanline algorithms, and the tuning of cache parameters (without knowing them) to minimise cache misses and data movement among multiple levels of caches, have been presented in the light of cache-oblivious algorithms. The cache-oblivious scanline algorithms have been optimised for use within a distributed system. The system performance can be increased by careful consideration of locality of cache reference, instruction-level parallelism, branch prediction and the memory hierarchies found in modern computers. A new scanline-based approach for the distribution of workload among several servers and the gathering of processed data for the visualisation of large data sets is proposed in this thesis.
Based on this approach, a prototype scanline-based distributed system has been developed and tested over three servers, with inputs of 100 to 3,000 line segments over 500 × 500 and 3,000 × 3,000 pixel visualisation areas; it is demonstrated with its system operation and compression techniques in order to handle large data sets. When considering the efficiency of algorithms, asymptotic analysis is often used. However, asymptotic analysis cannot take into consideration the constant factors that could be different in different environments. Therefore, the author developed a portable test bed for the comparative evaluation of the actual performance of geometric algorithms (line clipping and a few scanline algorithms). For this thesis, experimental evaluation of geometric algorithms is important in order to design, implement and test them from different perspectives and then choose the best one for a high-performance computer graphics system. Through experimental evaluation of the line-clipping algorithm, the author found that clock cycle counting is more accurate than using elapsed-time functions provided by system software. The author also implemented, measured and evaluated the running time of the Z-tree, Warnock, Priority Queue and new Sorting and Ranking (SR) scanline algorithms for 1 to 5,000 line segments with a 500 × 500 pixel, for 1 to 25,000 line segments with a 3,000 × 3,000 pixel and for 1 to 250,000 line segments with a 100,000 × 100,000 pixel visualisation area. An evaluation technique based on best sustained performance is proposed in the thesis as well. To conclude, this thesis presents new tools and techniques, including a scanline-based distributed system and its operation for the visualisation of very large data sets without any additional cost.
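As a toy illustration of scanline visibility determination in general (not the thesis's SR, Z-tree, Warnock or Priority Queue algorithms), the following naive Python sketch keeps the closest segment in each elementary interval of a single scanline; segments and depths are invented.

```python
# Toy visibility-on-a-scanline sketch: each segment covers [x0, x1) at a constant
# depth; smaller depth is closer. Split the scanline at all endpoints and keep
# the closest covering segment per slab. Quadratic and purely illustrative.
from typing import List, Tuple

Segment = Tuple[float, float, float]  # (x0, x1, depth)


def visible_intervals(segments: List[Segment]):
    xs = sorted({x for x0, x1, _ in segments for x in (x0, x1)})
    visible = []
    for left, right in zip(xs, xs[1:]):
        covering = [s for s in segments if s[0] <= left and s[1] >= right]
        if covering:
            winner = min(covering, key=lambda s: s[2])  # closest segment wins
            visible.append((left, right, winner))
    return visible


if __name__ == "__main__":
    scene = [(0.0, 10.0, 5.0), (2.0, 6.0, 1.0), (4.0, 12.0, 3.0)]
    for left, right, seg in visible_intervals(scene):
        print(f"[{left}, {right}) shows the segment at depth {seg[2]}")
```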
APA, Harvard, Vancouver, ISO, and other styles
16

BOLDRIN, FABIO. "Web Distributed Computing Systems." Doctoral thesis, Università degli studi di Ferrara, 2011. http://hdl.handle.net/11392/2388764.

Full text
Abstract:
The thesis presents the PhD study of a new approach to distributed computing based on the exploitation of web browsers as clients, using technologies and best practices of Javascript, AJAX and Flex. The described solution has two main advantages: it is client free, so no additional programs have to be installed to perform the computation, and it requires low CPU usage, so client-side computation is not invasive for users. The solution is developed with both AJAX and Adobe® Flex® technologies, embedding a pseudo-client into a web page that hosts the computation in the form of a banner. While users browse the hosting web page, the client side of the system queries the server side for a subproblem, called a crunch, computes the solution(s) and sends it back to the server. The whole process is always transparent to the user's navigation experience and computer use in general. The thesis shows the feasibility of the system and the good performance that can be achieved, with details of the tests and metrics that have been defined to measure the performance indexes. The new architecture has been tested through these performance metrics by implementing two examples of distributed computing: the cracking of the RSA cryptosystem through the factorization of the public key, and Pearson's correlation index between samples in genetic data sets. Results have shown good feasibility of this approach both in a closed environment and in an Internet environment, in a typical real situation. A mathematical model has been developed over this solution. The main goals of the model are to describe and classify different categories of problems on the basis of their feasibility, and to find the limits in the dimensioning of the scheduling systems for the approach to remain convenient.
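A hedged sketch of the dispatch idea described above, with the browser-side Javascript replaced by a plain Python function so the whole flow fits in one file; the crunch format and the toy factorization target are invented for the example.

```python
# Minimal sketch of the "crunch" idea: a server-side queue hands small
# subproblems to transient clients and collects the answers.
import queue

crunches = queue.Queue()
results = {}

# Seed the queue with subproblems, e.g. trial-division ranges for factoring N.
N = 10_403  # = 101 * 103, a toy "public key" modulus
for lo in range(2, 110, 10):
    crunches.put((lo, lo + 10))


def client_compute(crunch):
    """What a pseudo-client embedded in a page would do for one crunch."""
    lo, hi = crunch
    return [d for d in range(lo, hi) if N % d == 0]


def serve_one():
    """The server hands out one crunch and stores whatever the client returns."""
    crunch = crunches.get()
    results[crunch] = client_compute(crunch)


while not crunches.empty():
    serve_one()

factors = sorted(d for found in results.values() for d in found)
print("nontrivial factors found:", factors)
```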
APA, Harvard, Vancouver, ISO, and other styles
17

Dixon, Eric Richard. "Developing distributed applications with distributed heterogenous databases." Thesis, Virginia Tech, 1993. http://hdl.handle.net/10919/42748.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Vat, Nicholas. "Geographically distributed requirements elicitation." Thesis, Rhodes University, 2000. http://hdl.handle.net/10962/d1002764.

Full text
Abstract:
The technology revolution has transformed the way in which many organisations do their business. The resultant information systems have increased the decision making powers of executives, leading to increased effectiveness and ultimately to improved product delivery. The process of information systems development is, however, complex. Furthermore, it has a poor track record in terms of on-time and within-budget delivery, but more significantly in terms of low user acceptance frequently attributable to poor user requirements specification. Consequently, much attention has been given to the process of requirements elicitation, with both researchers and businessmen seeking new, innovative and effective methods. These methods usually involve large numbers of participants who are drawn from within the client and developer organisations. This is a financially costly characteristic of the requirements elicitation process. Besides information systems, the technology revolution has also brought sophisticated communication technologies into the marketplace. These communication technologies allow people to communicate with one another in a variety of different time and space scenarios. An important spin-off of this is the ability for people located in significantly different geographical locations to work collaboratively on a project. It is claimed that this approach to work has significant cost and productivity advantages. This study draws the requirements elicitation process into the realm of collaborative work. Important project management, communication, and collaborative working principles are examined in detail, and a model is developed which represents these issues as they pertain to the requirements elicitation process. An empirical study (conducted in South Africa) is performed in order to examine the principles of the model and the relationships between its constituent elements. A model of geographically distributed requirements elicitation (GDRE) is developed on the basis of the findings of this investigation. The model of GDRE is presented as a 3-phased approach to requirements elicitation, namely planning, implementation, and termination. Significantly, the model suggests the use of interviews, structured workshops, and prototyping as the chief requirements elicitation methods to be adopted in appropriate conditions. Although a detailed study of communications technology was not performed, this thesis suggests that each individual GDRE implementation requires a different mix of communication technologies to support its implementation.
APA, Harvard, Vancouver, ISO, and other styles
19

Aga, Svein. "System Recovery in Large-Scale Distributed Storage Systems." Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2008. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9724.

Full text
Abstract:

This report aims to describe and improve a system recovery process in large-scale storage systems. Inevitably, a recovery process results in the system being loaded with internal replication of data, and will extensively utilize several storage nodes. Such internal load can be categorized and generalized into a maintenance workload class. Obviously, a storage system will have external clients which also introduce load into the system. This can be users altering their data, uploading new content, etc. Load generated by clients can be generalized into a production workload class. When both workload classes are actively present in a system, i.e. the system is recovering while users are simultaneously accessing their data, there will be competition for system resources between the different workload classes. The storage system must ensure Quality of Service (QoS) for each workload class so that both are guaranteed system resources. We have created Dynamic Tree with Observed Metrics (DTOM), an algorithm designed to gracefully throttle resources between multiple different workload classes. DTOM can be used to enforce and ensure QoS for the variety of workloads in a system. Experimental results demonstrate that DTOM outperforms another well-known scheduling algorithm. In addition, we have designed a recovery model which aims to improve handling of critical maintenance workload. Although the model is intentionally intended for system recovery, it can also be applied to many other contexts.
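DTOM itself is not specified in the abstract, so the following Python sketch only illustrates the general idea of throttling slots between a production and a maintenance workload class based on an observed metric; the shares, latency target and step sizes are invented.

```python
# Hedged sketch (not the DTOM algorithm): give each workload class a share of
# I/O slots so recovery traffic cannot starve client traffic, and adapt the
# split from an observed metric such as client latency.
class WorkloadThrottle:
    def __init__(self, production_share=0.7, slots_per_tick=100):
        self.production_share = production_share
        self.slots_per_tick = slots_per_tick

    def adapt(self, observed_client_latency_ms, target_ms=50):
        # Crude feedback: if clients are slow, shift slots away from recovery.
        if observed_client_latency_ms > target_ms:
            self.production_share = min(0.95, self.production_share + 0.05)
        else:
            self.production_share = max(0.50, self.production_share - 0.05)

    def budget(self):
        prod = int(self.slots_per_tick * self.production_share)
        return {"production": prod, "maintenance": self.slots_per_tick - prod}


throttle = WorkloadThrottle()
for latency in (30, 80, 90, 40):
    throttle.adapt(latency)
    print(latency, throttle.budget())
```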

APA, Harvard, Vancouver, ISO, and other styles
20

Thulnoon, A. A. T. "Efficient runtime security system for decentralised distributed systems." Thesis, Liverpool John Moores University, 2018. http://researchonline.ljmu.ac.uk/9043/.

Full text
Abstract:
Distributed systems can be defined as systems that are scattered over geographical distances and provide different activities through communication, processing, data transfer and so on, thus increasing the cooperation, efficiency and reliability with which users and data resources are handled jointly. For this reason, distributed systems have been shown to be a promising infrastructure for most applications in the digital world. Despite their advantages, keeping these systems secure is a complex task because of the unconventional nature of distributed systems, which can produce many security problems like phishing, denial of service or eavesdropping. Therefore, adopting security and privacy policies in distributed systems will increase the trustworthiness between the users and these systems. However, adding or updating security is considered one of the most challenging concerns, and this relies on the various security vulnerabilities existing in distributed systems. The most significant one is inserting or modifying a new security concern, or even removing it, according to the security status which may appear at runtime. Moreover, these problems will be exacerbated when the system adopts the multi-hop concept as a way to deal with transmitting and processing information. This can pose many significant security challenges, especially when dealing with decentralized distributed systems where security must be furnished end-to-end. Unfortunately, existing solutions are insufficient to deal with these problems: CORBA considers only one-to-one relationships, while DSAW deals with end-to-end security but without taking into account the possibility of changing information sensitivity during runtime. This thesis provides a proposed mechanism for enforcing security policies and dealing with distributed systems' security weaknesses from the software perspective. The proposed solution utilises Aspect-Oriented Programming (AOP) to address security concerns during compilation and running time. The proposed solution is based on a decentralized distributed system that adopts the multi-hop concept to deal with different requested tasks. The proposed system focuses on how to achieve high accuracy, data integrity and high efficiency of the distributed system in real time. This is done through modularising the most efficient security solutions, Access Control and Cryptography, by using an Aspect-Oriented Programming language. The experiments' results show that the proposed solution overcomes the shortcomings of the existing solutions by fully integrating with the decentralized distributed system to achieve dynamic, highly cooperative, high-performance and end-to-end holistic security.
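As a loose illustration of modularising access control and cryptography outside the task logic, the following Python sketch approximates aspects with decorators (Python has no AOP weaving); the policy table, shared key and function names are invented and are not taken from the thesis.

```python
# Security concerns kept outside the forwarding logic, in decorator "aspects".
import hashlib
import hmac
from functools import wraps

SHARED_KEY = b"example-key-not-for-production"
POLICY = {"forward_task": {"node-relay", "node-gateway"}}  # invented policy table


def access_controlled(action):
    """'Before' advice: reject callers the policy does not authorise."""
    def wrap(fn):
        @wraps(fn)
        def guarded(caller, *args, **kwargs):
            if caller not in POLICY.get(action, set()):
                raise PermissionError(f"{caller} may not {action}")
            return fn(caller, *args, **kwargs)
        return guarded
    return wrap


def authenticated(fn):
    """'Around' advice: attach a MAC so the next hop can verify integrity."""
    @wraps(fn)
    def wrapper(caller, payload: bytes):
        tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
        return fn(caller, payload + tag)
    return wrapper


@access_controlled("forward_task")
@authenticated
def forward_task(caller, message):
    return message  # the hop only forwards the payload plus its integrity tag


print(len(forward_task("node-relay", b"reading=42")))  # payload + 32-byte tag
```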
APA, Harvard, Vancouver, ISO, and other styles
21

Thomas, Alex. "An analysis of distributed solar fuel systems." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/76511.

Full text
Abstract:
Thesis (S.M. in Engineering and Management)--Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, 2012.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 85-89).
While solar fuel systems offer tremendous potential to address global clean energy needs, most existing analyses have focused on the feasibility of large centralized systems and applications. Not much research exists on the feasibility of distributed solar fuel systems. This thesis is an attempt to understand the larger context of solar fuel systems, to examine the case for going distributed and to critically analyze a distributed solar fuel system available today in the context of a specific application. In doing so, this thesis seeks to a) provide a baseline analysis for the economic feasibility of a distributed solar fuel system based on state-of-the-art technology b) draw some general conclusions about the nature of such systems in order to provide guidance to those engaged in the development of the next generation of solar fuel systems. This study also compares the chosen baseline solar fuel system with a traditional fossil fuel-based alternative and undertakes a cost-to-emissions trade-off analysis. A key finding of this thesis is that for solar fuel systems to be viable, cost and efficiency improvements in individual sub-systems won't be sufficient. Due attention needs to be given to bring down cost of the entire system. Another key finding is that if carbon emissions are considered as a decision-making criterion in addition to cost, even at current cost levels photovoltaic hydrogen systems compare favorably with existing fossil fuel-based alternatives such as diesel generators.
by Alex Thomas.
S.M. in Engineering and Management
APA, Harvard, Vancouver, ISO, and other styles
22

Silvestre, Daniel. "Fault-tolerant Stochastic Distributed Systems." Doctoral thesis, Instituto Superior Técnico, 2017. http://hdl.handle.net/11144/3395.

Full text
Abstract:
The present doctoral thesis discusses the design of fault-tolerant distributed systems, placing emphasis on addressing the case where the actions of the nodes or their interactions are stochastic. The main objective is to detect and identify faults to improve the resilience of distributed systems to crash-type faults, as well as detecting the presence of malicious nodes in pursuit of exploiting the network. The proposed analysis considers malicious agents and computational solutions to detect faults. Crash-type faults, where the affected component ceases to perform its task, are tackled in this thesis by introducing stochastic decisions in deterministic distributed algorithms. Prime importance is placed on providing guarantees and rates of convergence for the steady-state solution. The scenarios of a social network (state-dependent example) and consensus (time-dependent example) are addressed, proving convergence. The proposed algorithms are capable of dealing with packet drops, delays, medium access competition, and, in particular, nodes failing and/or losing network connectivity. The concept of Set-Valued Observers (SVOs) is used as a tool to detect faults in a worst-case scenario, i.e., when a malicious agent can select the most unfavorable sequence of communications and inject a signal of arbitrary magnitude. For other types of faults, the concept of Stochastic Set-Valued Observers (SSVOs) is introduced, which produce a confidence set where the state is known to belong with at least a pre-specified probability. It is shown how, for a consensus algorithm, it is possible to exploit the structure of the problem to reduce the computational complexity of the solution. The main result allows discarding interactions in the model that do not contribute to the produced estimates. The main drawback of using classical SVOs for fault detection is their computational burden. By resorting to a left-coprime factorization for Linear Parameter-Varying (LPV) systems, it is shown how to reduce the computational complexity. By appropriately selecting the factorization, it is possible to consider detectable systems (i.e., unobservable systems where the unobservable component is stable). Such a result plays a key role in the domain of Cyber-Physical Systems (CPSs). These techniques are complemented with Event- and Self-triggered sampling strategies that enable fewer sensor updates. Moreover, the same triggering mechanisms can be used to decide when to run the SVO routine or resort to over-approximations that temporarily compromise accuracy to gain in performance, while maintaining the convergence characteristics of the set-valued estimates. This less stringent requirement for network resources is vital to guarantee the applicability of SVO-based fault detection in the domain of Networked Control Systems (NCSs).
APA, Harvard, Vancouver, ISO, and other styles
23

Lundin, Mats. "Building Distributed Control Systems Using Distributed Active Real-Time Databases." Thesis, University of Skövde, Department of Computer Science, 1998. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-234.

Full text
Abstract:

From the field of control theory, we can see that varying communication delays in a control system may be hard or even impossible to handle. From this point of view it is preferable to have these delays bounded and as small as possible in order to adapt the control process to them. On the other hand, in some cases delays are inevitable and must be handled by the control system.

A control system may for different reasons be distributed, e.g., because of a distributed environment or severe environment demands such as heat or dust at some locations. Information in such a system will suffer from delays due to transportation from one place to another. These delays often show up in a random fashion, especially if a general network is used for transportation. Another source of delays is the system environment itself. For predictability reasons a real-time database is preferable if the delays are to be controlled.

A straightforward way of handling delays in a control system is to build the system such that delays are constant, i.e., to build a time invariant system. The time from sensor reading to actuation is made constant either by adding a suitable delay to achieve a total constant delay or by using time-triggered reading and actuation. These are simple ways of controlling the delays, but may be very inefficient because worst-case execution time must always be used. Other ways of handling varying delays are by using more tolerant control algorithms. There are two suitable control models proposed by Nilsson (1998) for this purpose. The tolerant algorithm approach is assumed in this work.

This thesis uses a distributed active real-time database system as a basis for building control systems. One of the main objectives is to determine how active functionality can be used to express the control system, i.e., how rules in the database can be used to express the control algorithm and to handle the propagation of information. Another objective is to look at how the choice of consistency level in the database affects the result of the control system, i.e., how different consistency levels affect the delays. It is also of interest to characterize what types of applications each level is suited for.
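A minimal Python sketch of the "make the delay constant" approach mentioned in the abstract: each sensor sample is held until it is exactly D ticks old before it is acted on, so the controller always sees the worst-case but now constant delay. The value of D and the tick loop are illustrative assumptions, not taken from the thesis.

```python
# Time-invariant delay by buffering: actuate on a sample only once it is D ticks old.
from collections import deque

D = 3  # assumed worst-case sensor-to-actuator delay, in ticks
buffer = deque()


def on_sample(tick, value):
    buffer.append((tick, value))


def on_tick(now):
    # Actuate on the sample that has reached age D, if it has arrived.
    while buffer and now - buffer[0][0] >= D:
        tick, value = buffer.popleft()
        print(f"t={now}: actuating on sample from t={tick} (age {now - tick})")


for t in range(8):
    on_sample(t, value=t * 0.1)   # samples arrive with negligible delay here
    on_tick(t)
```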

APA, Harvard, Vancouver, ISO, and other styles
24

Dyer, Matthias. "Distributed embedded systems: validation strategies." Aachen: Shaker Verlag, 2007. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=17189.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Tewatia, Rohit. "Security in Distributed Embedded Systems." Thesis, Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-1379.

Full text
Abstract:

Communication in a sensor network needs guaranteed reception of data without fail, and that data must also be secured. The authenticity and confidentiality of the data have to be ensured even though sensors have limited hardware resources as well as limited bandwidth. This thesis addresses the security aspects of wireless sensor networks. The main task of the project is to identify the critical security parameters for these distributed embedded systems. The sensors have extremely limited resources: a small amount of memory, low computation capability and poor bandwidth. For example, a sensor platform can have 8KB of flash memory, a 4MHz 8-bit Atmel processor, and a 900MHz radio interface. A survey of the various security threats posed to these small wireless sensor networks has been made and solutions proposed. Secure communication between the communicating partners is to be achieved using cryptography.
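A small Python sketch of one way such concerns are commonly addressed on constrained nodes: a truncated MAC keeps the per-packet overhead low and a counter provides freshness against replay. The key, field sizes and packet layout are invented for the example and are not taken from the thesis.

```python
# Authenticated sensor packet: 4-byte counter + 2-byte reading + truncated MAC.
import hashlib
import hmac
import struct

KEY = b"pairwise-key-shared-with-base-station"
MAC_LEN = 8  # truncated tag to save radio bandwidth on the mote


def pack(counter: int, reading: int) -> bytes:
    body = struct.pack("!IH", counter, reading)
    tag = hmac.new(KEY, body, hashlib.sha256).digest()[:MAC_LEN]
    return body + tag


def verify(packet: bytes, last_counter: int):
    body, tag = packet[:-MAC_LEN], packet[-MAC_LEN:]
    expected = hmac.new(KEY, body, hashlib.sha256).digest()[:MAC_LEN]
    counter, reading = struct.unpack("!IH", body)
    fresh = counter > last_counter                 # reject replayed packets
    return hmac.compare_digest(tag, expected) and fresh, counter, reading


pkt = pack(counter=17, reading=512)
print(verify(pkt, last_counter=16))   # (True, 17, 512)
```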

APA, Harvard, Vancouver, ISO, and other styles
26

Magureanu, Stefan. "Distributed Trust-Aware Recommender Systems." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-108326.

Full text
Abstract:
Collaborative filtering (CF) recommender systems are among the most popular approaches to solving the information overload problem in social networks by generating accurate predictions based on the ratings of similar users. Traditional CF recommenders suffer from lack of scalability, while decentralized CF recommenders (DHT based, gossip based, etc.) have promised to alleviate this problem. Thus, in this thesis we propose a decentralized approach to CF recommender systems that uses the T-Man algorithm to create and maintain an overlay network that in turn facilitates the generation of recommendations based on the local information of a node. We analyze the influence of the number of rounds and neighbors on the accuracy of prediction and item coverage, and we propose a new approach to inferring trust values between a user and its neighbors. Our experiments on three important datasets show an improvement of prediction accuracy relative to previous approaches while using a highly scalable, decentralized paradigm. We also analyze item coverage and show that our system is able to generate predictions for a significant fraction of the users, which is comparable with the centralized approaches.
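For illustration, a minimal Python sketch of the prediction step a node could run over its overlay neighbours, using plain Pearson similarity weights; the thesis's trust inference and T-Man overlay maintenance are not reproduced here, and the ratings are invented.

```python
# Similarity-weighted prediction from a node's local neighbourhood only.
from math import sqrt


def pearson(a, b):
    common = [i for i in a if i in b]
    if len(common) < 2:
        return 0.0
    ma = sum(a[i] for i in common) / len(common)
    mb = sum(b[i] for i in common) / len(common)
    num = sum((a[i] - ma) * (b[i] - mb) for i in common)
    den = sqrt(sum((a[i] - ma) ** 2 for i in common)) * sqrt(sum((b[i] - mb) ** 2 for i in common))
    return num / den if den else 0.0


def predict(me, neighbours, item):
    weighted = [(pearson(me, n), n[item]) for n in neighbours if item in n]
    num = sum(w * r for w, r in weighted if w > 0)   # only positively similar peers
    den = sum(w for w, _ in weighted if w > 0)
    return num / den if den else None


me = {"film1": 5, "film2": 2, "film3": 4}
neighbours = [{"film1": 4, "film2": 1, "film3": 5, "film4": 4},
              {"film1": 2, "film2": 5, "film4": 1}]
print(predict(me, neighbours, "film4"))
```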
APA, Harvard, Vancouver, ISO, and other styles
27

Wilund, Torbjörn. "Robust Distributed Embedded Systems : Communication." Thesis, KTH, Maskinkonstruktion (Inst.), 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-100998.

Full text
Abstract:
This thesis discusses, in its theoretical part, what the term robustness means for distributed embedded systems. Development today generally moves toward replacing mechanical and electrical solutions with embedded control systems. There is much to be gained by designing these systems in a distributed way, partly because of the independence between nodes for internal monitoring and error detection. The findings on robustness suggest that it depends on integrity to achieve availability and reliability, i.e., that the information is the same in all nodes concerned. Reliability depends on how faults in terms of time and value are treated during execution. All studied network protocols protect information through redundancy algorithms such as CRC, which provide the opportunity for fault and error detection. However, there are differences in how time is handled, whether there is any time handling at all, and in the possibility of time synchronization in the hardware. From this perspective, the most technically suitable of the current alternatives are time-triggered architectures. The experimental part evaluates the hardware delivered by GAST. The work to achieve communication with the TTP/C controllers failed due to shortcomings in the assembled platform. More specifically, this is probably caused by design flaws in the physical layer of the GAST hardware (not the TTP controllers themselves). With the recommended backplane, the transmitted signals do not appear as expected, and the information sent is misinterpreted by external communication devices. A suggestion and recommendation for future work is a closer analysis of the signal on the data bus, and possibly the design of a filter for the backplane if the current assembly is to be used. The platform has a future for development and research in the field of distributed embedded systems due to its openness; however, drivers for the network protocols integrated in the platform are missing. This must be addressed if the platform is to have any significance before the technology becomes obsolete. The evaluation of the code generated by TTP Plan shows that it is possible to extract configuration information from the tool and thus use it for configuring the platform. To achieve this, some kind of extraction tool should be developed.
APA, Harvard, Vancouver, ISO, and other styles
28

Jiang, Yexi. "Temporal Mining for Distributed Systems." FIU Digital Commons, 2015. http://digitalcommons.fiu.edu/etd/1909.

Full text
Abstract:
Many systems and applications continuously produce events. These events are used to record the status of the system and trace the behaviors of the systems. By examining these events, system administrators can check for potential problems in these systems. If the temporal dynamics of the systems are further investigated, the underlying patterns can be discovered. The uncovered knowledge can be leveraged to predict future system behaviors or to mitigate potential risks to the systems. Moreover, system administrators can utilize the temporal patterns to set up event management rules that make the system more intelligent. With the popularity of data mining techniques in recent years, these events have gradually become more and more useful. Despite the recent advances in data mining techniques, their application to system event mining is still in a rudimentary stage. Most works still focus on episode mining or frequent pattern discovery. These methods are unable to provide a brief yet comprehensible summary that reveals the valuable information from a high-level perspective. Moreover, they provide little actionable knowledge to help system administrators better manage the systems. To make better use of the recorded events, more practical techniques are required. From the perspective of data mining, three correlated directions are considered to be helpful for system management: (1) provide concise yet comprehensive summaries about the running status of the systems; (2) make the systems more intelligent and autonomous; (3) effectively detect the abnormal behaviors of the systems. Due to the richness of the event logs, all these directions can be pursued in a data-driven manner, and in this way the robustness of the systems can be enhanced and the goal of autonomous management can be approached. This dissertation mainly focuses on the foregoing directions that leverage temporal mining techniques to facilitate system management. More specifically, three concrete topics will be discussed, including event summarization, resource demand prediction, and streaming anomaly detection. Besides the theoretic contributions, an experimental evaluation is also presented to demonstrate the effectiveness and efficacy of the corresponding solutions.
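As a small illustration of the "streaming anomaly detection" direction, here is a Python sketch that keeps running statistics of per-window event counts and flags outlying windows; the z-score threshold, warm-up length and event counts are invented.

```python
# Flag a window whose event count is far from what has been observed so far.
from math import sqrt


class WindowAnomalyDetector:
    def __init__(self, z_threshold=3.0):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0   # Welford running statistics
        self.z = z_threshold

    def observe(self, count):
        anomalous = False
        if self.n >= 5:                             # warm-up before flagging
            std = sqrt(self.m2 / (self.n - 1)) or 1.0
            anomalous = abs(count - self.mean) > self.z * std
        # update running statistics with this window
        self.n += 1
        delta = count - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (count - self.mean)
        return anomalous


detector = WindowAnomalyDetector()
for window, events in enumerate([12, 10, 11, 13, 12, 11, 95, 12]):
    if detector.observe(events):
        print(f"window {window}: {events} events looks anomalous")
```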
APA, Harvard, Vancouver, ISO, and other styles
29

McGregor, Anthony James. "Block-Based Distributed File Systems." The University of Waikato, 1997. http://hdl.handle.net/10289/2607.

Full text
Abstract:
Distributed file systems have become popular because they allow information to be shared between computers in a natural way. A distributed file system often forms a central building block in a distributed system. Currently most distributed file systems are built using a communications interface that transfers messages about files between machines. This thesis proposes a different, lower level, communications interface. This 'block-based' interface exchanges information about the blocks that make up the file but not about the files themselves. No other distributed file system is built this way. By demonstrating that a distributed file system can be implemented in a block-based manner, this thesis opens the way for many advances in distributed file systems. These include a reduction of the processing required at the server, uniformity in managing file blocks and fine-grained placement and replication of data. The simple communications model also lends itself to efficient implementation both at the server and in the communications protocols that support the interface. These advantages come at the cost of a more complex client implementation and the need for a lower level consistency mechanism. A block-based distributed file system (BB-NFS) has been implemented. BB-NFS provides the Unix file system interface and demonstrates the feasibility and implementability of the block-based approach. Experience with the implementation led to the development of a lock cache mechanism which gives a large improvement in the performance of the prototype. Although it has not been directly measured, it is plausible that the prototype will perform better than the file-based approach. The block-based approach has much to offer future distributed file system developers. This thesis introduces the approach and its advantages, demonstrates its feasibility and shows that it can be implemented in a way that performs well.
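A toy Python sketch of the block-based idea: the client knows how a file maps onto blocks and asks servers only for individual blocks, never for whole files; the interfaces, placement and 8-byte blocks are invented for the example and do not reflect BB-NFS internals.

```python
# Client-side reassembly of a file from blocks held by different "servers".
# Servers here are just dicts from (file_id, block_index) to 8-byte blocks.
server_a = {("report.txt", 0): b"Distribu", ("report.txt", 2): b" systems"}
server_b = {("report.txt", 1): b"ted file"}


def fetch_block(file_id, index):
    for server in (server_a, server_b):        # placement known to the client
        if (file_id, index) in server:
            return server[(file_id, index)]
    raise KeyError((file_id, index))


def read_file(file_id, length):
    blocks, index = [], 0
    while sum(len(b) for b in blocks) < length:
        blocks.append(fetch_block(file_id, index))
        index += 1
    return b"".join(blocks)[:length]


print(read_file("report.txt", 24).decode())    # "Distributed file systems"
```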
APA, Harvard, Vancouver, ISO, and other styles
30

Daniel, Edmund. "Performance management of distributed systems." Thesis, National Library of Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0015/MQ58023.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Gunaseelan, L. "Debugging of Distributed object systems." Diss., Georgia Institute of Technology, 1994. http://hdl.handle.net/1853/9219.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Gong, Li. "Cryptographic protocols for distributed systems." Thesis, University of Cambridge, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.357767.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Coffield, D. T. "Network and distributed systems management." Thesis, Lancaster University, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.380320.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Berger, Martin Friedrich. "Towards abstractions for distributed systems." Thesis, Imperial College London, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.404798.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Pryce, Nathaniel Graham. "Component interaction in distributed systems." Thesis, Imperial College London, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.313411.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Crane, John Stephen. "Dynamic binding for distributed systems." Thesis, Imperial College London, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.484185.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Wiseman, Simon Robert. "Garbage collection in distributed systems." Thesis, University of Newcastle Upon Tyne, 1988. http://hdl.handle.net/10443/1980.

Full text
Abstract:
The provision of system-wide heap storage has a number of advantages. However, when the technique is applied to distributed systems, automatically recovering inaccessible variables becomes a serious problem. This thesis presents a survey of such garbage collection techniques but finds that no existing algorithm is entirely suitable. A new, general purpose algorithm is developed and presented which allows individual systems to garbage collect largely independently. The effects of these garbage collections are combined, using recursively structured control mechanisms, to achieve garbage collection of the entire heap with the minimum of overheads. Experimental results show that the new algorithm recovers most inaccessible variables more quickly than a straightforward garbage collection, giving improved memory utilisation.
APA, Harvard, Vancouver, ISO, and other styles
38

Rhead, Donovan Ross. "Security considerations in distributed systems." Thesis, Monterey, California. Naval Postgraduate School, 1991. http://hdl.handle.net/10945/26754.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Hughes, Frank Lawrence Kingsman. "Multicast communications in distributed systems." Thesis, University of Newcastle Upon Tyne, 1986. http://hdl.handle.net/10443/2007.

Full text
Abstract:
One of the numerous results of recent developments in communication networks and distributed systems has been an increased interest in the study of applications and protocols for communications between multiple, as opposed to single, entities such as processes and computers. For example, in replicated file storage, a process attempts to store a file on several file servers, rather than one. Multiple entity communications, which allow one-to-many and many-to-one communications, are known as multicast communications. This thesis examines some of the ways in which the architectures of computer networks and distributed systems can affect the design and development of multicast communication applications and protocols. To assist in this examination, the thesis presents three contributions. First, a set of classification schemes are developed for use in the description and analysis of various multicast communication strategies. Second, a general set of multicast communication primitives are presented, unrelated to any specific network or distributed system, yet efficiently implementable on a variety of networks. Third, the primitives are used to obtain experimental results for a study of intranetwork and internetwork multicast communications.
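A minimal Python sketch of a one-to-many and many-to-one primitive pair of the kind discussed above, simulated with in-process queues instead of a real network; the names and the replicated-storage example are illustrative only.

```python
# One-to-many multicast plus many-to-one reply gathering, simulated in process.
import queue

mailboxes = {name: queue.Queue() for name in ("fs1", "fs2", "fs3")}


def multicast(sender, group, message):
    """One-to-many: deliver the same request to every member of the group."""
    for member in group:
        mailboxes[member].put((sender, message))


def gather(group, respond):
    """Many-to-one: collect one reply per group member (synchronously here)."""
    replies = {}
    for member in group:
        sender, message = mailboxes[member].get()
        replies[member] = respond(member, message)
    return replies


# Example: replicated file storage, storing the same file on several servers.
multicast("client", ["fs1", "fs2", "fs3"], ("store", "notes.txt", b"hello"))
acks = gather(["fs1", "fs2", "fs3"], lambda who, msg: f"{who} stored {msg[1]}")
print(acks)
```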
APA, Harvard, Vancouver, ISO, and other styles
40

Kotsakis, Evangelos Grigorios. "Replication in distributed management systems." Thesis, University of Salford, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.265522.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Spiliopoulou, Eleni. "Concurrent and distributed functional systems." Thesis, University of Bristol, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.310807.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

ABIBE, ALEXANDRE DE REZENDE. "FAULT TOLERANCE IN DISTRIBUTED SYSTEMS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 1988. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=9411@1.

Full text
Abstract:
This dissertation deals with the problem of fault tolerance in distributed systems. Initially, a brief analysis of the origins of this problem and its solutions is made. Some of the resolution methods are then presented. In order to simulate a distributed system, a multi-tasking operating system kernel was developed on an IBM PC-XT compatible machine, making use of MS-DOS (version 3.0 or above) as a server. Finally, two proposals are presented. The first is intended to supply a process with resources that allow recovery in case of algorithmic faults, making use of the backward error recovery method. The second uses redundancy in a set of processes over different stations in order to guarantee that the system as a whole remains operational, even with a faulty station.
APA, Harvard, Vancouver, ISO, and other styles
43

SZTAJNBERG, ALEXANDRE. "FLEXIBILITY OF CONFIGURABLE DISTRIBUTED SYSTEMS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 1995. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=14555@1.

Full text
Abstract:
NATIONAL COUNCIL FOR SCIENTIFIC AND TECHNOLOGICAL DEVELOPMENT (CNPq)
The demand for diversified services in distributed applications imposes the need for a flexible and modular communication model that can continuously accommodate new functions and requirements. This work presents the communication model adopted in the RIO environment, which includes a methodology for building distributed applications and allows the most adequate form of communication between their modules to be selected. This requires the support of a multiprotocol communication system that can adapt to heterogeneous requirements. The implementation, aspects relevant to system performance and optimization proposals are also discussed.
APA, Harvard, Vancouver, ISO, and other styles
44

Andriopoulos, X. "Databases for distributed realtime systems." Thesis, Imperial College London, 1986. http://hdl.handle.net/10044/1/37926.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Behrens, Diogo. "Error isolation in distributed systems." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-203428.

Full text
Abstract:
In distributed systems, if a hardware fault corrupts the state of a process, this error might propagate as a corrupt message and contaminate other processes in the system, causing severe outages. Recently, state corruptions of this nature have been observed surprisingly often in large computer populations, e.g., in large-scale data centers. Moreover, since the resilience of processors is expected to decline in the near future, the likelihood of state corruptions will increase even further. In this work, we argue that preventing the propagation of state corruption should be a first-class requirement for large-scale fault-tolerant distributed systems. In particular, we propose that developers target error isolation, the property in which each correct process ignores any corrupt message it receives. Typically, a process cannot decide whether a received message is corrupt or not. Therefore, we introduce hardening as a class of principled approaches to implement error isolation in distributed systems. Hardening techniques are (semi-)automatic transformations that enforce that each process appends evidence of good behavior, in the form of error codes, to all messages it sends. The techniques “virtualize” state corruptions into more benign failures such as crashes and message omissions: if a faulty process fails to detect its state corruption and abort, then hardening guarantees that any corrupt message the process sends has invalid error codes. Correct processes can then inspect received messages and drop them in case they are corrupt. With this dissertation, we contribute theoretically and practically to the state of the art in fault-tolerant distributed systems. To show that hardening is possible, we design, formalize, and prove correct different hardening techniques that enable existing crash-tolerant designs to handle state corruption with minimal developer intervention. To show that hardening is practical, we implement and evaluate these techniques, analyzing their effect on system performance and their ability to detect state corruptions in practice.
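A hedged Python sketch of the receiver-side half of this idea: every outgoing message carries an error-detection code, and a receiver silently drops messages whose code does not check out. CRC-32 and the message framing stand in for the thesis's error codes; a real hardened process would derive the codes from redundantly encoded state, which is not modeled here.

```python
# Messages carry an error code; corrupt messages are dropped, not delivered.
import json
import zlib


def harden(payload: dict) -> bytes:
    body = json.dumps(payload, sort_keys=True).encode()
    return body + b"|" + str(zlib.crc32(body)).encode()


def accept(message: bytes):
    body, sep, crc = message.rpartition(b"|")
    try:
        ok = bool(sep) and zlib.crc32(body) == int(crc)
    except ValueError:
        ok = False
    return json.loads(body) if ok else None   # error isolation: ignore corrupt messages


msg = harden({"round": 7, "value": 42})
print(accept(msg))                        # delivered

corrupted = msg.replace(b"42", b"43")     # a bit flip in the sender's state
print(accept(corrupted))                  # None: dropped, corruption contained
```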
APA, Harvard, Vancouver, ISO, and other styles
46

Costello, Zachary Kohl. "Distributed computation in networked systems." Diss., Georgia Institute of Technology, 2016. http://hdl.handle.net/1853/54924.

Full text
Abstract:
The objective of this thesis is to develop a theoretical understanding of computation in networked dynamical systems and demonstrate practical applications supported by the theory. We are interested in understanding how networks of locally interacting agents can be controlled to compute arbitrary functions of the initial node states. In other words, can a dynamical networked system be made to behave like a computer? In this thesis, we take steps towards answering this question with a particular model class for distributed, networked systems which can be made to compute linear transformations.
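As a tiny illustration of a network computing a linear function of its initial node states, the following Python sketch runs symmetric neighbour averaging on a three-node line graph, which drives every node to the average of the initial values; the graph, step size and values are invented and are not the thesis's model class.

```python
# Local interactions computing one particular linear transformation: the average.
values = {"a": 1.0, "b": 4.0, "c": 10.0}
neighbours = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
eps = 0.3   # small enough step for this three-node line graph

for _ in range(100):
    values = {node: x + eps * sum(values[n] - x for n in neighbours[node])
              for node, x in values.items()}

print(values)   # every node approaches the average of the initial states, 5.0
```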
APA, Harvard, Vancouver, ISO, and other styles
47

Zeng, Wen. "Quantitative analysis of distributed systems." Thesis, University of Newcastle upon Tyne, 2014. http://hdl.handle.net/10443/2638.

Full text
Abstract:
Computing Science addresses the security of real-life systems by using various security-oriented technologies (e.g., access control solutions and resource allocation strategies). These security technologies significantly increase the operational costs of the organizations in which systems are deployed, due to the highly dynamic, mobile and resource-constrained environments. As a result, the problem of designing user-friendly, secure and highly efficient information systems in such complex environments has become a major challenge for developers. In this thesis, firstly, new formal models are proposed to analyse the secure information flow in cloud computing systems. Then, the opacity of workflows in cloud computing systems is investigated, a threat model is built for cloud computing systems, and the information leakage in such systems is analysed. This study can help cloud service providers and cloud subscribers to analyse the risks they take with the security of their assets and to make security-related decisions. Secondly, a procedure is established to quantitatively evaluate the costs and benefits of implementing information security technologies. In this study, a formal system model for data resources in a dynamic environment is proposed, which focuses on the location of different classes of data resources as well as the users. Using such a model, the concurrent and probabilistic behaviour of the system can be analysed. Furthermore, efficient solutions are provided for the implementation of information security systems based on queueing theory and stochastic Petri nets. This part of the research can help information security officers to make well-judged information security investment decisions.
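A toy Python calculation in the spirit of the queueing-theoretic cost/benefit analysis mentioned above: an M/M/1 model shows how much the mean response time grows when a security check slows the service rate. All rates are invented and the model is far simpler than the stochastic Petri nets used in the thesis.

```python
# M/M/1 mean time in system: W = 1 / (mu - lambda), valid only when lambda < mu.
def mm1_response_time(arrival_rate, service_rate):
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable")
    return 1.0 / (service_rate - arrival_rate)


lam = 40.0           # requests per second
mu_plain = 60.0      # service rate without the security check
mu_secured = 50.0    # slower service once encryption/auditing is added

print("without security:", mm1_response_time(lam, mu_plain), "s")     # 0.05 s
print("with security:   ", mm1_response_time(lam, mu_secured), "s")   # 0.10 s
```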
APA, Harvard, Vancouver, ISO, and other styles
48

Gater, Christian. "Fault-tolerant distributed measurement systems." Thesis, University of Edinburgh, 1987. http://hdl.handle.net/1842/16990.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Tuz, V. V., K. V. Bazilo, and Y. M. Tuz. "Distributed Computerized Educational Systems Investigations." Thesis, Sumy State University, 2015. http://essuir.sumdu.edu.ua/handle/123456789/40910.

Full text
Abstract:
The learning process can be interpreted as the management of knowledge mastering. This process takes place in a closed system and, like any closed management system, is characterized by the aim of management.
APA, Harvard, Vancouver, ISO, and other styles
50

Dao, Darren Duc. "Live debugging of distributed systems." Diss., University of California, San Diego, 2008. http://wwwlib.umi.com/cr/ucsd/fullcit?p1457315.

Full text
Abstract:
Thesis (M.S.)--University of California, San Diego, 2008.
Title from first page of PDF file (viewed November 6, 2008). Available via ProQuest Digital Dissertations. Includes bibliographical references (p. 43-44).
APA, Harvard, Vancouver, ISO, and other styles
