Dissertations / Theses on the topic 'Distributed systems and algorithms'




Consult the top 50 dissertations / theses for your research on the topic 'Distributed systems and algorithms.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Pandit, Saurav. "Approximation algorithms for distributed systems." Diss., University of Iowa, 2010. https://ir.uiowa.edu/etd/870.

Full text
Abstract:
Distributed Approximation is a new and rapidly developing discipline that lies at the crossroads of various well-established areas of Computer Science - Distributed Computing, Approximation Algorithms, Graph Theory and often, Computational Geometry. This thesis focuses on the design and analysis of distributed algorithms to solve optimization problems that usually arise in large-scale, heavily dynamic, resource constrained networks, e.g. wireless ad-hoc and sensor networks, P2P systems, mobile networks etc. These problems can often be abstracted by variations of well-known combinatorial optimization problems, such as topology control, clustering etc. Many of these problems are known to be hard (NP-complete), but we need fast and lightweight distributed algorithms for them that yield near-optimal solutions. The results presented in this thesis can be broadly divided into two parts. The first part contains a set of results that obtain improved solutions to the classic problem of computing a sparse "backbone" for Wireless Sensor Networks (WSNs). In graph-theoretic terms, the goal is to compute a spanning subgraph of the input graph that is sparse, lightweight and has low stretch. The term "low stretch" indicates that in spite of dropping many edges, the distance between any two nodes in the graph is not increased by much. We model WSNs as geometric graphs - unit ball graphs, quasi-unit ball graphs etc. in Euclidean spaces, as well as in more general metric spaces of low doubling dimension. We identify and exploit a variety of geometric features of those models to obtain our results. In the second part of the thesis we focus on distributed algorithms for clustering problems. We present several distributed approximation algorithms for clustering problems (e.g., minimum dominating set, facility location problems) that improve on the best known results so far. The main contribution here is the design of distributed algorithms where the running time is a "tunable" parameter. The advent of distributed systems of unprecedented scale and complexity motivates the question of whether it is possible to design algorithms that can provide non-trivial approximation guarantees even after very few rounds of computation and message exchanges. We call these algorithms "k-round algorithms". We design k-round algorithms for various clustering problems that yield non-trivial approximation factors even if k is a constant. Additionally, if k assumes poly-logarithmic values, our algorithms match or improve on the best-known approximation factors for these problems.
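To make the k-round idea concrete, here is a deliberately simplified synchronous dominating-set heuristic in Python. It is not the thesis's algorithm (the example graph, the tie-breaking rule, and the fallback step are illustrative assumptions); it only shows the shape of an algorithm whose quality improves with the number of rounds k while always returning a feasible solution.

    # Each round, a node joins the dominating set if it covers the most still-uncovered
    # nodes in its closed neighborhood (ties broken by node id); after k rounds, any node
    # still uncovered covers itself, so the output always dominates the graph.
    def k_round_dominating_set(adj, k):
        nodes = sorted(adj)
        covered, dom = set(), set()
        for _ in range(k):
            gain = {v: len(({v} | adj[v]) - covered) for v in nodes}
            joiners = [v for v in nodes
                       if gain[v] > 0
                       and all((gain[v], v) >= (gain[u], u) for u in adj[v])]
            dom.update(joiners)
            for v in joiners:
                covered |= {v} | adj[v]
        dom |= set(nodes) - covered      # fallback keeps the output feasible
        return dom

    graph = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2}, 4: {2, 5}, 5: {4}}
    print(k_round_dominating_set(graph, k=2))   # e.g. {2, 5}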
APA, Harvard, Vancouver, ISO, and other styles
2

Bernabéu-Aubán, José Manuel. "Location finding algorithms for distributed systems." Diss., Georgia Institute of Technology, 1988. http://hdl.handle.net/1853/32951.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Cornejo, Collado Alejandro. "Local distributed algorithms for multi-robot systems." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/79220.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 165-173) and index.
The field of swarm robotics focuses on controlling large populations of simple robots to accomplish tasks more effectively than is possible using a single robot. This thesis develops distributed algorithms tailored for multi-robot systems with large populations. Specifically, we focus on local distributed algorithms, since their performance depends primarily on local parameters of the system and they are guaranteed to scale with the number of robots in the system. The first part of this thesis considers and solves the problem of finding a trajectory for each robot which is guaranteed to preserve the connectivity of the communication graph and, when feasible, also guarantees that the robots advance towards a goal defined by an arbitrary motion planner. We also describe how to extend our proposed approach to preserve the k-connectivity of a communication graph. Finally, we show how our connectivity-preserving algorithm can be combined with standard averaging procedures to yield a provably correct flocking algorithm. The second part of this thesis considers and solves the problem of having each robot localize an arbitrary subset of robots in a multi-robot system, relying only on sensors at each robot that measure the angle, relative to the robot's orientation, towards neighboring robots in the communication graph. We propose a distributed localization algorithm that computes the relative orientations and relative positions, up to scale, of an arbitrary subset of robots. For the case when the robots move in between rounds, we show how to use odometry information to allow each robot to compute the relative positions, complete with scale, of an arbitrary subset of robots. Finally, we describe how to use our localization algorithm to design a variety of multi-robot tasks.
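The fragment below is a much-simplified illustration of the connectivity constraint described above, not the trajectory planner from the thesis: a robot moves toward its goal but scales its step down until it stays within communication radius R of every current neighbor (assumed static during the step), so no existing link is broken.

    import math

    def constrained_step(pos, goal, neighbors, R, step=1.0):
        dx, dy = goal[0] - pos[0], goal[1] - pos[1]
        d = math.hypot(dx, dy) or 1.0
        ux, uy = dx / d, dy / d                  # unit vector toward the goal
        t = min(step, d)
        while t > 1e-6:
            cand = (pos[0] + t * ux, pos[1] + t * uy)
            if all(math.hypot(cand[0] - nx, cand[1] - ny) <= R for nx, ny in neighbors):
                return cand
            t *= 0.5                             # back off until every link survives
        return pos                               # no safe progress this round

    print(constrained_step((0, 0), (5, 0), neighbors=[(0, 2)], R=3.0))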
by Alejandro Cornejo.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
4

Saia, Jared. "Algorithms for managing data in distributed systems /." Thesis, Connect to this title online; UW restricted, 2002. http://hdl.handle.net/1773/6941.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Wilhelm, Daniel. "Ordered broadcast algorithms in dynamic distributed systems." Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS135.

Full text
Abstract:
Causal broadcast is a fundamental building block of many distributed and parallel applications, where processes collaborate to perform common tasks, such as high performance computing, distributed databases, conferencing, social networks or other services providing a service to many users. In such systems, processes often require a broadcast primitive to share information that must be ordered to be meaningful, and causal order has been proven to be the strongest order that can be implemented in systems where partitioning can occur. Existing causal broadcast algorithms are either not scalable or they do not tolerate all the dynamics introduced by processes that join and leave the system or fail during execution. Some works append to messages all the information required to causally order them at the destination. However, it has been proved that a structure with one entry per process in the system is the minimal structure required to ensure the causal delivery of broadcast messages. Hence, algorithms that append all the causal information to messages do not scale. Some other works make assumptions about the system, such as the network topology or the FIFO property of the communication channels. Such works do not handle well the dynamics caused by processes that join and leave the system or fail. Hence, these works do not handle all the possible dynamics of distributed systems. In this thesis, we provide causal broadcast algorithms that do scale and tolerate the dynamics of distributed systems. We first provide causal broadcast algorithms for Mobile Networks. Such networks have specific features: limited capacities of nodes (computation, storage, energy), unreliable communication channels, and the dynamics of connections due to node mobility, node failure, and join/leave operations of nodes. In the second part, we address causal broadcast provided with constant size clocks. Constant size clocks tolerate process churn and have a size that does not depend on the number of processes. However, they do not characterize causality, and algorithms using them only ensure causal order probabilistically. We first propose an error detector, based on hashes, which analyzes the constant size clocks of messages before delivering them in order to detect messages that have causal dependencies the process has not delivered yet. Second, we propose an algorithm to retrieve the causal dependencies of messages, which we use to ensure the causal delivery of messages tagged by the error detector. Third, we propose a new clock, built from constant size clocks, which adapts its size to the number of concurrent messages inside the system. We implemented the contributions on the OMNeT++ simulator. Both causal broadcast algorithms were implemented in the INET framework, a realistic network simulator that models wireless interference, network layers and node mobility, among other things. Results confirm that the presented causal broadcast algorithms outperform existing algorithms designed for Mobile Networks while making more realistic network assumptions. The contributions to constant size clocks were implemented on the OMNeT++ simulator. Results show that the hash-based error detector detected all messages whose causal dependencies had not been delivered yet. Combining the hash-based error detector with the algorithm to retrieve the causal dependencies of messages made it possible to deliver all messages in causal order. We analyzed the limits of the hash-based error detector and the retrieval of causal dependencies. Finally, results show that the proposed clock adapts itself well to the number of concurrent messages inside the system.
APA, Harvard, Vancouver, ISO, and other styles
6

Huq, Sikder Rezwanul. "Locally self-adjusting distributed algorithms." Diss., University of Iowa, 2018. https://ir.uiowa.edu/etd/6594.

Full text
Abstract:
In this dissertation, we study self-adjusting algorithms for large-scale distributed systems. Self-adjusting algorithms enable distributed systems to adjust their properties dynamically as the input pattern changes. Self-adjustment is an attractive tool as it has the potential to significantly improve the performance of distributed systems, especially when the input patterns are skewed. We start with a distributed self-adjusting algorithm for skip graphs (referred to as DSG) that minimizes the average routing costs between arbitrary communication pairs by performing topological adaptation to the communication pattern. Our algorithm is fully decentralized, conforms to the CONGEST model (i.e. uses O(log n) bit messages), and requires O(log n) bits of memory for each node, where n is the total number of nodes. Upon each communication request, our algorithm first establishes communication by using the standard skip graph routing, and then locally and partially reconstructs the skip graph topology to perform topological adaptation. We propose a computational model for such algorithms, as well as a yardstick (working set property) to evaluate them. Our working set property can also be used to evaluate self-adjusting algorithms for other graph classes where multiple tree-like subgraphs overlap (e.g. hypercube networks). We derive a lower bound of the amortized routing cost for any algorithm that follows our model and serves an unknown sequence of communication requests. We show that the routing cost of our algorithm is at most a constant factor more than the amortized routing cost of any algorithm conforming to our computational model. We also show that the expected transformation cost for our algorithm is at most a logarithmic factor more than the amortized routing cost of any algorithm conforming to our computational model. As a follow-up work, we present a distributed self-adjusting algorithm (referred to as DyHypes) for topological adaptation in hypercubic networks. One of the major differences between hypercubic networks and skip graphs is that hypercubic networks are more rigid in structure than skip graphs. This property of hypercubic networks makes self-adjustment significantly different from that in skip graphs. Upon a communication between an arbitrary pair of nodes, DyHypes transforms the network to place frequently communicating nodes closer to each other to maximize communication efficiency, and uses randomization in the transformation process to speed up the transformation and reduce message complexity. We show that, as compared to DSG, DyHypes reduces the transformation cost by a factor of O(log n), where n is the number of nodes involved in the transformation. Moreover, despite achieving faster transformation with lower message complexity, the combined cost (routing and transformation) of DyHypes is at most a log log n factor more than that of any algorithm that conforms to the computational model adopted for this work. Similar to DSG, DyHypes is fully decentralized, conforms to the CONGEST model, and requires O(log n) bits of memory for each node, where n is the total number of nodes. Finally, we present a novel distributed load balancing algorithm called Meezan to address the load imbalance among large-scale networked cache servers. Modern web services rely on a network of distributed cache servers to efficiently deliver content to users. Load imbalance among cache servers can substantially degrade content delivery performance. Due to the skewed and dynamic nature of real-world workloads, cache servers that serve viral content experience higher load as compared to other cache servers. Our algorithm Meezan replicates popular objects to mitigate skewness and adjusts hash space boundaries in response to load dynamics in a novel way. Our theoretical analysis shows that Meezan achieves near-perfect load balancing for a wide range of operating parameters. Our trace-driven simulations show that Meezan reduces load imbalance by up to 52% as compared to prior solutions.
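The sketch below only illustrates the two ingredients the abstract attributes to Meezan, namely hashing objects onto cache servers and replicating popular objects; the popularity threshold, replica count, and hash function are assumptions, and Meezan's hash-space boundary adjustment is not reproduced here.

    import bisect, hashlib, random

    def h(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    class Ring:
        def __init__(self, servers):
            pts = sorted((h(s), s) for s in servers)
            self.hashes = [p[0] for p in pts]
            self.servers = [p[1] for p in pts]

        def owner(self, key):                    # clockwise successor on the hash ring
            i = bisect.bisect(self.hashes, h(key)) % len(self.hashes)
            return self.servers[i]

    def pick_server(ring, obj, request_rate, hot_threshold=1000, replicas=3):
        r = replicas if request_rate > hot_threshold else 1   # replicate only popular objects
        candidates = [ring.owner(f"{obj}#{i}") for i in range(r)]
        return random.choice(candidates)         # spread a viral object's load over its replicas

    ring = Ring(["cache-a", "cache-b", "cache-c", "cache-d"])
    print(pick_server(ring, "viral-video", request_rate=50000))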
APA, Harvard, Vancouver, ISO, and other styles
7

Ghodsi, Ali. "Distributed k-ary System: Algorithms for Distributed Hash Tables." Doctoral thesis, KTH, Mikroelektronik och Informationsteknik, IMIT, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4186.

Full text
Abstract:
This dissertation presents algorithms for data structures called distributed hash tables (DHT) or structured overlay networks, which are used to build scalable self-managing distributed systems. The provided algorithms guarantee lookup consistency in the presence of dynamism: they guarantee consistent lookup results in the presence of nodes joining and leaving. Similarly, the algorithms guarantee that routing never fails while nodes join and leave. Previous algorithms for lookup consistency either suffer from starvation, do not work in the presence of failures, or lack proof of correctness. Several group communication algorithms for structured overlay networks are presented. We provide an overlay broadcast algorithm, which unlike previous algorithms avoids redundant messages, reaching all nodes in O(log n) time, while using O(n) messages, where n is the number of nodes in the system. The broadcast algorithm is used to build overlay multicast. We introduce bulk operation, which enables a node to efficiently make multiple lookups or send a message to all nodes in a specified set of identifiers. The algorithm ensures that all specified nodes are reached in O(log n) time, sending maximum O(log n) messages per node, regardless of the input size of the bulk operation. Moreover, the algorithm avoids sending redundant messages. Previous approaches required multiple lookups, which consume more messages and can render the initiator a bottleneck. Our algorithms are used in DHT-based storage systems, where nodes can do thousands of lookups to fetch large files. We use the bulk operation algorithm to construct a pseudo-reliable broadcast algorithm. Bulk operations can also be used to implement efficient range queries. Finally, we describe a novel way to place replicas in a DHT, called symmetric replication, that enables parallel recursive lookups. Parallel lookups are known to reduce latencies. However, costly iterative lookups have previously been used to do parallel lookups. Moreover, joins or leaves only require exchanging O(1) messages, while other schemes require at least log(f) messages for a replication degree of f. The algorithms have been implemented in a middleware called the Distributed k-ary System (DKS), which is briefly described.
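A centrally simulated sketch of the redundancy-free broadcast idea is shown below; the node identifiers, power-of-two ring size, and finger tables are simplifying assumptions rather than the DKS construction itself. Each node forwards the message to its fingers and delegates to each finger a disjoint identifier interval, so every node is reached exactly once in O(log n) hops with n - 1 messages.

    def in_interval(x, a, b, n):
        # x strictly inside the clockwise ring interval (a, b); (a, a) means "everything but a"
        if a == b:
            return x != a
        return x != a and (x - a) % n < (b - a) % n

    def fingers(u, n):                            # Chord-style fingers: u + 2^j mod n
        return [(u + 2 ** j) % n for j in range(n.bit_length() - 1)]

    def broadcast(u, limit, n, delivered):
        delivered[u] = delivered.get(u, 0) + 1
        fs = [f for f in fingers(u, n) if in_interval(f, u, limit, n)]
        for i, f in enumerate(fs):
            next_limit = fs[i + 1] if i + 1 < len(fs) else limit
            broadcast(f, next_limit, n, delivered)   # f is responsible for (f, next_limit)

    counts = {}
    broadcast(0, 0, 8, counts)                    # limit == source means "cover the whole ring"
    print(sorted(counts.items()))                 # every node delivered exactly once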
APA, Harvard, Vancouver, ISO, and other styles
8

AGATE, Vincenzo. "REPUTATION MANAGEMENT ALGORITHMS IN DISTRIBUTED APPLICATIONS." Doctoral thesis, Università degli Studi di Palermo, 2020. http://hdl.handle.net/10447/395198.

Full text
Abstract:
Nowadays, many distributed systems and applications rely on interactions between unknown agents that cooperate in order to exchange resources and services. The distributed nature of these systems, and the consequent lack of a single centralized point of control, lets agents adopt selfish and malicious behaviors in order to maximize their own utility. To address this issue, many applications rely on Reputation Management Systems (RMSs) to estimate the future behavior of unknown agents before establishing actual interactions. The relevance of these systems is even greater when the malicious or selfish behavior exhibited by a few agents can reduce the utility perceived by cooperative agents, damaging the whole community. RMSs make it possible to estimate the expected outcome of a given interaction, thus providing relevant information that can be exploited to decide whether interacting with a certain agent is worthwhile. Agents and their behavior are constantly evolving and becoming ever more complex, so it is increasingly difficult to develop an RMS able to resist the threats presented. A possible solution to this problem is agent-based simulation software designed to support researchers in evaluating distributed reputation management systems from the design phase onward. This dissertation presents the design and development of a distributed simulation platform based on HPC technologies, called DRESS. This solution allows researchers to assess the performance of a generic reputation management system and provides a comprehensive assessment of its ability to withstand security attacks. In the scientific literature, a tool that allows the comparison of distinct RMSs and different design choices through a set of defined metrics, while also supporting large-scale simulations, is still missing. The effectiveness of the proposed approach is demonstrated in the application scenario of energy sharing systems within smart grids, taking user preferences into account differently from other work. The platform proved useful for the development of an energy sharing system among users, which exploits users' reputations and learned preferences with the aim of maximizing the amount of energy transferred.
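As a concrete (and deliberately generic) example of the kind of reputation model such a platform might simulate, the snippet below implements a beta-reputation score with exponential forgetting; DRESS itself is a simulation platform and does not prescribe this particular model, so the prior and decay factor are assumptions.

    class BetaReputation:
        def __init__(self):
            self.pos, self.neg = 1.0, 1.0            # uninformative prior

        def record(self, outcome_good, decay=0.98):
            self.pos *= decay                         # old evidence fades slowly
            self.neg *= decay
            if outcome_good:
                self.pos += 1.0
            else:
                self.neg += 1.0

        def score(self):
            return self.pos / (self.pos + self.neg)   # expected probability of cooperation

    r = BetaReputation()
    for good in [True, True, False, True]:
        r.record(good)
    print(round(r.score(), 3))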
APA, Harvard, Vancouver, ISO, and other styles
9

Obrovac, Marko. "Chemical Computing for Distributed Systems: Algorithms and Implementation." Phd thesis, Université Rennes 1, 2013. http://tel.archives-ouvertes.fr/tel-00925257.

Full text
Abstract:
With the emergence of highly heterogeneous, dynamic, large-scale distributed platforms, the need for a way to program and manage them efficiently has arisen. The concept of autonomic computing proposes creating self-managed systems, that is, systems that are aware of their components and of their environment and can configure, optimize, repair and protect themselves. In the context of building such systems, declarative programming, whose objective is to ease the programmer's task by separating control from the logic of the computation, has recently regained considerable interest. In particular, rule-based programming is considered a promising model in this quest for adequate programming abstractions for these platforms. However, although these models are gaining much attention, they create a demand for generic tools capable of executing them at large scale. The chemical programming model, which was designed following the chemical metaphor, is a higher-order, rule-based programming model with non-deterministic execution, where rules are applied concurrently on a multiset of data. In this thesis, we propose the design, development and experimental evaluation of a distributed middleware for the execution of chemical programs on generic, large-scale platforms. The proposed architecture combines a peer-to-peer communication layer with a protocol for the atomic capture of the objects on which rules are to be applied, and an efficient termination-detection scheme. We describe the middleware prototype implementing this architecture. Based on its deployment on a large-scale experimental platform, we present performance results that confirm the analytical complexities obtained and experimentally demonstrate the viability of such a programming model.
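A tiny sequential interpreter can make the chemical metaphor concrete: rules consume multiset elements that satisfy a condition and release products, applied repeatedly in no particular order until no rule applies. This is only an illustration of the programming model; the thesis's middleware executes such programs concurrently over a peer-to-peer overlay with atomic capture and distributed termination detection.

    import random
    from itertools import permutations

    def run(multiset, rules):
        molecules = list(multiset)
        changed = True
        while changed:
            changed = False
            random.shuffle(molecules)                 # non-deterministic choice of reactants
            for arity, cond, action in rules:
                for combo in permutations(range(len(molecules)), arity):
                    args = [molecules[i] for i in combo]
                    if cond(*args):
                        for i in sorted(combo, reverse=True):
                            molecules.pop(i)              # consume the reactants
                        molecules.extend(action(*args))   # release the products
                        changed = True
                        break
                if changed:
                    break
        return molecules                              # inert solution: no rule applies

    # Example: repeatedly replace two numbers by their maximum -> computes the max of the multiset.
    print(run([3, 1, 4, 1, 5], [(2, lambda a, b: True, lambda a, b: [max(a, b)])]))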
APA, Harvard, Vancouver, ISO, and other styles
10

Divi, Vijay 1980. "Estimation and calibration algorithms for distributed sampling systems." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/45874.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008.
Includes bibliographical references (p. 153-157).
Traditionally, the sampling of a signal is performed using a single component such as an analog-to-digital converter. However, many new technologies are motivating the use of multiple sampling components to capture a signal. In some cases, such as sensor networks, multiple components are naturally found in the physical layout, while in other cases, like time-interleaved analog-to-digital converters, additional components are added to increase the sampling rate. Although distributing the sampling load across multiple channels can provide large benefits in terms of speed, power, and resolution, a variety of mismatch errors arise that require calibration in order to prevent a degradation in system performance. In this thesis, we develop low-complexity, blind algorithms for the calibration of distributed sampling systems. In particular, we focus on recovery from timing skews that cause deviations from uniform timing. Methods for bandlimited input reconstruction from nonuniform recurrent samples are presented for both the small-mismatch and the low-SNR domains. Alternate iterative reconstruction methods are developed to give insight into the geometry of the problem. From these reconstruction methods, we develop time-skew estimation algorithms that have high performance and low complexity even for large numbers of components. We also extend these algorithms to compensate for gain mismatch between sampling components. To understand the feasibility of implementation, analysis is also presented for a sequential implementation of the estimation algorithm. In distributed sampling systems, the minimum input reconstruction error is dependent upon the number of sampling components as well as the sample times of the components. We develop bounds on the expected reconstruction error when the time-skews are distributed uniformly. Performance is compared to systems where input measurements are made via projections onto random bases, an alternative to the sinc basis of time-domain sampling. From these results, we provide a framework on which to compare the effectiveness of any calibration algorithm. Finally, we address the topic of extreme oversampling, which pertains to systems with large amounts of oversampling due to redundant sampling components. Calibration algorithms are developed for ordering the components and for estimating the input from ordered components. The algorithms exploit the extra samples in the system to increase estimation performance and decrease computational complexity.
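One standard baseline for reconstructing a bandlimited input from nonuniform samples is a least-squares fit on a sinc basis; the sketch below (grid spacing, test signal, and sample count are arbitrary assumptions) only illustrates that baseline, not the thesis's low-complexity iterative estimators or the time-skew estimation built on top of them.

    import numpy as np

    def reconstruct(sample_times, samples, T, grid):
        # Basis: sinc pulses centred on a uniform grid with nominal period T.
        A = np.sinc((np.asarray(sample_times)[:, None] - grid[None, :]) / T)
        coeffs, *_ = np.linalg.lstsq(A, samples, rcond=None)
        def x_hat(t):
            return np.sinc((np.atleast_1d(t)[:, None] - grid[None, :]) / T) @ coeffs
        return x_hat

    T = 0.1
    grid = np.arange(0, 2, T)                       # uniform reconstruction grid
    rng = np.random.default_rng(0)
    t_s = np.sort(rng.uniform(0, 2, 60))            # nonuniform (e.g. skewed) sample times
    x = lambda t: np.sin(2 * np.pi * 1.5 * t)       # bandlimited test signal, well below Nyquist
    x_hat = reconstruct(t_s, x(t_s), T, grid)
    print(float(np.max(np.abs(x_hat(grid) - x(grid)))))   # fit error at the grid points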
by Vijay Divi.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
11

To, Toan. "Distributed opportunistic scheduling algorithms for wireless communications." Thesis, Swansea University, 2012. https://cronfa.swan.ac.uk/Record/cronfa42588.

Full text
Abstract:
In this thesis, we propose a number of distributed schemes for wireless communications in the cross layer design context, considering an uplink random access network in which multiple users communicate with a common base station. In addition, we perform a comprehensive study on a splitting based multiuser selection algorithm which is simple, effective, and scales with the network size. First, we investigate a reservation-type protocol in a channel aware ALOHA system. Various Markovian models are used to describe the system and to capture the temporal correlation of the channel evolution. The average throughput of the system is obtained using the Markov Analysis technique and we show that the reservation protocol can achieve better performance than the original channel-aware ALOHA by reducing the collision probability. Second, for better resource utilization in the Opportunistic Multichannel ALOHA scheme, we propose a simple extension to the transmission policy that exploits the idle channels. Performance analysis shows that, theoretically, the maximum system throughput can be improved by up to 63% in the asymptotic case. Through numerical results, it can be seen that a significant gain is achieved even when the system consists of a small number of users. Third, we consider a splitting based multiuser selection algorithm in a probabilistic view. Asymptotic analysis leads to a functional equation, similar to that encountered in the analysis of the collision resolution algorithm. Subject to some conditions, the solution of the functional equation can be obtained, which provides the approximations for the expected number of slots and the expected number of transmissions required by the algorithm in a large system. These results shed light on open design problems in choosing parameters for the algorithm when considering the delay and the overhead jointly. A typical example is to optimize the parameters that minimize the weighted sum of these measures of interest.
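To give a flavour of the splitting mechanism, the simulation below selects the user with the best (normalized) channel gain using ternary idle/success/collision feedback. The window-update rule and parameters are textbook choices made for illustration and are not necessarily those analysed in the thesis.

    import random

    def splitting_selection(u, n, max_slots=100):
        # u[i] is user i's channel gain passed through its CDF, i.e. Uniform(0, 1).
        lo, hi, lo_min = 1.0 - 1.0 / n, 1.0, 0.0
        collided = False
        for slot in range(1, max_slots + 1):
            active = [i for i, x in enumerate(u) if lo < x <= hi]
            if len(active) == 1:
                return active[0], slot                    # base station signals success
            if len(active) > 1:                           # collision: split the window upward
                collided, lo_min = True, lo
                lo = (lo + hi) / 2.0
            elif collided:                                # idle after a collision: reopen lower part
                hi, lo = lo, (lo_min + lo) / 2.0
            else:                                         # idle, no collision yet: slide window down
                hi, lo = lo, max(0.0, lo - 1.0 / n)
        return None, max_slots

    random.seed(1)
    n = 16
    users = [random.random() for _ in range(n)]
    winner, slots = splitting_selection(users, n)
    print(winner, users[winner] == max(users), slots)     # the best user wins within a few mini-slots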
APA, Harvard, Vancouver, ISO, and other styles
12

Torres-Rojas, Francisco Jose. "Efficient time representation in distributed systems." Thesis, Georgia Institute of Technology, 1995. http://hdl.handle.net/1853/8301.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Abou, Jaoude Dany. "Computationally Driven Algorithms for Distributed Control of Complex Systems." Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/85965.

Full text
Abstract:
This dissertation studies the model reduction and distributed control problems for interconnected systems, i.e., systems that consist of multiple interacting agents/subsystems. The study of the analysis and synthesis problems for interconnected systems is motivated by the multiple applications that can benefit from the design and implementation of distributed controllers. These applications include automated highway systems and formation flight of unmanned aircraft systems. The systems of interest are modeled using arbitrary directed graphs, where the subsystems correspond to the nodes, and the interconnections between the subsystems are described using the directed edges. In addition to the states of the subsystems, the adopted frameworks also model the interconnections between the subsystems as spatial states. Each agent/subsystem is assumed to have its own actuating and sensing capabilities. These capabilities are leveraged in order to design a controller subsystem for each plant subsystem. In the distributed control paradigm, the controller subsystems interact over the same interconnection structure as the plant subsystems. The models assumed for the subsystems are linear time-varying or linear parameter-varying. Linear time-varying models are useful for describing nonlinear equations that are linearized about prespecified trajectories, and linear parameter-varying models allow for capturing the nonlinearities of the agents, while still being amenable to control using linear techniques. It is clear from the above description that the size of the model for an interconnected system increases with the number of subsystems and the complexity of the interconnection structure. This motivates the development of model reduction techniques to rigorously reduce the size of the given model. In particular, this dissertation presents structure-preserving techniques for model reduction, i.e., techniques that guarantee that the interpretation of each state is retained in the reduced order system. Namely, the sought reduced order system is an interconnected system formed by reduced order subsystems that are interconnected over the same interconnection structure as that of the full order system. Model reduction is important for reducing the computational complexity of the system analysis and control synthesis problems. In this dissertation, interior point methods are extensively used for solving the semidefinite programming problems that arise in analysis and synthesis.
Ph. D.
The work in this dissertation is motivated by the numerous applications in which multiple agents interact and cooperate to perform a coordinated task. Examples of such applications include automated highway systems and formation flight of unmanned aircraft systems. For instance, one can think of the hazardous conditions created by a fire in a building and the benefits of using multiple interacting multirotors to deal with this emergency situation and reduce the risks on humans. This dissertation develops mathematical tools for studying and dealing with these complex systems. Namely, it is shown how controllers can be designed to ensure that such systems perform in the desired way, and how the models that describe the systems of interest can be systematically simplified to facilitate performing the tasks of mathematical analysis and control design.
APA, Harvard, Vancouver, ISO, and other styles
14

Benmohammed-Mahieddine, Kouider. "An evaluation of load balancing algorithms for distributed systems." Thesis, University of Leeds, 1991. http://etheses.whiterose.ac.uk/4395/.

Full text
Abstract:
Distributed systems are gradually being accepted as the dominant computing paradigm of the future. However, due to the diversity and multiplicity of resources, and the need for transparency to users, global resource management raises many questions. On the performance level, the potential benefits of load balancing in resolving the occasional congestion experienced by some nodes while others are idle or lightly loaded are commonly accepted. It is also acknowledged that no single load balancing algorithm deals satisfactorily with the changing system characteristics and dynamic workload environment. In modelling distributed systems for load balancing, optimistic assumptions of system characteristics are commonly made, with no evaluation of alternative system design options such as communications protocols. When realistic assumptions are made on system attributes such as communication bandwidth, load balancing overheads, and workload model, doubts are cast on the capability of load balancing to improve the performance of distributed systems significantly. A taxonomy is developed for the components as well as the attributes of load balancing algorithms to provide a common terminology and a comprehensive view of load balancing in distributed systems. For adaptive algorithms the taxonomy is extended to identify the issues involved and the ways of adding adaptability along different dimensions. A design methodology is also outlined. A review of related work is used to identify the most promising load balancing strategies and the modelling assumptions made in previous load balancing studies. Subsequently the research problems addressed in this thesis and the design of new algorithms are detailed. A simulated system developed to allow experimentation with various load balancing algorithms under different workload models and system attributes is described. Based on the nature of the file system structure and the classes of node processing speed involved, different models of loosely-coupled distributed systems can be defined. Four models are developed: disk-based homogeneous nodes, diskless homogeneous nodes, diskless heterogeneous nodes, and disk-based heterogeneous nodes. The nodes are connected through a broadcast transfer device. A set of representative load balancing algorithms covering a range of strategies are evaluated and compared for the four models of distributed systems. The algorithms developed include a new algorithm called Diffuse, based on explicit adaptability, for the homogeneous systems. In the case of heterogeneous systems, novel modifications are made to a number of algorithms to take into account the heterogeneity of node speeds. The evaluation on homogeneous systems is two-fold: an assessment of the effect of system attributes on the performance of the distributed system subject to these algorithms, and a comparison of the relative merits of the algorithms using different performance metrics, and in particular a classification of the performance of the Diffuse algorithm with regard to others in the literature. For the heterogeneous systems the performance of the adapted algorithms is compared to that of the standard versions and to the no load balancing case. As a result of this evaluation, for a set of combinations of performance objectives, distributed system attributes, and workload environment, we identify the most appropriate load balancing algorithm and optimal values for adjustable parameters of the algorithm.
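As a point of reference for the family of algorithms compared in the thesis, the toy below runs a synchronous diffusion-style iteration in which each node repeatedly sends part of its load surplus to less-loaded neighbours. The topology, step size, and cost-free transfers are illustrative assumptions; the thesis's Diffuse algorithm adds explicit adaptability and realistic overheads that are not modelled here.

    def diffuse_step(load, adj, alpha=0.25):
        new = dict(load)
        for u in adj:
            for v in adj[u]:
                if load[u] > load[v]:
                    flow = alpha * (load[u] - load[v]) / len(adj[u])
                    new[u] -= flow                # surplus drains from the busier node...
                    new[v] += flow                # ...to its less-loaded neighbour
        return new

    load = {"a": 12.0, "b": 0.0, "c": 4.0, "d": 0.0}
    adj = {"a": ["b", "c"], "b": ["a", "d"], "c": ["a", "d"], "d": ["b", "c"]}
    for _ in range(30):
        load = diffuse_step(load, adj)
    print({k: round(v, 2) for k, v in load.items()})      # loads converge toward the average (4.0)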
APA, Harvard, Vancouver, ISO, and other styles
15

Chua, Kee Koon 1965. "Deadlock avoidance: Improved algorithms for centralized and distributed systems." Thesis, The University of Arizona, 1992. http://hdl.handle.net/10150/291335.

Full text
Abstract:
A deadlock avoidance algorithm for a centralized resource allocation system is presented. Unlike the Banker's algorithm, this proposed algorithm makes use of the state of the previous safe sequence to construct a new safe sequence. The performance of this proposed algorithm is compared to that of both the Banker's algorithm and an efficient algorithm proposed by Belik. The simulation results show that our algorithm's execution time is significantly better than that of the Banker's algorithm and is very competitive with Belik's algorithm. In addition, our Modified Banker's Algorithm produces optimal results, unlike Belik's approach, which sometimes deems a safe allocation request unsafe. This centralized algorithm, combined with an algorithm by Moser, is extended for use in distributed systems. Compared with Moser's algorithm, this algorithm is less restrictive in acquiring resources from other processes and in allowing increases in its maximum resource requirement, as confirmed by our analysis and simulation results.
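For readers unfamiliar with the baseline, the snippet below is the textbook Banker's safety check that decides whether a resource state admits a safe sequence; the thesis's Modified Banker's Algorithm reuses the previous safe sequence to avoid recomputing this from scratch, an optimisation not reproduced here.

    def is_safe(available, allocation, maximum):
        n, m = len(allocation), len(available)
        work = list(available)
        need = [[maximum[i][j] - allocation[i][j] for j in range(m)] for i in range(n)]
        finished = [False] * n
        sequence = []
        progress = True
        while progress:
            progress = False
            for i in range(n):
                if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                    work = [work[j] + allocation[i][j] for j in range(m)]   # i runs, then releases
                    finished[i] = True
                    sequence.append(i)
                    progress = True
        return all(finished), sequence

    print(is_safe([3, 3, 2],
                  [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]],
                  [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]))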
APA, Harvard, Vancouver, ISO, and other styles
16

Nanongkai, Danupon. "Graph and geometric algorithms on distributed networks and databases." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/41056.

Full text
Abstract:
In this thesis, we study the power and limits of algorithms on various models, aiming at applications in distributed networks and databases. In distributed networks, graph algorithms are fundamental to many applications. We focus on computing random walks, which are an important primitive employed in a wide range of applications but have always been computed naively. We show that a faster solution exists and subsequently develop faster algorithms by exploiting random walk properties, leading to two immediate applications. We also show that this algorithm is optimal. Our technique for proving a lower bound shows the first non-trivial connection between communication complexity and lower bounds of distributed graph algorithms. We show that this technique has a wide range of applications by proving new lower bounds for many problems. Some of these lower bounds show that the existing algorithms are tight. In database searching, we think of the database as a large set of multi-dimensional points stored on a disk and want to help the users quickly find the most desired point. In this thesis, we develop an algorithm that is significantly faster than previous algorithms both theoretically and experimentally. The insight is to solve the problem on the streaming model, which helps emphasize the benefits of sequential access over random disk access. We also introduced the randomization technique to the area. The results were complemented with a lower bound. We also initiate a new direction in an attempt to get a better query. We are the first to quantify the output quality using "user satisfaction", which is made possible by borrowing from game theory the idea of modeling users by utility functions, and we justify our approach through a geometric analysis.
APA, Harvard, Vancouver, ISO, and other styles
17

Kolesnikov, Valeriy. "InDiGo : an infrastructure for optimization of distributed algorithms." Diss., Manhattan, Kan. : Kansas State University, 2008. http://hdl.handle.net/2097/1007.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Chong, Kheng Huat. "Dynamic routing algorithms for distributed packet-switched data networks." Thesis, Imperial College London, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.391256.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Wong, Wing-ki Vicky. "An immunity-based distributed multiagent control framework." Click to view the E-thesis via HKUTO, 2006. http://sunzi.lib.hku.hk/hkuto/record/B37314348.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Sun, Qiong, and 孙琼. "Topology-transparent distributed scheduling in wireless networks." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2010. http://hub.hku.hk/bib/B44904101.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Mfoumboulou, Yohan Darcy. "Development of nonlinear control algorithms for implementation in distributed systems." Thesis, Cape Peninsula University of Technology, 2014. http://hdl.handle.net/20.500.11838/1187.

Full text
Abstract:
Thesis submitted in fulfilment of the requirements for the degree Master of Technology: Electrical Engineering in the Faculty of Engineering at the Cape Peninsula University of Technology
In the past decade, the need for flexibility and reconfigurability in automation has contributed to the rise of the distributed concept in control systems engineering. The IEC 61499 standard defines a distributed model for dividing the various components of an industrial automation application and of complex machinery control into function blocks. Such function blocks have the flexibility to be distributed and interconnected across a number of controllers. However, this new standard for automation faces two main challenges: the complexity of designing distributed systems and the lack of utilization of the standard in industry. Most applications of controllers based on function block programming are for linear systems. As most industrial processes are nonlinear, there is a need to extend the function block approach to the implementation of nonlinear controllers. Design complexity involves the exact modeling of the system in function blocks to obtain its accurate behaviour, and the lack of utilization of the standard is understandable because new technologies are not easily accepted in industry due to their high prices and the risk of compromising performance at the production level. The thesis describes a methodology for the design and implementation of nonlinear controllers for nonlinear plants in the IEC 61499 standard compliant real-time environment of TwinCAT 3 and a Beckhoff Programmable Logic Controller (PLC). The first step is to design the nonlinear controllers and simulate the closed-loop system in MATLAB/SIMULINK software. New engineering concepts are then applied to transform the obtained closed-loop system model into an IEC 61499 Function Block Model. This is accomplished by applying a method which involves a complete model transformation between two block-diagram languages: Simulink and TwinCAT 3. The development tools that support the transformation algorithm in the thesis set the foundation stone of the verification and validation structure for the IEC 61499 function block approach. The transformed model of the closed-loop system is downloaded to the Beckhoff PLC and is simulated in real-time. The obtained results demonstrate that the developed methodology allows complex nonlinear controllers to be successfully transformed to an IEC 61499 standard compliant environment and to be applied for real-time PLC control of complex plants.
APA, Harvard, Vancouver, ISO, and other styles
22

Ezhilchelvan, Paul Davadoss. "Design and development of algorithms for fault tolerant distributed systems." Thesis, University of Newcastle Upon Tyne, 1989. http://hdl.handle.net/10443/2069.

Full text
Abstract:
This thesis describes the design and development of algorithms for fault tolerant distributed systems. The development of such algorithms requires making assumptions about the types of component faults for which tolerance is to be provided. Such assumptions must be specified accurately. To this end, this thesis develops a classification of faults in systems. This fault classification identifies a range of fault types from the most restricted to the least restricted. For each fault type, an algorithm for reaching distributed agreement in the presence of a bounded number of faulty processors is developed, and thus a family of agreement algorithms is presented. The influence of the various fault types on the complexities of these algorithms is discussed. Early stopping algorithms are also developed for selected fault types and the influence of fault types on the early stopping conditions of the respective algorithms is analysed. The problem of evaluating the performance of distributed replicated systems which will require agreement algorithms is considered next. As a first step in the direction of meeting this challenging task, a pipeline triple modular redundant system is considered and analytical methods are derived to evaluate the performance of such a system. Finally, the accuracy of these methods is examined using computer simulations.
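To anchor the idea of a fault-type-parameterised family, here is a minimal synchronous flooding agreement that tolerates up to f crash faults, the most restricted end of such a classification; crashes are assumed to happen cleanly at round boundaries, and the algorithms for less restricted (e.g. Byzantine) fault types are substantially more involved than this sketch.

    def flooding_consensus(initial, crashed_after_round, f):
        # initial: value per process id; crashed_after_round[p] = last round in which p still sends.
        known = {p: {initial[p]} for p in initial}
        for r in range(1, f + 2):                         # f + 1 rounds tolerate f crashes
            senders = {p: set(known[p]) for p in initial
                       if crashed_after_round.get(p, f + 2) >= r}
            for p in initial:
                for vals in senders.values():
                    known[p] |= vals
        # Every surviving process decides deterministically on the same set of values.
        return {p: min(known[p]) for p in initial
                if crashed_after_round.get(p, f + 2) >= f + 2}

    print(flooding_consensus({1: "a", 2: "b", 3: "b"}, crashed_after_round={1: 1}, f=1))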
APA, Harvard, Vancouver, ISO, and other styles
23

McLurkin, James D. (James Dwight) 1972. "Analysis and implementation of distributed algorithms for multi-robot systems." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/44716.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008.
Includes bibliographical references (p. 159-166).
Distributed algorithms for multi-robot systems rely on network communications to share information. However, the motion of the robots changes the network topology, which affects the information presented to the algorithm. For an algorithm to produce accurate output, robots need to communicate rapidly enough to keep the network topology correlated to their physical configuration. Infrequent communications will cause most multirobot distributed algorithms to produce less accurate results, and cause some algorithms to stop working altogether. The central theme of this work is that algorithm accuracy, communications bandwidth, and physical robot speed are related. This thesis has three main contributions: First, I develop a prototypical multi-robot application and computational model, propose a set of complexity metrics to evaluate distributed algorithm performance on multi-robot systems, and introduce the idea of the robot speed ratio, a dimensionless measure of robot speed relative to message speed in networks that rely on multi-hop communication. The robot speed ratio captures key relationships between communications bandwidth, mobility, and algorithm accuracy, and can be used at design time to trade off between them. I use this speed ratio to evaluate the performance of existing distributed algorithms for multi-hop communication and navigation. Second, I present a definition of boundaries in multi-robot systems, and develop new distributed algorithms to detect and characterize them. Finally, I define the problem of dynamic task assignment, and present four distributed algorithms that solve this problem, each representing a different trade-off between accuracy, running time, and communication resources. All the algorithms presented in this work are provably correct under ideal conditions and produce verifiable real-world performance.
They are self-stabilizing and robust to communications failures, population changes, and other errors. All the algorithms were tested on a swarm of 112 robots.
by James Dwight McLurkin, IV.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
24

Jeong, Dong Hwa. "DISTRIBUTED WIRELESS SENSOR NETWORK SYSTEMS: THEORETICAL FRAMEWORK, ALGORITHMS, AND APPLICATIONS." Case Western Reserve University School of Graduate Studies / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=case1436541959.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Chang, Ye-In. "Design and analysis of mutual exclusion algorithms for distributed systems /." The Ohio State University, 1991. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487687959967532.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Johnson, Ian D. "A study of adaptive load balancing algorithms for distributed systems." Thesis, Aston University, 1988. http://publications.aston.ac.uk/10660/.

Full text
Abstract:
With the advent of distributed computer systems with a largely transparent user interface, new questions have arisen regarding the management of such an environment by an operating system. One fertile area of research is that of load balancing, which attempts to improve system performance by redistributing the workload submitted to the system by the users. Early work in this field concentrated on static placement of computational objects to improve performance, given prior knowledge of process behaviour. More recently this has evolved into studying dynamic load balancing with process migration, thus allowing the system to adapt to varying loads. In this thesis, we describe a simulated system which facilitates experimentation with various load balancing algorithms. The system runs under UNIX and provides functions for user processes to communicate through software ports; processes reside on simulated homogeneous processors, connected by a user-specified topology, and a mechanism is included to allow migration of a process from one processor to another. We present the results of a study of adaptive load balancing algorithms, conducted using the aforementioned simulated system, under varying conditions; these results show the relative merits of different approaches to the load balancing problem, and we analyse the trade-offs between them. Following from this study, we present further novel modifications to suggested algorithms, and show their effects on system performance.
APA, Harvard, Vancouver, ISO, and other styles
27

Leslie, Robert. "An evaluation of load sharing algorithms for heterogeneous distributed systems." Thesis, University of Greenwich, 1997. http://gala.gre.ac.uk/6224/.

Full text
Abstract:
Distributed systems offer the ability to execute a job at nodes other than the originating one. Load sharing algorithms use this ability to distribute work around the system in order to achieve greater efficiency. This is reflected in substantially reduced response times. In the majority of studies the systems on which load sharing has been evaluated have been homogeneous in nature. This thesis considers load sharing in heterogeneous systems, in which the heterogeneity is exhibited in the processing power of the constituent nodes. Existing algorithms are evaluated and improved ones proposed. Most of the performance analysis is done through simulation. A model of diskless workstations communicating and transferring jobs by Remote Procedure Call is used. All assumptions about the overheads of inter-node communication are based upon measurements made on the university networks. The comparison of algorithms identifies those characteristics that offer improved performance in heterogeneous systems. The level of system information required for transfer is investigated and an optimum found. Judicious use of the collected information via algorithm design is shown to account for much of the improvement. However, detailed examination of algorithm behaviour compared with that of an 'optimum' load sharing scenario reveals that there are occasions when full use of all the information available is not beneficial. Investigations are carried out on the most promising algorithms to assess their adaptability, scalability and stability under a variety of differing conditions. The standard definitions of load balancing and load sharing are shown not to apply when considering heterogeneous systems. To validate the assumptions in the simulation model a load sharing scenario was implemented on a network of Sun workstations at the University. While the scope of the implementation was somewhat limited by lack of resources, it does demonstrate the relative ease with which the algorithms can be implemented without alteration of the operating system code or modification at the kernel level.
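A one-line flavour of why heterogeneity changes the picture: a transfer decision should compare expected completion times, not raw queue lengths. The rule below (the thresholds and fixed transfer-cost term are illustrative assumptions, not the thesis's tuned parameters) picks the fast node even though its queue is longer.

    def best_node(queues, speeds, origin, transfer_cost=0.2):
        def expected_delay(n):
            penalty = 0.0 if n == origin else transfer_cost   # cost of shipping the job
            return (queues[n] + 1) / speeds[n] + penalty
        return min(queues, key=expected_delay)

    queues = {"fast": 4, "slow": 1, "origin": 3}
    speeds = {"fast": 4.0, "slow": 1.0, "origin": 2.0}
    print(best_node(queues, speeds, origin="origin"))         # -> "fast", despite the longer queue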
APA, Harvard, Vancouver, ISO, and other styles
28

Bhola, Sumeer Kumar. "Replication in interactive distributed applications : abstractions, algorithms and evaluation." Diss., Georgia Institute of Technology, 1999. http://hdl.handle.net/1853/8138.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Kassabalidis, Ioannis N. "Applications of biologically inspired algorithms to complex systems /." Thesis, Connect to this title online; UW restricted, 2002. http://hdl.handle.net/1773/5915.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Deorukhkar, Mayuresh. "Deadlock probability prediction and detection in distributed systems /." free to MU campus, to others for purchase, 2004. http://wwwlib.umi.com/cr/mo/fullcit?p1421130.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Bogdanov, Kirill. "Reducing Long Tail Latencies in Geo-Distributed Systems." Licentiate thesis, KTH, Network Systems Laboratory (NS Lab), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-194729.

Full text
Abstract:
Computing services are highly integrated into modern society. Millions of people rely on these services daily for communication, coordination, trading, and access to information. To meet high demands, many popular services are implemented and deployed as geo-distributed applications on top of third party virtualized cloud providers. However, the nature of such deployment provides variable performance characteristics. To deliver high quality of service, such systems strive to adapt to ever-changing conditions by monitoring changes in state and making run-time decisions, such as choosing server peering, replica placement, and quorum selection. In this thesis, we seek to improve the quality of run-time decisions made by geo-distributed systems. We attempt to achieve this through: (1) a better understanding of the underlying deployment conditions, (2) systematic and thorough testing of the decision logic implemented in these systems, and (3) by providing a clear view into the network and system states which allows these services to make better-informed decisions. We performed a long-term cross datacenter latency measurement of the Amazon EC2 cloud provider. We used this data to quantify the variability of network conditions and demonstrated its impact on the performance of the systems deployed on top of this cloud provider. Next, we validate an application's decision logic used in popular storage systems by examining replica selection algorithms. We introduce GeoPerf, a tool that uses symbolic execution and lightweight modeling to perform systematic testing of replica selection algorithms. We applied GeoPerf to test two popular storage systems and we found one bug in each. Then, using traceroute and one-way delay measurements across EC2, we demonstrated persistent correlation between network paths and network latency. We introduce EdgeVar, a tool that decouples routing and congestion based changes in network latency. By providing this additional information, we improved the quality of latency estimation, as well as increased the stability of network path selection. Finally, we introduce Tectonic, a tool that tracks an application's requests and responses both at the user and kernel levels. In combination with EdgeVar, it provides a complete view of the delays associated with each processing stage of a request and response. Using Tectonic, we analyzed the impact of sharing CPUs in a virtualized environment and were able to infer the hypervisor's scheduling policies. We argue for the importance of knowing these policies and propose to use them in applications' decision making process.
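The snippet below sketches the kind of replica-selection policy GeoPerf is designed to test: rank replicas by a smoothed latency estimate and route each request to the lowest one. It is a minimal stand-in, not the decision logic of the storage systems examined in the thesis.

    class ReplicaSelector:
        def __init__(self, replicas, alpha=0.2):
            self.alpha = alpha
            self.ewma = {r: None for r in replicas}       # smoothed latency per replica

        def observe(self, replica, latency_ms):
            prev = self.ewma[replica]
            self.ewma[replica] = latency_ms if prev is None else (
                self.alpha * latency_ms + (1 - self.alpha) * prev)

        def pick(self):
            # Unmeasured replicas sort first so every replica gets probed at least once.
            return min(self.ewma, key=lambda r: (self.ewma[r] is not None, self.ewma[r] or 0.0))

    s = ReplicaSelector(["us-east", "eu-west", "ap-south"])
    for replica, latency in [("us-east", 40), ("eu-west", 90), ("ap-south", 180), ("us-east", 55)]:
        s.observe(replica, latency)
    print(s.pick())                                       # -> "us-east"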


APA, Harvard, Vancouver, ISO, and other styles
32

Wong, Wing-ki Vicky, and 黃穎琪. "An immunity-based distributed multiagent control framework." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2006. http://hub.hku.hk/bib/B37314348.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Lenharth, Andrew. "Algorithms for Stable Allocations in Distributed Real-Time Resource Management Systems." Ohio University / OhioLINK, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1102697777.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Lenharth, Andrew D. "Algorithms for stable allocations in distributed real-time resource management systems." Ohio : Ohio University, 2004. http://www.ohiolink.edu/etd/view.cgi?ohiou1102697777.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Cheung, Victor. "Distributed position estimation for wireless sensor networks /." View abstract or full-text, 2006. http://library.ust.hk/cgi/db/thesis.pl?COMP%202006%20CHEUNG.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Lu, Yapeng. "An integrated algorithm for distributed optimization in networked systems." Click to view the E-thesis via HKUTO, 2009. http://sunzi.lib.hku.hk/hkuto/record/B43224234.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Jones, Malachi Gabriel. "Design and implementation of a multi-agent systems laboratory." Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/29617.

Full text
Abstract:
Thesis (M. S.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2009.
Committee Chair: Jeff Shamma; Committee Member: Eric Feron; Committee Member: Magnus Egerstedt. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
38

Bhattacharya, Shaondip. "Multi-agent System Distributed Sensor Fusion Algorithms." Thesis, Luleå tekniska universitet, Rymdteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-65839.

Full text
Abstract:
The concept of consensus filters for sensor fusion is not entirely new, but one with internally implemented Bayesian fusion is. This work documents a novel state-update algorithm for sensor fusion that applies Bayesian fusion of data with variance on top of a single-integrator consensus algorithm. Comparative demonstrations of how consensus is reached over a pinning network are presented, along with a weighted Bayesian Luenberger-type observer and a 'Consensus on estimates' algorithm. To the best of our knowledge, such a filter has not appeared in the previous literature on this topic. In this work, we also extend the proof of a distributed Luenberger-type observer design to the case where the network under consideration is a strongly connected digraph.
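To make the fusion rule concrete, the sketch below combines each node's and its neighbours' Gaussian estimates by inverse-variance (Bayesian) weighting inside a single-integrator consensus update. The graph, step size, and measurement values are illustrative assumptions; this is not the thesis's filter.

# Minimal sketch, assuming a fixed undirected graph and Gaussian estimates.
def bayesian_fuse(estimates):
    """Fuse (mean, variance) pairs of independent Gaussian estimates."""
    precision = sum(1.0 / var for _, var in estimates)
    mean = sum(m / var for m, var in estimates) / precision
    return mean, 1.0 / precision

def consensus_step(states, variances, neighbours, epsilon=0.2):
    """One single-integrator step driven by the Bayesian-fused neighbourhood estimate."""
    new_states = {}
    for i, x_i in states.items():
        local = [(states[j], variances[j]) for j in neighbours[i]] + [(x_i, variances[i])]
        fused_mean, _ = bayesian_fuse(local)
        new_states[i] = x_i + epsilon * (fused_mean - x_i)
    return new_states

if __name__ == "__main__":
    neighbours = {0: [1], 1: [0, 2], 2: [1]}   # a path graph
    states = {0: 1.0, 1: 5.0, 2: 9.0}          # noisy measurements of one quantity
    variances = {0: 1.0, 1: 4.0, 2: 1.0}       # node 1 is the least certain
    for _ in range(50):
        states = consensus_step(states, variances, neighbours)
    print(states)                               # values converge toward a common estimate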
APA, Harvard, Vancouver, ISO, and other styles
39

Lu, Yapeng, and 呂亞鵬. "An integrated algorithm for distributed optimization in networked systems." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2009. http://hub.hku.hk/bib/B43224234.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Gujrati, Sumeet. "Models and algorithms for cyber-physical systems." Diss., Kansas State University, 2013. http://hdl.handle.net/2097/16922.

Full text
Abstract:
Doctor of Philosophy
Department of Computing and Information Sciences
Gurdip Singh
In this dissertation, we propose a cyber-physical system model and, based on this model, present algorithms for a set of distributed computing problems. Our model specifies a cyber-physical system as a combination of cyber-infrastructure, physical-infrastructure, and a user behavior specification. The cyber-infrastructure is superimposed on the physical-infrastructure and continuously monitors the physical-infrastructure's changing state. Users operate in the physical-infrastructure and interact with the cyber-infrastructure using hand-held devices and sensors, and their behavior is specified in terms of actions they can perform (e.g., move, observe). While in traditional distributed systems users interact solely via the underlying cyber-infrastructure, users in a cyber-physical system may interact directly with one another, access sensor data directly, and perform actions asynchronously with respect to the underlying cyber-infrastructure. These additional types of interactions affect how distributed algorithms for cyber-physical systems are designed. We augment distributed mutual exclusion and predicate detection algorithms so that they can accommodate user behavior and the interactions among users and the physical-infrastructure. The new algorithms have two components: one describing the behavior of the users in the physical-infrastructure and the other describing the algorithms in the cyber-infrastructure. Each combination of a user behavior and an algorithm in the cyber-infrastructure yields a different cyber-physical system algorithm. We have performed an extensive simulation study of our algorithms using the OMNeT++ simulation engine and the Uppaal model checker. We also propose the Cyber-Physical System Modeling Language (CPSML) for specifying cyber-physical systems, and a centralized global-state recording algorithm.
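A toy sketch of the two-component structure described above: a physical-infrastructure model in which users act asynchronously, and a cyber-infrastructure component that consults the monitored physical state before granting a (here trivially centralised) mutual-exclusion request. All class and method names are hypothetical; the dissertation's algorithms are distributed and far richer.

# Minimal sketch under assumed semantics; not the dissertation's algorithms.
class PhysicalInfrastructure:
    def __init__(self):
        self.user_location = {}

    def move(self, user, room):             # a user action, taken asynchronously
        self.user_location[user] = room

class CyberInfrastructure:
    def __init__(self, physical):
        self.physical = physical             # sensors give a view of physical state
        self.holder = None

    def request_lock(self, user, resource_room):
        # Grant only if the monitored physical state shows the user at the resource.
        if self.holder is None and self.physical.user_location.get(user) == resource_room:
            self.holder = user
            return True
        return False

    def release_lock(self, user):
        if self.holder == user:
            self.holder = None

if __name__ == "__main__":
    phys = PhysicalInfrastructure()
    cyber = CyberInfrastructure(phys)
    phys.move("alice", "lab-101")
    print(cyber.request_lock("alice", "lab-101"))   # True
    print(cyber.request_lock("bob", "lab-101"))     # False: bob not there, lock held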
APA, Harvard, Vancouver, ISO, and other styles
41

Qi, Dehu. "Multi-agent systems : integrating reinforcement learning, bidding and genetic algorithms /." free to MU campus, to others for purchase, 2002. http://wwwlib.umi.com/cr/mo/fullcit?p3060133.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Gubba, Ravikumar Krishnanjan. "Distributed simulation of power systems using real time digital simulator." Master's thesis, Mississippi State : Mississippi State University, 2009. http://library.msstate.edu/etd/show.asp?etd=etd-06152009-222641.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Zhuo, Ling, and 卓玲. "Document replication and distribution algorithms for load balancing in geographically distributed web server systems." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2002. http://hub.hku.hk/bib/B31228148.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Strothmann, Thim Frederik [Verfasser]. "Self-* Algorithms for distributed systems : programmable matter & overlay networks / Thim Frederik Strothmann." Paderborn : Universitätsbibliothek, 2017. http://d-nb.info/1137554843/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Jiang, Qiangfeng. "ALGORITHMS FOR FAULT TOLERANCE IN DISTRIBUTED SYSTEMS AND ROUTING IN AD HOC NETWORKS." UKnowledge, 2013. http://uknowledge.uky.edu/cs_etds/16.

Full text
Abstract:
Checkpointing and rollback recovery are well-known techniques for coping with failures in distributed systems. Future-generation supercomputers will be message-passing distributed systems consisting of millions of processors. As the number of processors grows, the failure rate also grows. Thus, designing efficient checkpointing and recovery algorithms for coping with failures in such large systems is important if these systems are to be fully utilized. We present a novel communication-induced checkpointing algorithm which helps reduce contention for accessing stable storage to store checkpoints. Under our algorithm, a process involved in a distributed computation can independently initiate consistent global checkpointing by saving its current state, called a tentative checkpoint. Other processes involved in the computation come to know about the consistent global checkpoint initiation through information piggy-backed on the application messages, or through limited control messages if necessary. When a process comes to know about a new consistent global checkpoint initiation, it takes a tentative checkpoint after processing the message. The tentative checkpoints can be flushed to stable storage when there is no contention for accessing stable storage. The tentative checkpoints, together with the message logs stored in stable storage, form a consistent global checkpoint. Ad hoc networks consist of a set of nodes that can form a network for communicating with each other without the aid of any infrastructure or human intervention. Nodes are energy-constrained, and hence routing algorithms designed for these networks should take this into consideration. We propose two routing protocols for mobile ad hoc networks which prevent nodes from broadcasting route requests unnecessarily during the route discovery phase and hence conserve energy and prevent contention in the network. One is called the Triangle Based Routing (TBR) protocol; the other is called the Routing Protocol with Selective Forwarding (RPSF). Both routing protocols greatly reduce the number of control packets needed to establish routes between pairs of source and destination nodes. As a result, they reduce the energy consumed for route discovery. Moreover, these protocols reduce congestion and packet collisions because only a limited number of nodes retransmit the route requests.
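The checkpointing idea can be summarised in a few lines: processes piggy-back the latest known checkpoint sequence number on application messages, and a receiver that learns of a newer initiation takes a tentative checkpoint after processing the message. The sketch below, with hypothetical names and with message logging and storage flushing omitted, follows only that description.

# Minimal sketch of piggy-backed checkpoint sequence numbers; illustrative only.
class Process:
    def __init__(self, pid):
        self.pid = pid
        self.seq = 0                      # latest known global checkpoint number
        self.state = 0
        self.tentative_checkpoints = []

    def initiate_checkpoint(self):
        self.seq += 1
        self.tentative_checkpoints.append((self.seq, self.state))

    def send(self, payload):
        return {"payload": payload, "seq": self.seq}   # piggy-back the sequence number

    def receive(self, message):
        self.state += message["payload"]               # process the application message
        if message["seq"] > self.seq:                  # a newer initiation is now known
            self.seq = message["seq"]
            self.tentative_checkpoints.append((self.seq, self.state))

if __name__ == "__main__":
    p, q = Process("p"), Process("q")
    p.initiate_checkpoint()               # p independently starts global checkpoint 1
    q.receive(p.send(42))                 # q learns of it via the piggy-backed seq
    print(q.tentative_checkpoints)        # [(1, 42)]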
APA, Harvard, Vancouver, ISO, and other styles
46

Khan, Md Mosaddek. "Speeding up GDL-based distributed constraint optimization algorithms in cooperative multi-agent systems." Thesis, University of Southampton, 2018. https://eprints.soton.ac.uk/421047/.

Full text
Abstract:
Coping with an increasing number of agents, tasks and/or resources in a complex environment poses an onerous challenge for coordination algorithms developed to process constraints in multi-agent systems. In particular, Distributed Constraint Optimization Problems (DCOPs) are a widely studied constraint-handling framework for coordinating interactions in cooperative multi-agent systems. Over the past decade, a number of algorithms have been developed to solve DCOPs, and they have been applied to many real-world applications. However, it is often observed that the outcome obtained from such algorithms becomes outdated or unusable because the optimization process takes too much time. The issue of taking too long to complete the internal operation of a DCOP algorithm becomes even more severe and commonplace as the system grows larger. This, in turn, limits the practical scalability of such algorithms. In effect, an optimization algorithm can handle larger systems if its completion time can be minimized. However, it is difficult to maintain solution quality and generic applicability whilst minimizing the completion time. In this thesis, we investigate techniques that have been used to solve DCOPs and examine their efficacy in light of the above observation. Specifically, we identify that Generalized Distributive Law (GDL) based inference algorithms have a number of axiomatic benefits and, as such, are well suited for deployment in practical multi-agent settings. However, scalability remains a widely acknowledged challenge for these algorithms owing to a number of potentially expensive phases. In the multi-agent systems literature, several attempts have sought to improve the scalability of GDL-based algorithms, typically by speeding up one of the expensive phases of existing approaches. However, most of them focus on a specific application domain and therefore cannot be applied to general DCOP settings. Although a few studies have been conducted to speed up GDL-based algorithms for general settings, they typically exhibit inconsistent performance. Against this background, the central problem that this thesis aims to address is that of speeding up GDL-based DCOP algorithms, so that they can be applied to general DCOP settings without compromising solution quality. To accomplish this objective, we identify three of the expensive phases of such algorithms and speed them up independently. The first is the maximization operation, which a GDL-based algorithm performs repeatedly during its optimization process. Each of these operations acts on a search space that grows exponentially with either, or both, of the corresponding constraint function's arity and its associated variables' domain size. Consequently, this phase has been considered one of the main reasons GDL-based algorithms can be computationally infeasible in practice, which eventually delays the final outcome of these algorithms. To overcome this challenge, we develop a generic domain pruning technique so that the corresponding maximization operator can act on a search space reduced by 33% to 81%. Moreover, we theoretically prove that the pruned search space obtained by our approach does not affect the outcome of the algorithms. The second is that GDL-based algorithms follow the Standard Message Passing (SMP) protocol to exchange messages among the nodes of a corresponding graphical representation of a DCOP. We identify that this incurs a significant delay in the form of the average waiting time for agents to attain the final outcome. Building on this insight, we advance the state of the art by developing a new way of speeding up GDL-based message passing algorithms. In particular, we propose a new cluster-based generic message passing protocol that minimizes the completion time of GDL-based algorithms by replacing the SMP protocol. Our approach utilizes partial decentralization and combines clustering with domain pruning. It also uses a regression method to determine the appropriate number of clusters for a given scenario. We empirically evaluate the performance of our proposed method in different settings, and find that it brings down the completion time by around 37-85% (1.6-6.5 times faster) for 100-900 nodes and by around 47-91% (1.9-11 times faster) for 3000-10000 nodes, compared to the current state of the art. Finally, the conventional DCOP model assumes that the sub-problem each agent is responsible for (i.e. the mapping of nodes in the constraint graph to agents) is part of the model description. While this assumption is often reasonable, there are many applications where there is some flexibility in making this assignment. Specifically, we recognise that a poor mapping can significantly increase an algorithm's completion time, and that finding an optimal mapping is an NP-hard problem. Given this trade-off, we propose a new time-efficient heuristic to determine a near-optimal mapping of nodes to the participating agents of a DCOP. As a pre-processing step, it runs prior to the optimization process of a GDL-based algorithm, and can be executed in a centralized or a decentralized manner, depending on what suits the application. We empirically demonstrate that it performs at around 90%-100% of the optimal mapping. Our results also show a speed-up of 16%-40% compared with the state of the art, meaning that a GDL-based algorithm can run 1.2-1.7 times faster when using the node-to-agent mapping obtained by our method. Taken together, the contributions presented in this thesis advance the state of the art of GDL-based DCOP algorithms, in terms of their scalability and applicability, by speeding up their optimization process.
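For readers unfamiliar with GDL-based inference, the sketch below shows the maximization step referred to above: for each value of a target variable, the algorithm maximizes a constraint function over the joint domain of the remaining variables, a search space that grows exponentially with arity and domain size. The thesis's pruning rule shrinks exactly this space; the sketch itself enumerates it naively, with an illustrative factor and variable names that are assumptions.

# Minimal sketch of the GDL/max-sum maximization step; pruning is not shown.
import itertools

def maximise_over_others(factor, domains, target_var):
    """For each value of target_var, maximise factor over all other variables."""
    others = [v for v in domains if v != target_var]
    best = {}
    for value in domains[target_var]:
        best_util = float("-inf")
        # This nested enumeration is the exponential search space that the
        # thesis's domain pruning technique reduces before evaluation.
        for assignment in itertools.product(*(domains[v] for v in others)):
            full = dict(zip(others, assignment))
            full[target_var] = value
            util = factor(full)
            if util > best_util:
                best_util = util
        best[value] = best_util
    return best

if __name__ == "__main__":
    domains = {"x": [0, 1, 2], "y": [0, 1, 2], "z": [0, 1, 2]}
    factor = lambda a: -(a["x"] - a["y"]) ** 2 - (a["y"] - a["z"]) ** 2
    print(maximise_over_others(factor, domains, "x"))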
APA, Harvard, Vancouver, ISO, and other styles
47

Sagha, Hossein. "Development of innovative robust stability enhancement algorithms for distribution systems containing distributed generators." Thesis, Queensland University of Technology, 2015. https://eprints.qut.edu.au/91052/1/Hossein_Sagha_Thesis.pdf.

Full text
Abstract:
This project was a step forward in improving the voltage profile of traditional low-voltage distribution networks with high photovoltaic generation or high peak demand. As a practical and economical solution, the developed methods use a Dynamic Voltage Restorer (DVR), a series voltage compensator, for continuous and communication-less power quality enhancement. The placement of the DVR in the network is optimised in order to minimise its power rating and cost. In addition, new approaches were developed for grid synchronisation and control of the DVR, which are integrated with the voltage quality improvement algorithm for stable operation.
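Since the DVR mentioned above is a series compensator, a useful mental model is that it injects the phasor difference between the desired load voltage and the sagged supply voltage. The sketch below illustrates only that phasor arithmetic under assumed nominal values; it is not the thesis's placement or control algorithm.

# Minimal sketch of the series-injection idea; values are illustrative.
import cmath
import math

def dvr_injection(supply_voltage, reference_voltage):
    """Return the series voltage phasor the DVR must inject (complex volts)."""
    return reference_voltage - supply_voltage

if __name__ == "__main__":
    v_ref = cmath.rect(230.0, 0.0)                       # desired 230 V at 0 degrees
    v_sag = cmath.rect(190.0, math.radians(-5.0))        # sagged, phase-shifted supply
    v_inj = dvr_injection(v_sag, v_ref)
    print(abs(v_inj), math.degrees(cmath.phase(v_inj)))  # magnitude and angle to inject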
APA, Harvard, Vancouver, ISO, and other styles
48

Yu, Han. "PLANNING AND SCHEDULING FOR LARGE-SCALEDISTRIBUTED SYSTEMS." Doctoral diss., University of Central Florida, 2005. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4186.

Full text
Abstract:
Many applications require computing resources well beyond those available on any single system. Simulations of atomic and subatomic systems with applications to materials science, computations related to the study of the natural sciences, and computer-aided design are examples of applications that can benefit from the resource-rich environment provided by a large collection of autonomous systems interconnected by high-speed networks. To transform such a collection of systems into a user's virtual machine, we have to develop new algorithms for coordination, planning, scheduling, resource discovery, and other functions that can be automated. We can then develop societal services based upon these algorithms, which hide the complexity of the computing system from users. In this dissertation, we address the problem of planning and scheduling for large-scale distributed systems. We discuss a model of the system; analyze the need for planning, scheduling, and plan switching to cope with a dynamically changing environment; present algorithms for the three functions; report simulation results studying the performance of the algorithms; and introduce an architecture for an intelligent large-scale distributed system.
Ph.D.
School of Computer Science
Engineering and Computer Science
Computer Science
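As a point of reference for the scheduling problems discussed in the abstract above, the sketch below applies a textbook greedy earliest-finish-time heuristic to independent tasks. It is purely illustrative and is not the dissertation's planning, scheduling, or plan-switching algorithm; task names and durations are assumptions.

# Minimal sketch of greedy list scheduling (longest task first, earliest-finishing machine).
import heapq

def greedy_schedule(task_durations, num_machines):
    """Assign each task to the machine that currently finishes earliest."""
    machines = [(0.0, m) for m in range(num_machines)]   # (finish_time, machine_id)
    heapq.heapify(machines)
    assignment = {}
    for task, duration in sorted(task_durations.items(), key=lambda kv: -kv[1]):
        finish, machine = heapq.heappop(machines)
        assignment[task] = machine
        heapq.heappush(machines, (finish + duration, machine))
    return assignment, max(f for f, _ in machines)

if __name__ == "__main__":
    tasks = {"t1": 4.0, "t2": 2.0, "t3": 3.0, "t4": 1.0}
    plan, makespan = greedy_schedule(tasks, num_machines=2)
    print(plan, makespan)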
APA, Harvard, Vancouver, ISO, and other styles
49

Yu, Dongxiao, and 于东晓. "Distributed algorithmic studies in wireless ad hoc networks." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2014. http://hdl.handle.net/10722/206656.

Full text
Abstract:
It has been envisioned that in the near future, wireless ad hoc networks will populate various application fields, ranging from disaster relief, environmental monitoring, and surveillance to medical applications, the observation of chemical and biological processes, and community mesh networks. The decentralized and self-organizing nature of wireless ad hoc networks makes distributed algorithms fit these networks very well, but it also poses great challenges to algorithm designers who aim for optimal communication efficiency. In this thesis, I develop a set of distributed algorithms addressing these challenges and solving some fundamental communication problems in wireless ad hoc networks. Communications in wireless ad hoc networks happen on a shared medium and are consequently subject to interference. The first part of the thesis focuses on disseminating information on multiple-access channels while avoiding collisions. For both single-channel and multi-channel networks, the complexity of information dissemination is investigated, and nearly optimal distributed algorithms are proposed. The second part of the thesis focuses on designing efficient distributed algorithms for some fundamental problems under the physical Signal-to-Interference-plus-Noise-Ratio (SINR) interference model. The SINR model defines global fading interference under which the success of a signal reception depends on all simultaneous transmissions. Compared with graph-based models, the SINR model reflects the fading and cumulative nature of radio signals; hence, it represents physical reality more precisely. However, the global nature of the SINR model makes the analysis of distributed algorithms much more challenging. Two types of fundamental problems are addressed in this part. The first type is closely related to communication coordination, including the wireless link scheduling problem and the node coloring problem. The second type concerns basic communication primitives, including the local broadcasting problem and the multiple-message broadcast problem. I investigate the complexity of these fundamental problems under the SINR interference model and present efficient or optimal distributed algorithms. In the third part of the thesis, I propose a general interference model that includes commonly adopted interference models as special cases, and study whether efficient distributed algorithms can still be designed and analyzed in such a general model. Specifically, the affectance model is proposed in this part, which depicts the relative interference (affectance) on communication links caused by transmitting nodes. Both graph-based models and the SINR model can be transformed into the affectance model. Under this general model, distributed algorithms with worst-case guarantees for the local broadcasting problem are presented. I also show how to make use of the developed techniques to obtain nearly optimal algorithms under the graph-based model and the SINR model.
published_or_final_version
Computer Science
Doctoral
Doctor of Philosophy
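The SINR model relied on in the abstract above can be stated compactly: a transmission on a link succeeds if the received signal power, divided by the noise plus the summed interference from all other simultaneous senders, exceeds a threshold. The sketch below checks that condition under an assumed path-loss model; the power, path-loss exponent, noise, and threshold values are illustrative.

# Minimal sketch of an SINR feasibility check for a set of simultaneous links.
import math

def received_power(sender, receiver, power=1.0, alpha=3.0):
    distance = math.dist(sender, receiver)
    return power / distance ** alpha

def sinr_feasible(links, beta=2.0, noise=1e-4):
    """links: list of (sender_position, receiver_position) transmitting simultaneously."""
    for i, (s_i, r_i) in enumerate(links):
        signal = received_power(s_i, r_i)
        interference = sum(received_power(s_j, r_i)
                           for j, (s_j, _) in enumerate(links) if j != i)
        if signal / (noise + interference) < beta:
            return False
    return True

if __name__ == "__main__":
    far_apart = [((0, 0), (1, 0)), ((100, 0), (101, 0))]   # well-separated links succeed
    close_by = [((0, 0), (1, 0)), ((2, 0), (3, 0))]        # nearby links interfere
    print(sinr_feasible(far_apart), sinr_feasible(close_by))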
APA, Harvard, Vancouver, ISO, and other styles
50

Ştefănescu, Alin. "Automatic synthesis of distributed transition systems." [S.l. : s.n.], 2006. http://nbn-resolving.de/urn:nbn:de:bsz:93-opus-25606.

Full text
APA, Harvard, Vancouver, ISO, and other styles
