Academic literature on the topic '080599 Distributed Computing not elsewhere classified'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic '080599 Distributed Computing not elsewhere classified.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Dissertations / Theses on the topic "080599 Distributed Computing not elsewhere classified"

1

Nguyen, Van-Tuong. "An implementation of the parallelism, distribution and nondeterminism of membrane computing models on reconfigurable hardware." 2010. http://arrow.unisa.edu.au:8081/1959.8/100802.

Abstract:
Membrane computing investigates models of computation inspired by certain features of biological cells, especially features arising because of the presence of membranes. Because of their inherent large-scale parallelism, membrane computing models (called P systems) can be fully exploited only through the use of a parallel computing platform. However, it is an open question whether it is feasible to develop an efficient and useful parallel computing platform for membrane computing applications. Such a computing platform would significantly outperform equivalent sequential computing platforms while still achieving acceptable scalability, flexibility and extensibility. To move closer to an answer to this question, I have investigated a novel approach to the development of a parallel computing platform for membrane computing applications that has the potential to deliver a good balance between performance, flexibility, scalability and extensibility. This approach involves the use of reconfigurable hardware and an intelligent software component that is able to configure the hardware to suit the specific properties of the P system to be executed. As part of my investigations, I have created a prototype computing platform called Reconfig-P based on the proposed development approach. Reconfig-P is the only existing computing platform for membrane computing applications able to support both system-level and region-level parallelism. Using an intelligent hardware source code generator called P Builder, Reconfig-P is able to realise an input P system as a hardware circuit in various ways, depending on which aspects of P systems the user wishes to emphasise at the implementation level. For example, Reconfig-P can realise a P system in a rule-oriented manner or in a region-oriented manner. P Builder provides a unified implementation framework within which the various implementation strategies can be supported. The basic principles of this framework conform to a novel design pattern called Content-Form-Strategy. The framework seamlessly integrates the currently supported implementation approaches, and facilitates the inclusion of additional implementation strategies and additional P system features. Theoretical and empirical results regarding the execution time performance and hardware resource consumption of Reconfig-P suggest that the proposed development approach is a viable means of attaining a good balance between performance, scalability, flexibility and extensibility. Most of the existing computing platforms for membrane computing applications fail to support nondeterministic object distribution, a key aspect of P systems that presents several interesting implementation challenges. I have devised an efficient algorithm for nondeterministic object distribution that is suitable for implementation in hardware. Experimental results suggest that this algorithm could be incorporated into Reconfig-P without significantly reducing its performance or efficiency.
Thesis (PhD Information Technology)--University of South Australia, 2010
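The nondeterministic object distribution discussed in this abstract can be made concrete with a small sketch. The Python below is a generic illustration of the underlying problem, assigning objects to competing evolution rules until no rule can fire (maximal parallelism), with the choice among applicable rules made at random; it is not the hardware-oriented algorithm developed in the thesis, and the rule encoding is an assumption for illustration.

```python
import random
from collections import Counter

def nondet_maximal_assignment(objects, rules, rng=random):
    """Nondeterministically assign objects to rules until none can fire.

    objects: Counter of objects available in a region, e.g. Counter({'a': 5})
    rules:   dict rule_name -> Counter of objects the rule consumes
    Returns a Counter mapping each rule to its number of applications.
    (Illustrative sketch only, not the thesis's hardware algorithm.)
    """
    available = Counter(objects)
    applications = Counter()
    while True:
        # Rules whose left-hand side is still covered by the available objects.
        applicable = [r for r, lhs in rules.items()
                      if all(available[o] >= n for o, n in lhs.items())]
        if not applicable:
            break  # maximal parallelism: no rule can consume anything more
        choice = rng.choice(applicable)  # the nondeterministic step
        available -= rules[choice]
        applications[choice] += 1
    return applications

# Two rules competing for the same 'a' objects; repeated runs can differ.
rules = {'r1': Counter({'a': 2}), 'r2': Counter({'a': 1, 'b': 1})}
print(nondet_maximal_assignment(Counter({'a': 5, 'b': 2}), rules))
```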
2

Daneshmand, Amir. "Parallel and Decentralized Algorithms for Big-data Optimization over Networks." Thesis, 2021.

Abstract:

Recent decades have witnessed the rise of a data deluge generated by heterogeneous sources, e.g., social networks, streaming, marketing services, etc., which has naturally created a surge of interest in the theory and applications of large-scale convex and non-convex optimization. For example, real-world instances of statistical learning problems such as deep learning, recommendation systems, etc. can generate sheer volumes of spatially/temporally diverse data (up to petabytes in commercial applications) with millions of decision variables to be optimized. Such problems are often referred to as big-data problems. Solving these problems by standard optimization methods demands an intractable amount of centralized storage and computational resources, which is infeasible; overcoming this bottleneck is the foremost purpose of the parallel and decentralized algorithms developed in this thesis.


This thesis consists of two parts: (I) Distributed Nonconvex Optimization and (II) Distributed Convex Optimization.


In Part (I), we start by studying a winning paradigm in big-data optimization, the Block Coordinate Descent (BCD) algorithm, which ceases to be effective when problem dimensions grow overwhelmingly. In particular, we consider a general family of constrained non-convex composite large-scale problems defined on multicore computing machines equipped with shared memory. We design a hybrid deterministic/random parallel algorithm to efficiently solve such problems, synergistically combining Successive Convex Approximation (SCA) with greedy/random dimensionality-reduction techniques. We provide theoretical and empirical results showing the efficacy of the proposed scheme in the face of huge-scale problems. The next step is to broaden the network setting to general mesh networks modeled as directed graphs, for which we propose a class of gradient-tracking-based algorithms with global convergence guarantees to critical points of the problem. We further explore the geometry of the landscape of the non-convex problems to establish second-order guarantees, strengthening our convergence results from local optimal solutions to global optimal solutions for a wide range of machine learning problems.
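For readers unfamiliar with gradient tracking, the sketch below shows a textbook gradient-tracking iteration of the kind the abstract refers to, written for an undirected network with a doubly stochastic mixing matrix; the thesis treats the more general directed-graph (mesh) setting, and the step size, mixing matrix, and toy objective here are assumptions.

```python
import numpy as np

def gradient_tracking(grads, x0, W, alpha=0.05, iters=200):
    """Decentralized gradient-tracking iteration (sketch).

    grads: list of per-agent gradient functions grad_i(x) -> ndarray
    x0:    (n_agents, dim) initial iterates, one row per agent
    W:     doubly stochastic mixing matrix matching the network topology
    Each agent mixes iterates with its neighbours and maintains a tracker
    y_i of the network-average gradient.
    """
    x = x0.copy()
    y = np.stack([g(xi) for g, xi in zip(grads, x)])  # y_i^0 = grad_i(x_i^0)
    for _ in range(iters):
        x_new = W @ x - alpha * y                     # consensus + descent
        g_new = np.stack([g(xi) for g, xi in zip(grads, x_new)])
        g_old = np.stack([g(xi) for g, xi in zip(grads, x)])
        y = W @ y + g_new - g_old                     # track average gradient
        x = x_new
    return x

# Toy example: agents jointly minimise sum_i ||x - b_i||^2 over a 4-node ring.
b = np.array([[1.0], [2.0], [3.0], [4.0]])
grads = [lambda x, bi=bi: 2 * (x - bi) for bi in b]
W = np.array([[.5, .25, 0, .25], [.25, .5, .25, 0],
              [0, .25, .5, .25], [.25, 0, .25, .5]])
print(gradient_tracking(grads, np.zeros((4, 1)), W).ravel())  # -> ~2.5 each
```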


In Part (II), we focus on a family of distributed convex optimization problems defined over meshed networks. Relevant state-of-the-art algorithms often consider limited problem settings and suffer pessimistic communication complexities compared with their centralized variants, which raises an important question: can one achieve the rate of centralized first-order methods over networks and, moreover, improve upon their communication costs by using higher-order local solvers? To answer these questions, we propose an algorithm that utilizes surrogate objective functions in the local solvers (hence going beyond first-order realms, such as proximal gradient) coupled with a perturbed (push-sum) consensus mechanism that aims to track locally the gradient of the central objective function. The algorithm is proved to match the convergence rate of its centralized counterparts, up to multiplicative network factors. When considering, in particular, Empirical Risk Minimization (ERM) problems with statistically homogeneous data across the agents, our algorithm employing high-order surrogates provably achieves faster rates than what is achievable by first-order methods. Such improvements are made without exchanging any Hessian matrices over the network.
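The perturbed consensus mechanism mentioned above builds on push-sum (ratio) consensus, which averages values over a directed network without requiring doubly stochastic weights. A minimal sketch of plain push-sum averaging, without the gradient-tracking perturbation, might look as follows; the weight matrix and topology are assumptions.

```python
import numpy as np

def push_sum_average(values, A, iters=100):
    """Push-sum (ratio) consensus sketch for directed networks.

    values: (n,) initial scalars to average
    A:      column-stochastic matrix (A[i, j] = weight node j sends to i)
    Each node keeps a value x_i and a weight w_i; the ratio x_i / w_i
    converges to the network-wide average on any strongly connected
    directed graph, even though A is not doubly stochastic.
    """
    x = values.astype(float)
    w = np.ones_like(x)
    for _ in range(iters):
        x = A @ x   # spread values along directed edges
        w = A @ w   # spread weights the same way
    return x / w    # ratio corrects the directed-graph imbalance

# Toy directed ring of 3 nodes with self-loops (columns sum to 1).
A = np.array([[0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5]])
print(push_sum_average(np.array([1.0, 2.0, 6.0]), A))  # -> ~3.0 each
```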


Finally, we focus on the ill-conditioning issue that impacts the efficiency of decentralized first-order methods over networks and renders them impractical in terms of both computation and communication cost. A natural solution is to develop distributed second-order methods, but their need for Hessian information incurs substantial communication overheads on the network. To work around such exorbitant communication costs, we propose a "statistically informed" preconditioned cubic regularized Newton method which provably improves upon the rates of first-order methods. The proposed scheme does not require communication of Hessian information over the network, and yet achieves the iteration complexity of centralized second-order methods up to the statistical precision. In addition, the (second-order) approximate nature of the utilized surrogate functions improves upon the per-iteration computational cost of our earlier proposed scheme in this setting.
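As background for the cubic regularized Newton method mentioned here, one step of Nesterov-Polyak-style cubic regularization can be sketched as below, solving the cubic subproblem through its optimality condition; the bisection solver and the positive-definiteness assumption on H are simplifications for illustration, not the "statistically informed" preconditioning of the thesis.

```python
import numpy as np

def cubic_newton_step(g, H, M, iters=100):
    """One cubic regularized Newton step (sketch).

    Solves min_s  g^T s + 0.5 s^T H s + (M/6) ||s||^3  via the optimality
    condition (H + (M/2) r I) s = -g with r = ||s||, located here by
    bisection on r. Assumes H is positive definite for simplicity.
    """
    def s_of(r):
        return np.linalg.solve(H + 0.5 * M * r * np.eye(len(g)), -g)

    lo, hi = 0.0, 1.0
    while np.linalg.norm(s_of(hi)) > hi:  # grow hi until it brackets r*
        hi *= 2.0
    for _ in range(iters):                # ||s_of(r)|| decreases in r
        mid = 0.5 * (lo + hi)
        if np.linalg.norm(s_of(mid)) > mid:
            lo = mid
        else:
            hi = mid
    return s_of(hi)

# Toy example: a well-conditioned quadratic with cubic damping.
H = np.array([[2.0, 0.0], [0.0, 0.5]])
g = np.array([1.0, -1.0])
print(cubic_newton_step(g, H, M=1.0))
```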

3

Nagaraj, Manish. "Energy Efficient Byzantine Agreement Protocols for Cyber Physical Resilience." Thesis, 2019.

Abstract:

Cyber-physical systems are deployed in a wide range of applications, from sensor nodes in a factory setting to drones in defense applications. Such a distributed collection of nodes or processes often needs to reach agreement on a set of values. Byzantine Agreement protocols address the problem of reaching agreement in an environment where a malicious entity can take control of a set of nodes and steer the system away from its normal operation. However, these protocols do not consider the energy consumption of the nodes. We explore Byzantine Agreement protocols from an energy-efficient perspective, providing both energy resilience, where the actions of the Byzantine nodes cannot adversely affect the energy consumption of non-malicious nodes, and fairness in the energy consumption of nodes over multiple rounds of agreement.
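To make the energy angle concrete, here is a toy single voting round with a per-node message tally as a crude energy proxy. Real Byzantine agreement requires f < n/3 faulty nodes and multiple rounds; this sketch, including the node model and the energy accounting, is an assumption for illustration and is not the protocol developed in the thesis.

```python
import random
from collections import Counter

def majority_round(values, byzantine, rng=random):
    """One all-to-all voting round with a per-node energy (message) tally.

    values:    dict node -> current binary value
    byzantine: set of nodes that may send arbitrary, inconsistent votes
    Returns (new values, messages sent per node). A single round like this
    does not achieve Byzantine agreement on its own; it only shows where
    the energy cost of agreement comes from.
    """
    nodes = list(values)
    energy = Counter()
    received = {i: [] for i in nodes}
    for sender in nodes:
        for receiver in nodes:
            vote = rng.choice([0, 1]) if sender in byzantine else values[sender]
            received[receiver].append(vote)
            energy[sender] += 1          # each transmission costs energy
    new_values = {i: Counter(received[i]).most_common(1)[0][0] for i in nodes}
    return new_values, energy

values = {0: 1, 1: 1, 2: 1, 3: 0}
print(majority_round(values, byzantine={3}))
```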

4

Hou, Xiaodong. "Distributed Solutions for a Class of Multi-agent Optimization Problems." Thesis, 2019.

Abstract:
Distributed optimization over multi-agent networks has become an increasingly popular research topic, as it covers many applications from various areas such as consensus optimization, distributed control, network resource allocation, large-scale machine learning, etc. Parallel distributed solution algorithms are highly desirable because they are more scalable, more robust against agent failure, and align more naturally with the underlying agent network topology or with big-data parallel computing frameworks. In this dissertation, we consider a multi-agent optimization formulation where the global objective function is the summation of individual local objective functions with respect to local agents' decision variables of different dimensions, and the constraints include both local private constraints and shared coupling constraints. Employing and extending tools from monotone operator theory (including resolvent operators, operator splitting, etc.) and the fixed-point iteration of nonexpansive, averaged operators, a series of distributed solution approaches are proposed, all of which are iterative algorithms that rely on parallel agent-level local updates and inter-agent coordination. Some of the algorithms require synchronization across all agents for information exchange during each iteration, while others allow asynchrony and delays. The algorithms' convergence to an optimal solution, if one exists, is established by first characterizing them as fixed-point iterations of certain averaged operators under carefully designed norms, and then showing that the fixed-point sets of these averaged operators are exactly the optimal solution set of the original multi-agent optimization problem. The effectiveness and performance of the proposed algorithms are demonstrated and compared through several numerical examples.
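The fixed-point machinery this abstract relies on can be illustrated with the Krasnosel'skii-Mann iteration, the basic template behind averaged-operator splitting methods; the operator, step size, and toy constrained least-squares example below are assumptions for illustration.

```python
import numpy as np

def krasnoselskii_mann(T, x0, lam=0.5, iters=500, tol=1e-9):
    """Krasnosel'skii-Mann iteration x_{k+1} = (1-lam) x_k + lam T(x_k).

    Converges to a fixed point of any nonexpansive operator T with a
    nonempty fixed-point set; averaged-operator splitting schemes like
    those in the abstract are instances of this template.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x_new = (1 - lam) * x + lam * T(x)
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

# Example: projected gradient as an averaged operator for
# min ||x - b||^2 subject to x >= 0 (projection onto the orthant).
b = np.array([1.0, -2.0])
step = 0.4
T = lambda x: np.maximum(x - step * 2 * (x - b), 0.0)
print(krasnoselskii_mann(T, np.zeros(2)))  # -> [1., 0.]
```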
5

Jajoo, Akshay. "Exploiting the Spatial Dimension of Big Data Jobs for Efficient Cluster Job Scheduling." Thesis, 2020.

Abstract:
With the growing business impact of distributed big-data analytics jobs, it has become crucial to optimize their execution and resource consumption. In most cases, such jobs consist of multiple sub-entities called tasks and are executed online in a large, shared distributed computing system. The ability to accurately estimate runtime properties and coordinate the execution of the sub-entities of a job allows a scheduler to schedule jobs efficiently. This thesis presents the first study that highlights the spatial dimension, an inherent property of distributed jobs, and underscores its importance in efficient cluster job scheduling. We develop two new classes of spatial-dimension-based algorithms to address the two primary challenges of cluster scheduling. First, we propose, validate, and design two complete systems that employ learning algorithms exploiting the spatial dimension. We demonstrate high similarity in runtime properties between sub-entities of the same job through detailed trace analysis of four different industrial cluster traces. We identify design challenges and propose principles for a sampling-based learning system in two settings: first for a coflow scheduler, and second for a cluster job scheduler.
We also propose, design, and demonstrate the effectiveness of new multi-task scheduling algorithms based on effective synchronization across the spatial dimension. We underline, and validate by experimental analysis, the importance of synchronization between the sub-entities (flows, tasks) of a distributed entity (coflow, data analytics job) for its efficient execution. We also highlight that ignoring sibling sub-entities when scheduling an individual sub-entity can lead to sub-optimal overall cluster performance. We propose, design, and implement a full coflow scheduler based on these assertions.
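The sampling-based learning idea can be illustrated with a toy shortest-coflow-first ordering that estimates each coflow's size from a few of its flows, exploiting the similarity between sibling sub-entities noted above; the data layout and sampling fraction are assumptions, and this is not the scheduler built in the thesis.

```python
import random

def schedule_coflows_by_sampling(coflows, sample_frac=0.2, rng=random):
    """Sketch of sampling-based shortest-coflow-first ordering.

    coflows: dict name -> list of flow sizes (the 'spatial dimension':
    sibling flows of one coflow tend to have similar sizes).
    Estimates each coflow's total size from a small sample of its flows,
    then orders coflows by the estimate, smallest first.
    """
    estimates = {}
    for name, flows in coflows.items():
        k = max(1, int(len(flows) * sample_frac))
        sample = rng.sample(flows, k)              # probe a few flows only
        estimates[name] = sum(sample) / k * len(flows)  # scale up the mean
    return sorted(coflows, key=estimates.get)

coflows = {'A': [10] * 20, 'B': [1] * 20, 'C': [5] * 20}
print(schedule_coflows_by_sampling(coflows))  # -> ['B', 'C', 'A']
```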
