Academic literature on the topic 'Parallel and distributed algorithms'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Parallel and distributed algorithms.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Parallel and distributed algorithms"

1

Aupy, Guillaume, and Xueyan Tang. "Parallel and distributed algorithms." Concurrency and Computation: Practice and Experience 30, no. 17 (July 2, 2018): e4663. http://dx.doi.org/10.1002/cpe.4663.

2

Subramanian, K., and M. Zargham. "Distributed and Parallel Demand Driven Logic Simulation Algorithms." VLSI Design 1, no. 2 (January 1, 1994): 169–79. http://dx.doi.org/10.1155/1994/12503.

Abstract:
Based on the demand-driven approach, distributed and parallel simulation algorithms are proposed. Demand-driven simulation tries to minimize the number of component evaluations by restricting computation to only those components required for the watched output requests. For a specific output value request, the required input line values are requested from the respective components. The process continues until known signal values (system input signal values) are reached. We present a distributed demand-driven algorithm with infinite memory requirement (though the memory required at each process is no greater than in sequential demand-driven simulation), and a parallel demand-driven simulation with finite memory requirement. In our algorithms, each component is assigned a logical process. The algorithms have been implemented on the Sequent Balance 8000 multiprocessor machine, and several sample circuits were simulated. The algorithms were compared with distributed discrete-event simulation. Our distributed algorithm performed many times faster than the discrete-event simulation in cases where few results were needed, and the parallel algorithm performed 2 to 4 times faster than the distributed discrete-event simulation.
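To make the demand-driven idea concrete, here is a minimal Python sketch (our illustration, not the authors' code; the three-gate netlist and signal names are invented): evaluation starts from a watched output and recurses only through the components whose values that request actually needs.

CIRCUIT = {                     # hypothetical netlist: signal -> (gate type, input signals)
    "out": ("and", ["n1", "n2"]),
    "n1":  ("or",  ["a", "b"]),
    "n2":  ("not", ["c"]),
}
PRIMARY_INPUTS = {"a": 0, "b": 1, "c": 0}   # known system input values

def demand(signal, cache):
    """Return the value of `signal`, evaluating only what this request requires."""
    if signal in PRIMARY_INPUTS:            # known input value: the recursion stops here
        return PRIMARY_INPUTS[signal]
    if signal in cache:                     # component already evaluated for this request
        return cache[signal]
    gate, inputs = CIRCUIT[signal]
    values = [demand(s, cache) for s in inputs]     # request the required input line values
    result = {"and": all, "or": any}.get(gate, lambda v: not v[0])(values)
    cache[signal] = int(result)
    return cache[signal]

print(demand("out", {}))   # gates not reachable from "out" are never touched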
3

Rine, David C. "Parallel and Distributed Processability of Objects." Fundamenta Informaticae 12, no. 3 (July 1, 1989): 317–56. http://dx.doi.org/10.3233/fi-1989-12304.

Abstract:
Partitioning and allocating of software components are two important parts of software design in distributed software engineering. This paper presents two general algorithms that can, to a limited extent, be used as tools to assist in partitioning software components represented as objects in a distributed software design environment. One algorithm produces a partition (equivalence classes) of the objects, and a second algorithm allows a minimum amount of redundancy. Only binary relationships of actions (use or non-use) are considered in this paper.
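As a toy illustration of partitioning by binary use/non-use relationships (our sketch; the object and action names are invented, and this is far simpler than the paper's algorithms), objects with identical action-usage profiles can be grouped into equivalence classes:

uses = {                        # hypothetical: object -> set of actions it uses
    "Editor":  {"read", "write"},
    "Printer": {"read"},
    "Logger":  {"read", "write"},
}

classes = {}
for obj, actions in uses.items():
    classes.setdefault(frozenset(actions), []).append(obj)   # same profile -> same class

for profile, members in classes.items():
    print(sorted(profile), "->", sorted(members))            # each line is one equivalence class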
4

Ravikumar, C. P., Vikas Jain, and Anurag Dod. "Distributed Fault Simulation Algorithms on Parallel Virtual Machine." VLSI Design 12, no. 1 (January 1, 2001): 81–99. http://dx.doi.org/10.1155/2001/58303.

Abstract:
In this paper, we describe distributed algorithms for combinational fault simulation assuming the classical stuck-at fault model. Our algorithms have been implemented on a network of Sun workstations under the Parallel Virtual Machine (PVM) environment. Two techniques are used for subdividing work among processors: test set partition and fault set partition. The sequential algorithm for fault simulation, used on individual nodes of the network, is based on a novel path compression technique proposed in this paper. We describe experimental results on a number of ISCAS'85 benchmark circuits.
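The two work-division strategies named in the abstract can be sketched as follows (a schematic illustration under our own assumptions; the PVM message passing and the path-compression simulator itself are not reproduced):

def partition(items, n_workers):
    """Deal items round-robin so each worker receives a near-equal share."""
    return [items[i::n_workers] for i in range(n_workers)]

faults = [f"net{i}/stuck-at-{v}" for i in range(10) for v in (0, 1)]   # invented fault list
tests = [f"pattern_{i:04b}" for i in range(16)]                        # invented test set

# Fault set partition: every worker applies all tests to its own slice of faults.
for w, chunk in enumerate(partition(faults, 4)):
    print(f"worker {w}: simulate {len(tests)} tests against {len(chunk)} faults")

# Test set partition: every worker applies its own slice of tests to all faults.
for w, chunk in enumerate(partition(tests, 4)):
    print(f"worker {w}: simulate {len(chunk)} tests against {len(faults)} faults")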
5

Miki, Mitsunori, Tomoyuki Hiroyasu, and Ikki Ohmukai. "Hybridization Crossover for Parallel Distributed Genetic Algorithms." Proceedings of The Computational Mechanics Conference 2000.13 (2000): 299–300. http://dx.doi.org/10.1299/jsmecmd.2000.13.299.

6

Hanuliak, Juraj, and Ivan Hanuliak. "To performance evaluation of distributed parallel algorithms." Kybernetes 34, no. 9/10 (October 2005): 1633–50. http://dx.doi.org/10.1108/03684920510614858.

7

Alba, Enrique, and José M. Troya. "A survey of parallel distributed genetic algorithms." Complexity 4, no. 4 (March 1999): 31–52. http://dx.doi.org/10.1002/(sici)1099-0526(199903/04)4:4<31::aid-cplx5>3.0.co;2-4.

8

Cho, Kilseok, Alan D. George, Raj Subramaniyan, and Keonwook Kim. "Parallel Algorithms for Adaptive Matched-Field Processing on Distributed Array Systems." Journal of Computational Acoustics 12, no. 02 (June 2004): 149–74. http://dx.doi.org/10.1142/s0218396x04002274.

Abstract:
Matched-field processing (MFP) localizes sources more accurately than plane-wave beamforming by employing full-wave acoustic propagation models for the cluttered ocean environment. The minimum variance distortionless response MFP (MVDR–MFP) algorithm incorporates the MVDR technique into the MFP algorithm to enhance beamforming performance. Such an adaptive MFP algorithm involves intensive computational and memory requirements due to its complex acoustic model and environmental adaptation. The real-time implementation of adaptive MFP algorithms for large surveillance areas presents a serious computational challenge where high-performance embedded computing and parallel processing may be required to meet real-time constraints. In this paper, three parallel algorithms based on domain decomposition techniques are presented for the MVDR–MFP algorithm on distributed array systems. The parallel performance factors in terms of execution times, communication times, parallel efficiencies, and memory capacities are examined on three potential distributed systems including two types of digital signal processor arrays and a cluster of personal computers. The performance results demonstrate that these parallel algorithms provide a feasible solution for real-time, scalable, and cost-effective adaptive beamforming on embedded, distributed array systems.
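The domain decomposition the authors describe can be pictured with a small sketch (our assumptions throughout: random placeholder data instead of a real acoustic propagation model, and a loop standing in for separate array nodes). Each node scans the standard MVDR surface P = 1 / (d^H R^-1 d) over its own slice of candidate source locations:

import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_cells, n_nodes, n_snap = 8, 1000, 4, 64

# Sample covariance from array snapshots, with diagonal loading for stability.
snaps = rng.standard_normal((n_sensors, n_snap)) + 1j * rng.standard_normal((n_sensors, n_snap))
R = snaps @ snaps.conj().T / n_snap + 1e-3 * np.eye(n_sensors)
R_inv = np.linalg.inv(R)

# Placeholder replica vectors; a real MFP system derives these from a propagation model.
d = rng.standard_normal((n_cells, n_sensors)) + 1j * rng.standard_normal((n_cells, n_sensors))
d /= np.linalg.norm(d, axis=1, keepdims=True)

for node in range(n_nodes):                 # domain decomposition over the search grid
    mine = d[node::n_nodes]                 # this node's share of candidate locations
    power = 1.0 / np.einsum("ci,ij,cj->c", mine.conj(), R_inv, mine).real
    print(f"node {node}: strongest of its cells = {power.max():.3f}")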
9

Gravvanis, George A., and Hamid R. Arabnia. "The Journal of Parallel Algorithms and Applications: Special Issue on Parallel and Distributed Algorithms." Parallel Algorithms and Applications 19, no. 2-3 (June 2004): 77–78. http://dx.doi.org/10.1080/10637190410001725445.

10

Irony, Dror, and Sivan Toledo. "Trading Replication for Communication in Parallel Distributed-Memory Dense Solvers." Parallel Processing Letters 12, no. 01 (March 2002): 79–94. http://dx.doi.org/10.1142/s0129626402000847.

Abstract:
We present new communication-efficient parallel dense linear solvers: a solver for triangular linear systems with multiple right-hand sides and an LU factorization algorithm. These solvers are highly parallel and they perform a factor of 0.4·P^(1/6) less communication than existing algorithms, where P is the number of processors. The new solvers reduce communication at the expense of using more temporary storage. Previously, algorithms that reduce communication by using more memory were only known for matrix multiplication. Our algorithms are recursive, elegant, and relatively simple to implement. We have implemented them using MPI, a message-passing library, and tested them on a cluster of workstations.
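The recursive structure of such a triangular solver with multiple right-hand sides can be sketched as below (a serial toy under our assumptions; the paper's actual contribution, the replication scheme that cuts communication by the 0.4·P^(1/6) factor, is not reproduced here):

import numpy as np

def trsm_lower(L, B):
    """Solve L X = B (L lower-triangular, B holds multiple right-hand sides) recursively."""
    n = L.shape[0]
    if n == 1:
        return B / L[0, 0]
    h = n // 2
    X1 = trsm_lower(L[:h, :h], B[:h])       # solve the leading block
    B2 = B[h:] - L[h:, :h] @ X1             # trailing update: the communication-heavy step
    X2 = trsm_lower(L[h:, h:], B2)          # solve the trailing block
    return np.vstack([X1, X2])

L = np.tril(np.random.rand(8, 8)) + 8 * np.eye(8)   # well-conditioned lower-triangular matrix
B = np.random.rand(8, 3)                            # three right-hand sides
assert np.allclose(trsm_lower(L, B), np.linalg.solve(L, B))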

Dissertations / Theses on the topic "Parallel and distributed algorithms"

1

Bolotski, Michael. "Distributed bit-parallel architecture and algorithms for early vision." Thesis, University of British Columbia, 1990. http://hdl.handle.net/2429/29462.

Abstract:
A new form of parallelism, distributed bit-parallelism, is introduced. A distributed bit-parallel organization distributes each bit of a data item to a different processor. Bit-parallelism allows computation that is sub-linear with word size for such operations as integer addition, arithmetic shifts, and data moves. The implications of bit-parallelism for system architecture are analyzed. An implementation of a bit-parallel architecture based on a mesh with bypass network is presented. The performance of bit-parallel algorithms on this architecture is analyzed and found to be several times faster than bit-serial algorithms. The application of the architecture to low level vision algorithms is discussed.
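The claim that addition becomes sub-linear in word size can be illustrated with a parallel-prefix (Kogge-Stone style) carry computation, where each bit position stands in for one processor (our sketch under that interpretation; the thesis's mesh-with-bypass implementation is not modelled):

def bit_parallel_add(a_bits, b_bits):
    """Add little-endian bit vectors using O(log n) parallel-prefix carry rounds."""
    n = len(a_bits)
    g = [a & b for a, b in zip(a_bits, b_bits)]        # generate a carry here
    p = [a ^ b for a, b in zip(a_bits, b_bits)]        # propagate a carry through here
    half_sum = p[:]                                    # sum bits before carries are known
    d = 1
    while d < n:                                       # log2(n) rounds; each is one parallel step
        g = [g[i] | (p[i] & g[i - d]) if i >= d else g[i] for i in range(n)]
        p = [p[i] & p[i - d] if i >= d else p[i] for i in range(n)]
        d *= 2
    carry_in = [0] + g[:-1]                            # carry into bit i = generate of bits below i
    return [half_sum[i] ^ carry_in[i] for i in range(n)], g[-1]

to_bits = lambda x, n: [(x >> i) & 1 for i in range(n)]
to_int = lambda bs: sum(b << i for i, b in enumerate(bs))
s, carry_out = bit_parallel_add(to_bits(13, 8), to_bits(7, 8))
assert to_int(s) + (carry_out << 8) == 20              # 13 + 7, resolved in 3 rounds, not 8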
2

Collet, Julien. "Exploration of parallel graph-processing algorithms on distributed architectures." Thesis, Compiègne, 2017. http://www.theses.fr/2017COMP2391/document.

Abstract:
With the advent of ever-increasing graph datasets in a large number of domains, parallel graph-processing applications deployed on distributed architectures are more and more needed to cope with the growing demand for memory and compute resources. Though large-scale distributed architectures are available, notably in the High-Performance Computing (HPC) domain, the programming and deployment complexity of such graph-processing algorithms, whose parallelization and complexity are highly data-dependent, hampers usability. Moreover, the difficult evaluation of the performance behaviors of these applications complicates the assessment of the relevance of the chosen architecture. With this in mind, this thesis deals with the exploration of graph-processing algorithms on distributed architectures, notably using GraphLab, a state-of-the-art graph-processing framework. Two use-cases are considered, one from execution-trace analysis and one from genomic data processing. For each, a parallel implementation is proposed and deployed on several distributed architectures of varying scales. This study highlights operating ranges, which can be leveraged to select a relevant operating point with respect to the datasets processed and the cluster nodes used. A further study compares commodity cluster architectures and higher-end compute servers using the two use-cases previously developed; it shows the particular relevance of clustered commodity workstations, which are considerably cheaper and simpler with respect to node architecture, over higher-end systems in this applicative context, a gap that widens further once performance is weighted by purchase and operating costs. The thesis then explores how performance studies help in cluster design for graph processing: studying the throughput of a graph-processing system gives fruitful insights for further node architecture improvements, and a more in-depth analysis leads to guidelines for appropriately sizing a cluster for a given workload, paving the way toward resource allocation for graph processing. Finally, hardware improvements for next generations of graph-processing servers are proposed and evaluated. A flash-based victim-swap mechanism is proposed to mitigate the significant performance drop observed when main memory saturates, and the relevance of ARM-based microservers for graph processing is investigated with a port of GraphLab on an NVIDIA TX2-based architecture, whose lower raw performance is compensated by far better energy efficiency.
3

Cordova, Gabriel. "A distributed reconstruction of EKG signals." Thesis, University of Texas at El Paso, 2008. http://0-proquest.umi.com.lib.utep.edu/login?COPT=REJTPTU0YmImSU5UPTAmVkVSPTI=&clientId=2515.

4

Xu, Lei. "Cellular distributed and parallel computing." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:88ffe124-c2fd-4144-86fe-47b35f4908bd.

Abstract:
This thesis focuses on novel approaches to distributed and parallel computing that are inspired by the mechanism and functioning of biological cells. We refer to this concept as cellular distributed and parallel computing which focuses on three important principles: simplicity, parallelism, and locality. We first give a parallel polynomial-time solution to the constraint satisfaction problem (CSP) based on a theoretical model of cellular distributed and parallel computing, which is known as neural-like P systems (or neural-like membrane systems). We then design a class of simple neural-like P systems to solve the fundamental maximal independent set (MIS) selection problem efficiently in a distributed way, by drawing inspiration from the way that developing cells in the fruit fly become specialised. Building on the novel bio-inspired approach to distributed MIS selection, we propose a new simple randomised algorithm for another fundamental distributed computing problem: the distributed greedy colouring (GC) problem. We then propose an improved distributed MIS selection algorithm that incorporates for the first time another important feature of the biological system: adapting the probabilities used at each node based on local feedback from neighbouring nodes. The improved distributed MIS selection algorithm is again extended to solve the distributed greedy colouring problem. Both improved algorithms are simple and robust and work under very restrictive conditions, moreover, they both achieve state-of-the-art performance in terms of their worst-case time complexity and message complexity. Given any n-node graph with maximum degree Delta, the expected time complexity of our improved distributed MIS selection algorithm is O(log n) and the message complexity per node is O(1). The expected time complexity of our improved distributed greedy colouring algorithm is O(Delta + log n) and the message complexity per node is again O(1). Finally, we provide some experimental results to illustrate the time and message complexity of our proposed algorithms in practice. In particular, we show experimentally that the number of colours used by our distributed greedy colouring algorithms turns out to be optimal or near-optimal for many standard graph colouring benchmarks, so they provide effective simple heuristic approaches to computing a colouring with a small number of colours.
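The round structure of such randomised distributed MIS selection can be sketched as follows (a simplified Luby-style variant with a fixed firing probability, our assumption; the thesis's fly-inspired probability schedule and feedback-based improvement are not reproduced):

import random

def distributed_mis(adj):
    """adj: node -> set of neighbours. Returns a maximal independent set."""
    active, mis = set(adj), set()
    while active:
        fired = {v for v in active if random.random() < 0.5}   # each node decides locally
        for v in fired:
            if not (fired & adj[v]):       # no neighbour fired this round: join the MIS
                mis.add(v)
        # MIS members and their neighbours withdraw; the rest retry next round.
        settled = mis | {u for v in mis for u in adj[v]}
        active -= settled
    return mis

ring = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}   # a 6-cycle as a test graph
print(sorted(distributed_mis(ring)))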
5

Kim, Jinwoo. "Hierarchical asynchronous genetic algorithms for parallel/distributed simulation-based optimization." Diss., The University of Arizona, 1994. http://hdl.handle.net/10150/186831.

Abstract:
The objective of this dissertation is to develop a multi-resolution optimization strategy based on evolution algorithms in a parallel/distributed simulation environment. The system architecture is constructed hierarchically with multiple clusters, each consisting of an expert system (controller) and a set of genetic algorithm optimizers (agents). We propose an asynchronous genetic algorithm (AGA) which continuously updates the population in parallel genetic algorithms. Asynchronous evaluation of the population on a parallel computer improves the utilization of the processors and reduces search time when the evaluation time of individuals is highly variable. Further, we have devised a noise assignment scheme which resolves the premature-convergence drawback of genetic algorithms. In this scheme, the binary representation (discrete sampling) of an individual is combined with a random number (analog sampling), so that the genetic algorithm can investigate the entire search space regardless of the bit-size of an individual. Real application problems require the evaluation of a large number of parameters, and their search complexity grows beyond the capability of a single-level GA optimizer. In response, we have developed a novel scheme called Hierarchical Genetic Algorithms. This multilevel GA optimization strategy is based on an Intelligent Machine Architecture which supports non-deterministic computation, intensive and irregular memory access patterns, and large potential for parallel computing. The clusters in the Hierarchical GAs are coordinated hierarchically, and creation and deletion of nodes are performed dynamically based upon performance. During the optimization process, the clusters cooperate to solve different levels of the abstracted problem. A candidate solution at a higher level creates a lower-level cluster which utilizes previously optimized parameter information; it can also contribute to the search process of a higher level by sending feedback information. We have compared the performance of the Hierarchical GAs and a simple (single-level) GA in various experiments. The Hierarchical GAs adaptively change their structure to allocate more computing resources to the promising nodes. With the same amount of memory for the population, the simulation results show that the Hierarchical GAs find a solution faster than the simple GA.
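A minimal sketch of the asynchronous update loop (our construction: threads stand in for the dissertation's parallel processors, and the fitness function, encoding, and steady-state replacement rule are placeholders): offspring enter the population as soon as their evaluations complete, with no generation barrier.

import random
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

def fitness(x):                          # placeholder for a slow, variable-time simulation
    return -sum((xi - 0.5) ** 2 for xi in x)

def make_offspring(pop):
    (a, _), (b, _) = random.sample(pop, 2)
    child = [random.choice(genes) for genes in zip(a, b)]       # uniform crossover
    child[random.randrange(len(child))] = random.random()      # point mutation
    return child

pop = []
for _ in range(20):
    x = [random.random() for _ in range(5)]
    pop.append((x, fitness(x)))

with ThreadPoolExecutor(max_workers=4) as ex:
    pending = {}
    for _ in range(4):                                   # keep every worker busy from the start
        child = make_offspring(pop)
        pending[ex.submit(fitness, child)] = child
    for _ in range(100):                                 # 100 asynchronous insertions
        done, _ = wait(pending, return_when=FIRST_COMPLETED)
        for fut in done:
            child = pending.pop(fut)
            worst = min(range(len(pop)), key=lambda i: pop[i][1])
            if fut.result() > pop[worst][1]:             # steady-state replacement of the worst
                pop[worst] = (child, fut.result())
            refill = make_offspring(pop)                 # resubmit without waiting for the others
            pending[ex.submit(fitness, refill)] = refill

print(f"best fitness after asynchronous updates: {max(f for _, f in pop):.4f}")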
6

Kalaiselvi, S. "Checkpointing Algorithms for Parallel Computers." Thesis, Indian Institute of Science, 1997. http://hdl.handle.net/2005/67.

Abstract:
Checkpointing is a technique widely used in parallel/distributed computers for rollback error recovery. Checkpointing is defined as the coordinated saving of process state information at specified time instances. Checkpoints help in restoring the computation from the latest saved state in case of failure. In addition to fault recovery, checkpointing has applications in fault detection, distributed debugging and process migration. Checkpointing in uniprocessor systems is easy due to the fact that there is a single clock and events occur with respect to this clock; there is a clear demarcation of events that happen before a checkpoint and events that happen after it. In parallel computers a large number of computers coordinate to solve a single problem. Since there might be multiple streams of execution, checkpoints have to be introduced along all these streams simultaneously. The absence of a global clock necessitates explicit coordination to obtain a consistent global state. Events occurring in a distributed system can be ordered partially using Lamport's happens-before relation ->, a partial ordering relation that identifies dependent and concurrent events. It is defined as follows:
- If two events a and b happen in the same process, and if a happens before b, then a -> b.
- If a is the sending event of a message and b is the receiving event of the same message, then a -> b.
- If neither a -> b nor b -> a, then a and b are said to be concurrent.
A consistent global state may have concurrent checkpoints. In the first chapter of the thesis we discuss issues regarding ordering of events in a parallel computer, the need for coordination among checkpoints, and other aspects related to checkpointing. Checkpointing locations can be identified either statically or dynamically. The static approach assumes that a representation of the program to be checkpointed is available, with information that enables a programmer to specify the places where checkpoints are to be taken. The dynamic approach identifies the checkpointing locations at run time. In this thesis, we have proposed algorithms for both static and dynamic checkpointing. The main contributions of this thesis are as follows:
1. Parallel computers that are being built now have faster communication and hence more efficient clock synchronisation compared to those built a few years ago. Based on efficient clock synchronisation protocols, the clock drift in current machines can be maintained within a few microseconds. We have proposed a dynamic checkpointing algorithm for parallel computers assuming bounded clock drifts.
2. The shared memory paradigm is convenient for programming, while the message passing paradigm is easy to scale. Distributed Shared Memory (DSM) systems combine the advantages of both paradigms and can be visualized easily on top of a network of workstations. IEEE has recently proposed an interconnect standard called Scalable Coherent Interface (SCI) to configure computers as a Distributed Shared Memory system. A periodic dynamic checkpointing algorithm has been proposed in the thesis for a DSM system which uses the SCI standard.
3. When information about a parallel program is available, one can make use of this knowledge to perform efficient checkpointing. A static checkpointing approach based on task graphs is proposed for parallel programs. The proposed task-graph-based static checkpointing approach has been implemented on a Parallel Virtual Machine (PVM) platform.
We now give a gist of the various chapters of the thesis. Chapter 2 gives a classification of existing checkpointing algorithms and surveys algorithms that have been reported in the literature for checkpointing parallel/distributed systems. A point to be noted is that most of the algorithms published for checkpointing message passing systems are based on the seminal article by Chandy & Lamport; a large number of checkpointing algorithms have been published by relaxing the assumptions made in that article and by extending its features to minimise the overheads of coordination and context saving. Checkpointing algorithms for shared memory systems primarily extend cache coherence protocols to maintain a consistent memory, and all of them assume that the main memory is safe for storing the context. Recently, algorithms have been published for distributed shared memory systems, which extend the cache coherence protocols used in shared memory systems but also include methods for storing the status of distributed memory in stable storage. Chapter 2 concludes with brief comments on the desirable features of a checkpointing algorithm. In Chapter 3, we develop a dynamic checkpointing algorithm for message passing systems assuming that the clock drift of processors in the system is bounded. Efficient clock synchronisation protocols have been implemented on recent parallel computers owing to the fact that communication between processors is very fast; based on such protocols, clock skew can be limited to a few microseconds. The algorithm proposed in the thesis uses clocks for checkpoint coordination and vector counts for identifying messages to be logged. It is a periodic, distributed algorithm; we prove its correctness and compare it with similar clock-based algorithms. Distributed Shared Memory (DSM) systems provide the benefit of ease of programming in a scalable system, and the recently proposed IEEE Scalable Coherent Interface (SCI) standard facilitates the construction of scalable coherent systems. In Chapter 4 we discuss a checkpointing algorithm for an SCI-based DSM system. SCI maintains cache coherence in hardware using a distributed cache directory which scales with the number of processors in the system, and recommends a two-phase transaction protocol for communication. Our algorithm is a two-phase, centralised, coordinated algorithm: phase one initiates checkpoints, and the checkpointing activity is completed in phase two. The correctness of the algorithm is established theoretically. The chapter concludes with a discussion of the features of SCI exploited by the proposed checkpointing algorithm. In Chapter 5, a static checkpointing algorithm is developed assuming that the program to be executed on a parallel computer is given as a directed acyclic task graph, with estimates of the time to execute each task. Given the timing at which checkpoints are to be taken, the algorithm identifies a set of edges where checkpointing tasks can be placed, ensuring that they form a consistent global checkpoint. The proposed algorithm eliminates coordination overhead at run time and significantly reduces the context saving overhead by taking checkpoints along edges of the task graph. The algorithm is used as a preprocessing step before scheduling the tasks to processors.
The algorithm complexity is O(km) where m is the number of edges in the graph and k the maximum number of global checkpoints to be taken. The static algorithm is implemented on a parallel computer with a PVM environment as it is widely available and portable. The task graph of a program can be constructed manually or through program development tools. Our implementation is a collection of preprocessing and run time routines. The preprocessing routines operate on the task graph information to generate a set of edges to be checkpointed for each global checkpoint and write the information on disk. The run time routines save the context along the marked edges. In case of recovery, the recovery algorithms read the information from stable storage and reconstruct the context. The limitation of our static checkpointing algorithm is that it can operate only on deterministic task graphs. To demonstrate the practical feasibility of the proposed approach, case studies of checkpointing some parallel programs are included in the thesis. We conclude the thesis with a summary of proposed algorithms and possible directions to continue research in the area of checkpointing.
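The happens-before test that underlies consistent checkpointing can be made concrete with vector clocks (a standard construction, sketched here with invented timestamps; the thesis itself coordinates via synchronised clocks and vector counts rather than this exact mechanism):

def happens_before(vc_a, vc_b):
    """Lamport's partial order: a -> b iff VC(a) <= VC(b) componentwise and VC(a) != VC(b)."""
    return all(x <= y for x, y in zip(vc_a, vc_b)) and vc_a != vc_b

def concurrent(vc_a, vc_b):
    return not happens_before(vc_a, vc_b) and not happens_before(vc_b, vc_a)

send_event = [2, 0, 0]        # invented vector timestamps for three processes
recv_event = [2, 1, 0]        # the matching receive: ordered after the send
other_event = [0, 0, 3]       # activity on an unrelated process

assert happens_before(send_event, recv_event)
assert concurrent(recv_event, other_event)   # checkpoints at concurrent events can coexist
print("a consistent global state may include the checkpoints at recv_event and other_event")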
7

Loo, Alfred. "A statistical approach to parallel sorting and selection algorithms design." Thesis, University of Sunderland, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.311295.

8

Gupta, Sandeep K. S. "Synthesizing communication-efficient distributed memory parallel programs for block recursive algorithms." The Ohio State University, 1995. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487861796820607.

9

White, Tennis S. "Distributed control reconfiguration algorithms for 2-dimensional mesh architectures." Diss., Virginia Tech, 1991. http://hdl.handle.net/10919/39919.

10

Narravula, Harsha V. "Performance of parallel algorithms on a broadcast-based architecture." Thesis, Drexel University, 2003. http://dspace.library.drexel.edu/handle/1860/254.


Books on the topic "Parallel and distributed algorithms"

1

Raynal, Michel. Distributed algorithms and protocols. Chichester: Wiley, 1988.

2

Huttunen, Pentti. Data-parallel computation in parallel and distributed environments. Lappeenranta, Finland: Lappeenranta University of Technology, 2002.

3

Cantú-Paz, Erick, and Francisco Fernández de Vega. Parallel and distributed computational intelligence. Berlin: Springer, 2010.

4

International Workshop on Parallel & Distributed Algorithms (1988 : Gers, France). Parallel & distributed algorithms: Proceedings of the International Workshop on Parallel & Distributed Algorithms, Chateau de Bonas, Gers, France, 3-6 October, 1988. Amsterdam: North-Holland, 1989.

5

Scaling up machine learning: Parallel and distributed approaches. Cambridge: Cambridge University Press, 2011.

6

Cooper, Amanda. Explicit restart Lanczos algorithms in a massively parallel distributed memory environment. [S.l.]: The Author, 1997.

7

Algorithms for mutual exclusion. London: North Oxford Academic, 1986.

8

Algorithms for mutual exclusion. Cambridge, Mass: MIT Press, 1986.

9

Özgüner, Füsun, Fikret Erçal, North Atlantic Treaty Organization Scientific Affairs Division, and NATO Advanced Study Institute on Parallel Computing on Distributed Memory Multiprocessors (1991 : Bilkent University), eds. Parallel computing on distributed memory multiprocessors. Berlin: Springer-Verlag, 1993.

10

Meyer, Gerard G. L., ed. A parallel algorithm synthesis procedure for high-performance computer architectures. New York: Kluwer Academic/Plenum Publishers, 2003.


Book chapters on the topic "Parallel and distributed algorithms"

1

Krishnamurthy, Arvind, Steven Lumetta, D. Culler, and Katherine Yelick. "Connected components on distributed memory machines." In Parallel Algorithms, 1–21. Providence, Rhode Island: American Mathematical Society, 1997. http://dx.doi.org/10.1090/dimacs/030/01.

2

Jezequel, Jean-Marc. "Building a global time on parallel machines." In Distributed Algorithms, 136–47. Berlin, Heidelberg: Springer Berlin Heidelberg, 1989. http://dx.doi.org/10.1007/3-540-51687-5_38.

3

Kanellakis, Paris C., Dimitrios Michailidis, and Alex A. Shvartsman. "Controlling memory access concurrency in efficient fault-tolerant parallel algorithms (extended abstract)." In Distributed Algorithms, 99–114. Berlin, Heidelberg: Springer Berlin Heidelberg, 1993. http://dx.doi.org/10.1007/3-540-57271-6_30.

4

Folliot, Bertil, Giovanni Chiola, Peter Druschel, and Anne-Marie Kermarrec. "Distributed Systems and Algorithms." In Euro-Par 2001 Parallel Processing, 457. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44681-8_66.

5

Mavronicolas, Marios, and Andre Schiper. "Distributed Systems and Algorithms." In Euro-Par 2002 Parallel Processing, 551–52. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-45706-2_75.

6

Misra, Jayadev, Wolfgang Reisig, Michael Schoettner, and Laurent Lefevre. "Topic 9 Distributed Algorithms." In Euro-Par 2003 Parallel Processing, 623. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-45209-6_88.

7

Rana, Omer, Giandomenico Spezzano, Michael Gerndt, and Daniel S. Katz. "Distributed Systems and Algorithms." In Euro-Par 2010 - Parallel Processing, 1. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15277-1_1.

8

Felber, Pascal, Ricardo Jimenez-Peris, Giovanni Schmid, and Pierre Sens. "Distributed Systems and Algorithms." In Euro-Par 2010 - Parallel Processing, 510. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15277-1_48.

9

Mayr, Ernst W. "Distributed Systems and Algorithms." In Euro-Par 2000 Parallel Processing, 573–74. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/3-540-44520-x_78.

10

Padiou, Gérard, and André Schiper. "Distributed Systems and Algorithms." In Euro-Par’99 Parallel Processing, 767–68. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/3-540-48311-x_108.


Conference papers on the topic "Parallel and distributed algorithms"

1

di Serafino, D. "Parallel algorithms." In Proceedings Eleventh Euromicro Conference on Parallel, Distributed and Network-Based Processing. IEEE, 2003. http://dx.doi.org/10.1109/empdp.2003.1183608.

2

Fang, Niandong. "Engineering parallel algorithms." In Proceedings of 5th IEEE International Symposium on High Performance Distributed Computing. IEEE, 1996. http://dx.doi.org/10.1109/hpdc.1996.546193.

3

Albers, Susanne. "Session details: Parallel and distributed scheduling." In SPAA08: 20th ACM Symposium on Parallelism in Algorithms and Architectures. New York, NY, USA: ACM, 2008. http://dx.doi.org/10.1145/3246985.

4

Kofakis, Petros G., and Ioannis Louis. "Distributed parallel implementation of seismic algorithms." In SPIE's 1995 International Symposium on Optical Science, Engineering, and Instrumentation, edited by Siamak Hassanzadeh. SPIE, 1995. http://dx.doi.org/10.1117/12.218499.

5

Charr, Jean-Claude, Raphael Couturier, and David Laiymani. "Parallel numerical asynchronous iterative algorithms: Large scale experimentations." In 2009 IEEE International Symposium on Parallel & Distributed Processing (IPDPS). IEEE, 2009. http://dx.doi.org/10.1109/ipdps.2009.5161158.

6

Alba, E., G. Luque, and F. Luna. "Workforce planning with parallel algorithms." In Proceedings 20th IEEE International Parallel & Distributed Processing Symposium. IEEE, 2006. http://dx.doi.org/10.1109/ipdps.2006.1639527.

7

Garg, Vijay K., and Rohan Garg. "Parallel algorithms for predicate detection." In ICDCN '19: International Conference on Distributed Computing and Networking. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3288599.3288604.

8

Arge, Lars, Michael T. Goodrich, and Nodari Sitchinava. "Parallel external memory graph algorithms." In 2010 IEEE International Symposium on Parallel & Distributed Processing (IPDPS). IEEE, 2010. http://dx.doi.org/10.1109/ipdps.2010.5470440.

9

Zambonelli, F. "Distributed algorithms and systems." In Proceedings Eleventh Euromicro Conference on Parallel, Distributed and Network-Based Processing. IEEE, 2003. http://dx.doi.org/10.1109/empdp.2003.1183591.

10

Badkobeh, Golnaz, Per Kristian Lehre, and Dirk Sudholt. "Black-box Complexity of Parallel Search with Distributed Populations." In FOGA '15: Foundations of Genetic Algorithms XIII. New York, NY, USA: ACM, 2015. http://dx.doi.org/10.1145/2725494.2725504.


Reports on the topic "Parallel and distributed algorithms"

1

Leighton, Tom. Parallel and Distributed Computing Combinatorial Algorithms. Fort Belvoir, VA: Defense Technical Information Center, October 1993. http://dx.doi.org/10.21236/ada277333.

2

Guest, M. F., E. Apra, and D. E. Bernholdt. High performance computational chemistry: Towards fully distributed parallel algorithms. Office of Scientific and Technical Information (OSTI), July 1994. http://dx.doi.org/10.2172/10162988.

3

Choi, J., D. W. Walker, and J. J. Dongarra. Parallel matrix transpose algorithms on distributed memory concurrent computers. Office of Scientific and Technical Information (OSTI), October 1993. http://dx.doi.org/10.2172/10193001.

4

Stevenson, Robert L., Andrew Lumsdaine, Jeffery M. Squires, and Micheal P. McNally. Parallel and Distributed Algorithms for High-Speed Image Processing. Fort Belvoir, VA: Defense Technical Information Center, April 2000. http://dx.doi.org/10.21236/ada377689.

5

Smith, Sharon L., and Robert B. Schnabel. Centralized and Distributed Dynamic Scheduling for Adaptive, Parallel Algorithms. Fort Belvoir, VA: Defense Technical Information Center, February 1991. http://dx.doi.org/10.21236/ada233557.

6

Choi, Jaeyoung, D. W. Walker, and J. J. Dongarra. PUMMA: Parallel Universal Matrix Multiplication Algorithms on distributed memory concurrent computers. Office of Scientific and Technical Information (OSTI), August 1993. http://dx.doi.org/10.2172/10180105.

7

Cho, Kilseok, Alan D. George, Raj Subramaniyan, and Keonwook Kim. Parallel Algorithms for Adaptive Matched-Field Processing in Distributed Array Systems. Fort Belvoir, VA: Defense Technical Information Center, January 2003. http://dx.doi.org/10.21236/ada465545.

8

George, Alan D. Parallel and Distributed Computing Architectures and Algorithms for Fault-Tolerant Sonar Arrays. Fort Belvoir, VA: Defense Technical Information Center, January 1999. http://dx.doi.org/10.21236/ada359698.

9

Cho, Kilseok, Alan D. George, and Raj Subramaniyan. Fault-Tolerant Parallel Algorithms for Adaptive Matched-Field Processing on Distributed Array Systems. Fort Belvoir, VA: Defense Technical Information Center, September 2004. http://dx.doi.org/10.21236/ada466282.

10

Lober, R. R., T. J. Tautges, and C. T. Vaughan. Parallel paving: An algorithm for generating distributed, adaptive, all-quadrilateral meshes on parallel computers. Office of Scientific and Technical Information (OSTI), March 1997. http://dx.doi.org/10.2172/469139.
