Journal articles on the topic 'Efficient parallel algorithms'

Consult the top 50 journal articles for your research on the topic 'Efficient parallel algorithms.'

1

de Werra, D. "Efficient parallel algorithms." European Journal of Operational Research 44, no. 3 (February 1990): 426. http://dx.doi.org/10.1016/0377-2217(90)90254-9.

2

Bahig, Hazem M., Mohamed A. G. Hazber, Khaled Al-Utaibi, Dieaa I. Nassr, and Hatem M. Bahig. "Efficient Sequential and Parallel Prime Sieve Algorithms." Symmetry 14, no. 12 (November 30, 2022): 2527. http://dx.doi.org/10.3390/sym14122527.

Abstract:
Generating prime numbers less than or equal to an integer number m plays an important role in many asymmetric key cryptosystems. Recently, a new sequential prime sieve algorithm was proposed based on set theory. The main drawback of this algorithm is that the running time and storage are high when the size of m is large. This paper introduces three new algorithms for a prime sieve based on two approaches. The first approach develops a fast sequential prime sieve algorithm based on set theory and some structural improvements to the recent prime sieve algorithm. The second approach introduces two new parallel algorithms in the shared memory parallel model based on static and dynamic strategies. The analysis of the experimental studies shows the following results. (1) The proposed sequential algorithm outperforms the recent prime sieve algorithm in terms of running time by 98% and memory consumption by 80%, on average. (2) The two proposed parallel algorithms outperform the proposed sequential algorithm by 72% and 67%, respectively, on average. (3) The maximum speedups achieved by the dynamic and static parallel algorithms using 16 threads are 7 and 4.5, respectively. As a result, the proposed algorithms are more effective than the recent algorithm in terms of running time, storage and scalability in generating primes.
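As a rough, generic illustration of the shared-memory strategies mentioned above (and only that; the set-theoretic algorithms of Bahig et al. are not reproduced here), the sketch below runs a segmented Sieve of Eratosthenes and hands segments to a pool of workers. Letting the pool dispatch segments on demand corresponds to a dynamic strategy, while pre-assigning every k-th segment to worker k would be a static one. All names and parameter values are illustrative.

```python
import math
from concurrent.futures import ProcessPoolExecutor

def base_primes(m: int) -> list[int]:
    """Primes up to sqrt(m), found with a plain sieve; every worker reuses them."""
    n = int(math.isqrt(m))
    mark = bytearray([1]) * (n + 1)
    mark[0:2] = b"\x00\x00"                       # 0 and 1 are not prime
    for p in range(2, int(math.isqrt(n)) + 1):
        if mark[p]:
            mark[p * p::p] = bytearray(len(mark[p * p::p]))
    return [i for i, is_p in enumerate(mark) if is_p]

def sieve_segment(args) -> list[int]:
    lo, hi, small = args                          # sieve the half-open range [lo, hi)
    mark = bytearray([1]) * (hi - lo)
    for p in small:
        start = max(p * p, ((lo + p - 1) // p) * p)
        mark[start - lo::p] = bytearray(len(mark[start - lo::p]))
    return [lo + i for i, is_p in enumerate(mark) if is_p]

def parallel_sieve(m: int, workers: int = 4, segment: int = 1 << 16) -> list[int]:
    small = base_primes(m)
    tasks = [(lo, min(lo + segment, m + 1), small) for lo in range(2, m + 1, segment)]
    primes: list[int] = []
    # Dynamic strategy: the pool hands each segment to whichever worker is free.
    # A static strategy would instead pre-assign tasks[i::workers] to worker i.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for chunk in pool.map(sieve_segment, tasks):
            primes.extend(chunk)
    return primes

if __name__ == "__main__":
    print(parallel_sieve(100))                    # [2, 3, 5, 7, ..., 97]
```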
3

He, Xin. "Efficient parallel algorithms for series parallel graphs." Journal of Algorithms 12, no. 3 (September 1991): 409–30. http://dx.doi.org/10.1016/0196-6774(91)90012-n.

4

Rajasekaran, S. "Efficient parallel hierarchical clustering algorithms." IEEE Transactions on Parallel and Distributed Systems 16, no. 6 (June 2005): 497–502. http://dx.doi.org/10.1109/tpds.2005.72.

5

Miller, R., and Q. F. Stout. "Efficient parallel convex hull algorithms." IEEE Transactions on Computers 37, no. 12 (1988): 1605–18. http://dx.doi.org/10.1109/12.9737.

6

Gibbons, Phillip B., Yossi Matias, and Vijaya Ramachandran. "Efficient Low-Contention Parallel Algorithms." Journal of Computer and System Sciences 53, no. 3 (December 1996): 417–42. http://dx.doi.org/10.1006/jcss.1996.0079.

7

García-Monzó, Alejandro, Héctor Migallón, Antonio Jimeno-Morenilla, José-Luis Sánchez-Romero, Héctor Rico, and Ravipudi Rao. "Efficient Subpopulation Based Parallel TLBO Optimization Algorithms." Electronics 8, no. 1 (December 23, 2018): 19. http://dx.doi.org/10.3390/electronics8010019.

Abstract:
A large number of optimization algorithms based on heuristic techniques have been proposed in recent years. Most of them are based on phenomena in nature and require the correct tuning of parameters that are specific to the algorithm. Heuristic algorithms allow problems to be solved more quickly than deterministic methods. The computational time required to obtain the optimum (or near-optimum) value of a cost function is a critical aspect of scientific applications in countless fields of knowledge. Therefore, we propose efficient parallel versions of the teaching-learning-based optimization (TLBO) algorithm, which is efficient and free of algorithm-specific parameters to be tuned. The parallel proposals were designed with two levels of parallelization, one for shared memory platforms and the other for distributed memory platforms, obtaining good parallel performance on both types of parallel architecture and on heterogeneous memory parallel platforms.
8

Du, Q., M. Mu, and Z. N. Wu. "Efficient Parallel Algorithms for Parabolic Problems." SIAM Journal on Numerical Analysis 39, no. 5 (January 2002): 1469–87. http://dx.doi.org/10.1137/s0036142900381710.

9

RAJASEKARAN, SANGUTHEVAR. "EFFICIENT PARALLEL ALGORITHMS FOR TEMPLATE MATCHING." Parallel Processing Letters 12, no. 03n04 (September 2002): 359–64. http://dx.doi.org/10.1142/s012962640200104x.

Abstract:
The parallel complexity of template matching has been well studied. In this paper we present more work-efficient algorithms than the existing ones. Our algorithms are based on FFT primitives. We consider the following models of computing: PRAM and the hypercube.
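Since the abstract only names FFT primitives, here is a minimal sequential NumPy sketch of the underlying idea: computing every template-correlation score at once through the convolution theorem. The PRAM and hypercube algorithms in the paper parallelize this transform-multiply-transform pipeline; the function and variable names below are illustrative, and the scores are unnormalized.

```python
import numpy as np

def fft_match_scores(image: np.ndarray, template: np.ndarray) -> np.ndarray:
    """Cross-correlation of `template` against every position of `image`,
    done with FFTs instead of an explicit sliding-window product."""
    H, W = image.shape
    h, w = template.shape
    FH, FW = H + h - 1, W + w - 1                 # full correlation size
    F_img = np.fft.rfft2(image, s=(FH, FW))
    # Flipping the template turns frequency-domain multiplication
    # (convolution) into correlation.
    F_tpl = np.fft.rfft2(template[::-1, ::-1], s=(FH, FW))
    corr = np.fft.irfft2(F_img * F_tpl, s=(FH, FW))
    # Keep only positions where the template fits entirely inside the image.
    return corr[h - 1:H, w - 1:W]

rng = np.random.default_rng(0)
img = rng.random((64, 64))
tpl = img[20:28, 30:38].copy()                    # plant an exact 8x8 match
scores = fft_match_scores(img, tpl)
print(np.unravel_index(np.argmax(scores), scores.shape))   # most likely (20, 30)
```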
10

DAS, SAJAL K., PAUL S. FISHER, and HUA ZHANG. "EFFICIENT PARALLEL ALGORITHMS FOR PATTERN RECOGNITION." Parallel Algorithms and Applications 2, no. 1-2 (January 1994): 81–98. http://dx.doi.org/10.1080/10637199408915408.

11

Tzong-Wann Kao, Shi-Jinn Horng, Yue-Li Wang, and Horng-Ren Tsai. "Designing efficient parallel algorithms on CRAP." IEEE Transactions on Parallel and Distributed Systems 6, no. 5 (May 1995): 554–60. http://dx.doi.org/10.1109/71.382325.

12

Klein, Philip N. "Efficient Parallel Algorithms for Chordal Graphs." SIAM Journal on Computing 25, no. 4 (August 1996): 797–827. http://dx.doi.org/10.1137/s0097539789166880.

13

Lecomber, David, and Mike Rudgyard. "Efficient parallel algorithms for numerical simulation." Future Generation Computer Systems 17, no. 8 (June 2001): 961–67. http://dx.doi.org/10.1016/s0167-739x(01)00038-3.

14

Gibbons, Jeremy, Wentong Cai, and David B. Skillicorn. "Efficient parallel algorithms for tree accumulations." Science of Computer Programming 23, no. 1 (October 1994): 1–18. http://dx.doi.org/10.1016/0167-6423(94)00013-1.

15

Abu-Khzam, Faisal N., Shouwei Li, Christine Markarian, Friedhelm Meyer auf der Heide, and Pavel Podlipyan. "Efficient parallel algorithms for parameterized problems." Theoretical Computer Science 786 (September 2019): 2–12. http://dx.doi.org/10.1016/j.tcs.2018.11.006.

16

Kruskal, Clyde P., Larry Rudolph, and Marc Snir. "Efficient parallel algorithms for graph problems." Algorithmica 5, no. 1-4 (June 1990): 43–64. http://dx.doi.org/10.1007/bf01840376.

17

Arvind, K., V. Kamakoti, and C. P. Rangan. "Efficient Parallel Algorithms for Permutation Graphs." Journal of Parallel and Distributed Computing 26, no. 1 (April 1995): 116–24. http://dx.doi.org/10.1006/jpdc.1995.1052.

18

SAOUDI, A., and M. NIVAT. "PARALLEL ALGORITHMS FOR MULTIDIMENSIONAL IMAGE TEMPLATE MATCHING." International Journal of Pattern Recognition and Artificial Intelligence 08, no. 02 (April 1994): 457–64. http://dx.doi.org/10.1142/s021800149400022x.

Abstract:
This paper presents efficient and optimal parallel algorithms for multidimensional image template matching on the CREW PRAM model. For an N^d image and an M^d window, we present an optimal (respectively, efficient) algorithm that runs in O(log M) time with O(M^d × N^d / log M) (respectively, O(M^d × N^d)) processors. We also present efficient and optimal algorithms for solving the multidimensional array and pattern matching problems.
19

BAE, S., S. H. KO, and P. D. CODDINGTON. "PARALLEL WOLFF CLUSTER ALGORITHMS." International Journal of Modern Physics C 06, no. 02 (April 1995): 197–210. http://dx.doi.org/10.1142/s0129183195000150.

Abstract:
The Wolff single-cluster algorithm is the most efficient method known for Monte Carlo simulation of many spin models. Due to the irregular size, shape and position of the Wolff clusters, this method does not easily lend itself to efficient parallel implementation, so that simulations using this method have thus far been confined to workstations and vector machines. Here we present two parallel implementations of this algorithm, and show that one gives fairly good performance on a MIMD parallel computer.
20

Mahajan, Rita, Komal Devi, and Deepak Bagai. "Area efficient parallel lfsr for cyclic redundancy check." International Journal of Electrical and Computer Engineering (IJECE) 10, no. 2 (April 1, 2020): 1755. http://dx.doi.org/10.11591/ijece.v10i2.pp1755-1763.

Abstract:
Cyclic Redundancy Check (CRC) codes for error detection find many applications in digital communication, data storage, control systems and data compression. The CRC encoding operation is carried out using a Linear Feedback Shift Register (LFSR). A serial implementation of CRC needs a number of clock cycles equal to the message length plus the degree of the generator polynomial, whereas a parallel implementation needs only one clock cycle if the whole message is applied at once. In previous work on parallel LFSRs, the hardware complexity of the architecture was reduced using a technique named state space transformation. This paper presents a detailed explanation of a searching algorithm, and a new technique to find the number of XOR gates required, for different CRC algorithms. The comparison between the proposed and previous architectures shows that the number of XOR gates is reduced, which improves hardware efficiency. The searching algorithm and all matrix computations were performed using MATLAB simulations.
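To make the serial-versus-parallel contrast above concrete, the sketch below computes CRC-32 first one message bit per LFSR step and then one whole byte per step using a precomputed table, which is the software analogue of collapsing several LFSR clock cycles into a single block of XOR logic. This is plain CRC-32, not the state-space-transformed architecture or the gate-count search of the paper.

```python
import zlib

POLY = 0xEDB88320                                   # reflected CRC-32 polynomial

def crc32_bitserial(data: bytes) -> int:
    """One LFSR step per message bit: the 'serial' scheme the abstract contrasts."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (POLY if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

def make_table() -> list[int]:
    table = []
    for byte in range(256):
        crc = byte
        for _ in range(8):
            crc = (crc >> 1) ^ (POLY if crc & 1 else 0)
        table.append(crc)
    return table

TABLE = make_table()

def crc32_bytewise(data: bytes) -> int:
    """Eight LFSR steps folded into one table lookup per byte: the same idea,
    in software, as a parallel LFSR that consumes a whole input word per clock."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc = (crc >> 8) ^ TABLE[(crc ^ byte) & 0xFF]
    return crc ^ 0xFFFFFFFF

msg = b"cyclic redundancy check"
assert crc32_bitserial(msg) == crc32_bytewise(msg) == zlib.crc32(msg)
```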
21

Al-Ssulami, Abdulrakeeb M., Hassan Mathkour, and Mohammed Amer Arafah. "Efficient String Matching Algorithm for Searching Large DNA and Binary Texts." International Journal on Semantic Web and Information Systems 13, no. 4 (October 2017): 198–220. http://dx.doi.org/10.4018/ijswis.2017100110.

Abstract:
Exact string matching is essential in application areas such as bioinformatics and intrusion detection systems. Speeding up the string matching algorithm therefore accelerates searching in DNA and binary data. Two types of fast algorithms have been proposed previously: bit-parallel algorithms and hashing algorithms. Bit-parallel algorithms are efficient for short patterns (lengths below 64) but slow on long patterns, whereas hashing algorithms have an optimal sublinear average case on large alphabets and long patterns but perform less well on small alphabets such as DNA and binary texts. In this paper, the authors present a hybrid algorithm that overcomes the shortcomings of both. The proposed algorithm is based on q-gram hashing while guaranteeing the maximal shift in advance. Experimental results on random texts and the complete human genome confirm that the proposed algorithm is efficient for various pattern lengths on small alphabets.
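A bare-bones sketch of the q-gram-hashing idea described above: the search window is shifted by the distance from the last occurrence of its final q-gram in the pattern to the pattern's end, which keeps shifts long even on DNA or binary alphabets. The published algorithm additionally guarantees the maximal shift in advance, which this toy version does not; names and the q = 3 default are illustrative.

```python
def qgram_search(text: bytes, pattern: bytes, q: int = 3) -> list[int]:
    """Find all occurrences of `pattern` in `text` using a q-gram shift table.

    Shifting by q-grams instead of single characters keeps the average shift
    large even on tiny alphabets (DNA, binary)."""
    m, n = len(pattern), len(text)
    if m < q:                                   # tiny patterns: fall back to a naive scan
        return [i for i in range(n - m + 1) if text[i:i + m] == pattern]
    default = m - q + 1
    shift: dict[bytes, int] = {}
    for j in range(q - 1, m):                   # q-gram ending at position j -> shift m-1-j
        shift[pattern[j - q + 1: j + 1]] = m - 1 - j
    hits = []
    i = 0
    while i <= n - m:
        s = shift.get(text[i + m - q: i + m], default)
        if s == 0:                              # window ends with the pattern's last q-gram
            if text[i: i + m] == pattern:
                hits.append(i)
            i += 1                              # conservative advance after verification
        else:
            i += s
    return hits

dna = b"ACGTACGTTACGACGTACG"
print(qgram_search(dna, b"ACGTACG"))            # [0, 12]
```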
22

Hirata, Hiroaki, and Atsushi Nunome. "A Modified Parallel Heapsort Algorithm." International Journal of Software Innovation 8, no. 3 (July 2020): 1–18. http://dx.doi.org/10.4018/ijsi.2020070101.

Abstract:
Sorting is a fundamental and essential problem required in a wide range of application fields, and many sorting algorithms have been developed. Among those algorithms, heapsort is one of the most elegant and efficient. However, no parallel heapsort algorithm had been presented until the authors developed a restricted parallel algorithm a few years ago. That algorithm had a restriction which made it difficult to use universally for general data sets. In this article, the authors therefore present a modified parallel algorithm which is free from that restriction and can be used for any data set. The new algorithm achieves almost the same performance as the restricted algorithm the authors developed before.
23

DA SILVA, FABRICIO ALVES BARBOSA, and ISAAC D. SCHERSON. "EFFICIENT PARALLEL JOB SCHEDULING USING GANG SERVICE." International Journal of Foundations of Computer Science 12, no. 03 (June 2001): 265–84. http://dx.doi.org/10.1142/s0129054101000485.

Abstract:
Gang scheduling has been widely used as a practical solution to the dynamic parallel job scheduling problem. To overcome some of the limitations of traditional gang scheduling algorithms, Concurrent Gang is proposed as a class of scheduling policies that allows the flexible and simultaneous scheduling of multiple parallel jobs. It hence improves the space-sharing characteristics of gang scheduling while preserving all its other advantages. To provide a sound analysis of Concurrent Gang performance, a novel methodology based on the traditional concept of competitive ratio is also introduced. Dubbed dynamic competitive ratio, the new method is used to compare the dynamic bin packing algorithms used in this paper, which apply to the Concurrent Gang scheduling of a workload generated by a statistical model. Moreover, the dynamic competitive ratio is the figure of merit used to evaluate and compare packing strategies for job scheduling under multiple constraints. It is shown that for the unidimensional case there is only a small difference between the performance of best fit and first fit, so first fit can be used without significant system degradation. For the multidimensional case, when memory is also considered, we conclude that the packing algorithm must try to balance resource utilization in all dimensions simultaneously, instead of giving priority to only one dimension of the problem.
24

HSIEH, SUN-YUAN, CHIN-WEN HO, TSAN-SHENG HSU, MING-TAT KO, and GEN-HUEY CHEN. "EFFICIENT PARALLEL ALGORITHMS ON DISTANCE HEREDITARY GRAPHS." Parallel Processing Letters 09, no. 01 (March 1999): 43–52. http://dx.doi.org/10.1142/s0129626499000074.

Abstract:
Distance hereditary graphs are graphs in which every two vertices have the same distance in every connected induced subgraph containing them. In this paper, we study properties of distance hereditary graphs from the viewpoint of parallel computation. We present efficient parallel algorithms for finding a minimum weighted connected dominating set and a minimum weighted Steiner tree, both taking O(log n) time using O(n + m) processors on a CRCW PRAM, where n and m are the number of vertices and edges of the given graph, respectively. We also find a maximum weighted clique of a distance hereditary graph in O(log^2 n) time using O(n + m) processors on a CREW PRAM.
25

Greenberg, Albert G., Boris D. Lubachevsky, and Andrew M. Odlyzko. "Simple, efficient, asynchronous parallel algorithms for maximization." ACM Transactions on Programming Languages and Systems 10, no. 2 (April 1988): 313–37. http://dx.doi.org/10.1145/42190.42278.

26

Cantú-Paz, Erick, and David E. Goldberg. "Efficient parallel genetic algorithms: theory and practice." Computer Methods in Applied Mechanics and Engineering 186, no. 2-4 (June 2000): 221–38. http://dx.doi.org/10.1016/s0045-7825(99)00385-0.

27

Sano, Kentaro, Shintaro Momose, Hiroyuki Takizawa, Hiroaki Kobayashi, and Tadao Nakamura. "Efficient parallel processing of competitive learning algorithms." Parallel Computing 30, no. 12 (December 2004): 1361–83. http://dx.doi.org/10.1016/j.parco.2004.10.001.

28

Park, J. H., and K. M. George. "Efficient parallel hardware algorithms for string matching." Microprocessors and Microsystems 23, no. 3 (October 1999): 155–68. http://dx.doi.org/10.1016/s0141-9331(99)00032-0.

29

Murty, Ravi, and Daniel Okunbor. "Efficient parallel algorithms for molecular dynamics simulations." Parallel Computing 25, no. 3 (March 1999): 217–30. http://dx.doi.org/10.1016/s0167-8191(98)00114-8.

30

Karloff, Howard J., and David B. Shmoys. "Efficient parallel algorithms for edge coloring problems." Journal of Algorithms 8, no. 1 (March 1987): 39–52. http://dx.doi.org/10.1016/0196-6774(87)90026-5.

31

Kanellakis, Paris C., and Alex A. Shvartsman. "Efficient parallel algorithms can be made robust." Distributed Computing 5, no. 4 (April 1992): 201–17. http://dx.doi.org/10.1007/bf02277667.

32

Shankar, Narayan, and Vijaya Ramachandran. "Efficient parallel circuits and algorithms for division." Information Processing Letters 29, no. 6 (December 1988): 307–13. http://dx.doi.org/10.1016/0020-0190(88)90230-x.

33

Atallah, Chen, and Daescu. "Efficient Parallel Algorithms for Planar st-Graphs." Algorithmica 35, no. 3 (March 2003): 194–215. http://dx.doi.org/10.1007/s00453-002-0995-0.

34

Chen, L. "Efficient Parallel Algorithms for Euclidean Distance Transform." Computer Journal 47, no. 6 (June 1, 2004): 694–700. http://dx.doi.org/10.1093/comjnl/47.6.694.

35

Rajasekaran, S., R. Ammar, D. G. Shin, and G. Zhang. "Efficient parallel algorithms for processing biological sequences." International Journal of Computer Applications in Technology 26, no. 3 (2006): 119. http://dx.doi.org/10.1504/ijcat.2006.010596.

36

DATTA, AMITAVA. "EFFICIENT PARALLEL RANGE SEARCHING AND PARTITIONING ALGORITHMS." Parallel Algorithms and Applications 16, no. 4 (January 2001): 301–17. http://dx.doi.org/10.1080/01495730108935276.

37

Kruskal, Clyde P., Larry Rudolph, and Marc Snir. "A complexity theory of efficient parallel algorithms." Theoretical Computer Science 71, no. 1 (March 1990): 95–132. http://dx.doi.org/10.1016/0304-3975(90)90192-k.

38

Chen, Lin, and Yaacov Yesha. "Efficient parallel algorithms for bipartite permutation graphs." Networks 23, no. 1 (January 1993): 29–39. http://dx.doi.org/10.1002/net.3230230105.

39

Dehne, Dittrich, and Hutchinson. "Efficient External Memory Algorithms by Simulating Coarse-Grained Parallel Algorithms." Algorithmica 36, no. 2 (June 2003): 97–122. http://dx.doi.org/10.1007/s00453-002-1009-y.

40

Belazi, Akram, Héctor Migallón, Daniel Gónzalez-Sánchez, Jorge Gónzalez-García, Antonio Jimeno-Morenilla, and José-Luis Sánchez-Romero. "Enhanced Parallel Sine Cosine Algorithm for Constrained and Unconstrained Optimization." Mathematics 10, no. 7 (April 3, 2022): 1166. http://dx.doi.org/10.3390/math10071166.

Abstract:
The main idea of the sine cosine algorithm (SCA) is an oscillation, driven by sine and cosine functions, outwards from or towards the best solution. The first main contribution of this paper is an enhanced version of the SCA, called the ESCA algorithm. The superiority of the proposed algorithm over a set of state-of-the-art algorithms in terms of solution accuracy and convergence speed is demonstrated by experimental tests. When these algorithms are transferred to the business sector, they must meet time requirements that depend on the industrial process. If these temporal requirements are not met, an efficient solution is to speed them up by designing parallel algorithms. The second major contribution of this work is the design of several parallel algorithms for efficiently exploiting current multicore processor architectures. First, one-level synchronous and asynchronous parallel ESCA algorithms are designed. They have two advantages: they retain the proposed algorithm's behavior, and they provide excellent parallel performance by combining coarse-grained with fine-grained parallelism. Moreover, the parallel scalability of the proposed algorithms is further improved by employing a two-level parallel strategy. Indeed, the experimental results suggest that the one-level parallel ESCA algorithms reduce the computing time, on average, by 87.4% and 90.8%, respectively, using 12 physical processing cores. The two-level parallel algorithms provide extra reductions of the computing time, by 91.4%, 93.1%, and 94.5% with 16, 20, and 24 processing cores, including physical and logical cores. A comparison analysis is carried out on 30 unconstrained benchmark functions and three challenging engineering design problems. The experimental outcomes show that the proposed ESCA algorithm behaves outstandingly well in terms of exploration and exploitation behavior, local optima avoidance, and convergence speed toward the optimum. The overall performance of the proposed algorithm is statistically validated using three non-parametric statistical tests, namely the Friedman, Friedman aligned, and Quade tests.
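For readers unfamiliar with SCA itself, the sketch below shows the baseline sine cosine update rule that ESCA builds on. It is sequential, but the per-agent moves and fitness evaluations marked in the comments are the independent work that a one-level parallel version distributes across cores; the ESCA enhancements and the two-level strategy of the paper are not reproduced. Function names and parameter values are illustrative.

```python
import numpy as np

def sca_minimize(f, dim, bounds, pop=30, iters=200, a=2.0, seed=0):
    """Baseline SCA: each agent moves towards or away from the best solution
    along a sine or cosine wave whose amplitude r1 shrinks over time."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(pop, dim))
    fit = np.apply_along_axis(f, 1, X)          # fitness calls: independent per agent
    best = X[fit.argmin()].copy()
    best_f = fit.min()
    for t in range(iters):
        r1 = a - t * a / iters                  # exploration -> exploitation schedule
        r2 = rng.uniform(0, 2 * np.pi, size=(pop, dim))
        r3 = rng.uniform(0, 2, size=(pop, dim))
        r4 = rng.uniform(size=(pop, dim))
        step = np.abs(r3 * best - X)
        X = X + np.where(r4 < 0.5, r1 * np.sin(r2) * step, r1 * np.cos(r2) * step)
        X = np.clip(X, lo, hi)
        fit = np.apply_along_axis(f, 1, X)      # fitness calls: independent per agent
        if fit.min() < best_f:
            best_f, best = fit.min(), X[fit.argmin()].copy()
    return best, best_f

# Usage: minimize the sphere function in 10 dimensions.
sol, val = sca_minimize(lambda x: float(np.sum(x * x)), dim=10, bounds=(-5, 5))
print(round(val, 4))
```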
41

GÖTZE, JÜRGEN. "EFFICIENT PARALLEL IMPLEMENTATION OF KOGBETLIANTZ'S SVD ALGORITHM." International Journal of High Speed Electronics and Systems 04, no. 01 (March 1993): 35–53. http://dx.doi.org/10.1142/s0129156493000030.

Abstract:
In this paper efficient implementations of Kogbetliantz's algorithm for computing the singular value decomposition (SVD) of an m × n (m ≥ n) matrix on an upper triangular array of processors are presented. Modifications of the rotation evaluations, as approximations and factorizations of the rotations, are considered in order to obtain a reduction of the hardware and time requirements of the processor array. Particularly, it is shown that only the combination of these modifications enables the derivation of square root free or square root and division free algorithms.
42

Li, De Bo, Qi Sheng Xu, Yue Liang Shen, Zhi Yong Wen, and Ya Ming Liu. "Parallel Algorithms for Compressible Turbulent Flow Simulation Using Direct Numerical Method." Advanced Materials Research 516-517 (May 2012): 980–91. http://dx.doi.org/10.4028/www.scientific.net/amr.516-517.980.

Abstract:
In this study, SPMD parallel computation of a compressible turbulent jet flow, using an explicit finite difference method and direct numerical simulation, is performed on an IBM Linux cluster. The conservation equations, the boundary conditions including NSCBC (characteristic boundary conditions), the grid generation method, and the solution process are carefully presented in order to give other researchers a clear understanding of large-scale parallel computing of compressible turbulent flows with explicit finite difference methods, which is scarce in the literature. The speedup factor and parallel computational efficiency are reported for different domain decomposition methods. To use our explicit finite difference method for large-scale parallel computing, the grid size assigned to each processor, the speedup factor, and the efficiency factor should be carefully chosen in order to design an efficient parallel code. Our newly developed parallel code compares favorably, in terms of parallel computational efficiency, with implicit finite difference and spectral method codes. This is useful for future research on gas-particle two-phase flow, for which highly efficient parallel two-phase codes are still lacking.
43

Zaroliagis, Christos D. "Simple and Work-Efficient Parallel Algorithms for the Minimum Spanning Tree Problem." Parallel Processing Letters 07, no. 01 (March 1997): 25–37. http://dx.doi.org/10.1142/s012962649700005x.

Abstract:
Two simple and work-efficient parallel algorithms for the minimum spanning tree problem are presented. Both algorithms perform O(m log n) work. The first runs in O(log^2 n) time on an EREW PRAM, while the second runs in O(log n) time on a COMMON CRCW PRAM.
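For background on why such bounds are achievable, here is a sequential sketch of Borůvka's method, the usual starting point for work-efficient PRAM minimum spanning tree algorithms: the per-edge scan that finds each component's lightest outgoing edge is the step a PRAM version performs in parallel. This is generic Borůvka, not the two algorithms of the paper.

```python
def boruvka_mst(n, edges):
    """Borůvka's algorithm: repeatedly pick the lightest edge leaving each
    component and contract. The edge scan is independent per edge, which is
    why Borůvka-style approaches parallelize well."""
    parent = list(range(n))

    def find(x):                      # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total, components = [], 0.0, n
    while components > 1:
        cheapest = [None] * n         # lightest outgoing edge per component
        for u, v, w in edges:         # this scan is the parallel step
            ru, rv = find(u), find(v)
            if ru == rv:
                continue
            for r in (ru, rv):
                if cheapest[r] is None or w < cheapest[r][2]:
                    cheapest[r] = (u, v, w)
        progressed = False
        for r in range(n):
            if cheapest[r] is None:
                continue
            u, v, w = cheapest[r]
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv       # contract the two components
                mst.append((u, v, w))
                total += w
                components -= 1
                progressed = True
        if not progressed:            # graph is disconnected
            break
    return mst, total

edges = [(0, 1, 4), (0, 2, 3), (1, 2, 1), (1, 3, 2), (2, 3, 4), (3, 4, 2)]
print(boruvka_mst(5, edges))          # MST of total weight 8
```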
44

MATSUZAKI, KIMINORI, ZHENJIANG HU, KAZUHIKO KAKEHI, and MASATO TAKEICHI. "SYSTEMATIC DERIVATION OF TREE CONTRACTION ALGORITHMS." Parallel Processing Letters 15, no. 03 (September 2005): 321–36. http://dx.doi.org/10.1142/s0129626405002246.

Abstract:
While tree contraction algorithms play an important role in efficient tree computation in parallel, it is difficult to develop such algorithms due to the strict conditions imposed on contracting operators. In this paper, we propose a systematic method of deriving efficient tree contraction algorithms from recursive functions on trees. We identify a general recursive form that can be parallelized into efficient tree contraction algorithms, and present a derivation strategy for transforming general recursive functions to the parallelizable form. We illustrate our approach by deriving a novel parallel algorithm for the maximum connected-set sum problem on arbitrary trees, the tree-version of the well-known maximum segment sum problem.
45

Shi, Jessica, Laxman Dhulipala, and Julian Shun. "Theoretically and practically efficient parallel nucleus decomposition." Proceedings of the VLDB Endowment 15, no. 3 (November 2021): 583–96. http://dx.doi.org/10.14778/3494124.3494140.

Abstract:
This paper studies the nucleus decomposition problem, which has been shown to be useful in finding dense substructures in graphs. We present a novel parallel algorithm that is efficient both in theory and in practice. Our algorithm achieves a work complexity matching the best sequential algorithm while also having low depth (parallel running time), which significantly improves upon the only existing parallel nucleus decomposition algorithm (Sariyüce et al., PVLDB 2018). The key to the theoretical efficiency of our algorithm is a new lemma that bounds the amount of work done when peeling cliques from the graph, combined with the use of theoretically efficient parallel algorithms for clique listing and bucketing. We introduce several new practical optimizations, including a new multi-level hash table structure to store information on cliques space-efficiently and a technique for traversing this structure cache-efficiently. On a 30-core machine with two-way hyper-threading, on real-world graphs, we achieve up to a 55x speedup over the state-of-the-art parallel nucleus decomposition algorithm by Sariyüce et al., and up to a 40x self-relative parallel speedup. We are able to efficiently compute larger nucleus decompositions than prior work on several million-scale graphs for the first time.
46

JOHNSON, THEODORE, and TIMOTHY A. DAVIS. "PARALLEL BUDDY MEMORY MANAGEMENT." Parallel Processing Letters 02, no. 04 (December 1992): 391–98. http://dx.doi.org/10.1142/s0129626492000544.

Abstract:
Shared memory multiprocessor systems need efficient dynamic storage allocators, both for system purposes and to support parallel programs. Memory managers are often based on the buddy system, which provides fast allocation and release. Previous parallel buddy memory managers made no attempt to coordinate the allocation, splitting and release of blocks, and as a result needlessly fragment memory. We present a fast and simple parallel buddy memory manager that is also as space efficient as a serial buddy memory manager. We test our algorithms using memory allocation/deallocation traces collected from a parallel sparse matrix algorithm.
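For context, the sketch below is a toy serial buddy allocator showing the split-on-allocate and buddy-merge-on-free behaviour the abstract refers to; the contribution of the paper is the coordination that lets many threads do this concurrently without needless fragmentation, which this single-threaded sketch does not attempt. All class and method names are illustrative.

```python
class BuddyAllocator:
    """Toy serial buddy allocator: blocks have power-of-two sizes, and a freed
    block is merged with its buddy (offset XOR size) whenever the buddy is free."""

    def __init__(self, total: int):
        assert total & (total - 1) == 0, "pool size must be a power of two"
        self.total = total
        self.free = {total: {0}}          # block size -> set of free offsets
        self.allocated = {}               # offset -> allocated block size

    def alloc(self, size: int) -> int:
        want = 1
        while want < size:
            want <<= 1                    # round the request up to a power of two
        block = want
        while block <= self.total and not self.free.get(block):
            block <<= 1                   # find the smallest free block that fits
        if block > self.total:
            raise MemoryError("out of memory")
        off = self.free[block].pop()
        while block > want:               # split down to the requested size
            block >>= 1
            self.free.setdefault(block, set()).add(off + block)
        self.allocated[off] = want
        return off

    def free_block(self, off: int) -> None:
        size = self.allocated.pop(off)
        while size < self.total:
            buddy = off ^ size            # the buddy sits at offset XOR size
            if buddy not in self.free.get(size, ()):
                break
            self.free[size].remove(buddy)
            off = min(off, buddy)         # merged block starts at the lower offset
            size <<= 1
        self.free.setdefault(size, set()).add(off)

pool = BuddyAllocator(1024)
a, b = pool.alloc(100), pool.alloc(200)   # rounded up to 128 and 256 bytes
pool.free_block(a)
pool.free_block(b)
print(pool.free[1024])                    # {0}: the whole pool has coalesced back
```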
47

Mata-Montero, Manrique, Nabil Shalaby, and Bradley Sheppard. "Efficient Serial and Parallel Algorithms for Selection of Unique Oligos in EST Databases." Advances in Bioinformatics 2013 (April 8, 2013): 1–6. http://dx.doi.org/10.1155/2013/793130.

Abstract:
Obtaining unique oligos from an EST database is a problem of great importance in bioinformatics, particularly in the discovery of new genes and the mapping of the human genome. Many algorithms have been developed to find unique oligos, many of which are much less time consuming than the traditional brute force approach. An algorithm was presented by Zheng et al. (2004) which finds the solution of the unique oligos search problem efficiently. We implement this algorithm as well as several new algorithms based on some theorems included in this paper. We demonstrate how, with these new algorithms, we can obtain unique oligos much faster than with previous ones. We parallelize these new algorithms to further improve the time of finding unique oligos. All algorithms are run on ESTs obtained from a Barley EST database.
48

Hajij, Mustafa, and Paul Rosen. "An Efficient Data Retrieval Parallel Reeb Graph Algorithm." Algorithms 13, no. 10 (October 12, 2020): 258. http://dx.doi.org/10.3390/a13100258.

Abstract:
The Reeb graph of a scalar function that is defined on a domain gives a topologically meaningful summary of that domain. Reeb graphs have been shown in the past decade to be of great importance in geometric processing, image processing, computer graphics, and computational topology. The demand for analyzing large data sets has increased in the last decade. Hence, the parallelization of topological computations needs to be more fully considered. We propose a parallel augmented Reeb graph algorithm on triangulated meshes with and without a boundary. That is, in addition to our parallel algorithm for computing a Reeb graph, we describe a method for extracting the original manifold data from the Reeb graph structure. We demonstrate the running time of our algorithm on standard datasets. As an application, we show how our algorithm can be utilized in mesh segmentation algorithms.
49

Hu, Hui. "Parallel PSO-Based Optimal Strategy Study of Energy Efficient Operation Control for Train." Advanced Materials Research 605-607 (December 2012): 1861–65. http://dx.doi.org/10.4028/www.scientific.net/amr.605-607.1861.

Abstract:
Unlike existing evolutionary algorithms, which are usually implemented in a serial computation mode, two improved parallel particle swarm optimization (PSO) algorithms based on a parallel computation structure model are proposed to search for the energy-efficient operation strategy of a train in a railway network. The algorithms are designed and verified using a simulation case. Compared with other methods, these improved parallel PSO-based algorithms perform much better in terms of efficiency and may be considered for practical use.
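As a point of reference for the abstract, here is a minimal sketch of standard global-best PSO with the per-particle fitness evaluations marked as the step a parallel implementation would distribute; the improved parallel variants and the train energy model of the paper are not reproduced. The Rastrigin objective and all parameter values are illustrative.

```python
import numpy as np

def rastrigin(x: np.ndarray) -> float:
    return float(10 * x.size + np.sum(x * x - 10 * np.cos(2 * np.pi * x)))

def pso_minimize(f, dim, bounds, pop=40, iters=300, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Standard global-best PSO. The fitness evaluations inside each iteration
    are independent, so they are the natural unit to farm out to workers."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(pop, dim))
    v = np.zeros_like(x)
    fit = np.array([f(p) for p in x])      # <- the parallelizable step
    pbest, pbest_f = x.copy(), fit.copy()
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((pop, dim)), rng.random((pop, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fit = np.array([f(p) for p in x])  # <- the parallelizable step
        better = fit < pbest_f
        pbest[better], pbest_f[better] = x[better], fit[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, float(pbest_f.min())

sol, val = pso_minimize(rastrigin, dim=5, bounds=(-5.12, 5.12))
print(round(val, 3))
```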
50

Lee, C. S. G., and P. R. Chang. "Efficient parallel algorithms for robot forward dynamics computation." IEEE Transactions on Systems, Man, and Cybernetics 18, no. 2 (1988): 238–51. http://dx.doi.org/10.1109/21.3463.
