Academic literature on the topic 'Distribute and Parallel Computing'

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Distribute and Parallel Computing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Distribute and Parallel Computing"

1. Umar, A. "Distributed And Parallel Computing." IEEE Concurrency 6, no. 4 (October 1998): 80–81. http://dx.doi.org/10.1109/mcc.1998.736439.

2. Ramsay, A. "Distributed versus parallel computing." Artificial Intelligence Review 1, no. 1 (March 1986): 11–25. http://dx.doi.org/10.1007/bf01988525.

3. Wismüller, Roland. "Parallel and distributed computing." Software Focus 2, no. 3 (September 2001): 124. http://dx.doi.org/10.1002/swf.44.

4. Sun, Qi, and Hui Yan Zhao. "Design of Distribute Monitoring Platform Base on Cloud Computing." Applied Mechanics and Materials 687-691 (November 2014): 1076–79. http://dx.doi.org/10.4028/www.scientific.net/amm.687-691.1076.
Abstract: Compared with traditional measurement infrastructure, a distributed network measurement system based on cloud computing stores its massive measurement data in a large virtual resource pool, ensuring reliable and scalable storage, and reuses the cloud platform's parallel processing mechanism for fast, concurrent analysis and mining of those data. The measurement probes support the deployment of a variety of measurement algorithms and data acquisition formats, and the measurement method provides congestion response and load balancing strategies.
5. Gao, Tie Liang, Jiao Li, Jun Peng Zhang, and Bing Jie Shi. "The Research of MapReduce on the Cloud Computing." Applied Mechanics and Materials 182-183 (June 2012): 2127–30. http://dx.doi.org/10.4028/www.scientific.net/amm.182-183.2127.
Abstract: MapReduce is a programming model used in cloud computing for parallel computation over large-scale data sets [1]; it consists mainly of a map step and a reduce step. MapReduce is tremendously convenient for programmers who are not familiar with parallel programming: they can use it to run their programs on a distributed system. This paper studies the model, process, and theory of MapReduce.
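The two-phase decomposition this abstract describes is easy to sketch. Below is a minimal, single-process Python imitation of the MapReduce model (an illustration only; the helper names are ours, and a real system such as Hadoop would distribute the splits across machines): map emits key-value pairs, a shuffle groups them by key, and reduce folds each group into a result.

```python
from collections import defaultdict
from typing import Iterable, Iterator

def map_phase(split: str) -> Iterator[tuple[str, int]]:
    # Map: emit a (word, 1) pair for every word in the input split.
    for word in split.split():
        yield (word.lower(), 1)

def shuffle(pairs: Iterable[tuple[str, int]]) -> dict[str, list[int]]:
    # Shuffle: group the emitted values by key.
    groups: dict[str, list[int]] = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key: str, values: list[int]) -> tuple[str, int]:
    # Reduce: fold one key's group into a single result.
    return (key, sum(values))

if __name__ == "__main__":
    splits = ["the quick brown fox", "the lazy dog", "the fox"]
    pairs = (pair for split in splits for pair in map_phase(split))
    counts = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
    print(counts)  # {'the': 3, 'quick': 1, 'brown': 1, 'fox': 2, 'lazy': 1, 'dog': 1}
```

Because the map calls are independent and each reduce touches only one key's group, both phases parallelise naturally, which is the property the paper's programmer-convenience argument rests on.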
6. Egorov, Alexander, Natalya Krupenina, and Lyubov Tyndykar. "The parallel approach to issue of operational management optimization problem on transport gateway system." E3S Web of Conferences 203 (2020): 05003. http://dx.doi.org/10.1051/e3sconf/202020305003.
Abstract: A universal parallelization software shell for joint data processing, implemented in combination with a distributed computing system, is considered. The purpose of the research is to find the most effective way to organize the information system that manages a navigable canal. One optimization option is to increase computing power by combining computing devices into a single cluster. The task of optimizing the management of a locked shipping channel is adapted for execution in a multi-threaded environment, subject to the constraints of a technologically feasible schedule. The article presents algorithms and gives recommendations for applying them when forming subtasks for parallel processing, as well as on a separate thread. The proposed approach to building a tree of options makes it possible to distribute the load optimally among all the resources of a multi-threaded system of any structure.
7. Myint, Khin Nyein, Myo Hein Zaw, and Win Thanda Aung. "Parallel and Distributed Computing Using MPI on Raspberry Pi Cluster." International Journal of Future Computer and Communication 9, no. 1 (March 2020): 18–22. http://dx.doi.org/10.18178/ijfcc.2020.9.1.559.

8. Mukaddes, A. M. M., and Ryuji Shioya. "Parallel Performance of Domain Decomposition Method on Distributed Computing Environment." International Journal of Engineering and Technology 2, no. 1 (2010): 28–34. http://dx.doi.org/10.7763/ijet.2010.v2.95.

9. Stankovic. "Introduction—Parallel and Distributed Computing." IEEE Transactions on Computers C-36, no. 4 (April 1987): 385–86. http://dx.doi.org/10.1109/tc.1987.1676919.

10. Sunderam, V. S., and G. A. Geist. "Heterogeneous parallel and distributed computing." Parallel Computing 25, no. 13-14 (December 1999): 1699–721. http://dx.doi.org/10.1016/s0167-8191(99)00088-5.

Dissertations / Theses on the topic "Distribute and Parallel Computing"

1. Xu, Lei. "Cellular distributed and parallel computing." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:88ffe124-c2fd-4144-86fe-47b35f4908bd.
Abstract: This thesis focuses on novel approaches to distributed and parallel computing that are inspired by the mechanism and functioning of biological cells. We refer to this concept as cellular distributed and parallel computing which focuses on three important principles: simplicity, parallelism, and locality. We first give a parallel polynomial-time solution to the constraint satisfaction problem (CSP) based on a theoretical model of cellular distributed and parallel computing, which is known as neural-like P systems (or neural-like membrane systems). We then design a class of simple neural-like P systems to solve the fundamental maximal independent set (MIS) selection problem efficiently in a distributed way, by drawing inspiration from the way that developing cells in the fruit fly become specialised. Building on the novel bio-inspired approach to distributed MIS selection, we propose a new simple randomised algorithm for another fundamental distributed computing problem: the distributed greedy colouring (GC) problem. We then propose an improved distributed MIS selection algorithm that incorporates for the first time another important feature of the biological system: adapting the probabilities used at each node based on local feedback from neighbouring nodes. The improved distributed MIS selection algorithm is again extended to solve the distributed greedy colouring problem. Both improved algorithms are simple and robust and work under very restrictive conditions, moreover, they both achieve state-of-the-art performance in terms of their worst-case time complexity and message complexity. Given any n-node graph with maximum degree Delta, the expected time complexity of our improved distributed MIS selection algorithm is O(log n) and the message complexity per node is O(1). The expected time complexity of our improved distributed greedy colouring algorithm is O(Delta + log n) and the message complexity per node is again O(1). Finally, we provide some experimental results to illustrate the time and message complexity of our proposed algorithms in practice. In particular, we show experimentally that the number of colours used by our distributed greedy colouring algorithms turns out to be optimal or near-optimal for many standard graph colouring benchmarks, so they provide effective simple heuristic approaches to computing a colouring with a small number of colours.
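To give a feel for the kind of randomised, fly-inspired MIS selection round described above, here is a small simulation sketch. It is our own illustration of the general scheme (synchronous rounds with a constant broadcast probability), not the thesis's neural-like P-system formulation or its adaptive-probability variant, and the adjacency-dict graph encoding is an assumption.

```python
import random

def randomized_mis(adj: dict[int, set[int]], p: float = 0.5, seed: int = 0) -> set[int]:
    """Synchronous randomised MIS selection on an undirected graph."""
    rng = random.Random(seed)
    active = set(adj)            # nodes that are still undecided
    mis: set[int] = set()
    while active:
        # Each active node independently 'broadcasts' with probability p.
        broadcasters = {v for v in active if rng.random() < p}
        for v in broadcasters:
            # A node joins the MIS only if no neighbour broadcast this round.
            if not (adj[v] & broadcasters):
                mis.add(v)
        # MIS nodes and all of their neighbours drop out of the computation.
        active -= mis | {u for v in mis for u in adj[v]}
    return mis

if __name__ == "__main__":
    cycle5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
    chosen = randomized_mis(cycle5)
    # Independence: no two chosen nodes are adjacent; maximality holds because
    # a node only leaves 'active' once it or a neighbour has been selected.
    assert all(not (cycle5[v] & chosen) for v in chosen)
    print(sorted(chosen))
```

Each node needs only local information (whether a neighbour broadcast), which is what makes this style of algorithm attractive for the O(1) per-node message complexity the abstract reports.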
2. Xiang, Yonghong. "Interconnection networks for parallel and distributed computing." Thesis, Durham University, 2008. http://etheses.dur.ac.uk/2156/.
Abstract: Parallel computers are generally either shared-memory machines or distributed-memory machines. There are currently technological limitations on shared-memory architectures, and so parallel computers utilizing a large number of processors tend to be distributed-memory machines. We are concerned solely with distributed-memory multiprocessors. In such machines, the dominant factor inhibiting faster global computations is inter-processor communication. Communication is dependent upon the topology of the interconnection network, the routing mechanism, the flow control policy, and the method of switching. We are concerned with issues relating to the topology of the interconnection network. The choice of how we connect processors in a distributed-memory multiprocessor is a fundamental design decision. There are numerous, often conflicting, considerations to bear in mind. However, there does not exist an interconnection network that is optimal on all counts, and trade-offs have to be made. A multitude of interconnection networks have been proposed, with each of these networks having some good (topological) properties and some not so good. Existing noteworthy networks include trees, fat-trees, meshes, cube-connected cycles, butterflies, Möbius cubes, hypercubes, augmented cubes, k-ary n-cubes, twisted cubes, n-star graphs, (n, k)-star graphs, alternating group graphs, de Bruijn networks, and bubble-sort graphs, to name but a few. We will mainly focus on k-ary n-cubes and (n, k)-star graphs in this thesis. Meanwhile, we propose a new interconnection network called augmented k-ary n-cubes. The following results are given in the thesis.
1. Let k ≥ 4 be even and let n ≥ 2. Consider a faulty k-ary n-cube Q^k_n in which the number of node faults f_n and the number of link faults f_e are such that f_n + f_e ≤ 2n - 2. We prove that, given any two healthy nodes s and e of Q^k_n, there is a path from s to e of length at least k^n - 2f_n - 1 (resp. k^n - 2f_n - 2) if the nodes s and e have different (resp. the same) parities (the parity of a node of Q^k_n is the sum modulo 2 of the elements in the n-tuple over {0, 1, ..., k - 1} representing the node). Our result is optimal in the sense that there are pairs of nodes and fault configurations for which these bounds cannot be improved, and it answers questions recently posed by Yang, Tan and Hsu, and by Fu. Furthermore, we extend known results, obtained by Kim and Park, for the case when n = 2.
2. We give precise solutions to problems posed by Wang, An, Pan, Wang and Qu and by Hsieh, Lin and Huang. In particular, we show that Q^k_n is bi-panconnected and edge-bipancyclic, when k ≥ 3 and n ≥ 2, and we also show that when k is odd, Q^k_n is m-panconnected, for m = (n(k - 1) + 2k - 6)/2, and (k - 1)-pancyclic (these bounds are optimal). We introduce a path-shortening technique, called progressive shortening, and strengthen existing results, showing that when paths are formed using progressive shortening then these paths can be efficiently constructed and used to solve a problem relating to the distributed simulation of linear arrays and cycles in a parallel machine whose interconnection network is Q^k_n, even in the presence of a faulty processor.
3. We define an interconnection network AQ^k_n, which we call the augmented k-ary n-cube, by extending a k-ary n-cube in a manner analogous to the existing extension of an n-dimensional hypercube to an n-dimensional augmented cube. We prove that the augmented k-ary n-cube AQ^k_n has a number of attractive properties (in the context of parallel computing). For example, we show that AQ^k_n: is a Cayley graph (and so is vertex-symmetric); has connectivity 4n - 2 and is such that we can build a set of 4n - 2 mutually disjoint paths joining any two distinct vertices so that the path of maximal length has length at most max{(n - 1)k - (n - 2), k + 7}; has diameter ⌊k/3⌋ + ⌊(k - 1)/3⌋ when n = 2; and has diameter at most (k/4)(n + 1) for n ≥ 3 and k even, and at most (k/4)(n + 1) + n/4 for n ≥ 3 and k odd.
4. We present an algorithm which, given a source node and a set of n - 1 target nodes in the (n, k)-star graph S_{n,k}, where all nodes are distinct, builds a collection of n - 1 node-disjoint paths, one from each target node to the source. The collection of paths output by the algorithm is such that each path has length at most 6k - 7, and the algorithm has time complexity O(k^3 n^4).
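As a concrete companion to these results, the sketch below generates the neighbourhood of a node in a k-ary n-cube under the standard definition (nodes are n-tuples over {0, ..., k - 1}; two nodes are adjacent when they differ by ±1 mod k in exactly one position). This is a textbook illustration, not code from the thesis.

```python
from itertools import product

def qkn_neighbors(node: tuple[int, ...], k: int) -> list[tuple[int, ...]]:
    """Neighbours of a node in the k-ary n-cube Q^k_n."""
    neighbors = []
    for i in range(len(node)):
        for delta in (-1, 1):
            w = list(node)
            w[i] = (w[i] + delta) % k   # step +/-1 along dimension i, mod k
            neighbors.append(tuple(w))
    # For k = 2 the two offsets coincide, so deduplicate.
    return sorted(set(neighbors))

if __name__ == "__main__":
    k, n = 4, 2
    # Every node of Q^4_2 has degree 2n = 4.
    for node in product(range(k), repeat=n):
        assert len(qkn_neighbors(node, k)) == 2 * n
    print(qkn_neighbors((0, 0), k))  # [(0, 1), (0, 3), (1, 0), (3, 0)]
```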
3. Kim, Young Man. "Some problems in parallel and distributed computing." The Ohio State University, 1992. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487776210795651.

4. Freeh, Vincent William. "Software support for distributed and parallel computing." Diss., The University of Arizona, 1996. http://hdl.handle.net/10150/290588.
Abstract: This dissertation addresses creating portable and efficient parallel programs for scientific computing. Both of these aspects are important. Portability means the program can execute on any parallel machine. Efficiency means there is little or no penalty for using our solution instead of hand-coded, architecture-specific programs. Although parallel programming is necessarily more difficult than sequential programming, it is currently more complicated than it has to be. The Filaments package provides fine-grain parallelism and a shared memory programming model. It can be viewed as a "least common denominator" for parallel scientific computing. Fine-grain parallelism supports any number (even thousands) of threads, and shared memory provides a natural programming model. Consequently, the combination allows the programmer to concentrate on the application and not the architecture of the target machine. The Filaments package makes extensive use of run-time decision making. Run-time decision making has several advantages. First, it is often possible to make a better decision because more information is available at run time. Second, run-time decision making can obviate the need for complex, often intractable, static analysis. Moreover, run-time decision making leads to much of the package's efficiency.
5. Jin, Xiaoming. "A practical realization of parallel disks for a distributed parallel computing system." [Gainesville, Fla.]: University of Florida, 2000. http://etd.fcla.edu/etd/uf/2000/ane5954/master.PDF.
Abstract: Thesis (M.S.)--University of Florida, 2000. Title from first page of PDF file. Document formatted into pages; contains ix, 41 p.; also contains graphics. Vita. Includes bibliographical references (p. 39–40).
6. 馬家駒 and Ka-kui Ma. "Transparent process migration for parallel Java computing." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2001. http://hub.hku.hk/bib/B31226474.

7. Ma, Ka-kui. "Transparent process migration for parallel Java computing." Hong Kong: University of Hong Kong, 2001. http://sunzi.lib.hku.hk/hkuto/record.jsp?B23589371.
8. Dutta, Sourav. "Performance Estimation and Scheduling for Parallel Programs with Critical Sections." OpenSIUC, 2017. https://opensiuc.lib.siu.edu/dissertations/1353.
Abstract: A fundamental problem in multithreaded parallel programs is the partial serialization imposed by the presence of mutual exclusion variables, or critical sections. In this work we investigate a model in which each thread consists of the same number L of functional blocks, where every functional block has the same duration and either accesses a critical section or executes non-critical code. We derive formulas to estimate the average time spent in a critical section, both in the presence and in the absence of a synchronization barrier. We also develop, and establish the optimality of, a fast polynomial-time algorithm that finds a schedule with the shortest makespan for any number of threads and any number of critical sections in the case L = 2. For the general case L > 2, which is NP-complete, we present a competitive heuristic and provide experimental comparisons with the ideal integer linear programming (ILP) formulation.
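To make the block model concrete, here is a small simulation under our reading of it (unit-duration blocks, at most one thread inside a given critical section per time unit, greedy progress otherwise). It is an illustration of the partial-serialization effect, not the dissertation's optimal scheduling algorithm.

```python
from typing import Optional

def makespan(threads: list[list[Optional[str]]]) -> int:
    """Greedy unit-time schedule; a block is a critical-section name or None."""
    pc = [0] * len(threads)          # next block index for each thread
    time = 0
    while any(pc[t] < len(th) for t, th in enumerate(threads)):
        busy: set[str] = set()       # critical sections occupied this step
        for t, th in enumerate(threads):
            if pc[t] >= len(th):
                continue             # thread already finished
            block = th[pc[t]]
            if block is None:
                pc[t] += 1           # non-critical code always proceeds
            elif block not in busy:
                busy.add(block)      # acquire the section for this step
                pc[t] += 1
            # otherwise the thread stalls and retries next time unit
        time += 1
    return time

if __name__ == "__main__":
    # Three threads with L = 2 blocks; two contend for critical section "a",
    # so one access is serialised and the makespan grows from 2 to 3.
    print(makespan([["a", None], ["a", None], [None, "b"]]))  # -> 3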
9. Winter, Stephen Charles. "A distributed reduction architecture for real-time computing." Thesis, University of Westminster, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.238722.

10. Valente, Fredy Joao. "An integrated parallel/distributed environment for high performance computing." Thesis, University of Southampton, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.362138.

Books on the topic "Distribute and Parallel Computing"

1. Hobbs, Michael, Andrzej M. Goscinski, and Wanlei Zhou, eds. Distributed and Parallel Computing. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11564621.

2. Nagamalai, Dhinaharan, Eric Renault, and Murugan Dhanuskodi, eds. Advances in Parallel Distributed Computing. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-24037-9.

3. Zomaya, Albert Y., ed. Parallel and distributed computing handbook. New York: McGraw-Hill, 1996.

4. Pan, Yi, and Laurence Tianruo Yang, eds. Applied parallel and distributed computing. New York: Nova Science Publishers, 2005.

5. Özgüner, Füsun, Fikret Erçal, North Atlantic Treaty Organization Scientific Affairs Division, and NATO Advanced Study Institute on Parallel Computing on Distributed Memory Multiprocessors (1991: Bilkent University), eds. Parallel computing on distributed memory multiprocessors. Berlin: Springer-Verlag, 1993.

6. Özgüner, Füsun, and Fikret Erçal, eds. Parallel Computing on Distributed Memory Multiprocessors. Berlin, Heidelberg: Springer Berlin Heidelberg, 1993. http://dx.doi.org/10.1007/978-3-642-58066-6.

7. Qi, Luo, ed. Parallel and Distributed Computing and Networks. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-22706-6.

8. Prasad, Sushil K., Anshul Gupta, Arnold Rosenberg, Alan Sussman, and Charles Weems, eds. Topics in Parallel and Distributed Computing. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-93109-8.

9. Topping, B. H. V., and P. Iványi, eds. Parallel, Distributed and Grid Computing for Engineering. Stirlingshire, UK: Saxe-Coburg Publications, 2009. http://dx.doi.org/10.4203/csets.21.

10. Shen, Hong, Yingpeng Sang, Yong Zhang, Nong Xiao, Hamid R. Arabnia, Geoffrey Fox, Ajay Gupta, and Manu Malek, eds. Parallel and Distributed Computing, Applications and Technologies. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-96772-7.

Book chapters on the topic "Distribute and Parallel Computing"

1. Torres, Jordi, Eduard Ayguadé, Jesús Labarta, and Mateo Valero. "Align and distribute-based linear loop transformations." In Languages and Compilers for Parallel Computing, 321–39. Berlin, Heidelberg: Springer Berlin Heidelberg, 1994. http://dx.doi.org/10.1007/3-540-57659-2_19.

2. Fahringer, Thomas. "Tools for Parallel and Distributed Computing." In Parallel Computing, 81–115. London: Springer London, 2009. http://dx.doi.org/10.1007/978-1-84882-409-6_3.

3. Erciyes, K. "Parallel and Distributed Computing." In Computational Biology, 51–77. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-24966-7_4.

4. Hariri, S., and M. Parashar. "Parallel and Distributed Computing." In Tools and Environments for Parallel and Distributed Computing, 1–10. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2004. http://dx.doi.org/10.1002/0471474835.ch1.

5. Kim, Dongmin, and Salim Hariri. "Parallel and Distributed Computing Environment." In Virtual Computing, 13–23. Boston, MA: Springer US, 2001. http://dx.doi.org/10.1007/978-1-4615-1553-1_2.

6. Falsafi, Babak, Samuel Midkiff, Jack B. Dennis, Amol Ghoting, Roy H. Campbell, Christof Klausecker, et al. "Distributed Computer." In Encyclopedia of Parallel Computing, 573. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-09766-4_2274.

7. Eberle, Hans. "Switcherland — A scalable interconnection structure for distributed computing." In Parallel Computation, 36–49. Berlin, Heidelberg: Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/3-540-61695-0_4.

8. Sunderam, Vaidy. "Virtualization in Parallel Distributed Computing." In Recent Advances in Parallel Virtual Machine and Message Passing Interface, 6. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11557265_4.

9. Schreiner, Wolfgang, Károly Bósa, Andreas Langegger, Thomas Leitner, Bernhard Moser, Szilárd Páll, Volkmar Wieser, and Wolfram Wöß. "Parallel, Distributed, and Grid Computing." In Hagenberg Research, 333–78. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-02127-5_8.

10. Petitet, A., H. Casanova, J. Dongarra, Y. Robert, and R. C. Whaley. "Parallel and Distributed Scientific Computing." In Handbook on Parallel and Distributed Processing, 464–504. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/978-3-662-04303-5_10.

Conference papers on the topic "Distribute and Parallel Computing"

1. Garland, Michael. "Parallel computing with CUDA." In Distributed Processing (IPDPS). IEEE, 2010. http://dx.doi.org/10.1109/ipdps.2010.5470378.

2. Doolan, Daniel, Sabin Tabirca, and Laurence Yang. "Mobile Parallel Computing." In 2006 Fifth International Symposium on Parallel and Distributed Computing. IEEE, 2006. http://dx.doi.org/10.1109/ispdc.2006.33.

3. Tutsch, Dietmar. "Reconfigurable parallel computing." In 2010 1st International Conference on Parallel, Distributed and Grid Computing (PDGC 2010). IEEE, 2010. http://dx.doi.org/10.1109/pdgc.2010.5679961.

4. Crews, Thad. "Session details: Distributed/parallel computing." In SIGCSE04: Technical Symposium on Computer Science Education 2004. New York, NY, USA: ACM, 2004. http://dx.doi.org/10.1145/3244218.

5. Rashid, Zryan Najat, Subhi R. M. Zebari, Karzan Hussein Sharif, and Karwan Jacksi. "Distributed Cloud Computing and Distributed Parallel Computing: A Review." In 2018 International Conference on Advanced Science and Engineering (ICOASE). IEEE, 2018. http://dx.doi.org/10.1109/icoase.2018.8548937.
6. Chou, Yu-Cheng, David Ko, and Harry H. Cheng. "Mobile Agent Based Autonomic Dynamic Parallel Computing." In ASME 2009 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2009. http://dx.doi.org/10.1115/detc2009-87750.
Abstract: Parallel computing is widely adopted in scientific and engineering applications to enhance efficiency, and there is increasing research interest in utilizing distributed networked computers for parallel computing. The Message Passing Interface (MPI) standard was designed to support portability and platform independence of a developed parallel program. However, the procedure for starting an MPI-based parallel computation among distributed computers lacks autonomicity and flexibility. This article presents an autonomic dynamic parallel computing framework that provides the autonomicity and flexibility that are important and necessary for some parallel computing applications involving resource-constrained and heterogeneous platforms. In this framework, an MPI parallel computing environment consisting of multiple computing entities is dynamically established through inter-agent communication using IEEE Foundation for Intelligent Physical Agents (FIPA) compliant Agent Communication Language (ACL) messages. For each computing entity in the MPI parallel computing environment, load-balanced MPI program C source code, along with the MPI environment configuration statements, is dynamically composed as mobile agent code. A mobile agent wrapping the mobile agent code is created and sent to the computing entity, where the mobile agent code is retrieved and interpretively executed. An example of autonomic parallel matrix multiplication is given to demonstrate the self-configuration and self-optimization properties of the presented framework.
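The MPI execution model that this framework automates can be summarised with a minimal example. The sketch below uses the mpi4py binding (our substitution for illustration; the paper composes C MPI source code via mobile agents) to scatter rows of a matrix, broadcast the second operand, multiply locally, and gather the partial products, a load-balanced pattern similar in spirit to the abstract's matrix-multiplication demonstration.

```python
# Run with, e.g.: mpiexec -n 4 python matmul_mpi.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n = 8                                   # matrix dimension, divisible by size
if rank == 0:
    a = np.arange(n * n, dtype=float).reshape(n, n)
    b = np.eye(n)
    row_chunks = np.split(a, size)      # one equal block of rows per process
else:
    row_chunks, b = None, None

local_rows = comm.scatter(row_chunks, root=0)  # distribute the work
b = comm.bcast(b, root=0)                      # every process needs all of B
local_product = local_rows @ b                 # compute the local block
chunks = comm.gather(local_product, root=0)    # collect the partial results

if rank == 0:
    c = np.vstack(chunks)
    assert np.allclose(c, a @ b)
    print("distributed product verified on", size, "processes")
```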
7. Yu, Yuan, Pradeep Kumar Gunda, and Michael Isard. "Distributed aggregation for data-parallel computing." In Proceedings of the ACM SIGOPS 22nd Symposium on Operating Systems Principles. New York, New York, USA: ACM Press, 2009. http://dx.doi.org/10.1145/1629575.1629600.

8. "Session PD: Parallel & distributed computing." In 2014 9th International Conference on Computer Engineering & Systems (ICCES). IEEE, 2014. http://dx.doi.org/10.1109/icces.2014.7030947.

9. Foley, Samantha S., and Joshua Hursey. "OnRamp to parallel and distributed computing." In the Workshop. New York, New York, USA: ACM Press, 2015. http://dx.doi.org/10.1145/2831425.2831426.

10. Baker, Mark, Matthew Grove, and Aamir Shafi. "Parallel and Distributed Computing with Java." In 2006 Fifth International Symposium on Parallel and Distributed Computing. IEEE, 2006. http://dx.doi.org/10.1109/ispdc.2006.38.

Reports on the topic "Distribute and Parallel Computing"

1. Kaplansky, I., and Richard M. Karp. Parallel and Distributed Computing. Fort Belvoir, VA: Defense Technical Information Center, December 1986. http://dx.doi.org/10.21236/ada182935.

2. Kaplansky, Irving, and Richard Karp. Parallel and Distributed Computing. Fort Belvoir, VA: Defense Technical Information Center, December 1986. http://dx.doi.org/10.21236/ada176477.

3. Leighton, Tom. Parallel and Distributed Computing Combinatorial Algorithms. Fort Belvoir, VA: Defense Technical Information Center, October 1993. http://dx.doi.org/10.21236/ada277333.

4. Nerode, Anil. Fellowship in Parallel and Distributed Computing. Fort Belvoir, VA: Defense Technical Information Center, July 1990. http://dx.doi.org/10.21236/ada225926.

5. Sunderam, V. PVM (Parallel Virtual Machine): A framework for parallel distributed computing. Office of Scientific and Technical Information (OSTI), January 1989. http://dx.doi.org/10.2172/5347567.

6. Chen, H., J. Hutchins, and J. Brandt. Evaluation of DEC's GIGAswitch for distributed parallel computing. Office of Scientific and Technical Information (OSTI), October 1993. http://dx.doi.org/10.2172/10188486.

7. George, Alan D. Parallel and Distributed Computing Architectures and Algorithms for Fault-Tolerant Sonar Arrays. Fort Belvoir, VA: Defense Technical Information Center, January 1999. http://dx.doi.org/10.21236/ada359698.

8. Hariri, Salim. International ACM Symposium on High Performance Parallel and Distributed Computing Conference for 2017, 2018, and 2019. Office of Scientific and Technical Information (OSTI), January 2022. http://dx.doi.org/10.2172/1841180.

9. Smith, Bradley W. Distributed Computing for Signal Processing: Modeling of Asynchronous Parallel Computation. Appendix G: On the Design and Modeling of Special Purpose Parallel Processing Systems. Fort Belvoir, VA: Defense Technical Information Center, May 1985. http://dx.doi.org/10.21236/ada167622.

10. Pratt, T. J., L. G. Martinez, M. O. Vahle, T. V. Archuleta, and V. K. Williams. Sandia's network for SC '97: Supporting visualization, distributed cluster computing, and production data networking with a wide area high performance parallel asynchronous transfer mode (ATM) network. Office of Scientific and Technical Information (OSTI), May 1998. http://dx.doi.org/10.2172/658446.