Scientific literature on the topic "Distribute and Parallel Computing"

Create an accurate reference in APA, MLA, Chicago, Harvard, and various other citation styles

Choose a source:

Browse thematic lists of journal articles, books, theses, conference reports, and other academic sources on the topic "Distribute and Parallel Computing".

Next to every source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference for the chosen work in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication in PDF format and consult its abstract online whenever this information is included in the metadata.

Journal articles on the topic "Distribute and Parallel Computing"

1. Umar, A. "Distributed And Parallel Computing". IEEE Concurrency 6, no. 4 (October 1998): 80–81. http://dx.doi.org/10.1109/mcc.1998.736439.

2. Ramsay, A. "Distributed versus parallel computing". Artificial Intelligence Review 1, no. 1 (March 1986): 11–25. http://dx.doi.org/10.1007/bf01988525.

3. Wismüller, Roland. "Parallel and distributed computing". Software Focus 2, no. 3 (September 2001): 124. http://dx.doi.org/10.1002/swf.44.

4. Sun, Qi, and Hui Yan Zhao. "Design of Distribute Monitoring Platform Base on Cloud Computing". Applied Mechanics and Materials 687-691 (November 2014): 1076–79. http://dx.doi.org/10.4028/www.scientific.net/amm.687-691.1076.
Abstract: Compared with traditional measurement infrastructure, a distributed network measurement system based on cloud computing stores its massive measurement data in a large virtual resource pool, ensuring reliable and scalable data storage, and reuses the cloud platform's parallel processing mechanism for fast, concurrent analysis and mining of the measurement data. The measurement probes support the deployment of a variety of measurement algorithms and a variety of data acquisition formats, and the measurement method provides congestion response policies and load balancing strategies.

5. Gao, Tie Liang, Jiao Li, Jun Peng Zhang and Bing Jie Shi. "The Research of MapReduce on the Cloud Computing". Applied Mechanics and Materials 182-183 (June 2012): 2127–30. http://dx.doi.org/10.4028/www.scientific.net/amm.182-183.2127.
Abstract: MapReduce is a programming model used for parallel computing over large-scale data sets in cloud computing [1]; it consists mainly of a map step and a reduce step. MapReduce is tremendously convenient for programmers who are not familiar with parallel programming: they can use it to run their programs on a distributed system. This paper mainly studies the model, the process, and the theory of MapReduce.

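To make the map/reduce division of work concrete, here is a minimal single-machine Python sketch of the model described in the abstract above (an illustration of the programming model only, not the paper's implementation; the word-count task and all function names are assumptions for the example):

    from collections import defaultdict

    def map_phase(document):
        # The map function emits (key, value) pairs: one (word, 1) per word.
        return [(word, 1) for word in document.split()]

    def shuffle(pairs):
        # Between map and reduce, the framework groups all values by key.
        groups = defaultdict(list)
        for key, value in pairs:
            groups[key].append(value)
        return groups

    def reduce_phase(key, values):
        # The reduce function aggregates the values collected for one key.
        return key, sum(values)

    documents = ["map tasks run in parallel", "reduce tasks aggregate results"]
    pairs = [p for doc in documents for p in map_phase(doc)]
    counts = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
    print(counts)  # e.g. {'map': 1, 'tasks': 2, 'run': 1, ...}

In a real cloud deployment the map calls and the reduce calls each run in parallel across machines, and the shuffle is performed by the framework itself.
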
6. Egorov, Alexander, Natalya Krupenina and Lyubov Tyndykar. "The parallel approach to issue of operational management optimization problem on transport gateway system". E3S Web of Conferences 203 (2020): 05003. http://dx.doi.org/10.1051/e3sconf/202020305003.
Abstract: A universal parallelization software shell for joint data processing, implemented on top of a distributed computing system, is considered. The purpose of the research is to find the most effective way to organize the information system that manages a navigable canal. One optimization option is to increase computing power by combining computing devices into a single cluster. The task of optimizing the management of a locked shipping channel is adapted for execution in a multi-threaded environment, subject to the constraints of a technologically feasible schedule. The article presents algorithms and gives recommendations for applying them when forming subtasks for parallel processing, as well as on a separate thread. The proposed approach to building a tree of options makes it possible to distribute the load optimally across all resources of a multi-threaded system of any structure.

7. Myint, Khin Nyein, Myo Hein Zaw and Win Thanda Aung. "Parallel and Distributed Computing Using MPI on Raspberry Pi Cluster". International Journal of Future Computer and Communication 9, no. 1 (March 2020): 18–22. http://dx.doi.org/10.18178/ijfcc.2020.9.1.559.

8. Mukaddes, A. M. M., and Ryuji Shioya. "Parallel Performance of Domain Decomposition Method on Distributed Computing Environment". International Journal of Engineering and Technology 2, no. 1 (2010): 28–34. http://dx.doi.org/10.7763/ijet.2010.v2.95.

9. Stankovic. "Introduction—Parallel and Distributed Computing". IEEE Transactions on Computers C-36, no. 4 (April 1987): 385–86. http://dx.doi.org/10.1109/tc.1987.1676919.

10. Sunderam, V. S., and G. A. Geist. "Heterogeneous parallel and distributed computing". Parallel Computing 25, no. 13-14 (December 1999): 1699–721. http://dx.doi.org/10.1016/s0167-8191(99)00088-5.

Theses on the topic "Distribute and Parallel Computing"

1. Xu, Lei. "Cellular distributed and parallel computing". Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:88ffe124-c2fd-4144-86fe-47b35f4908bd.
Abstract: This thesis focuses on novel approaches to distributed and parallel computing that are inspired by the mechanism and functioning of biological cells. We refer to this concept as cellular distributed and parallel computing, which focuses on three important principles: simplicity, parallelism, and locality. We first give a parallel polynomial-time solution to the constraint satisfaction problem (CSP) based on a theoretical model of cellular distributed and parallel computing known as neural-like P systems (or neural-like membrane systems). We then design a class of simple neural-like P systems to solve the fundamental maximal independent set (MIS) selection problem efficiently in a distributed way, drawing inspiration from the way that developing cells in the fruit fly become specialised. Building on the novel bio-inspired approach to distributed MIS selection, we propose a new simple randomised algorithm for another fundamental distributed computing problem: the distributed greedy colouring (GC) problem. We then propose an improved distributed MIS selection algorithm that incorporates, for the first time, another important feature of the biological system: adapting the probabilities used at each node based on local feedback from neighbouring nodes. The improved distributed MIS selection algorithm is again extended to solve the distributed greedy colouring problem. Both improved algorithms are simple and robust and work under very restrictive conditions; moreover, they both achieve state-of-the-art performance in terms of their worst-case time complexity and message complexity. Given any n-node graph with maximum degree Delta, the expected time complexity of our improved distributed MIS selection algorithm is O(log n) and the message complexity per node is O(1). The expected time complexity of our improved distributed greedy colouring algorithm is O(Delta + log n) and the message complexity per node is again O(1). Finally, we provide some experimental results to illustrate the time and message complexity of our proposed algorithms in practice. In particular, we show experimentally that the number of colours used by our distributed greedy colouring algorithms turns out to be optimal or near-optimal for many standard graph colouring benchmarks, so they provide effective simple heuristic approaches to computing a colouring with a small number of colours.

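As a rough illustration of the kind of randomised distributed MIS selection sketched in this abstract, here is a toy synchronous simulation in Python. The fixed "beep" probability and round budget are simplifying assumptions of mine; the thesis's improved algorithm instead adapts each node's probability from local feedback.

    import random

    def randomised_mis(adj, rounds=100, p=0.5):
        # adj maps each node to the set of its neighbours.
        undecided = set(adj)
        mis = set()
        for _ in range(rounds):
            if not undecided:
                break  # every node has decided
            # Each undecided node independently "beeps" with probability p.
            beeped = {v for v in undecided if random.random() < p}
            for v in beeped:
                # A beeping node with no beeping neighbour joins the MIS;
                # its neighbours then withdraw from the computation.
                if v in undecided and not (beeped & adj[v]):
                    mis.add(v)
                    undecided.discard(v)
                    undecided -= adj[v]
        return mis  # maximal with high probability once rounds is large enough

    ring = {i: {(i - 1) % 8, (i + 1) % 8} for i in range(8)}  # 8-cycle
    print(randomised_mis(ring))

Two adjacent nodes can never both join in the same round (each would see the other beep), which is what keeps the selected set independent.
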
2. Xiang, Yonghong. "Interconnection networks for parallel and distributed computing". Thesis, Durham University, 2008. http://etheses.dur.ac.uk/2156/.
Abstract: Parallel computers are generally either shared-memory machines or distributed-memory machines. There are currently technological limitations on shared-memory architectures, so parallel computers utilizing a large number of processors tend to be distributed-memory machines. We are concerned solely with distributed-memory multiprocessors. In such machines, the dominant factor inhibiting faster global computations is inter-processor communication. Communication depends upon the topology of the interconnection network, the routing mechanism, the flow control policy, and the method of switching. We are concerned with issues relating to the topology of the interconnection network. The choice of how we connect processors in a distributed-memory multiprocessor is a fundamental design decision. There are numerous, often conflicting, considerations to bear in mind. However, there does not exist an interconnection network that is optimal on all counts, and trade-offs have to be made. A multitude of interconnection networks have been proposed, each having some good (topological) properties and some not so good. Existing noteworthy networks include trees, fat-trees, meshes, cube-connected cycles, butterflies, Möbius cubes, hypercubes, augmented cubes, k-ary n-cubes, twisted cubes, n-star graphs, (n, k)-star graphs, alternating group graphs, de Bruijn networks, and bubble-sort graphs, to name but a few. We mainly focus on k-ary n-cubes and (n, k)-star graphs in this thesis, and we propose a new interconnection network called the augmented k-ary n-cube. The following results are given in the thesis.
1. Let k ≥ 4 be even and let n ≥ 2. Consider a faulty k-ary n-cube Q_n^k in which the number of node faults f_n and the number of link faults f_e are such that f_n + f_e ≤ 2n - 2. We prove that given any two healthy nodes s and e of Q_n^k, there is a path from s to e of length at least k^n - 2f_n - 1 (resp. k^n - 2f_n - 2) if the nodes s and e have different (resp. the same) parities (the parity of a node of Q_n^k is the sum modulo 2 of the elements in the n-tuple over {0, 1, ..., k - 1} representing the node). Our result is optimal in the sense that there are pairs of nodes and fault configurations for which these bounds cannot be improved, and it answers questions recently posed by Yang, Tan and Hsu, and by Fu. Furthermore, we extend known results, obtained by Kim and Park, for the case when n = 2.
2. We give precise solutions to problems posed by Wang, An, Pan, Wang and Qu and by Hsieh, Lin and Huang. In particular, we show that Q_n^k is bi-panconnected and edge-bipancyclic when k ≥ 3 and n ≥ 2, and we also show that when k is odd, Q_n^k is m-panconnected, for m = (n(k - 1) + 2k - 6)/2, and (k - 1)-pancyclic (these bounds are optimal). We introduce a path-shortening technique, called progressive shortening, and strengthen existing results, showing that when paths are formed using progressive shortening they can be efficiently constructed and used to solve a problem relating to the distributed simulation of linear arrays and cycles in a parallel machine whose interconnection network is Q_n^k, even in the presence of a faulty processor.
3. We define an interconnection network AQ_n^k, which we call the augmented k-ary n-cube, by extending a k-ary n-cube in a manner analogous to the existing extension of an n-dimensional hypercube to an n-dimensional augmented cube. We prove that the augmented k-ary n-cube AQ_n^k has a number of attractive properties (in the context of parallel computing). For example, we show that AQ_n^k is a Cayley graph (and so is vertex-symmetric); has connectivity 4n - 2 and is such that we can build a set of 4n - 2 mutually disjoint paths joining any two distinct vertices so that the path of maximal length has length at most max{(n - 1)k - (n - 2), k + 7}; has diameter ⌊k/3⌋ + ⌊(k - 1)/3⌋ when n = 2; and has diameter at most (k/4)(n + 1) for n ≥ 3 and k even, and at most (k/4)(n + 1) + n/4 for n ≥ 3 and k odd.
4. We present an algorithm which, given a source node and a set of n - 1 target nodes in the (n, k)-star graph S_{n,k}, where all nodes are distinct, builds a collection of n - 1 node-disjoint paths, one from each target node to the source. The collection of paths output from the algorithm is such that each path has length at most 6k - 7, and the algorithm has time complexity O(k^3 n^4).

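To fix intuition for the k-ary n-cube topology the thesis studies, here is a small Python sketch (my own illustration; the function name is an assumption). Nodes are n-tuples over {0, ..., k-1}, and two nodes are adjacent exactly when they differ by ±1 (mod k) in a single coordinate:

    def kary_ncube_neighbours(node, k):
        # node is an n-tuple of digits in {0, ..., k-1}, identifying a
        # processor of the k-ary n-cube Q_n^k.
        neighbours = set()
        for i in range(len(node)):
            for delta in (-1, 1):
                w = list(node)
                w[i] = (w[i] + delta) % k  # wrap-around ring in each dimension
                neighbours.add(tuple(w))
        return sorted(neighbours)

    print(kary_ncube_neighbours((0, 0), 4))
    # [(0, 1), (0, 3), (1, 0), (3, 0)] -- degree 2n when k > 2
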
3. Kim, Young Man. "Some problems in parallel and distributed computing". The Ohio State University, 1992. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487776210795651.

4. Freeh, Vincent William. "Software support for distributed and parallel computing". Diss., The University of Arizona, 1996. http://hdl.handle.net/10150/290588.
Abstract: This dissertation addresses the creation of portable and efficient parallel programs for scientific computing. Both of these aspects are important. Portability means the program can execute on any parallel machine. Efficiency means there is little or no penalty for using our solution instead of hand-coded, architecture-specific programs. Although parallel programming is necessarily more difficult than sequential programming, it is currently more complicated than it has to be. The Filaments package provides fine-grain parallelism and a shared-memory programming model. It can be viewed as a "least common denominator" for parallel scientific computing. Fine-grain parallelism supports any number (even thousands) of threads, and shared memory provides a natural programming model. Consequently, the combination allows the programmer to concentrate on the application and not the architecture of the target machine. The Filaments package makes extensive use of run-time decision making. Run-time decision making has several advantages. First, it is often possible to make a better decision because more information is available at run time. Second, run-time decision making can obviate the need for complex, often intractable, static analysis. Moreover, run-time decision making leads to much of the package's efficiency.

5. Jin, Xiaoming. "A practical realization of parallel disks for a distributed parallel computing system". [Gainesville, Fla.]: University of Florida, 2000. http://etd.fcla.edu/etd/uf/2000/ane5954/master.PDF.
Abstract: Thesis (M.S.)--University of Florida, 2000. Title from first page of PDF file. Document formatted into pages; contains ix, 41 p.; also contains graphics. Vita. Includes bibliographical references (p. 39-40).

6. 馬家駒 and Ka-kui Ma. "Transparent process migration for parallel Java computing". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2001. http://hub.hku.hk/bib/B31226474.

7. Ma, Ka-kui. "Transparent process migration for parallel Java computing". Hong Kong: University of Hong Kong, 2001. http://sunzi.lib.hku.hk/hkuto/record.jsp?B23589371.

8. Dutta, Sourav. "Performance Estimation and Scheduling for Parallel Programs with Critical Sections". OpenSIUC, 2017. https://opensiuc.lib.siu.edu/dissertations/1353.
Abstract: A fundamental problem in multithreaded parallel programs is the partial serialization imposed by the presence of mutual exclusion variables, or critical sections. In this work we investigate a model in which each thread consists of the same number L of functional blocks, where each functional block has the same duration and either accesses a critical section or executes non-critical code. We derive formulas to estimate the average time spent in a critical section in the presence and in the absence of a synchronization barrier. We also develop, and establish the optimality of, a fast polynomial-time algorithm to find a schedule with the shortest makespan for any number of threads and any number of critical sections for the case L = 2. For the general case L > 2, which is NP-complete, we present a competitive heuristic and provide experimental comparisons with the ideal integer linear programming (ILP) formulation.

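The model in this abstract is easy to state operationally. The following Python sketch (my construction, not the dissertation's algorithm) simulates threads of L unit-duration blocks, where each block is either non-critical code or an access to a named critical section, and measures the makespan that the partial serialization produces under a simple greedy grant order:

    def makespan(threads):
        # threads: list of block sequences; a block is None (non-critical)
        # or a critical-section id. Every block takes one time unit.
        pc = [0] * len(threads)          # per-thread program counter
        time = 0
        while any(pc[i] < len(t) for i, t in enumerate(threads)):
            held = set()                 # sections granted this time step
            for i, t in enumerate(threads):
                if pc[i] >= len(t):
                    continue             # thread already finished
                block = t[pc[i]]
                if block is None:        # non-critical code always proceeds
                    pc[i] += 1
                elif block not in held:  # grant each section to one thread
                    held.add(block)
                    pc[i] += 1
                # otherwise the thread stalls for this time unit
            time += 1
        return time

    # Two threads, L = 2 blocks each; both need critical section "x" first.
    print(makespan([["x", None], ["x", None]]))  # -> 3, not 2

With no contention the two threads would finish in 2 time units; the shared section forces a makespan of 3, which is exactly the serialization effect the dissertation's scheduling algorithms try to minimise.
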
9. Winter, Stephen Charles. "A distributed reduction architecture for real-time computing". Thesis, University of Westminster, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.238722.

10. Valente, Fredy Joao. "An integrated parallel/distributed environment for high performance computing". Thesis, University of Southampton, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.362138.

Books on the topic "Distribute and Parallel Computing"

1. Hobbs, Michael, Andrzej M. Goscinski and Wanlei Zhou, eds. Distributed and Parallel Computing. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11564621.

2. Nagamalai, Dhinaharan, Eric Renault and Murugan Dhanuskodi, eds. Advances in Parallel Distributed Computing. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-24037-9.

3. Zomaya, Albert Y., ed. Parallel and distributed computing handbook. New York: McGraw-Hill, 1996.

4. Pan, Yi, and Laurence Tianruo Yang, eds. Applied parallel and distributed computing. New York: Nova Science Publishers, 2005.

5. Özgüner, Füsun, Fikret Erçal, North Atlantic Treaty Organization Scientific Affairs Division, and NATO Advanced Study Institute on Parallel Computing on Distributed Memory Multiprocessors (1991: Bilkent University), eds. Parallel computing on distributed memory multiprocessors. Berlin: Springer-Verlag, 1993.

6. Özgüner, Füsun, and Fikret Erçal, eds. Parallel Computing on Distributed Memory Multiprocessors. Berlin, Heidelberg: Springer Berlin Heidelberg, 1993. http://dx.doi.org/10.1007/978-3-642-58066-6.

7. Qi, Luo, ed. Parallel and Distributed Computing and Networks. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-22706-6.

8. Prasad, Sushil K., Anshul Gupta, Arnold Rosenberg, Alan Sussman and Charles Weems, eds. Topics in Parallel and Distributed Computing. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-93109-8.

9. Topping, B. H. V., and P. Iványi, eds. Parallel, Distributed and Grid Computing for Engineering. Stirlingshire, UK: Saxe-Coburg Publications, 2009. http://dx.doi.org/10.4203/csets.21.

10. Shen, Hong, Yingpeng Sang, Yong Zhang, Nong Xiao, Hamid R. Arabnia, Geoffrey Fox, Ajay Gupta and Manu Malek, eds. Parallel and Distributed Computing, Applications and Technologies. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-96772-7.

Book chapters on the topic "Distribute and Parallel Computing"

1. Torres, Jordi, Eduard Ayguadé, Jesús Labarta and Mateo Valero. "Align and distribute-based linear loop transformations". In Languages and Compilers for Parallel Computing, 321–39. Berlin, Heidelberg: Springer Berlin Heidelberg, 1994. http://dx.doi.org/10.1007/3-540-57659-2_19.

2. Fahringer, Thomas. "Tools for Parallel and Distributed Computing". In Parallel Computing, 81–115. London: Springer London, 2009. http://dx.doi.org/10.1007/978-1-84882-409-6_3.

3. Erciyes, K. "Parallel and Distributed Computing". In Computational Biology, 51–77. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-24966-7_4.

4. Hariri, S., and M. Parashar. "Parallel and Distributed Computing". In Tools and Environments for Parallel and Distributed Computing, 1–10. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2004. http://dx.doi.org/10.1002/0471474835.ch1.

5. Kim, Dongmin, and Salim Hariri. "Parallel and Distributed Computing Environment". In Virtual Computing, 13–23. Boston, MA: Springer US, 2001. http://dx.doi.org/10.1007/978-1-4615-1553-1_2.

6. Falsafi, Babak, Samuel Midkiff, Jack B. Dennis, Amol Ghoting, Roy H. Campbell, Christof Klausecker et al. "Distributed Computer". In Encyclopedia of Parallel Computing, 573. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-09766-4_2274.

7. Eberle, Hans. "Switcherland — A scalable interconnection structure for distributed computing". In Parallel Computation, 36–49. Berlin, Heidelberg: Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/3-540-61695-0_4.

8. Sunderam, Vaidy. "Virtualization in Parallel Distributed Computing". In Recent Advances in Parallel Virtual Machine and Message Passing Interface, 6. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11557265_4.

9. Schreiner, Wolfgang, Károly Bósa, Andreas Langegger, Thomas Leitner, Bernhard Moser, Szilárd Páll, Volkmar Wieser and Wolfram Wöß. "Parallel, Distributed, and Grid Computing". In Hagenberg Research, 333–78. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-02127-5_8.

10. Petitet, A., H. Casanova, J. Dongarra, Y. Robert and R. C. Whaley. "Parallel and Distributed Scientific Computing". In Handbook on Parallel and Distributed Processing, 464–504. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/978-3-662-04303-5_10.

Conference papers on the topic "Distribute and Parallel Computing"

1. Garland, Michael. "Parallel computing with CUDA". In Distributed Processing (IPDPS). IEEE, 2010. http://dx.doi.org/10.1109/ipdps.2010.5470378.

2. Doolan, Daniel, Sabin Tabirca and Laurence Yang. "Mobile Parallel Computing". In 2006 Fifth International Symposium on Parallel and Distributed Computing. IEEE, 2006. http://dx.doi.org/10.1109/ispdc.2006.33.

3. Tutsch, Dietmar. "Reconfigurable parallel computing". In 2010 1st International Conference on Parallel, Distributed and Grid Computing (PDGC 2010). IEEE, 2010. http://dx.doi.org/10.1109/pdgc.2010.5679961.

4. Crews, Thad. "Session details: Distributed/parallel computing". In SIGCSE04: Technical Symposium on Computer Science Education 2004. New York, NY, USA: ACM, 2004. http://dx.doi.org/10.1145/3244218.

5. Rashid, Zryan Najat, Subhi R. M. Zebari, Karzan Hussein Sharif and Karwan Jacksi. "Distributed Cloud Computing and Distributed Parallel Computing: A Review". In 2018 International Conference on Advanced Science and Engineering (ICOASE). IEEE, 2018. http://dx.doi.org/10.1109/icoase.2018.8548937.

6. Chou, Yu-Cheng, David Ko and Harry H. Cheng. "Mobile Agent Based Autonomic Dynamic Parallel Computing". In ASME 2009 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2009. http://dx.doi.org/10.1115/detc2009-87750.
Abstract: Parallel computing is widely adopted in scientific and engineering applications to enhance efficiency, and there is increasing research interest in utilizing distributed networked computers for parallel computing. The Message Passing Interface (MPI) standard was designed to support the portability and platform independence of a developed parallel program. However, the procedure for starting an MPI-based parallel computation among distributed computers lacks autonomicity and flexibility. This article presents an autonomic dynamic parallel computing framework that provides the autonomicity and flexibility that are important and necessary for some parallel computing applications involving resource-constrained and heterogeneous platforms. In this framework, an MPI parallel computing environment consisting of multiple computing entities is dynamically established through inter-agent communication using IEEE Foundation for Intelligent Physical Agents (FIPA) compliant Agent Communication Language (ACL) messages. For each computing entity in the MPI parallel computing environment, load-balanced MPI program C source code, along with the MPI environment configuration statements, is dynamically composed as mobile agent code. A mobile agent wrapping the mobile agent code is created and sent to the computing entity, where the mobile agent code is retrieved and interpretively executed. An example of autonomic parallel matrix multiplication is given to demonstrate the self-configuration and self-optimization properties of the presented framework.

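For flavour, here is a minimal load-balanced parallel matrix multiplication in the spirit of the paper's example, written as a static mpi4py sketch (my assumption for illustration; the paper's framework instead composes MPI C code dynamically and ships it to each node inside a mobile agent):

    # Run with e.g.: mpirun -n 4 python matmul.py  (assumes mpi4py and numpy)
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    n = 8  # matrix dimension; small for the example
    if rank == 0:
        A = np.random.rand(n, n)
        B = np.random.rand(n, n)
        rows = np.array_split(A, size)   # static load balancing by row blocks
    else:
        A, B, rows = None, None, None

    B = comm.bcast(B, root=0)            # every rank needs all of B
    my_rows = comm.scatter(rows, root=0) # each rank gets its block of A's rows
    C = comm.gather(my_rows @ B, root=0) # partial products back to the root

    if rank == 0:
        print(np.allclose(np.vstack(C), A @ B))  # True
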
7. Yu, Yuan, Pradeep Kumar Gunda and Michael Isard. "Distributed aggregation for data-parallel computing". In the ACM SIGOPS 22nd symposium. New York, New York, USA: ACM Press, 2009. http://dx.doi.org/10.1145/1629575.1629600.

8

« Session PD : Parallel & ; distributed computing ». Dans 2014 9th International Conference on Computer Engineering & Systems (ICCES). IEEE, 2014. http://dx.doi.org/10.1109/icces.2014.7030947.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
9. Foley, Samantha S., and Joshua Hursey. "OnRamp to parallel and distributed computing". In the Workshop. New York, New York, USA: ACM Press, 2015. http://dx.doi.org/10.1145/2831425.2831426.

10. Baker, Mark, Matthew Grove and Aamir Shafi. "Parallel and Distributed Computing with Java". In 2006 Fifth International Symposium on Parallel and Distributed Computing. IEEE, 2006. http://dx.doi.org/10.1109/ispdc.2006.38.

Reports of organizations on the topic "Distribute and Parallel Computing"

1. Kaplansky, I., and Richard M. Karp. Parallel and Distributed Computing. Fort Belvoir, VA: Defense Technical Information Center, December 1986. http://dx.doi.org/10.21236/ada182935.

2. Kaplansky, Irving, and Richard Karp. Parallel and Distributed Computing. Fort Belvoir, VA: Defense Technical Information Center, December 1986. http://dx.doi.org/10.21236/ada176477.

3. Leighton, Tom. Parallel and Distributed Computing Combinatorial Algorithms. Fort Belvoir, VA: Defense Technical Information Center, October 1993. http://dx.doi.org/10.21236/ada277333.

4. Nerode, Anil. Fellowship in Parallel and Distributed Computing. Fort Belvoir, VA: Defense Technical Information Center, July 1990. http://dx.doi.org/10.21236/ada225926.

5. Sunderam, V. PVM (Parallel Virtual Machine): A framework for parallel distributed computing. Office of Scientific and Technical Information (OSTI), January 1989. http://dx.doi.org/10.2172/5347567.

6. Chen, H., J. Hutchins and J. Brandt. Evaluation of DEC's GIGAswitch for distributed parallel computing. Office of Scientific and Technical Information (OSTI), October 1993. http://dx.doi.org/10.2172/10188486.

7. George, Alan D. Parallel and Distributed Computing Architectures and Algorithms for Fault-Tolerant Sonar Arrays. Fort Belvoir, VA: Defense Technical Information Center, January 1999. http://dx.doi.org/10.21236/ada359698.

8. Hariri, Salim. International ACM Symposium on High Performance Parallel and Distributed Computing Conference for 2017, 2018, and 2019. Office of Scientific and Technical Information (OSTI), January 2022. http://dx.doi.org/10.2172/1841180.

9. Smith, Bradley W. Distributed Computing for Signal Processing: Modeling of Asynchronous Parallel Computation. Appendix G. On the Design and Modeling of Special Purpose Parallel Processing Systems. Fort Belvoir, VA: Defense Technical Information Center, May 1985. http://dx.doi.org/10.21236/ada167622.

10. Pratt, T. J., L. G. Martinez, M. O. Vahle, T. V. Archuleta and V. K. Williams. Sandia's network for SC '97: Supporting visualization, distributed cluster computing, and production data networking with a wide area high performance parallel asynchronous transfer mode (ATM) network. Office of Scientific and Technical Information (OSTI), May 1998. http://dx.doi.org/10.2172/658446.