To see the other types of publications on this topic, follow the link: Distribute and Parallel Computing.

Journal articles on the topic "Distribute and Parallel Computing"

Create an accurate reference in APA, MLA, Chicago, Harvard, and several other styles

Choose a source:

Consult the top 50 journal articles for your research on the topic "Distribute and Parallel Computing".

Next to every source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference to the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever this information is included in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Anjum, Asma, and Asma Parveen. "Optimized load balancing mechanism in parallel computing for workflow in cloud computing environment." International Journal of Reconfigurable and Embedded Systems (IJRES) 12, no. 2 (2023): 276–86. https://doi.org/10.11591/ijres.v12.i2.pp276-286.

Abstract:
Cloud computing gives on-demand access to computing resources in metered and powerfully adapted way; it empowers the client to get access to fast and flexible resources through virtualization and widely adaptable for various applications. Further, to provide assurance of productive computation, scheduling of task is very much important in cloud infrastructure environment. Moreover, the main aim of task execution phenomena is to reduce the execution time and reserve infrastructure; further, considering huge application, workflow scheduling has drawn fine attention in business as well as scienti
2

Chang, Furong, Hao Guo, Farhan Ullah, Haochen Wang, Yue Zhao, and Haitian Zhang. "Near-Data Source Graph Partitioning." Electronics 13, no. 22 (2024): 4455. http://dx.doi.org/10.3390/electronics13224455.

Abstract:
Recently, numerous graph partitioning approaches have been proposed to distribute a big graph to machines in a cluster for distributed computing. Due to heavy communication overhead, these graph partitioning approaches always suffered from long ingress times. Also, heavy communication overhead not only limits the scalability of distributed graph-parallel computing platforms but also reduces the overall performance of clusters. In order to address this problem, this work proposed a near-data source parallel graph partitioning approach noted as NDGP. In NDGP, an edge was preferentially distribut
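
The entry above describes distributing a big graph's edges across cluster machines while limiting communication. As a rough, hypothetical illustration of the general idea only (a greedy edge partitioner, not the NDGP algorithm from the cited paper), here is a short Python sketch:

```python
from collections import defaultdict

def greedy_edge_partition(edges, num_parts):
    """Toy greedy edge partitioner (illustrative only, not the paper's NDGP):
    each edge goes to a partition that already holds one of its endpoints,
    with ties broken by the current load."""
    load = [0] * num_parts                      # edges assigned per partition
    replicas = defaultdict(set)                 # vertex -> partitions that hold it
    assignment = {}
    for u, v in edges:
        candidates = replicas[u] | replicas[v]  # partitions already knowing u or v
        if not candidates:
            candidates = set(range(num_parts))
        part = min(candidates, key=lambda p: load[p])
        assignment[(u, v)] = part
        load[part] += 1
        replicas[u].add(part)
        replicas[v].add(part)
    return assignment

if __name__ == "__main__":
    edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4)]
    print(greedy_edge_partition(edges, num_parts=2))
```

Keeping an edge on a partition that already replicates one of its endpoints is what reduces cross-machine traffic; NDGP additionally exploits locality at the data source, which this toy ignores.
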
3

Sakariya, Harsh Bipinbhai, and Ganesh D. "Taxonomy of Load Balancing Strategies in Distributed Systems." International Journal of Innovative Research in Computer and Communication Engineering 12, no. 03 (2024): 1796–802. http://dx.doi.org/10.15680/ijircce.2024.1203070.

Abstract:
Large-scale parallel and distributed computing systems are becoming more popular as a result of falling hardware prices and improvements in computer networking technologies. Improved performance and resource sharing are potential benefits of distributed computing systems. We have provided a summary of distributed computing in this essay. The differences between parallel and distributed computing, terms related to distributed computing, task distribution in distributed computing, performance metrics in distributed computing systems, parallel distributed algorithm models, benefits of distributed
4

Nanuru Yagamurthy, Deepak, and Rajesh Azmeera. "Advances and Challenges in Parallel and Distributed Computing." International Journal of Science and Research (IJSR) 8, no. 1 (2019): 2262–66. http://dx.doi.org/10.21275/sr24517152409.

5

Anjum, Asma, and Asma Parveen. "Optimized load balancing mechanism in parallel computing for workflow in cloud computing environment." International Journal of Reconfigurable and Embedded Systems (IJRES) 12, no. 2 (2023): 276. http://dx.doi.org/10.11591/ijres.v12.i2.pp276-286.

Abstract:
Cloud computing gives on-demand access to computing resources in metered and powerfully adapted way; it empowers the client to get access to fast and flexible resources through virtualization and widely adaptable for various applications. Further, to provide assurance of productive computation, scheduling of task is very much important in cloud infrastructure environment. Moreover, the main aim of task execution phenomena is to reduce the execution time and reserve infrastructure; further, considering huge application, workflow scheduling has drawn fine attention in business as well as scienti
6

Sun, Qi, and Hui Yan Zhao. "Design of Distribute Monitoring Platform Base on Cloud Computing." Applied Mechanics and Materials 687-691 (November 2014): 1076–79. http://dx.doi.org/10.4028/www.scientific.net/amm.687-691.1076.

Abstract:
Compared with traditional measurement infrastructure, a distributed network measurement system based on cloud computing stores massive measurement data in a large virtual resource pool to ensure reliable and scalable data storage, and reuses the cloud platform's parallel processing mechanism for fast, concurrent analytical processing and data mining of the mass measurement data. The measuring probe supports a variety of different measurement algorithms and a variety of data acquisition formats; the measurement method provides a congestion
7

Umar, A. "Distributed And Parallel Computing." IEEE Concurrency 6, no. 4 (1998): 80–81. http://dx.doi.org/10.1109/mcc.1998.736439.

8

Ramsay, A. "Distributed versus parallel computing." Artificial Intelligence Review 1, no. 1 (1986): 11–25. http://dx.doi.org/10.1007/bf01988525.

9

Wismüller, Roland. "Parallel and distributed computing." Software Focus 2, no. 3 (2001): 124. http://dx.doi.org/10.1002/swf.44.

10

Sewaiwar, Aanchal, and Utkarsh Sharma. "Grid scheduling: Comparative study of MACO & TABU search." COMPUSOFT: An International Journal of Advanced Computer Technology 03, no. 06 (2014): 825–30. https://doi.org/10.5281/zenodo.14742548.

Abstract:
Grid computing is progressively considered as a next-generation computational platform that supports wide-area parallel and distributed computing. Scheduling jobs to resources in grid computing is difficult due to the distributed and heterogeneous nature of the resources. In grid computing, finding optimal schedules for such an environment is (in general) an NP-hard problem, and so heuristic techniques must be used. The aim of grid task scheduling is to achieve high system throughput and to distribute various computing resources to applications. Many different algorithms have been proposed to solv
11

Gao, Tie Liang, Jiao Li, Jun Peng Zhang, and Bing Jie Shi. "The Research of MapReduce on the Cloud Computing." Applied Mechanics and Materials 182-183 (June 2012): 2127–30. http://dx.doi.org/10.4028/www.scientific.net/amm.182-183.2127.

Abstract:
MapReduce is a programming model used for parallel computing over large-scale data collections in cloud computing [1]; it mainly consists of map and reduce. MapReduce is tremendously convenient for programmers who are not familiar with parallel programming; they use MapReduce to run their programs on a distributed system. This paper mainly studies the model, process, and theory of MapReduce.
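
Since the abstract above explains that MapReduce programs consist of a map phase and a reduce phase run over a distributed system, a minimal word-count sketch in Python may help; it uses local worker processes in place of cluster nodes and is only an illustration of the model, not code from the cited paper:

```python
from collections import Counter
from multiprocessing import Pool

def map_phase(chunk):
    """Map: emit per-word counts for one chunk of the input."""
    return Counter(chunk.split())

def reduce_phase(partials):
    """Reduce: merge the per-chunk counts into a single result."""
    total = Counter()
    for partial in partials:
        total.update(partial)
    return total

if __name__ == "__main__":
    chunks = ["map reduce map", "reduce cloud map", "cloud cloud reduce"]
    with Pool(processes=3) as pool:       # workers stand in for cluster nodes
        partials = pool.map(map_phase, chunks)
    print(reduce_phase(partials))         # total counts: map=3, reduce=3, cloud=3
```
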
12

Egorov, Alexander, Natalya Krupenina, and Lyubov Tyndykar. "The parallel approach to issue of operational management optimization problem on transport gateway system." E3S Web of Conferences 203 (2020): 05003. http://dx.doi.org/10.1051/e3sconf/202020305003.

Abstract:
A universal parallelization software shell for joint data processing, implemented in combination with a distributed computing system, is considered. The research purpose is to find the most effective solution for organizing the management information system of a navigable canal. One optimization option is to increase computing power by combining computing devices into a single computing cluster. It is proposed to adapt the task of optimizing the management of a locked shipping channel for execution in a multi-threaded environment, with constraints given by a technologically feasible schedule. The article shows
13

Mittapally, Ambikatai Vamanrao. "Balancing Load Using Parallel Computing and Distributed Computing." International Journal of Engineering Sciences & Research Technology 7, no. 4 (2018): 77–81. https://doi.org/10.5281/zenodo.1215396.

Abstract:
In computing, load balancing improves the distribution of workloads across multiple computing resources, such as computers, a computer cluster, network links, central processing units, or disk drives. Load balancing aims to optimize resource use, maximize throughput, minimize response time, and avoid overload of any single resource. Parallel computing is a type of computation in which many calculations or the execution of processes are carried out simultaneously. Large problems can often be divided into smaller
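
As a hedged illustration of the workload-distribution idea described above (a simple greedy "least-loaded worker" heuristic, not the optimized mechanism proposed in the cited paper), one might write:

```python
import heapq

def least_loaded_assign(task_costs, num_workers):
    """Toy static load balancer: give each task, largest first, to the
    worker that currently has the smallest total load."""
    heap = [(0.0, w) for w in range(num_workers)]        # (load, worker id)
    heapq.heapify(heap)
    plan = {w: [] for w in range(num_workers)}
    for task, cost in sorted(task_costs.items(), key=lambda kv: -kv[1]):
        load, w = heapq.heappop(heap)                     # least-loaded worker
        plan[w].append(task)
        heapq.heappush(heap, (load + cost, w))
    return plan

if __name__ == "__main__":
    costs = {"t1": 5, "t2": 3, "t3": 8, "t4": 2, "t5": 4}
    print(least_loaded_assign(costs, num_workers=2))
```
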
14

Stankovic. "Introduction—Parallel and Distributed Computing." IEEE Transactions on Computers C-36, no. 4 (1987): 385–86. http://dx.doi.org/10.1109/tc.1987.1676919.

15

Sunderam, V. S., and G. A. Geist. "Heterogeneous parallel and distributed computing." Parallel Computing 25, no. 13-14 (1999): 1699–721. http://dx.doi.org/10.1016/s0167-8191(99)00088-5.

16

Dalcin, Lisandro D., Rodrigo R. Paz, Pablo A. Kler, and Alejandro Cosimo. "Parallel distributed computing using Python." Advances in Water Resources 34, no. 9 (2011): 1124–39. http://dx.doi.org/10.1016/j.advwatres.2011.04.013.

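
The cited paper discusses Python tools for parallel distributed computing such as mpi4py; a minimal sketch of the message-passing style it covers, assuming the mpi4py package and an MPI runtime are installed, could look like this:

```python
# Requires an MPI implementation plus mpi4py; run with e.g.:
#   mpiexec -n 4 python sum_ranks.py        (file name is just an example)
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# each process contributes its own partial value ...
local = rank + 1
# ... and a collective reduction gathers the global sum on rank 0
total = comm.reduce(local, op=MPI.SUM, root=0)

if rank == 0:
    print(f"sum of 1..{size} computed by {size} processes:", total)
```
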
17

Myint, Khin Nyein, Myo Hein Zaw, and Win Thanda Aung. "Parallel and Distributed Computing Using MPI on Raspberry Pi Cluster." International Journal of Future Computer and Communication 9, no. 1 (2020): 18–22. http://dx.doi.org/10.18178/ijfcc.2020.9.1.559.

18

Mukaddes, A. M. M., and Ryuji Shioya. "Parallel Performance of Domain Decomposition Method on Distributed Computing Environment." International Journal of Engineering and Technology 2, no. 1 (2010): 28–34. http://dx.doi.org/10.7763/ijet.2010.v2.95.

19

Zhao, Long Fang, Di Bo Hou, Ping Jie Huang, Xiao Xuan Song, Lin Xu, and Guang Xin Zhang. "Parallel Computing for Water Quality Equation Based on JPPF." Applied Mechanics and Materials 295-298 (February 2013): 2061–68. http://dx.doi.org/10.4028/www.scientific.net/amm.295-298.2061.

Abstract:
In order to meet the requirement to quickly solve the water quality equation for an unexpected water pollution incident, this work, based on a parallel algorithm under the Java Parallel Processing Framework (JPPF), studies how to dynamically decompose the iterative process into calculation tasks and distribute the tasks through the JPPF API to the parallel nodes for calculation. The simulation of a one-dimensional water quality equation shows that the parallel computing method could reduce the time complexity, not only resulting in a significant improve
20

Wolfer, James. "Embedding Topical Elements of Parallel Programming, Computer Graphics, and Artificial Intelligence across the Undergraduate CS Required Courses." International Journal of Engineering Pedagogy (iJEP) 5, no. 1 (2015): 27. http://dx.doi.org/10.3991/ijep.v5i1.4090.

Abstract:
Traditionally, topics such as parallel computing, computer graphics, and artificial intelligence have been taught as stand-alone courses in the computing curriculum. Often these are elective courses, limiting the material to the subset of students choosing to take the course. Recently there has been movement to distribute topics across the curriculum in order to ensure that all graduates have been exposed to concepts such as parallel computing. Previous work described an attempt to systematically weave a tapestry of topics into the undergraduate computing curriculum. This paper reviews that wo
21

Scheinine, Alan Louis. "Parallel Computing at CRS4." International Journal of Modern Physics C 04, no. 06 (1993): 1315–21. http://dx.doi.org/10.1142/s0129183193001038.

Abstract:
An overview is given of parallel computing work being done at CRS4 (Centro di Ricerca, Sviluppo e Studi Superiori in Sardegna). Parallel computation projects include: parallelization of a simulation of the interaction of high energy particles with matter (GEANT), domain decomposition for numerical solution of partial differential equations, seismic migration for oil prospecting, finite-element structural analysis, parallel molecular dynamics, a C++ library for distributed processing of specific functions, and real-time visualization of a computer simulation that runs as distributed processes.
22

Lastovetsky, Alexey. "Special issue of Journal of Parallel and Distributed Computing: Heterogeneity in parallel and distributed computing." Journal of Parallel and Distributed Computing 72, no. 10 (2012): 1397. http://dx.doi.org/10.1016/j.jpdc.2012.06.002.

23

Ray, Soumya, and Ajanta De Sarkar. "Execution Analysis of Load Balancing Algorithms in Cloud Computing Environment." International Journal on Cloud Computing: Services and Architecture (IJCCSA) 2 (October 2018): 1–13. https://doi.org/10.5281/zenodo.1437775.

Abstract:
The concept of Cloud computing has significantly changed the field of parallel and distributed computing systems today. Cloud computing enables a wide range of users to access distributed, scalable, virtualized hardware and/or software infrastructure over the Internet. Load balancing is a methodology to distribute workload across multiple computers, or other resources over the network links, to achieve optimal resource utilization, maximize throughput, minimize response time, and avoid overload. With the recent advent of technology, resource control or load balancing in cloud computing is a main chal
24

Pan, Lei, Ming Kin Lai, Koji Noguchi, Javid J. Huseynov, Lubomir F. Bic, and Michael B. Dillencourt. "Distributed Parallel Computing Using Navigational Programming." International Journal of Parallel Programming 32, no. 1 (2004): 1–37. http://dx.doi.org/10.1023/b:ijpp.0000015563.36375.17.

25

Lastovetsky, Alexey. "Heterogeneity in parallel and distributed computing." Journal of Parallel and Distributed Computing 73, no. 12 (2013): 1523–24. http://dx.doi.org/10.1016/j.jpdc.2013.08.010.

26

Hertzberger, L. O. "Trends in parallel and distributed computing." Future Generation Computer Systems 7, no. 1 (1991): 31–40. http://dx.doi.org/10.1016/0167-739x(91)90014-o.

27

Ozturan, Can, and Dan Grigoras. "Guest Editorial: Parallel and Distributed Computing." International Journal of Parallel Programming 39, no. 5 (2011): 582–83. http://dx.doi.org/10.1007/s10766-011-0166-5.

28

Szymczyk, Magdalena, and Piotr Szymczyk. "Matlab and Parallel Computing." Image Processing & Communications 17, no. 4 (2012): 207–16. http://dx.doi.org/10.2478/v10248-012-0048-5.

Abstract:
MATLAB is a technical computing language used in a variety of fields, such as control systems, image and signal processing, visualization, and financial process simulation, in an easy-to-use environment. MATLAB offers "toolboxes", which are specialized libraries for a variety of scientific domains, and a simplified interface to high-performance libraries (LAPACK, BLAS, FFTW). Now MATLAB is enriched by the possibility of parallel computing with the Parallel Computing Toolbox and the MATLAB Distributed Computing Server. In this article we present some of the key features of MATLAB paralle
29

Vijayaraj, M., R. Malar Vizhi, P. Chandrakala, Laith H. Alzubaidi, Khasanov Muzaffar, and R. Senthilkumar. "Parallel and Distributed Computing for High-Performance Applications." E3S Web of Conferences 399 (2023): 04039. http://dx.doi.org/10.1051/e3sconf/202339904039.

Abstract:
The study of parallel and distributed computing has become an important area in computer science because it makes it possible to create high-performance software that can effectively handle challenging computational tasks. In terms of their use in the world of high-performance applications, parallel and distributed computing techniques are given a thorough introduction in this study. The partitioning of computational processes into smaller subtasks that may be completed concurrently on numerous processors or computers is the core idea underpinning parallel and distributed computing. This strat
30

Shang, Zhi. "Large-Scale CFD Parallel Computing Dealing with Massive Mesh." Journal of Engineering 2013 (2013): 1–6. http://dx.doi.org/10.1155/2013/850148.

Abstract:
In order to run CFD codes more efficiently on large scales, the parallel computing has to be employed. For example, in industrial scales, it usually uses tens of thousands of mesh cells to capture the details of complex geometries. How to distribute these mesh cells among the multiprocessors for obtaining a good parallel computing performance (HPC) is really a challenge. Due to dealing with the massive mesh cells, it is difficult for the CFD codes without parallel optimizations to handle this kind of large-scale computing. Some of the open source mesh partitioning software packages, such as Me
31

Malathi, K., R. Anandan, and J. Frank Vijay. "Cloud Environment Task Scheduling Optimization of Modified Genetic Algorithm." Journal of Internet Services and Information Security 13, no. 1 (2023): 34–43. http://dx.doi.org/10.58346/jisis.2023.i1.004.

Abstract:
From the availability of resources to the accomplishment of tasks, cloud computing is a development of supercomputing. One of the most trustworthy paradigms in computing technology is built on internet-based parallel and distributed computing models. Optimization algorithms can be used to distribute user workloads to provided logical resources termed 'Virtual Machines' (VMs) in the cloud computing system, which is a major aspect of resource management. A fundamental challenge in cloud computing is the dynamic heterogeneity of resources and workloads, which necessitates efficient task scheduling
32

Dai, Fei, Md Akbar Hossain, and Yi Wang. "State of the Art in Parallel and Distributed Systems: Emerging Trends and Challenges." Electronics 14, no. 4 (2025): 677. https://doi.org/10.3390/electronics14040677.

Abstract:
Driven by rapid advancements in interconnection, packaging, integration, and computing technologies, parallel and distributed systems have significantly evolved in recent years. These systems have become essential for addressing modern computational demands, offering enhanced processing power, scalability, and resource efficiency. This paper provides a comprehensive overview of parallel and distributed systems, exploring their interrelationships, their key distinctions, and the emerging trends shaping their evolution. We analyse four parallel computing paradigms—heterogeneous computing, quantu
33

Spahi, Enis, and D. Altilar. "ITU-PRP: Parallel and Distributed Computing Middleware for Java Developers." International Journal of Business & Technology 3, no. 1 (2014): 2–13. http://dx.doi.org/10.33107/ijbte.2014.3.1.01.

Abstract:
ITU-PRP provides a parallel programming framework for Java developers, with which they can adapt their sequential application code to operate in a distributed multi-host parallel environment. Developers implement parallel models, such as Loop Parallelism, Divide and Conquer, Master-Slave and Fork-Join, with the help of an API library provided by the framework. The resulting parallel applications are submitted to a middleware called the Parallel Running Platform (PRP), on which parallel resources are organized and parallel processing is performed. The middleware creates Task Plans (TP) accord
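
The abstract above lists Divide and Conquer and Fork-Join among the supported parallel models. ITU-PRP itself is a Java framework; purely as an illustration of the divide-and-conquer pattern (not the ITU-PRP API), here is a small Python sketch:

```python
from concurrent.futures import ProcessPoolExecutor

def merge(left, right):
    """Merge two sorted lists (the 'combine' step)."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

def parallel_merge_sort(data, workers=4):
    """Divide: split the input; conquer: sort the pieces in parallel
    worker processes; combine: merge the sorted runs."""
    chunk = max(1, len(data) // workers)
    pieces = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        sorted_pieces = list(pool.map(sorted, pieces))
    result = []
    for piece in sorted_pieces:
        result = merge(result, piece)
    return result

if __name__ == "__main__":
    import random
    data = [random.randint(0, 99) for _ in range(20)]
    print(parallel_merge_sort(data) == sorted(data))   # True
```
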
34

Chi, Song. "Application of MATLAB Parallel Programming Technology." Applied Mechanics and Materials 602-605 (August 2014): 3787–90. http://dx.doi.org/10.4028/www.scientific.net/amm.602-605.3787.

Abstract:
The technology and process of parallel application development are analyzed based on the MATLAB parallel and distributed computing toolboxes. Serial and parallel computing are then compared through computer simulations, and a method for designing and developing parallel computing programs is proposed. The simulation results show that parallel computing technology has many advantages for computationally intensive calculations and that it is convenient to develop parallel applications using MATLAB.
35

Gowanlock, Michael. "Teaching parallel and distributed computing using data-intensive computing modules." Journal of Parallel and Distributed Computing 202 (August 2025): 105093. https://doi.org/10.1016/j.jpdc.2025.105093.

36

Siwik, Leszek, Dawid Kala, Mateusz Godzik, Wojciech Turek, Aleksander Byrski, and Marek Kisiel-Dorohinicki. "Mobile Cloud for Parallel and Distributed Green Computing." Journal of Telecommunications and Information Technology 4 (December 20, 2017): 60–70. http://dx.doi.org/10.26636/jtit.2017.111817.

Abstract:
Mobile Computing and Mobile Cloud Computing are the areas where intensive research is observed. The “mobility” landscape (devices, technologies, apps, etc.) evolves so fast that definitions and taxonomies do not catch up with so dynamic changes and there is still an ambiguity in definitions and common understanding of basic ideas and models. This research focuses on Mobile Cloud understood as parallel and distributed system consisting of a collection of interconnected (and virtualized) mobile devices dynamically provisioned and presented as one unified computing resource. This paper focuses on th
37

Yang, Ning, Shiaaulir Wang, and Paul Schonfeld. "Simulation-Based Scheduling of Waterway Projects Using a Parallel Genetic Algorithm." International Journal of Operations Research and Information Systems 6, no. 1 (2015): 49–63. http://dx.doi.org/10.4018/ijoris.2015010104.

Abstract:
A Parallel Genetic Algorithm (PGA) is used for a simulation-based optimization of waterway project schedules. This PGA is designed to distribute a Genetic Algorithm application over multiple processors in order to speed up the solution search procedure for a very large combinational problem. The proposed PGA is based on a global parallel model, which is also called a master-slave model. A Message-Passing Interface (MPI) is used in developing the parallel computing program. A case study is presented, whose results show how the adaption of a simulation-based optimization algorithm to parallel co
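
To make the master-slave (global parallel) GA model described above concrete, here is a hypothetical Python sketch in which the master farms out fitness evaluations to worker processes; the cited work uses MPI and a waterway traffic simulation, for which the toy fitness function below is only a stand-in:

```python
import random
from multiprocessing import Pool

def fitness(schedule):
    """Stand-in cost of a project schedule (hypothetical; the paper
    evaluates schedules with a waterway traffic simulation)."""
    return sum(x * (i + 1) for i, x in enumerate(schedule))

def evaluate_population(pop, workers=4):
    """Master-slave step: fitness evaluations, which dominate runtime,
    are distributed to worker processes and gathered by the master."""
    with Pool(processes=workers) as pool:
        return pool.map(fitness, pop)

if __name__ == "__main__":
    random.seed(0)
    population = [[random.randint(0, 9) for _ in range(6)] for _ in range(20)]
    scores = evaluate_population(population)
    best = population[scores.index(min(scores))]
    print("best schedule:", best, "cost:", min(scores))
```
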
38

Sitsylitsyn, Y. O., V. V. Osadchyi, V. S. Kruglyk, and O. H. Kuzminska. "Modeling training content for software engineers in parallel computing." Journal of Physics: Conference Series 2611, no. 1 (2023): 012017. http://dx.doi.org/10.1088/1742-6596/2611/1/012017.

Abstract:
This study proposes a robust framework for the training of software engineers specializing in parallel computing. We first curated essential content for parallel computing education based on international standards and evolving recommendations from Computing Curricula. We then systematically structured the content and designed a well-defined learning pathway for aspiring software engineers. Concurrently, we conducted a comprehensive assessment of the current state of training for parallel computing in Ukrainian higher education institutions. We analyzed bachelor’s programs in Informat
39

Chard, Kyle, James Muns, Richard Wai, and S. Tucker Taft. "Language support for parallel and distributed computing." ACM SIGAda Ada Letters 40, no. 2 (2021): 51–54. http://dx.doi.org/10.1145/3463478.3463481.

Abstract:
Language constructs that support parallel computing are relatively well recognized at this point, with features such as parallel loops (optionally with reduction operators), divide-and-conquer parallelism, and general parallel blocks. But what language features would make distributed computing safer and more productive? Is it helpful to be able to specify on what node a computation should take place, and on what node data should reside, or is that overspecification? We don't normally expect a user of a parallel programming language to specify what core is used for a given iteration of a loop,
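
As a small illustration of the "parallel loop with a reduction operator" construct mentioned above (written in Python with a process pool rather than in the language features the article surveys), one might write:

```python
from concurrent.futures import ProcessPoolExecutor
from functools import reduce
import operator

def term(i):
    """Loop body: one independent iteration's contribution."""
    return i * i

if __name__ == "__main__":
    n = 10_000
    # parallel loop over the iteration space ...
    with ProcessPoolExecutor() as pool:
        partials = pool.map(term, range(n), chunksize=1_000)
    # ... followed by a reduction with the '+' operator
    total = reduce(operator.add, partials, 0)
    print(total == sum(i * i for i in range(n)))   # True
```
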
40

Zhou, Xu, Kai Lu, Xiaoping Wang, et al. "Deterministic Message Passing for Distributed Parallel Computing." IEICE Transactions on Information and Systems E96.D, no. 5 (2013): 1068–77. http://dx.doi.org/10.1587/transinf.e96.d.1068.

41

Kopysov, S. P., A. B. Ponomarev, and V. N. Rychkov. "Model computation mesh for parallel distributed computing." Vestnik Udmurtskogo Universiteta. Matematika. Mekhanika. Komp'yuternye Nauki, no. 2 (April 2008): 194–96. http://dx.doi.org/10.20537/vm080264.

42

Yoshikawa, T., and H. Matsuoka. "Optical interconnections for parallel and distributed computing." Proceedings of the IEEE 88, no. 6 (2000): 849–55. http://dx.doi.org/10.1109/5.867698.

43

Ranganathan, N. "Parallel and Distributed Computing Handbook [Book Review]." IEEE Concurrency 5, no. 3 (1997): 74–75. http://dx.doi.org/10.1109/mcc.1997.605922.

44

Zomaya, A. Y., T. El-Ghazawi, and O. Frieder. "Parallel and distributed computing for data mining." IEEE Concurrency 7, no. 4 (1999): 11–13. http://dx.doi.org/10.1109/mcc.1999.806974.

45

Yang, Laurence T., Yi Pan, and Minyi Guo. "Parallel and distributed scientific and engineering computing." Parallel Computing 29, no. 11-12 (2003): 1505–8. http://dx.doi.org/10.1016/j.parco.2003.08.001.

46

D’Ambra, Pasqua, Marco Danelutto, and Daniela di Serafino. "Advanced environments for parallel and distributed computing." Parallel Computing 28, no. 12 (2002): 1635–36. http://dx.doi.org/10.1016/s0167-8191(02)00183-7.

47

Cap, Clemens H., and Volker Strumpen. "Efficient parallel computing in distributed workstation environments." Parallel Computing 19, no. 11 (1993): 1221–34. http://dx.doi.org/10.1016/0167-8191(93)90028-j.

48

Iványi, P., J. Kruis, T. Kozubek, W. Gentzsch, and B. H. V. Topping. "Civil-Comp: Parallel, Distributed and Cloud Computing." Advances in Engineering Software 103 (January 2017): 12. http://dx.doi.org/10.1016/j.advengsoft.2016.11.002.

49

Sunderam, V. S. "PVM: A framework for parallel distributed computing." Concurrency: Practice and Experience 2, no. 4 (1990): 315–39. http://dx.doi.org/10.1002/cpe.4330020404.

50

Pan, Linqiang, and Tseren-Onolt Ishdorj. "P Systems with Active Membranes and Separation Rules." JUCS - Journal of Universal Computer Science 10, no. 5 (2004): 630–49. https://doi.org/10.3217/jucs-010-05-0630.

Abstract:
The P systems are a class of distributed parallel computing devices of a biochemical type. In this paper, a new definition of separation rules in P systems with active membranes is given. Under the new definition, the efficiency and universality of P systems with active membranes and separation rules instead of division rules are investigated.