To see other types of publications on this topic, follow the link: Distribute and Parallel Computing.

Journal articles on the topic "Distribute and Parallel Computing"

Create a correct reference in APA, MLA, Chicago, Harvard, and several other styles

Choose a source:

Consult the top 50 journal articles for your research on the topic "Distribute and Parallel Computing".

Next to each source in the reference list there is an "Add to bibliography" button. Click this button, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication in PDF format and read its abstract online, when this information is included in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Umar, A. "Distributed and Parallel Computing." IEEE Concurrency 6, no. 4 (October 1998): 80–81. http://dx.doi.org/10.1109/mcc.1998.736439.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
2

Ramsay, A. "Distributed versus parallel computing." Artificial Intelligence Review 1, no. 1 (March 1986): 11–25. http://dx.doi.org/10.1007/bf01988525.
3

Wismüller, Roland. "Parallel and distributed computing." Software Focus 2, no. 3 (September 2001): 124. http://dx.doi.org/10.1002/swf.44.
4

Sun, Qi, and Hui Yan Zhao. "Design of Distribute Monitoring Platform Base on Cloud Computing." Applied Mechanics and Materials 687-691 (November 2014): 1076–79. http://dx.doi.org/10.4028/www.scientific.net/amm.687-691.1076.

Abstract:
Compared with traditional measurement infrastructure, a distributed network measurement system based on cloud computing stores massive measurement data in a large virtual resource pool, ensuring the reliability and scalability of data storage, and reuses the cloud platform's parallel processing mechanism for fast, concurrent analysis and data mining of the mass of measurement data. The measurement probes support the deployment of a variety of measurement algorithms and a variety of data acquisition formats, and the measurement method provides congestion response policies and load balancing strategies.
5

Gao, Tie Liang, Jiao Li, Jun Peng Zhang, and Bing Jie Shi. "The Research of MapReduce on the Cloud Computing." Applied Mechanics and Materials 182-183 (June 2012): 2127–30. http://dx.doi.org/10.4028/www.scientific.net/amm.182-183.2127.

Abstract:
MapReduce is a programming model used for parallel computing over large-scale data sets in cloud computing [1]; it mainly consists of a map phase and a reduce phase. MapReduce is tremendously convenient for programmers who are not familiar with parallel programming; they can use it to run their programs on a distributed system. This paper mainly studies the model, process, and theory of MapReduce.
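The map/reduce decomposition the abstract describes can be sketched in miniature with Python's standard library. This is an illustrative single-machine analogue, not the paper's distributed implementation; the word-count task and document list are assumptions chosen for brevity:

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    # Map: emit a (key, value) pair for every word occurrence.
    return [(word, 1) for word in document.split()]

def reduce_phase(pairs):
    # Shuffle: group values by key; Reduce: sum each group's counts.
    groups = defaultdict(int)
    for word, count in pairs:
        groups[word] += count
    return dict(groups)

documents = ["big data on the cloud", "parallel computing on the cloud"]
# In a real deployment each document would be mapped on a different node;
# here the map calls simply run one after another.
counts = reduce_phase(chain.from_iterable(map_phase(d) for d in documents))
```

Because each map call touches only its own document and the reduce only sees (key, value) pairs, the map work can be scattered across machines and the pairs regrouped by key afterwards, which is exactly what distributed MapReduce runtimes automate.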
6

Egorov, Alexander, Natalya Krupenina, and Lyubov Tyndykar. "The parallel approach to issue of operational management optimization problem on transport gateway system." E3S Web of Conferences 203 (2020): 05003. http://dx.doi.org/10.1051/e3sconf/202020305003.

Abstract:
A universal parallelization software shell for joint data processing, implemented in combination with a distributed computing system, is considered. The purpose of the research is to find the most effective solution for organizing the information system that manages a navigable canal. One optimization option is to increase computing power by combining computing devices into a single cluster. The task of optimizing the management of a locked shipping channel is adapted for execution in a multi-threaded environment, with constraints imposed by a technologically feasible schedule. The article presents algorithms and gives recommendations for applying them when forming subtasks for parallel processing, as well as on a separate thread. The proposed approach to building a tree of options makes it possible to distribute the load optimally among all resources of a multi-threaded system of any structure.
7

Myint, Khin Nyein, Myo Hein Zaw, and Win Thanda Aung. "Parallel and Distributed Computing Using MPI on Raspberry Pi Cluster." International Journal of Future Computer and Communication 9, no. 1 (March 2020): 18–22. http://dx.doi.org/10.18178/ijfcc.2020.9.1.559.
8

Mukaddes, A. M. M., and Ryuji Shioya. "Parallel Performance of Domain Decomposition Method on Distributed Computing Environment." International Journal of Engineering and Technology 2, no. 1 (2010): 28–34. http://dx.doi.org/10.7763/ijet.2010.v2.95.
9

Stankovic. "Introduction—Parallel and Distributed Computing." IEEE Transactions on Computers C-36, no. 4 (April 1987): 385–86. http://dx.doi.org/10.1109/tc.1987.1676919.
10

Sunderam, V. S., and G. A. Geist. "Heterogeneous parallel and distributed computing." Parallel Computing 25, no. 13-14 (December 1999): 1699–721. http://dx.doi.org/10.1016/s0167-8191(99)00088-5.
11

Dalcin, Lisandro D., Rodrigo R. Paz, Pablo A. Kler, and Alejandro Cosimo. "Parallel distributed computing using Python." Advances in Water Resources 34, no. 9 (September 2011): 1124–39. http://dx.doi.org/10.1016/j.advwatres.2011.04.013.
12

Zhao, Long Fang, Di Bo Hou, Ping Jie Huang, Xiao Xuan Song, Lin Xu, and Guang Xin Zhang. "Parallel Computing for Water Quality Equation Based on JPPF." Applied Mechanics and Materials 295-298 (February 2013): 2061–68. http://dx.doi.org/10.4028/www.scientific.net/amm.295-298.2061.

Abstract:
To meet the requirement of quickly solving the water quality equation after an unexpected water pollution incident, this paper investigates, based on a parallel algorithm under the Java Parallel Processing Framework (JPPF), how to dynamically decompose the iterative solution process into computing tasks and distribute those tasks through the JPPF API to parallel nodes for calculation. Simulation results for the one-dimensional water quality equation show that the parallel computing method reduces the time complexity, resulting not only in a significant improvement in calculation speed but also in higher reliability and stability.
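The pattern described above — dynamically decomposing an iterative calculation into independent tasks and distributing them to parallel nodes — can be sketched with Python's standard library. The chunking scheme and the toy update step are illustrative assumptions, and a local thread pool stands in for JPPF's distributed nodes:

```python
from concurrent.futures import ThreadPoolExecutor

def advance_chunk(chunk):
    # One task: apply a single explicit update step to one slice of the grid.
    # (A toy first-order decay update stands in for the water quality operator.)
    return [0.5 * c for c in chunk]

def parallel_step(grid, n_tasks=4):
    # Decompose the iteration over the grid into n_tasks independent tasks...
    size = (len(grid) + n_tasks - 1) // n_tasks
    chunks = [grid[i:i + size] for i in range(0, len(grid), size)]
    # ...and distribute them; the pool plays the role of JPPF's node pool.
    with ThreadPoolExecutor(max_workers=n_tasks) as pool:
        results = pool.map(advance_chunk, chunks)
    return [value for chunk in results for value in chunk]

concentrations = [4.0, 2.0, 1.0, 0.5, 0.25, 0.125, 8.0, 6.0]
after_one_step = parallel_step(concentrations)
```

Each slice is updated independently, so the per-step work divides across the workers; a real explicit scheme would additionally exchange boundary cells between neighbouring chunks at every step.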
13

SCHEININE, ALAN LOUIS. "PARALLEL COMPUTING AT CRS4." International Journal of Modern Physics C 04, no. 06 (December 1993): 1315–21. http://dx.doi.org/10.1142/s0129183193001038.

Abstract:
An overview is given of parallel computing work being done at CRS4 (Centro di Ricerca, Sviluppo e Studi Superiori in Sardegna). Parallel computation projects include: parallelization of a simulation of the interaction of high energy particles with matter (GEANT), domain decomposition for numerical solution of partial differential equations, seismic migration for oil prospecting, finite-element structural analysis, parallel molecular dynamics, a C++ library for distributed processing of specific functions, and real-time visualization of a computer simulation that runs as distributed processes.
14

Lastovetsky, Alexey. "Special issue of Journal of Parallel and Distributed Computing: Heterogeneity in parallel and distributed computing." Journal of Parallel and Distributed Computing 72, no. 10 (October 2012): 1397. http://dx.doi.org/10.1016/j.jpdc.2012.06.002.
15

Wolfer, James. "Embedding Topical Elements of Parallel Programming, Computer Graphics, and Artificial Intelligence across the Undergraduate CS Required Courses." International Journal of Engineering Pedagogy (iJEP) 5, no. 1 (11 February 2015): 27. http://dx.doi.org/10.3991/ijep.v5i1.4090.

Abstract:
Traditionally, topics such as parallel computing, computer graphics, and artificial intelligence have been taught as stand-alone courses in the computing curriculum. Often these are elective courses, limiting the material to the subset of students choosing to take the course. Recently there has been movement to distribute topics across the curriculum in order to ensure that all graduates have been exposed to concepts such as parallel computing. Previous work described an attempt to systematically weave a tapestry of topics into the undergraduate computing curriculum. This paper reviews that work and expands it with representative examples of assignments, demonstrations, and results, as well as describing how the tools and examples deployed for these classes have a residual effect on classes such as Computer Literacy.
16

Szymczyk, Magdalena, and Piotr Szymczyk. "Matlab and Parallel Computing." Image Processing & Communications 17, no. 4 (1 December 2012): 207–16. http://dx.doi.org/10.2478/v10248-012-0048-5.

Abstract:
MATLAB is a technical computing language used in a variety of fields, such as control systems, image and signal processing, visualization, and financial process simulation, in an easy-to-use environment. MATLAB offers "toolboxes", specialized libraries for a variety of scientific domains, as well as a simplified interface to high-performance libraries (LAPACK, BLAS, and FFTW). MATLAB is now enriched by the possibility of parallel computing with the Parallel Computing Toolbox and the MATLAB Distributed Computing Server. In this article we present some of the key features of MATLAB parallel applications, focused on using GPU processors for image processing.
17

Pan, Lei, Ming Kin Lai, Koji Noguchi, Javid J. Huseynov, Lubomir F. Bic, and Michael B. Dillencourt. "Distributed Parallel Computing Using Navigational Programming." International Journal of Parallel Programming 32, no. 1 (February 2004): 1–37. http://dx.doi.org/10.1023/b:ijpp.0000015563.36375.17.
18

Lastovetsky, Alexey. "Heterogeneity in parallel and distributed computing." Journal of Parallel and Distributed Computing 73, no. 12 (December 2013): 1523–24. http://dx.doi.org/10.1016/j.jpdc.2013.08.010.
19

Hertzberger, L. O. "Trends in parallel and distributed computing." Future Generation Computer Systems 7, no. 1 (October 1991): 31–40. http://dx.doi.org/10.1016/0167-739x(91)90014-o.
20

Ozturan, Can, and Dan Grigoras. "Guest Editorial: Parallel and Distributed Computing." International Journal of Parallel Programming 39, no. 5 (25 February 2011): 582–83. http://dx.doi.org/10.1007/s10766-011-0166-5.
21

Shang, Zhi. "Large-Scale CFD Parallel Computing Dealing with Massive Mesh." Journal of Engineering 2013 (2013): 1–6. http://dx.doi.org/10.1155/2013/850148.

Abstract:
In order to run CFD codes more efficiently at large scales, parallel computing has to be employed. At industrial scales, for example, tens of thousands of mesh cells are typically used to capture the details of complex geometries. How to distribute these mesh cells among multiple processors to obtain good parallel computing performance is a real challenge. Because of the massive number of mesh cells, it is difficult for CFD codes without parallel optimization to handle this kind of large-scale computation. Open-source mesh partitioning packages such as Metis, ParMetis, Scotch, PT-Scotch, and Zoltan are able to distribute large numbers of mesh cells. They were therefore ported into Code_Saturne, an open-source CFD code, as parallel optimization tools, to test whether they can solve the problem of dealing with massive mesh cells in CFD codes. The studies found that mesh partitioning optimization software can help CFD codes not only deal with massive meshes but also achieve good high-performance computing (HPC) behavior.
22

Malathi, K., R. Anandan, and J. Frank Vijay. "Cloud Environment Task Scheduling Optimization of Modified Genetic Algorithm." Journal of Internet Services and Information Security 13, no. 1 (7 January 2023): 34–43. http://dx.doi.org/10.58346/jisis.2023.i1.004.

Abstract:
Cloud computing is a development of supercomputing, from the availability of resources to the accomplishment of tasks. One of the most trustworthy paradigms in computing technology is built on internet-based parallel and distributed computing models. Optimization algorithms can be used to distribute user workloads to the provided logical resources, termed virtual machines (VMs), in a cloud computing system, which is a major aspect of resource management. A fundamental challenge in cloud computing is the dynamic heterogeneity of resources and workloads, which necessitates efficient task scheduling and distribution. Task scheduling in distributed environments may improve our understanding of workflow scheduling, independent task scheduling that takes into account the security and execution time of applications, trust between various system entities, and improved system utilisation and energy efficiency, among other things. This research contributes to these areas as follows: an independent task scheduling system based on genetics is presented to obtain the best outcomes in terms of time and resource consumption while allocating tasks to resources in accordance with each task's security needs. Various meta-heuristic algorithms, such as the Genetic Algorithm, are currently used to solve task scheduling problems.
23

Spahi, Enis, and D. Altilar. "ITU-PRP: Parallel and Distributed Computing Middleware for Java Developers." International Journal of Business & Technology 3, no. 1 (November 2014): 2–13. http://dx.doi.org/10.33107/ijbte.2014.3.1.01.

Abstract:
ITU-PRP provides a parallel programming framework for Java developers, with which they can adapt sequential application code to operate in a distributed multi-host parallel environment. Developers implement parallel models, such as loop parallelism, divide and conquer, master-slave, and fork-join, with the help of an API library provided by the framework. The resulting parallel applications are submitted to a middleware called the Parallel Running Platform (PRP), on which parallel resources are organized and parallel processing is performed. The middleware creates Task Plans (TPs) according to the application's parallel model and assigns the best available resource hosts, in order to perform fast parallel processing. Task Plans are created dynamically in real time according to the actual utilization status or availability of resources, instead of being predefined or preconfigured. ITU-PRP achieves better efficiency in parallel processing over big data sets by distributing the divided base data to multiple hosts operated with coarse-grained parallelism. Under this model, distributed parallel tasks operate independently, with minimal interaction, until processing ends.
24

Siwik, Leszek, Dawid Kala, Mateusz Godzik, Wojciech Turek, Aleksander Byrski, and Marek Kisiel-Dorohinicki. "Mobile Cloud for Parallel and Distributed Green Computing." Journal of Telecommunications and Information Technology 4 (20 December 2017): 60–70. http://dx.doi.org/10.26636/jtit.2017.111817.

Abstract:
Mobile Computing and Mobile Cloud Computing are areas of intensive research. The "mobility" landscape (devices, technologies, apps, etc.) evolves so fast that definitions and taxonomies do not catch up with such dynamic changes, and there is still ambiguity in the definitions and common understanding of basic ideas and models. This research focuses on the Mobile Cloud, understood as a parallel and distributed system consisting of a collection of interconnected (and virtualized) mobile devices dynamically provisioned and presented as one unified computing resource. The paper focuses on a mobile green computing cloud applied to parallel and distributed computations, consisting of outdated, abandoned or no-longer-needed smartphones able to form a powerful computing cluster. Besides presenting the general idea and background, an actual computing cluster is constructed, and its scalability and efficiency are checked against the results obtained from a virtualized set of smartphones. All experiments are performed using a dedicated software framework constructed to leverage no-longer-needed smartphones, creating a computing cloud.
25

Chard, Kyle, James Muns, Richard Wai, and S. Tucker Taft. "Language support for parallel and distributed computing." ACM SIGAda Ada Letters 40, no. 2 (27 April 2021): 51–54. http://dx.doi.org/10.1145/3463478.3463481.

Abstract:
Language constructs that support parallel computing are relatively well recognized at this point, with features such as parallel loops (optionally with reduction operators), divide-and-conquer parallelism, and general parallel blocks. But what language features would make distributed computing safer and more productive? Is it helpful to be able to specify on what node a computation should take place, and on what node data should reside, or is that overspecification? We don't normally expect a user of a parallel programming language to specify what core is used for a given iteration of a loop, nor which data should be moved into which core's cache. Generally the compiler and the run-time manage the allocation of cores, and the hardware worries about the cache. But in a distributed world, communication costs can easily outweigh computation costs in a poorly designed application. This panel will discuss various language features, some of which already exist to support parallel computing, and how they could be enhanced or generalized to support distributed computing safely and efficiently.
26

ZHOU, Xu, Kai LU, Xiaoping WANG, Wenzhe ZHANG, Kai ZHANG, Xu LI, and Gen LI. "Deterministic Message Passing for Distributed Parallel Computing." IEICE Transactions on Information and Systems E96.D, no. 5 (2013): 1068–77. http://dx.doi.org/10.1587/transinf.e96.d.1068.
27

Kopysov, S. P., A. B. Ponomarev, and V. N. Rychkov. "Model computation mesh for parallel distributed computing." Vestnik Udmurtskogo Universiteta. Matematika. Mekhanika. Komp'yuternye Nauki, no. 2 (April 2008): 194–96. http://dx.doi.org/10.20537/vm080264.
28

Yoshikawa, T., and H. Matsuoka. "Optical interconnections for parallel and distributed computing." Proceedings of the IEEE 88, no. 6 (June 2000): 849–55. http://dx.doi.org/10.1109/5.867698.
29

Ranganathan, N. "Parallel and Distributed Computing Handbook [Book Review]." IEEE Concurrency 5, no. 3 (July 1997): 74–75. http://dx.doi.org/10.1109/mcc.1997.605922.
30

Zomaya, A. Y., T. El-Ghazawi, and O. Frieder. "Parallel and distributed computing for data mining." IEEE Concurrency 7, no. 4 (October 1999): 11–13. http://dx.doi.org/10.1109/mcc.1999.806974.
31

Yang, Laurence T., Yi Pan, and Minyi Guo. "Parallel and distributed scientific and engineering computing." Parallel Computing 29, no. 11-12 (November 2003): 1505–8. http://dx.doi.org/10.1016/j.parco.2003.08.001.
32

D'Ambra, Pasqua, Marco Danelutto, and Daniela di Serafino. "Advanced environments for parallel and distributed computing." Parallel Computing 28, no. 12 (December 2002): 1635–36. http://dx.doi.org/10.1016/s0167-8191(02)00183-7.
33

Cap, Clemens H., and Volker Strumpen. "Efficient parallel computing in distributed workstation environments." Parallel Computing 19, no. 11 (November 1993): 1221–34. http://dx.doi.org/10.1016/0167-8191(93)90028-j.
34

Iványi, P., J. Kruis, T. Kozubek, W. Gentzsch, and B. H. V. Topping. "Civil-Comp: Parallel, Distributed and Cloud Computing." Advances in Engineering Software 103 (January 2017): 12. http://dx.doi.org/10.1016/j.advengsoft.2016.11.002.
35

Sunderam, V. S. "PVM: A framework for parallel distributed computing." Concurrency: Practice and Experience 2, no. 4 (December 1990): 315–39. http://dx.doi.org/10.1002/cpe.4330020404.
36

Chi, Song. "Application of MATLAB Parallel Programming Technology." Applied Mechanics and Materials 602-605 (August 2014): 3787–90. http://dx.doi.org/10.4028/www.scientific.net/amm.602-605.3787.

Abstract:
The development technology and process for parallel application programs are analyzed on the basis of the MATLAB Parallel Computing and Distributed Computing toolboxes. On this basis, serial and parallel computing are compared through computer simulations, and a method for designing and developing parallel computing programs is proposed. The simulation results show that parallel computing technology has many advantages for computation-intensive calculations, and that parallel application development using MATLAB is convenient.
37

Yang, Ning, Shiaaulir Wang, and Paul Schonfeld. "Simulation-Based Scheduling of Waterway Projects Using a Parallel Genetic Algorithm." International Journal of Operations Research and Information Systems 6, no. 1 (January 2015): 49–63. http://dx.doi.org/10.4018/ijoris.2015010104.

Abstract:
A Parallel Genetic Algorithm (PGA) is used for simulation-based optimization of waterway project schedules. The PGA is designed to distribute a genetic algorithm application over multiple processors in order to speed up the solution search for a very large combinatorial problem. The proposed PGA is based on a global parallel model, also called a master-slave model. A Message-Passing Interface (MPI) is used in developing the parallel computing program. A case study is presented whose results show how adapting a simulation-based optimization algorithm to parallel computing can greatly reduce computation time. Additional techniques found to further improve PGA performance include: (1) choosing an appropriate task distribution method, (2) distributing simulation replications instead of different solutions, (3) avoiding the simulation of duplicate solutions, (4) avoiding running multiple simulations simultaneously in shared-memory processors, and (5) avoiding the use of processors belonging to different clusters (physical sub-networks).
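The global parallel (master-slave) model described above can be sketched with Python's standard library, a local worker pool standing in for the MPI worker ranks; the toy fitness function and all GA parameters here are illustrative assumptions, not the paper's waterway simulation:

```python
import random
from concurrent.futures import ThreadPoolExecutor  # stands in for MPI slave ranks

def fitness(individual):
    # Slave-side work: in the paper this is a waterway simulation replication;
    # here a toy objective (maximize the sum of the genes) stands in for it.
    return sum(individual)

def master_slave_ga(pop_size=20, genes=8, generations=30, seed=1):
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(genes)] for _ in range(pop_size)]
    with ThreadPoolExecutor(max_workers=4) as workers:
        for _ in range(generations):
            # Master scatters individuals; slaves evaluate fitness in parallel.
            scores = list(workers.map(fitness, pop))
            ranked = [ind for _, ind in sorted(zip(scores, pop), reverse=True)]
            parents = ranked[:pop_size // 2]
            # Master applies selection, crossover, and mutation sequentially.
            children = []
            while len(children) < pop_size:
                a, b = rng.sample(parents, 2)
                cut = rng.randrange(1, genes)
                child = a[:cut] + b[cut:]
                if rng.random() < 0.1:  # occasional single-gene mutation
                    child[rng.randrange(genes)] = rng.random()
                children.append(child)
            pop = children
    return max(map(fitness, pop))

best = master_slave_ga()
```

Only the fitness evaluations are parallelized, which is why the model pays off exactly when each evaluation (a simulation replication) dominates the cost of the genetic operators, as the abstract observes.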
38

Karpas, Erez, Tomer Sagi, Carmel Domshlak, Avigdor Gal, Avi Mendelson, and Moshe Tennenholtz. "Data-Parallel Computing Meets STRIPS." Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (30 June 2013): 474–80. http://dx.doi.org/10.1609/aaai.v27i1.8590.

Abstract:
The increased demand for distributed computations on "big data" has led to solutions such as SCOPE, DryadLINQ, Pig, and Hive, which allow the user to specify queries in an SQL-like language, enriched with sets of user-defined operators. The lack of exact semantics for user-defined operators interferes with the query optimization process, thus putting the burden of suggesting, at least partial, query plans on the user. In an attempt to ease this burden, we propose a formal model that allows for data-parallel program synthesis (DPPS) in a semantically well-defined manner. We show that this model generalizes existing frameworks for data-parallel computation, while providing the flexibility of query plan generation that is currently absent from these frameworks. In particular, we show how existing, off-the-shelf, AI planning tools can be used for solving DPPS tasks.
39

Liu, Fudong, Zheng Shan, and Yihang Chen. "Parallel Nonnegative Matrix Factorization with Manifold Regularization." Journal of Electrical and Computer Engineering 2018 (2018): 1–10. http://dx.doi.org/10.1155/2018/6270816.

Abstract:
Nonnegative matrix factorization (NMF) decomposes a high-dimensional nonnegative matrix into the product of two reduced-dimension nonnegative matrices. However, conventional NMF neither handles large-scale datasets, since it keeps all data in memory, nor preserves the geometrical structure of the data, which is needed in some practical tasks. In this paper, we propose a parallel NMF with manifold regularization method (PNMF-M) that overcomes these deficiencies by parallelizing manifold-regularized NMF on a distributed computing system. In particular, PNMF-M distributes both the data samples and the factor matrices to multiple computing nodes, instead of loading the whole dataset on a single node, and updates both factor matrices locally on each node. In this way, PNMF-M relieves the memory pressure of large-scale datasets and speeds up the computation through parallelization. For constructing the adjacency matrix in manifold regularization, we propose a two-step distributed graph construction method, which is proved to be equivalent to the batch construction method. Experimental results on popular text corpora and image datasets demonstrate that PNMF-M significantly improves both the scalability and the time efficiency of conventional NMF thanks to the parallelization on a distributed computing system; meanwhile, it significantly enhances the representation ability of conventional NMF thanks to the incorporated manifold regularization.
40

Surendranath, H., and M. Dunbar. "Parallel Computing for Tire Simulations." Tire Science and Technology 39, no. 3 (1 September 2011): 193–209. http://dx.doi.org/10.2346/1.3637743.

Abstract:
Over the last few decades, finite element analysis has become an integral part of the overall tire design process. Engineers need to perform a number of different simulations to evaluate new designs and study the effect of proposed design changes. However, tires pose formidable simulation challenges due to the presence of highly nonlinear rubber compounds, embedded reinforcements, complex tread geometries, rolling contact, and large deformations. Accurate simulation requires careful consideration of these factors, resulting in extensive turnaround times, often prolonging the design cycle. It is therefore extremely important to explore means of reducing turnaround time while producing reliable results. Compute clusters have recently become a cost-effective means of performing high performance computing (HPC). Distributed memory parallel solvers designed to take advantage of compute clusters have become increasingly popular. In this paper, we examine the use of HPC for various tire simulations and demonstrate how it can significantly reduce simulation turnaround time. Abaqus/Standard is used for routine tire simulations like footprint and steady state rolling. Abaqus/Explicit is used for transient rolling and hydroplaning simulations. The run times and scaling data corresponding to models of various sizes and complexity are presented.
41

Gorodnyaya, Lidia. "FUNCTIONAL PROGRAMMING FOR PARALLEL COMPUTING." Bulletin of the Novosibirsk Computing Center. Series: Computer Science, no. 45 (2021): 29–48. http://dx.doi.org/10.31144/bncc.cs.2542-1972.2021.n45.p29-48.

Abstract:
The paper is devoted to modern trends in the application of functional programming to the problems of organizing parallel computations. Functional programming is considered as a meta-paradigm for solving the problems of developing multi-threaded programs for multiprocessor complexes and distributed systems, as well as the problems associated with rapid IT development. The semantic and pragmatic principles of functional programming and the consequences of these principles are described. A paradigm analysis of programming languages and systems is used, which allows their similarities and differences to be assessed. Taking these features into account is necessary when predicting the course of application processes, as well as when planning the study and organization of program development. There are reasons to believe that functional programming is capable of improving program performance through its adaptability to modeling and prototyping. A variety of features and characteristics inherent in the development and debugging of long-lived parallel computing programs is shown. The author emphasizes the prospects of functional programming as a universal technique for solving complex problems burdened with requirements that are difficult to verify and poorly compatible. A brief outline of the requirements for a multi-paradigm parallel programming language is given.
42

Gichamo, Tseganeh Z., and David G. Tarboton. "UEB parallel: Distributed snow accumulation and melt modeling using parallel computing." Environmental Modelling & Software 125 (March 2020): 104614. http://dx.doi.org/10.1016/j.envsoft.2019.104614.
43

Lin, Hong. "A Case Study of Teaching Parallel and Distributed Computing Topics on a Computer Cluster." Journal of Cases on Information Technology 16, no. 2 (April 2014): 58–71. http://dx.doi.org/10.4018/jcit.2014040105.

Abstract:
This paper presents the establishment of a cluster computing lab at a minority-serving institution that aims to provide computing resources to support the undergraduate computer science curriculum. The computing resources of the cluster are managed by a job distribution environment that allows users to upload, compile, and run their jobs. The job distribution software distributes the submitted jobs to the computing nodes of the cluster. The authors present a case study of using this platform to teach parallel and distributed computing topics in the operating systems course. An evaluation of the teaching effectiveness follows.
44

Jones, Jeff. « Mechanisms Inducing Parallel Computation in a Model of Physarum polycephalum Transport Networks ». Parallel Processing Letters 25, no 01 (mars 2015) : 1540004. http://dx.doi.org/10.1142/s0129626415400046.

Texte intégral
Résumé :
The giant amoeboid organism, the true slime mould Physarum polycephalum, dynamically adapts its body plan in response to changing environmental conditions, and its protoplasmic transport network is used to distribute nutrients within the organism. These networks are efficient in terms of network length and network resilience, and are parallel approximations of a range of proximity graphs and plane-division problems. The complex parallel distributed computation exhibited by this simple organism has served as an inspiration for intensive research into distributed computing and robotics over the last decade. P. polycephalum may be considered a spatially represented parallel unconventional computing substrate, but how can this ‘computer’ be programmed? In this paper we examine and catalogue individual low-level mechanisms which may be used to induce network formation and adaptation in a multi-agent model of P. polycephalum. These mechanisms include those intrinsic to the model (particle sensor angle, rotation angle, and scaling parameters) and those mediated by the environment (stimulus location, distance, angle, concentration, engulfment and consumption of nutrients, and the presence of simulated light irradiation, repellents and obstacles). The mechanisms induce a concurrent integration of chemoattractant and chemorepellent gradients diffusing within the 2D lattice upon which the agent population resides, stimulating growth, movement, morphological adaptation and network minimisation. Chemoattractant gradients, and their modulation by the engulfment and consumption of nutrients by the model population, represent an efficient outsourcing of spatial computation. The mechanisms may prove useful in understanding the search strategies and adaptation of distributed organisms within their environment, in understanding the minimal requirements for complex adaptive behaviours, and in developing methods of spatially programming parallel unconventional computers and robotic devices.
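As a rough illustration of the intrinsic mechanisms mentioned in the abstract (particle sensor angle and rotation angle), a single chemotaxis step of one particle might look like the sketch below; the lattice, offsets, and parameter values are invented for illustration and are not the paper's implementation:

```python
import math

def sense_and_orient(lattice, x, y, heading, sensor_angle, sensor_offset):
    """One chemotaxis step of a single particle: sample the chemoattractant
    at three offset sensors (left, front, right) and rotate toward the
    strongest reading. lattice[row][col] holds the concentration."""
    readings = []
    for da in (-sensor_angle, 0.0, sensor_angle):
        a = heading + da
        sx = int(round(x + sensor_offset * math.cos(a)))
        sy = int(round(y + sensor_offset * math.sin(a)))
        if 0 <= sy < len(lattice) and 0 <= sx < len(lattice[0]):
            readings.append((lattice[sy][sx], da))
        else:
            readings.append((0.0, da))   # out-of-bounds sensors read zero
    best = max(readings, key=lambda r: r[0])
    return heading + best[1]             # rotate by the winning sensor's angle

# A gradient increasing to the right: the particle turns toward it.
grid = [[float(i) for i in range(9)] for _ in range(9)]
new_heading = sense_and_orient(grid, 4, 4, math.pi / 2, math.pi / 4, 3)
```

In the full model, a population of such particles also deposits trail on the lattice, so the gradient they follow is partly self-generated; this sketch shows only the sensing-and-rotation step.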
Citation styles: APA, Harvard, Vancouver, ISO, etc.
45

Wei, Chih-Chiang, et Tzu-Hao Chou. « Typhoon Quantitative Rainfall Prediction from Big Data Analytics by Using the Apache Hadoop Spark Parallel Computing Framework ». Atmosphere 11, no 8 (17 août 2020) : 870. http://dx.doi.org/10.3390/atmos11080870.

Full text
Abstract:
Situated in the main tracks of typhoons in the Northwestern Pacific Ocean, Taiwan frequently encounters disasters from heavy rainfall during typhoons. Accurate and timely typhoon rainfall prediction is therefore an imperative topic that must be addressed. The purpose of this study was to develop a Hadoop Spark distributed framework based on big-data technology, to accelerate the computation of typhoon rainfall prediction models. This study used deep neural networks (DNNs) and multiple linear regressions (MLRs) in machine learning to establish rainfall prediction models and evaluate rainfall prediction accuracy. The big-data technology used was the Hadoop Spark distributed cluster-computing framework, which consists of the Hadoop Distributed File System, the MapReduce framework, and Spark, a new-generation technology that improves the efficiency of distributed computing. The research area was Northern Taiwan, which contains four surface observation stations used as the experimental sites, and the study collected 271 typhoon events (from 1961 to 2017). The following results were obtained: (1) in machine-learning computation, prediction errors increased with prediction duration in the DNN and MLR models; and (2) the Hadoop Spark framework was faster than the standalone systems (single i7 central processing unit (CPU) and single E3 CPU). When complex computation is required in a model (e.g., DNN model parameter calibration), the big-data-based Hadoop Spark framework can be used to establish highly efficient computation environments. In summary, this study successfully used the big-data Hadoop Spark framework with machine learning to develop rainfall prediction models with effectively improved computing efficiency. The proposed system can therefore solve problems regarding real-time typhoon rainfall prediction with high timeliness and accuracy.
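The MapReduce pattern underlying the framework can be illustrated in miniature, in plain Python rather than the actual Hadoop/Spark stack, by computing per-station mean rainfall with a parallel map step and a reduce step (the station names and values are invented toy data):

```python
from concurrent.futures import ThreadPoolExecutor
from collections import defaultdict

# Toy (station, rainfall in mm) records standing in for the typhoon dataset.
records = [("Taipei", 120.0), ("Keelung", 200.0),
           ("Taipei", 80.0), ("Keelung", 100.0)]

def map_phase(record):
    station, mm = record
    return (station, (mm, 1))            # emit a partial (sum, count) pair

def reduce_phase(pairs):
    acc = defaultdict(lambda: (0.0, 0))  # station -> (running sum, count)
    for station, (mm, n) in pairs:
        s, c = acc[station]
        acc[station] = (s + mm, c + n)
    return {k: s / c for k, (s, c) in acc.items()}

with ThreadPoolExecutor(max_workers=4) as pool:   # map records in parallel
    mapped = list(pool.map(map_phase, records))
mean_rainfall = reduce_phase(mapped)
# mean_rainfall -> {"Taipei": 100.0, "Keelung": 150.0}
```

In the real system the map tasks run on cluster nodes over HDFS blocks and Spark keeps intermediate results in memory; this sketch only shows the map/reduce decomposition itself.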
Citation styles: APA, Harvard, Vancouver, ISO, etc.
46

Lavanya K., L. S. S. Reddy et B. Eswara Reddy. « Distributed Based Serial Regression Multiple Imputation for High Dimensional Multivariate Data in Multicore Environment of Cloud ». International Journal of Ambient Computing and Intelligence 10, no 2 (avril 2019) : 63–79. http://dx.doi.org/10.4018/ijaci.2019040105.

Full text
Abstract:
Multiple imputation (MI) is predominantly applied in processes that involve large volumes of missing data. Multivariate data that follow traditional statistical models suffer greatly from the inadequate availability of pertinent data. The field of distributed computing research faces its biggest hurdle in the form of insufficient high-dimensional multivariate data. It mainly deals with the analysis of parallel input problems found in cloud computing networks in general, and the evaluation of high-performance computing in particular. In fact, it is a tough task to use parallel multiple-imputation methods to achieve remarkable performance while allowing huge datasets to scale. In this regard, it is essential that a credible data system is developed and a decomposition strategy is used to partition the workload across the entire process for minimal data dependence. Subsequently, moderate synchronization and/or meager communication liability is maintained when deploying parallel imputation methods, to achieve scale as well as more processes. The present article proposes several novel applications for better efficiency. As a first step, this article suggests distributed serial-regression multiple imputation for enhancing the efficiency of the imputation task on high-dimensional multivariate normal data. As a next step, the process is run with three diverse parallel back ends: multiple imputation using the socket method to serve serial regression, the fork method to distribute work over workers, and the same experiments in a dynamic structure with a load-balancing mechanism. In the end, the set of distributed MI methods is used to experimentally analyze the amplitude of imputation scores spanning three probable scenarios in the range of 1:500. Further, the study makes an important observation that, owing to the efficiency of the numerous imputation methods, the data is arranged proportionately in a missing range of 10% to 50%, low to high, while dealing with between 1,000 and 100,000 samples. The experiments are done in a cloud environment and demonstrate that it is possible to generate decent speed by lessening the repetitive communication between processors.
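A minimal sketch of the parallel-back-end idea might look as follows, using a thread pool as a stand-in for the paper's fork and socket back ends, and random draws from observed values in place of the serial-regression imputation model; the data and names are assumptions for illustration:

```python
import random
import statistics
from multiprocessing.dummy import Pool  # thread pool standing in for fork/socket back ends

data = [2.0, None, 4.0, None, 6.0]      # toy column with missing cells

def impute_once(seed):
    # One imputation replicate: fill each missing cell with a random draw
    # from the observed values (a stand-in for a serial-regression draw).
    rng = random.Random(seed)
    observed = [v for v in data if v is not None]
    return [v if v is not None else rng.choice(observed) for v in data]

with Pool(processes=3) as pool:          # distribute the m replicates over workers
    completed = pool.map(impute_once, [0, 1, 2])

# Combine the m completed datasets cell-wise (a much-simplified pooling step;
# proper MI pools parameter estimates, not raw cells).
pooled = [statistics.mean(cells) for cells in zip(*completed)]
```

Because each replicate depends only on the shared read-only data and its own seed, the replicates need no synchronization, which is the property the paper exploits to scale imputation over workers.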
Citation styles: APA, Harvard, Vancouver, ISO, etc.
47

Liddell, Heather M., D. Parkinson, G. S. Hodgson et P. Dzwig. « Parallel Computing Applications and Financial Modelling ». Scientific Programming 12, no 2 (2004) : 81–90. http://dx.doi.org/10.1155/2004/404575.

Full text
Abstract:
At Queen Mary, University of London, we have over twenty years of experience in Parallel Computing Applications, mostly on "massively parallel systems", such as the Distributed Array Processors (DAPs). The applications in which we were involved included design of numerical subroutine libraries, Finite Element software, graphics tools, the physics of organic materials, medical imaging, computer vision and more recently, Financial modelling. Two of the projects related to the latter are described in this paper, namely Portfolio Optimisation and Financial Risk Assessment.
Citation styles: APA, Harvard, Vancouver, ISO, etc.
48

Seredyński, Franciszek. « New Trends in Parallel and Distributed Evolutionary Computing ». Fundamenta Informaticae 35, no 1-4 (1998) : 211–30. http://dx.doi.org/10.3233/fi-1998-35123412.

Full text
Citation styles: APA, Harvard, Vancouver, ISO, etc.
49

Hanuliak, Michal. « Unified Analytical Models of Parallel and Distributed Computing ». American Journal of Networks and Communications 3, no 1 (2014) : 1. http://dx.doi.org/10.11648/j.ajnc.20140301.11.

Full text
Citation styles: APA, Harvard, Vancouver, ISO, etc.
50

Lathia, M. « A Useful Resource for Parallel and Distributed Computing ». IEEE Distributed Systems Online 6, no 4 (avril 2005) : 4. http://dx.doi.org/10.1109/mdso.2005.19.

Full text
Citation styles: APA, Harvard, Vancouver, ISO, etc.