To see the other types of publications on this topic, follow the link: Distributed and Parallel Computing.

Journal articles on the topic 'Distributed and Parallel Computing'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 journal articles for your research on the topic 'Distributed and Parallel Computing.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Umar, A. "Distributed And Parallel Computing." IEEE Concurrency 6, no. 4 (October 1998): 80–81. http://dx.doi.org/10.1109/mcc.1998.736439.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ramsay, A. "Distributed versus parallel computing." Artificial Intelligence Review 1, no. 1 (March 1986): 11–25. http://dx.doi.org/10.1007/bf01988525.

3

Wismüller, Roland. "Parallel and distributed computing." Software Focus 2, no. 3 (September 2001): 124. http://dx.doi.org/10.1002/swf.44.

4

Sun, Qi, and Hui Yan Zhao. "Design of Distribute Monitoring Platform Base on Cloud Computing." Applied Mechanics and Materials 687-691 (November 2014): 1076–79. http://dx.doi.org/10.4028/www.scientific.net/amm.687-691.1076.

Abstract:
Compared with traditional measurement infrastructure, a distributed network measurement system based on cloud computing stores massive measurement data in a large virtual resource pool, ensuring reliable and scalable data storage, and reuses the cloud platform's parallel processing mechanism to analyse and mine the mass measurement data quickly and concurrently. The measurement probe supports the deployment of a variety of measurement algorithms and a variety of data acquisition formats, and the measurement method provides congestion response policies and load balancing strategies.
5

Gao, Tie Liang, Jiao Li, Jun Peng Zhang, and Bing Jie Shi. "The Research of MapReduce on the Cloud Computing." Applied Mechanics and Materials 182-183 (June 2012): 2127–30. http://dx.doi.org/10.4028/www.scientific.net/amm.182-183.2127.

Abstract:
MapReduce is a programming model used for parallel computing over large-scale data sets in cloud computing [1]; it consists mainly of a map phase and a reduce phase. MapReduce is tremendously convenient for programmers who are not familiar with parallel programming: they can use MapReduce to run their programs on a distributed system. This paper mainly studies the model, process, and theory of MapReduce.
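The map and reduce phases the abstract describes can be sketched in miniature. This is a hypothetical single-process Python illustration of the programming model, not the Hadoop implementation; on a real cluster the map and reduce tasks run on many distributed nodes.

```python
from collections import defaultdict
from itertools import chain

# Map phase: each document is turned into (word, 1) pairs independently,
# which is what allows the work to be spread across many nodes.
def map_phase(document):
    return [(word, 1) for word in document.split()]

# Shuffle: group intermediate pairs by key (a real framework does this
# between the map and reduce phases).
def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

# Reduce phase: combine all values for one key into a single result.
def reduce_phase(key, values):
    return key, sum(values)

def word_count(documents):
    mapped = chain.from_iterable(map_phase(d) for d in documents)
    return dict(reduce_phase(k, v) for k, v in shuffle(mapped).items())

print(word_count(["map reduce map", "reduce reduce"]))  # {'map': 2, 'reduce': 3}
```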
6

Egorov, Alexander, Natalya Krupenina, and Lyubov Tyndykar. "The parallel approach to issue of operational management optimization problem on transport gateway system." E3S Web of Conferences 203 (2020): 05003. http://dx.doi.org/10.1051/e3sconf/202020305003.

Abstract:
A universal parallelization software shell for joint data processing, implemented in combination with a distributed computing system, is considered. The purpose of the research is to find the most effective solution for organizing the navigable canal management information system. One optimization option is to increase computing power by combining computer devices into a single computing cluster. The task of optimizing the management of a locked shipping channel is adapted for execution in a multi-threaded environment, with constraints imposed by a technologically feasible schedule. The article presents algorithms and gives recommendations for applying them when forming subtasks for parallel processing, as well as on a separate thread. The proposed approach to building a tree of options allows the load to be distributed optimally among all resources of a multi-threaded system of any structure.
7

Myint, Khin Nyein, Myo Hein Zaw, and Win Thanda Aung. "Parallel and Distributed Computing Using MPI on Raspberry Pi Cluster." International Journal of Future Computer and Communication 9, no. 1 (March 2020): 18–22. http://dx.doi.org/10.18178/ijfcc.2020.9.1.559.

8

Mukaddes, A. M. M., and Ryuji Shioya. "Parallel Performance of Domain Decomposition Method on Distributed Computing Environment." International Journal of Engineering and Technology 2, no. 1 (2010): 28–34. http://dx.doi.org/10.7763/ijet.2010.v2.95.

9

Stankovic. "Introduction—Parallel and Distributed Computing." IEEE Transactions on Computers C-36, no. 4 (April 1987): 385–86. http://dx.doi.org/10.1109/tc.1987.1676919.

10

Sunderam, V. S., and G. A. Geist. "Heterogeneous parallel and distributed computing." Parallel Computing 25, no. 13-14 (December 1999): 1699–721. http://dx.doi.org/10.1016/s0167-8191(99)00088-5.

11

Dalcin, Lisandro D., Rodrigo R. Paz, Pablo A. Kler, and Alejandro Cosimo. "Parallel distributed computing using Python." Advances in Water Resources 34, no. 9 (September 2011): 1124–39. http://dx.doi.org/10.1016/j.advwatres.2011.04.013.

12

Zhao, Long Fang, Di Bo Hou, Ping Jie Huang, Xiao Xuan Song, Lin Xu, and Guang Xin Zhang. "Parallel Computing for Water Quality Equation Based on JPPF." Applied Mechanics and Materials 295-298 (February 2013): 2061–68. http://dx.doi.org/10.4028/www.scientific.net/amm.295-298.2061.

Abstract:
To meet the requirement of quickly solving the water quality equation for an unexpected water pollution incident, this work, based on a parallel algorithm under the Java Parallel Processing Framework (JPPF), investigates how to dynamically decompose the iterative process into calculation tasks and distribute those tasks, through the JPPF API, to parallel nodes for calculation. The simulation results for a one-dimensional water quality equation show that the parallel computing method can reduce the time complexity, not only resulting in a significant improvement in calculation speed, but also in higher reliability and stability.
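The decomposition the abstract describes, splitting each iteration of the solver into independent chunk tasks, can be sketched in Python (JPPF itself is a Java framework, and the grid, diffusion coefficient, and chunking below are illustrative assumptions, not the paper's model):

```python
from concurrent.futures import ThreadPoolExecutor

# One explicit diffusion step for a chunk of a 1-D concentration grid.
# A chunk task needs only its own cells plus two boundary neighbours,
# which is what lets each iteration be decomposed into independent tasks.
def step_chunk(grid, lo, hi, r=0.25):
    return [grid[i] + r * (grid[i - 1] - 2 * grid[i] + grid[i + 1])
            for i in range(lo, hi)]

def parallel_step(grid, n_chunks=4):
    n = len(grid)                        # the two boundary cells stay fixed
    size = (n - 2 + n_chunks - 1) // n_chunks
    bounds = [(1 + k * size, min(1 + (k + 1) * size, n - 1))
              for k in range(n_chunks)]
    with ThreadPoolExecutor() as pool:   # stand-in for remote worker nodes
        parts = pool.map(lambda b: step_chunk(grid, *b), bounds)
    return [grid[0]] + [x for part in parts for x in part] + [grid[-1]]

grid = [0.0] * 5 + [1.0] + [0.0] * 5    # initial pollutant pulse
for _ in range(10):
    grid = parallel_step(grid)
```

Each iteration is a scatter of chunk tasks followed by a gather of the partial results, the same pattern the JPPF-based solver uses across nodes.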
13

SCHEININE, ALAN LOUIS. "PARALLEL COMPUTING AT CRS4." International Journal of Modern Physics C 04, no. 06 (December 1993): 1315–21. http://dx.doi.org/10.1142/s0129183193001038.

Abstract:
An overview is given of parallel computing work being done at CRS4 (Centro di Ricerca, Sviluppo e Studi Superiori in Sardegna). Parallel computation projects include: parallelization of a simulation of the interaction of high energy particles with matter (GEANT), domain decomposition for numerical solution of partial differential equations, seismic migration for oil prospecting, finite-element structural analysis, parallel molecular dynamics, a C++ library for distributed processing of specific functions, and real-time visualization of a computer simulation that runs as distributed processes.
14

Lastovetsky, Alexey. "Special issue of Journal of Parallel and Distributed Computing: Heterogeneity in parallel and distributed computing." Journal of Parallel and Distributed Computing 72, no. 10 (October 2012): 1397. http://dx.doi.org/10.1016/j.jpdc.2012.06.002.

15

Wolfer, James. "Embedding Topical Elements of Parallel Programming, Computer Graphics, and Artificial Intelligence across the Undergraduate CS Required Courses." International Journal of Engineering Pedagogy (iJEP) 5, no. 1 (February 11, 2015): 27. http://dx.doi.org/10.3991/ijep.v5i1.4090.

Abstract:
Traditionally, topics such as parallel computing, computer graphics, and artificial intelligence have been taught as stand-alone courses in the computing curriculum. Often these are elective courses, limiting the material to the subset of students choosing to take them. Recently there has been a movement to distribute topics across the curriculum in order to ensure that all graduates have been exposed to concepts such as parallel computing. Previous work described an attempt to systematically weave a tapestry of topics into the undergraduate computing curriculum. This paper reviews that work and expands it with representative examples of assignments, demonstrations, and results, as well as describing how the tools and examples deployed for these classes have a residual effect on classes such as Computer Literacy.
16

Szymczyk, Magdalena, and Piotr Szymczyk. "Matlab and Parallel Computing." Image Processing & Communications 17, no. 4 (December 1, 2012): 207–16. http://dx.doi.org/10.2478/v10248-012-0048-5.

Abstract:
MATLAB is a technical computing language used in a variety of fields, such as control systems, image and signal processing, visualization, and financial process simulation, in an easy-to-use environment. MATLAB offers "toolboxes", which are specialized libraries for a variety of scientific domains, and a simplified interface to high-performance libraries (LAPACK, BLAS, and FFTW). MATLAB is now enriched by the possibility of parallel computing with the Parallel Computing Toolbox™ and MATLAB Distributed Computing Server™. In this article we present some of the key features of MATLAB parallel applications, focused on using GPU processors for image processing.
17

Pan, Lei, Ming Kin Lai, Koji Noguchi, Javid J. Huseynov, Lubomir F. Bic, and Michael B. Dillencourt. "Distributed Parallel Computing Using Navigational Programming." International Journal of Parallel Programming 32, no. 1 (February 2004): 1–37. http://dx.doi.org/10.1023/b:ijpp.0000015563.36375.17.

18

Lastovetsky, Alexey. "Heterogeneity in parallel and distributed computing." Journal of Parallel and Distributed Computing 73, no. 12 (December 2013): 1523–24. http://dx.doi.org/10.1016/j.jpdc.2013.08.010.

19

Hertzberger, L. O. "Trends in parallel and distributed computing." Future Generation Computer Systems 7, no. 1 (October 1991): 31–40. http://dx.doi.org/10.1016/0167-739x(91)90014-o.

20

Ozturan, Can, and Dan Grigoras. "Guest Editorial: Parallel and Distributed Computing." International Journal of Parallel Programming 39, no. 5 (February 25, 2011): 582–83. http://dx.doi.org/10.1007/s10766-011-0166-5.

21

Shang, Zhi. "Large-Scale CFD Parallel Computing Dealing with Massive Mesh." Journal of Engineering 2013 (2013): 1–6. http://dx.doi.org/10.1155/2013/850148.

Abstract:
To run CFD codes more efficiently at large scales, parallel computing has to be employed. At industrial scales, for example, tens of thousands of mesh cells are typically used to capture the details of complex geometries. How to distribute these mesh cells among multiple processors to obtain good parallel computing performance is a real challenge. Because of the massive number of mesh cells involved, CFD codes without parallel optimizations find it difficult to handle this kind of large-scale computing. Some open source mesh partitioning software packages, such as Metis, ParMetis, Scotch, PT-Scotch, and Zoltan, are able to deal with the distribution of large numbers of mesh cells. They were therefore employed as parallel optimization tools, ported into Code_Saturne, an open source CFD code, to test whether they can solve the problem of dealing with massive mesh cells in CFD codes. The studies found that mesh partitioning optimization software packages can help CFD codes not only deal with massive mesh cells but also achieve good high-performance computing (HPC) behaviour.
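The load-balancing side of distributing mesh cells can be illustrated with a toy greedy partitioner; real partitioners such as Metis or Scotch additionally minimise the number of mesh edges cut between partitions, which this sketch ignores, and the cell weights below are invented.

```python
import heapq

# Greedy longest-processing-time partitioning: each mesh cell (here just
# its computational weight) goes to the currently least-loaded processor.
def partition(cell_weights, n_procs):
    heap = [(0.0, p, []) for p in range(n_procs)]
    heapq.heapify(heap)
    for w in sorted(cell_weights, reverse=True):
        load, p, cells = heapq.heappop(heap)     # least-loaded processor
        cells.append(w)
        heapq.heappush(heap, (load + w, p, cells))
    return [cells for _, _, cells in sorted(heap, key=lambda t: t[1])]

def imbalance(parts):
    loads = [sum(p) for p in parts]
    return max(loads) / (sum(loads) / len(loads))  # max load / mean load

parts = partition([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0], n_procs=4)
print(round(imbalance(parts), 3))  # prints 1.161
```

An imbalance close to 1.0 means the processors finish at nearly the same time, which is the property the partitioning packages above are tuned for.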
22

Malathi, K., Dr R. Anandan, and Dr J. Frank Vijay. "Cloud Environment Task Scheduling Optimization of Modified Genetic Algorithm." Journal of Internet Services and Information Security 13, no. 1 (January 7, 2023): 34–43. http://dx.doi.org/10.58346/jisis.2023.i1.004.

Abstract:
From the availability of resources to the accomplishment of tasks, cloud computing is a development of supercomputing. One of the most trustworthy paradigms in computing technology is built on internet-based parallel and distributed computing models. Optimization algorithms can be used to distribute user workloads to the provided logical resources, termed virtual machines (VMs), in the cloud computing system, which is a major aspect of resource management. A fundamental challenge in cloud computing is the dynamic heterogeneity of resources and workloads, which necessitates efficient task scheduling and distribution. Task scheduling in distributed environments may improve our understanding of workflow scheduling, independent task scheduling that takes into account security and execution time for applications, trust between various system entities, and improved system utilisation and energy efficiency, among other things. The goal of this research is to contribute to these areas: an independent task scheduling system based on genetics is presented to obtain the best outcomes in terms of time and resource consumption while allocating tasks to resources in accordance with each task's security needs. Various meta-heuristic algorithms, such as the Genetic Algorithm, are currently being used to solve task scheduling difficulties.
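A genetic task scheduler of the kind the abstract proposes can be sketched as follows; the task times, VM count, and GA parameters are illustrative assumptions, and the paper's security constraints are omitted.

```python
import random

# A minimal genetic scheduler for independent tasks: a chromosome assigns
# each task to a VM, and fitness is the makespan (the load of the busiest
# VM), which the GA minimises.
TASK_TIMES = [4, 7, 3, 12, 5, 9, 2, 8]
N_VMS = 3

def makespan(chrom):
    loads = [0] * N_VMS
    for task, vm in enumerate(chrom):
        loads[vm] += TASK_TIMES[task]
    return max(loads)

def evolve(generations=60, pop_size=30, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randrange(N_VMS) for _ in TASK_TIMES] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=makespan)
        pop = pop[:pop_size // 2]                  # elitist selection
        while len(pop) < pop_size:
            a, b = rng.sample(pop[:pop_size // 2], 2)
            cut = rng.randrange(1, len(TASK_TIMES))
            child = a[:cut] + b[cut:]              # one-point crossover
            if rng.random() < 0.2:                 # mutation
                child[rng.randrange(len(child))] = rng.randrange(N_VMS)
            pop.append(child)
    return min(pop, key=makespan)

best = evolve()
```

Since the total workload is 50 units over 3 VMs, no schedule can beat a makespan of 17; the GA searches toward that bound.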
23

Spahi, Enis, and D. Altilar. "ITU-PRP: Parallel and Distributed Computing Middleware for Java Developers." International Journal of Business & Technology 3, no. 1 (November 2014): 2–13. http://dx.doi.org/10.33107/ijbte.2014.3.1.01.

Abstract:
ITU-PRP provides a parallel programming framework for Java developers, with which they can adapt their sequential application code to operate in a distributed multi-host parallel environment. Developers implement parallel models, such as Loop Parallelism, Divide and Conquer, Master-Slave, and Fork-Join, with the help of an API library provided by the framework. The resulting parallel applications are submitted to a middleware called the Parallel Running Platform (PRP), on which parallel resources for parallel processing are organized and scheduled. The middleware creates Task Plans (TPs) according to the application's parallel model and assigns the best available resource hosts in order to perform fast parallel processing. Task Plans are created dynamically in real time according to the resources' actual utilization status or availability, instead of being predefined or preconfigured. ITU-PRP achieves better efficiency in parallel processing over big data sets and distributes divided base data to multiple hosts to be operated on with coarse-grained parallelism. Under this model, distributed parallel tasks operate independently with minimal interaction until processing ends.
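The Fork-Join and Divide-and-Conquer models named above can be sketched with Python threads (ITU-PRP itself is a Java framework; the merge sort and the depth limit are illustrative choices, not part of the framework):

```python
from concurrent.futures import ThreadPoolExecutor

# A depth-limited fork-join merge sort: above the depth limit the two
# halves are sorted as independent tasks ("fork") and merged once both
# complete ("join").
def merge(left, right):
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

def fork_join_sort(data, pool, depth=2):
    if len(data) <= 1:
        return list(data)
    mid = len(data) // 2
    if depth == 0:                       # small subproblem: stay sequential
        return merge(fork_join_sort(data[:mid], pool, 0),
                     fork_join_sort(data[mid:], pool, 0))
    left = pool.submit(fork_join_sort, data[:mid], pool, depth - 1)    # fork
    right = pool.submit(fork_join_sort, data[mid:], pool, depth - 1)
    return merge(left.result(), right.result())                        # join

with ThreadPoolExecutor(max_workers=8) as pool:
    print(fork_join_sort([5, 3, 8, 1, 9, 2, 7], pool))  # prints [1, 2, 3, 5, 7, 8, 9]
```

Note the pool must have more workers than forked tasks that block on `result()`, otherwise the recursive joins can starve the pool; frameworks like ITU-PRP handle such scheduling for the developer.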
24

Siwik, Leszek, Dawid Kala, Mateusz Godzik, Wojciech Turek, Aleksander Byrski, and Marek Kisiel-Dorohinicki. "Mobile Cloud for Parallel and Distributed Green Computing." Journal of Telecommunications and Information Technology 4 (December 20, 2017): 60–70. http://dx.doi.org/10.26636/jtit.2017.111817.

Abstract:
Mobile computing and mobile cloud computing are areas where intensive research is observed. The "mobility" landscape (devices, technologies, apps, etc.) evolves so fast that definitions and taxonomies do not catch up with such dynamic changes, and there is still ambiguity in the definitions and common understanding of basic ideas and models. This research focuses on the mobile cloud understood as a parallel and distributed system consisting of a collection of interconnected (and virtualized) mobile devices dynamically provisioned and presented as one unified computing resource. The paper focuses on a mobile green computing cloud applied to parallel and distributed computations and consisting of outdated, abandoned or no-longer-needed smartphones that can be set up as a powerful computing cluster. Besides presenting the general idea and background, an actual computing cluster is constructed, and its scalability and efficiency are checked against the results obtained from a virtualized set of smartphones. All the experiments are performed using a dedicated software framework constructed to leverage no-longer-needed smartphones, creating a computing cloud.
25

Chard, Kyle, James Muns, Richard Wai, and S. Tucker Taft. "Language support for parallel and distributed computing." ACM SIGAda Ada Letters 40, no. 2 (April 27, 2021): 51–54. http://dx.doi.org/10.1145/3463478.3463481.

Abstract:
Language constructs that support parallel computing are relatively well recognized at this point, with features such as parallel loops (optionally with reduction operators), divide-and-conquer parallelism, and general parallel blocks. But what language features would make distributed computing safer and more productive? Is it helpful to be able to specify on what node a computation should take place, and on what node data should reside, or is that overspecification? We don't normally expect a user of a parallel programming language to specify what core is used for a given iteration of a loop, nor which data should be moved into which core's cache. Generally the compiler and the run-time manage the allocation of cores, and the hardware worries about the cache. But in a distributed world, communication costs can easily outweigh computation costs in a poorly designed application. This panel will discuss various language features, some of which already exist to support parallel computing, and how they could be enhanced or generalized to support distributed computing safely and efficiently.
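A parallel loop with a reduction operator, the construct discussed in the abstract, might look like this in a single-process Python sketch (the striped chunking scheme is an illustrative assumption):

```python
from concurrent.futures import ThreadPoolExecutor
from functools import reduce
from operator import add

# Parallel loop with reduction: the iteration space is split into chunks,
# each chunk computes a partial result independently, and the partial
# results are combined ("reduced") at the end.
def parallel_reduce(func, items, op, n_chunks=4):
    chunks = [items[i::n_chunks] for i in range(n_chunks)]
    chunks = [c for c in chunks if c]            # drop empty chunks
    with ThreadPoolExecutor() as pool:
        partials = pool.map(
            lambda chunk: reduce(op, (func(x) for x in chunk)), chunks)
    return reduce(op, partials)

# Sum of squares of 1..100 computed as four independent partial sums.
print(parallel_reduce(lambda x: x * x, list(range(1, 101)), add))  # 338350
```

As the panel notes, nothing here says which core runs which chunk; in a distributed setting the open question is whether chunk placement should likewise be left to the runtime.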
26

ZHOU, Xu, Kai LU, Xiaoping WANG, Wenzhe ZHANG, Kai ZHANG, Xu LI, and Gen LI. "Deterministic Message Passing for Distributed Parallel Computing." IEICE Transactions on Information and Systems E96.D, no. 5 (2013): 1068–77. http://dx.doi.org/10.1587/transinf.e96.d.1068.

27

Kopysov, S. P., A. B. Ponomarev, and V. N. Rychkov. "Model computation mesh for parallel distributed computing." Vestnik Udmurtskogo Universiteta. Matematika. Mekhanika. Komp'yuternye Nauki, no. 2 (April 2008): 194–96. http://dx.doi.org/10.20537/vm080264.

28

Yoshikawa, T., and H. Matsuoka. "Optical interconnections for parallel and distributed computing." Proceedings of the IEEE 88, no. 6 (June 2000): 849–55. http://dx.doi.org/10.1109/5.867698.

29

Ranganathan, N. "Parallel And Distributed Computing Handbook [Book Review]." IEEE Concurrency 5, no. 3 (July 1997): 74–75. http://dx.doi.org/10.1109/mcc.1997.605922.

30

Zomaya, A. Y., T. El-Ghazawi, and O. Frieder. "Parallel and distributed computing for data mining." IEEE Concurrency 7, no. 4 (October 1999): 11–13. http://dx.doi.org/10.1109/mcc.1999.806974.

31

Yang, Laurence T., Yi Pan, and Minyi Guo. "Parallel and distributed scientific and engineering computing." Parallel Computing 29, no. 11-12 (November 2003): 1505–8. http://dx.doi.org/10.1016/j.parco.2003.08.001.

32

D’Ambra, Pasqua, Marco Danelutto, and Daniela di Serafino. "Advanced environments for parallel and distributed computing." Parallel Computing 28, no. 12 (December 2002): 1635–36. http://dx.doi.org/10.1016/s0167-8191(02)00183-7.

33

Cap, Clemens H., and Volker Strumpen. "Efficient parallel computing in distributed workstation environments." Parallel Computing 19, no. 11 (November 1993): 1221–34. http://dx.doi.org/10.1016/0167-8191(93)90028-j.

34

Iványi, P., J. Kruis, T. Kozubek, W. Gentzsch, and B. H. V. Topping. "Civil-Comp: Parallel, Distributed and Cloud Computing." Advances in Engineering Software 103 (January 2017): 12. http://dx.doi.org/10.1016/j.advengsoft.2016.11.002.

35

Sunderam, V. S. "PVM: A framework for parallel distributed computing." Concurrency: Practice and Experience 2, no. 4 (December 1990): 315–39. http://dx.doi.org/10.1002/cpe.4330020404.

36

Chi, Song. "Application of MATLAB Parallel Programming Technology." Applied Mechanics and Materials 602-605 (August 2014): 3787–90. http://dx.doi.org/10.4028/www.scientific.net/amm.602-605.3787.

Abstract:
The parallel application development technology and process are analyzed on the basis of the MATLAB Parallel Computing and Distributed Computing toolboxes. Serial computing and parallel computing are then compared through computer simulations, and a method for designing and developing parallel computing programs is proposed. The simulation results show that parallel computing technology has many advantages for computation-intensive calculations and that MATLAB makes parallel application development convenient.
37

Yang, Ning, Shiaaulir Wang, and Paul Schonfeld. "Simulation-Based Scheduling of Waterway Projects Using a Parallel Genetic Algorithm." International Journal of Operations Research and Information Systems 6, no. 1 (January 2015): 49–63. http://dx.doi.org/10.4018/ijoris.2015010104.

Abstract:
A Parallel Genetic Algorithm (PGA) is used for simulation-based optimization of waterway project schedules. This PGA is designed to distribute a Genetic Algorithm application over multiple processors in order to speed up the solution search procedure for a very large combinatorial problem. The proposed PGA is based on a global parallel model, which is also called a master-slave model. A Message-Passing Interface (MPI) is used in developing the parallel computing program. A case study is presented, whose results show how the adaptation of a simulation-based optimization algorithm to parallel computing can greatly reduce computation time. Additional techniques which are found to further improve the PGA performance include: (1) choosing an appropriate task distribution method, (2) distributing simulation replications instead of different solutions, (3) avoiding the simulation of duplicate solutions, (4) avoiding running multiple simulations simultaneously in shared-memory processors, and (5) avoiding using multiple processors which belong to different clusters (physical sub-networks).
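Technique (2), distributing simulation replications rather than whole solutions, can be sketched as follows; the "simulation" and all numbers below are invented stand-ins, not the paper's waterway model:

```python
import random
from concurrent.futures import ThreadPoolExecutor
from statistics import mean

# Master-slave evaluation of one GA generation: the master keeps the
# population and farms out fitness evaluations, with each stochastic
# simulation replication as its own task.
def simulate(schedule, seed):
    # Stand-in for a waterway simulation: "delay" is the schedule's
    # workload plus noise. Seeding by replication index gives every
    # candidate the same noise streams (common random numbers).
    rng = random.Random(seed)
    return sum(schedule) + rng.uniform(-1.0, 1.0)

def evaluate_population(population, n_reps=8):
    jobs = [(i, rep) for i in range(len(population)) for rep in range(n_reps)]
    with ThreadPoolExecutor(max_workers=4) as pool:  # the "slaves"
        results = pool.map(
            lambda job: (job[0], simulate(population[job[0]], seed=job[1])),
            jobs)
    replications = {}
    for i, value in results:
        replications.setdefault(i, []).append(value)
    return [mean(replications[i]) for i in range(len(population))]

population = [[3, 1, 4], [2, 2, 2], [5, 0, 1]]
fitness = evaluate_population(population)
```

Distributing replications keeps every worker busy even when the population is smaller than the number of processors, which is one reason the paper found it preferable to distributing solutions.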
38

Karpas, Erez, Tomer Sagi, Carmel Domshlak, Avigdor Gal, Avi Mendelson, and Moshe Tennenholtz. "Data-Parallel Computing Meets STRIPS." Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (June 30, 2013): 474–80. http://dx.doi.org/10.1609/aaai.v27i1.8590.

Abstract:
The increased demand for distributed computations on “big data” has led to solutions such as SCOPE, DryadLINQ, Pig, and Hive, which allow the user to specify queries in an SQL-like language, enriched with sets of user-defined operators. The lack of exact semantics for user-defined operators interferes with the query optimization process, thus putting the burden of suggesting, at least partial, query plans on the user. In an attempt to ease this burden, we propose a formal model that allows for data-parallel program synthesis (DPPS) in a semantically well-defined manner. We show that this model generalizes existing frameworks for data-parallel computation, while providing the flexibility of query plan generation that is currently absent from these frameworks. In particular, we show how existing, off-the-shelf, AI planning tools can be used for solving DPPS tasks.
39

Liu, Fudong, Zheng Shan, and Yihang Chen. "Parallel Nonnegative Matrix Factorization with Manifold Regularization." Journal of Electrical and Computer Engineering 2018 (2018): 1–10. http://dx.doi.org/10.1155/2018/6270816.

Abstract:
Nonnegative matrix factorization (NMF) decomposes a high-dimensional nonnegative matrix into the product of two reduced-dimensional nonnegative matrices. However, conventional NMF neither scales to large datasets, since it keeps all data in memory, nor preserves the geometrical structure of the data, which some practical tasks require. In this paper, we propose a parallel NMF with manifold regularization method (PNMF-M) to overcome these deficiencies by parallelizing the manifold-regularized NMF on a distributed computing system. In particular, PNMF-M distributes both data samples and factor matrices to multiple computing nodes instead of loading the whole dataset on a single node, and updates both factor matrices locally on each node. In this way, PNMF-M relieves the pressure of memory consumption for large-scale datasets and speeds up the computation through parallelization. For constructing the adjacency matrix in manifold regularization, we propose a two-step distributed graph construction method, which is proved to be equivalent to the batch construction method. Experimental results on popular text corpora and image datasets demonstrate that PNMF-M significantly improves both the scalability and the time efficiency of conventional NMF thanks to parallelization on a distributed computing system; meanwhile, it significantly enhances the representation ability of conventional NMF thanks to the incorporated manifold regularization.
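The core iteration that PNMF-M parallelizes is the standard multiplicative NMF update; a serial NumPy sketch (with manifold regularization and the distributed block layout omitted for brevity) might read:

```python
import numpy as np

# Multiplicative-update NMF: X (m x n, nonnegative) is approximated by
# W @ H with W (m x rank) and H (rank x n) nonnegative. In the distributed
# version, blocks of X's rows and the corresponding rows of W live on
# different nodes and are updated locally.
def nmf(X, rank, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    eps = 1e-9                                   # avoid division by zero
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)     # update H with W fixed
        W *= (X @ H.T) / (W @ H @ H.T + eps)     # update W with H fixed
    return W, H

X = np.random.default_rng(1).random((6, 5))      # toy nonnegative data
W, H = nmf(X, rank=2)
err = np.linalg.norm(X - W @ H)
```

Because the updates are elementwise multiplications, nonnegativity is preserved automatically, and each row block of W can be updated knowing only its own rows of X, which is what makes the distribution across nodes natural.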
40

Surendranath, H., and M. Dunbar. "Parallel Computing for Tire Simulations." Tire Science and Technology 39, no. 3 (September 1, 2011): 193–209. http://dx.doi.org/10.2346/1.3637743.

Abstract:
Over the last few decades, finite element analysis has become an integral part of the overall tire design process. Engineers need to perform a number of different simulations to evaluate new designs and study the effect of proposed design changes. However, tires pose formidable simulation challenges due to the presence of highly nonlinear rubber compounds, embedded reinforcements, complex tread geometries, rolling contact, and large deformations. Accurate simulation requires careful consideration of these factors, resulting in extensive turnaround times that often prolong the design cycle. Therefore, it is extremely critical to explore means of reducing the turnaround time while producing reliable results. Compute clusters have recently become a cost-effective means of performing high performance computing (HPC). Distributed memory parallel solvers designed to take advantage of compute clusters have become increasingly popular. In this paper, we examine the use of HPC for various tire simulations and demonstrate how it can significantly reduce simulation turnaround time. Abaqus/Standard is used for routine tire simulations like footprint and steady-state rolling. Abaqus/Explicit is used for transient rolling and hydroplaning simulations. The run times and scaling data corresponding to models of various sizes and complexity are presented.
41

Gorodnyaya, Lidia. "FUNCTIONAL PROGRAMMING FOR PARALLEL COMPUTING." Bulletin of the Novosibirsk Computing Center. Series: Computer Science, no. 45 (2021): 29–48. http://dx.doi.org/10.31144/bncc.cs.2542-1972.2021.n45.p29-48.

Abstract:
The paper is devoted to modern trends in the application of functional programming to the problems of organizing parallel computations. Functional programming is considered as a meta-paradigm for solving the problems of developing multi-threaded programs for multiprocessor complexes and distributed systems, as well as for solving the problems associated with rapid IT development. The semantic and pragmatic principles of functional programming and consequences of these principles are described. The paradigm analysis of programming languages and systems is used, which allows assessing their similarities and differences. Taking into account these features is necessary when predicting the course of application processes, as well as when planning the study and organization of program development. There are reasons to believe that functional programming is capable of improving program performance through its adaptability to modeling and prototyping. A variety of features and characteristics inherent in the development and debugging of long-lived parallel computing programs is shown. The author emphasizes the prospects of functional programming as a universal technique for solving complex problems burdened with difficult to verify and poorly compatible requirements. A brief outline of the requirements for a multiparadigm parallel programming language is given.
42

Gichamo, Tseganeh Z., and David G. Tarboton. "UEB parallel: Distributed snow accumulation and melt modeling using parallel computing." Environmental Modelling & Software 125 (March 2020): 104614. http://dx.doi.org/10.1016/j.envsoft.2019.104614.

43

Lin, Hong. "A Case Study of Teaching Parallel and Distributed Computing Topics on a Computer Cluster." Journal of Cases on Information Technology 16, no. 2 (April 2014): 58–71. http://dx.doi.org/10.4018/jcit.2014040105.

Abstract:
This paper presents the establishment of a cluster computing lab at a minority-serving institution that aims to provide computing resources to support the undergraduate computer science curriculum. The computing resources of the cluster are managed by a job distribution environment that allows users to upload, compile, and run their jobs. The job distribution software distributes the submitted jobs to the computing nodes of the cluster. The authors present a case study of using this platform to teach parallel and distributed computing topics in the operating systems course. An evaluation of the teaching effectiveness is presented thereafter.
44

Jones, Jeff. "Mechanisms Inducing Parallel Computation in a Model of Physarum polycephalum Transport Networks." Parallel Processing Letters 25, no. 01 (March 2015): 1540004. http://dx.doi.org/10.1142/s0129626415400046.

Abstract:
The giant amoeboid organism true slime mould Physarum polycephalum dynamically adapts its body plan in response to changing environmental conditions and its protoplasmic transport network is used to distribute nutrients within the organism. These networks are efficient in terms of network length and network resilience and are parallel approximations of a range of proximity graphs and plane division problems. The complex parallel distributed computation exhibited by this simple organism has since served as an inspiration for intensive research into distributed computing and robotics within the last decade. P. polycephalum may be considered as a spatially represented parallel unconventional computing substrate, but how can this ‘computer’ be programmed? In this paper we examine and catalogue individual low-level mechanisms which may be used to induce network formation and adaptation in a multi-agent model of P. polycephalum. These mechanisms include those intrinsic to the model (particle sensor angle, rotation angle, and scaling parameters) and those mediated by the environment (stimulus location, distance, angle, concentration, engulfment and consumption of nutrients, and the presence of simulated light irradiation, repellents and obstacles). The mechanisms induce a concurrent integration of chemoattractant and chemorepellent gradients diffusing within the 2D lattice upon which the agent population resides, stimulating growth, movement, morphological adaptation and network minimisation. Chemoattractant gradients, and their modulation by the engulfment and consumption of nutrients by the model population, represent an efficient outsourcing of spatial computation. 
The mechanisms may prove useful in understanding the search strategies and adaptation of distributed organisms within their environment, in understanding the minimal requirements for complex adaptive behaviours, and in developing methods of spatially programming parallel unconventional computers and robotic devices.
APA, Harvard, Vancouver, ISO, and other styles
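For readers unfamiliar with such models, the sensory stage of a single particle can be sketched roughly as follows. This is an illustrative approximation, not the paper's implementation: `sa`, `ra`, and `offset` stand for the sensor angle, rotation angle, and sensor offset distance mentioned in the abstract, and the lattice holds chemoattractant concentrations.

```python
# Rough sketch (not the paper's code) of one particle's sensory stage in a
# Jones-style multi-agent model: three sensors offset by the sensor angle
# sample a chemoattractant lattice, and the particle rotates by the
# rotation angle toward the strongest stimulus.
import math

def sense_and_rotate(lattice, x, y, heading, sa=math.radians(45),
                     ra=math.radians(45), offset=3):
    def sample(angle):
        # Sample the lattice at the sensor position, wrapping at the edges.
        sx = int(round(x + offset * math.cos(angle))) % len(lattice[0])
        sy = int(round(y + offset * math.sin(angle))) % len(lattice)
        return lattice[sy][sx]

    front = sample(heading)
    left = sample(heading - sa)
    right = sample(heading + sa)
    if front >= left and front >= right:
        return heading                       # keep moving forward
    return heading - ra if left > right else heading + ra
```

Repellents and simulated light can be folded into the same scheme by subtracting their gradients from the sampled values before the comparison.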
45

Wei, Chih-Chiang, and Tzu-Hao Chou. "Typhoon Quantitative Rainfall Prediction from Big Data Analytics by Using the Apache Hadoop Spark Parallel Computing Framework." Atmosphere 11, no. 8 (August 17, 2020): 870. http://dx.doi.org/10.3390/atmos11080870.

Full text
Abstract:
Situated in the main tracks of typhoons in the Northwestern Pacific Ocean, Taiwan frequently encounters disasters from heavy rainfall during typhoons. Accurate and timely typhoon rainfall prediction is an imperative topic that must be addressed. The purpose of this study was to develop a Hadoop Spark distributed framework based on big-data technology, to accelerate the computation of typhoon rainfall prediction models. This study used deep neural networks (DNNs) and multiple linear regressions (MLRs) in machine learning, to establish rainfall prediction models and evaluate rainfall prediction accuracy. The Hadoop Spark distributed cluster-computing framework was the big-data technology used. The Hadoop Spark framework consisted of the Hadoop Distributed File System, the MapReduce framework, and Spark, which was used as a new-generation technology to improve the efficiency of distributed computing. The research area was Northern Taiwan, which contains four surface observation stations used as the experimental sites. This study collected 271 typhoon events (from 1961 to 2017). The following results were obtained: (1) in machine-learning computation, prediction errors increased with prediction duration in the DNN and MLR models; and (2) the Hadoop Spark framework was faster than the standalone systems (a single I7 central processing unit (CPU) and a single E3 CPU). When complex computation is required in a model (e.g., DNN model parameter calibration), the big-data-based Hadoop Spark framework can be used to establish highly efficient computation environments. In summary, this study successfully used the big-data Hadoop Spark framework with machine learning to develop rainfall prediction models with effectively improved computing efficiency. Therefore, the proposed system can solve problems regarding real-time typhoon rainfall prediction with high timeliness and accuracy.
APA, Harvard, Vancouver, ISO, and other styles
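The MLR component lends itself to the map/reduce pattern such a framework exploits: each data partition contributes partial sums of the normal equations, and a reduce step combines them before solving. The following single-machine numpy sketch shows that general pattern only; it is an assumption about the technique, not the study's Spark code.

```python
# Sketch of distributing a multiple-linear-regression fit: each partition
# computes partial products X^T X and X^T y (the "map" step), and a
# "reduce" step sums them before solving the normal equations - the same
# pattern a Hadoop Spark cluster applies at scale.
import numpy as np

def partial_sums(X, y):
    # Map step: one partition's contribution to the normal equations.
    Xb = np.column_stack([np.ones(len(X)), X])   # prepend intercept column
    return Xb.T @ Xb, Xb.T @ y

def fit_mlr_distributed(partitions):
    # Reduce step: sum the partial products, then solve (X^T X) b = X^T y.
    sums = [partial_sums(X, y) for X, y in partitions]
    xtx = sum(s[0] for s in sums)
    xty = sum(s[1] for s in sums)
    return np.linalg.solve(xtx, xty)
```

Because the partial sums are small fixed-size matrices, only they (not the raw observations) need to cross the network, which is where the distributed speed-up comes from.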
46

Lavanya K., L. S. S. Reddy, and B. Eswara Reddy. "Distributed Based Serial Regression Multiple Imputation for High Dimensional Multivariate Data in Multicore Environment of Cloud." International Journal of Ambient Computing and Intelligence 10, no. 2 (April 2019): 63–79. http://dx.doi.org/10.4018/ijaci.2019040105.

Full text
Abstract:
Multiple imputation (MI) is predominantly applied in processes that handle large volumes of missing data. Multivariate data that follow traditional statistical models suffer when pertinent data are inadequately available, and insufficient high-dimensional multivariate data is a major hurdle for distributed computing research, which deals with the analysis of parallel input problems in cloud computing networks and the evaluation of high-performance computing. It is difficult to use parallel multiple imputation methods to achieve strong performance while allowing huge datasets to scale. This requires a credible data system and a decomposition strategy that partitions the workload with minimal data dependence, followed by moderate synchronization and low communication overhead so that parallel imputation methods can scale across more processes. The present article proposes several novel techniques for better efficiency. First, it suggests a distributed serial regression multiple imputation that improves the efficiency of the imputation task on high-dimensional multivariate normal data. Next, the process is run with three different parallel back ends: multiple imputation using a socket method to serve serial regression, a fork method to distribute work over workers, and the same workload in a dynamic structure with a load-balancing mechanism. Finally, the set of distributed MI methods is used to experimentally analyze the imputation scores across three probable scenarios in the range of 1:500.
Further, the study observes that, owing to the efficiency of the imputation methods, the data are handled proportionately across missing-data rates of 10% to 50%, low to high, for datasets of between 1,000 and 100,000 samples. The experiments, run in a cloud environment, demonstrate that good speed can be achieved by reducing repetitive communication between processors.
APA, Harvard, Vancouver, ISO, and other styles
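The regression-based imputation step can be sketched as follows. This is a simplified illustration of the general technique, not the article's algorithm: worker threads stand in for the socket/fork process back ends, and each of the `m` imputations fits a regression on the observed rows and fills the missing rows with predictions plus residual noise.

```python
# Hedged sketch of regression-based multiple imputation: m independently
# seeded imputations of a missing column, distributed over workers in the
# spirit of the article's socket/fork back ends (threads here for
# simplicity).
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def impute_once(args):
    # One imputation: fit on observed rows, fill missing rows with
    # predictions plus residual-scaled noise (seeded per imputation).
    X, y, seed = args                      # y contains NaNs to fill
    rng = np.random.default_rng(seed)
    obs = ~np.isnan(y)
    Xb = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(Xb[obs], y[obs], rcond=None)
    resid_sd = np.std(y[obs] - Xb[obs] @ coef)
    filled = y.copy()
    filled[~obs] = Xb[~obs] @ coef + rng.normal(0.0, resid_sd, (~obs).sum())
    return filled

def multiple_impute(X, y, m=5):
    # Distribute the m imputation tasks over a pool of workers.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(impute_once, [(X, y, s) for s in range(m)]))
```

Because each imputation is independent, the only synchronization point is collecting the `m` completed datasets, which matches the low-communication requirement the abstract emphasizes.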
47

Liddell, Heather M., D. Parkinson, G. S. Hodgson, and P. Dzwig. "Parallel Computing Applications and Financial Modelling." Scientific Programming 12, no. 2 (2004): 81–90. http://dx.doi.org/10.1155/2004/404575.

Full text
Abstract:
At Queen Mary, University of London, we have over twenty years of experience in Parallel Computing Applications, mostly on "massively parallel systems", such as the Distributed Array Processors (DAPs). The applications in which we were involved included design of numerical subroutine libraries, Finite Element software, graphics tools, the physics of organic materials, medical imaging, computer vision and more recently, Financial modelling. Two of the projects related to the latter are described in this paper, namely Portfolio Optimisation and Financial Risk Assessment.
APA, Harvard, Vancouver, ISO, and other styles
48

Seredyński, Franciszek. "New Trends in Parallel and Distributed Evolutionary Computing." Fundamenta Informaticae 35, no. 1-4 (1998): 211–30. http://dx.doi.org/10.3233/fi-1998-35123412.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Hanuliak, Michal. "Unified Analytical Models of Parallel and Distributed Computing." American Journal of Networks and Communications 3, no. 1 (2014): 1. http://dx.doi.org/10.11648/j.ajnc.20140301.11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Lathia, M. "A Useful Resource for Parallel and Distributed Computing." IEEE Distributed Systems Online 6, no. 4 (April 2005): 4. http://dx.doi.org/10.1109/mdso.2005.19.

Full text
APA, Harvard, Vancouver, ISO, and other styles