Journal articles on the topic 'Large-scale parallel simulations'

Listed below are the top 50 journal articles for research on the topic 'Large-scale parallel simulations.' Abstracts are reproduced where they were available in the source metadata.



1. Kwon, Sung Jin, Young Min Lee, and Se Young Im. "Parallel Computation of Large-Scale Molecular Dynamics Simulations." Key Engineering Materials 326–328 (December 2006): 341–44. http://dx.doi.org/10.4028/www.scientific.net/kem.326-328.341.

Abstract:
Large-scale parallel computation is extremely important for MD (molecular dynamics) simulation, particularly in dealing with atomistic systems of realistic size, comparable to the macroscopic continuum scale. We present a new approach for the parallel computation of MD simulations. The entire system domain under consideration is divided into many Eulerian subdomains, each of which is surrounded by its own buffer layer and assigned its own processor. This leads to efficient tracking of each molecule, even when molecules move out of their subdomains. Several numerical examples are provided to demonstrate the effectiveness of this computation scheme.
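
The Eulerian subdomain-plus-buffer bookkeeping described above is easy to illustrate at toy scale. The sketch below is a hypothetical one-dimensional stand-in, not the authors' implementation: it bins particles into fixed subdomains, lists the buffer-layer members of a subdomain, and flags particles whose owner changed after a streaming step (the migration the real scheme handles between processors).

```python
import numpy as np

# Illustrative 1D decomposition into fixed Eulerian subdomains with buffer
# layers; all names and parameters are invented for the sketch.
L = 100.0            # domain length
n_sub = 4            # one subdomain per processor in the real scheme
h = L / n_sub        # subdomain width
r_buf = 2.5          # buffer-layer thickness (of order the cutoff radius)

def assign(x):
    """Owning subdomain index for each particle position."""
    return np.minimum((x // h).astype(int), n_sub - 1)

def buffer_members(x, s):
    """Indices of particles inside subdomain s's buffer layer."""
    lo, hi = s * h, (s + 1) * h
    inside = (x >= lo) & (x < hi)
    near = (x >= lo - r_buf) & (x < hi + r_buf)
    return np.where(near & ~inside)[0]

rng = np.random.default_rng(0)
x = rng.uniform(0.0, L, 1000)            # particle positions
owner = assign(x)                        # cheap O(1) tracking per particle
halo = buffer_members(x, 0)              # ghosts subdomain 0 must receive

x = np.clip(x + rng.normal(0.0, 0.5, x.size), 0.0, L - 1e-9)  # one step
moved = np.where(assign(x) != owner)[0]  # these migrate between processors
owner = assign(x)
```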

2. Eller, Paul R., Jing-Ru C. Cheng, Hung V. Nguyen, and Robert S. Maier. "Improving parallel performance of large-scale watershed simulations." Procedia Computer Science 1, no. 1 (May 2010): 801–8. http://dx.doi.org/10.1016/j.procs.2010.04.086.

3. Walther, Jens H., and Ivo F. Sbalzarini. "Large-scale parallel discrete element simulations of granular flow." Engineering Computations 26, no. 6 (August 21, 2009): 688–97. http://dx.doi.org/10.1108/02644400910975478.

4. Berrone, Stefano, Sandra Pieraccini, Stefano Scialò, and Fabio Vicini. "A Parallel Solver for Large Scale DFN Flow Simulations." SIAM Journal on Scientific Computing 37, no. 3 (January 2015): C285–C306. http://dx.doi.org/10.1137/140984014.

5. Fujimoto, Y., N. Fukuda, and T. Akabane. "Massively parallel architectures for large scale neural network simulations." IEEE Transactions on Neural Networks 3, no. 6 (1992): 876–88. http://dx.doi.org/10.1109/72.165590.

6. Cytowski, Maciej, and Zuzanna Szymanska. "Large-Scale Parallel Simulations of 3D Cell Colony Dynamics." Computing in Science & Engineering 16, no. 5 (September 2014): 86–95. http://dx.doi.org/10.1109/mcse.2014.2.

7. Kurowski, Krzysztof, Tomasz Piontek, Piotr Kopta, Mariusz Mamoński, and Bartosz Bosak. "Parallel Large Scale Simulations in the PL-Grid Environment." Computational Methods in Science and Technology Special Issue, no. 1 (2010): 47–56. http://dx.doi.org/10.12921/cmst.2010.si.01.47-56.

8. Polizzi, Eric, and Ahmed Sameh. "Parallel Algorithms for Large-Scale Nanoelectronics Simulations Using NESSIE." Journal of Computational Electronics 3, no. 3–4 (October 2004): 363–66. http://dx.doi.org/10.1007/s10825-004-7078-1.

9. Flanigan, M., and P. Tamayo. "Parallel cluster labeling for large-scale Monte Carlo simulations." Physica A: Statistical Mechanics and its Applications 215, no. 4 (May 1995): 461–80. http://dx.doi.org/10.1016/0378-4371(95)00019-4.

10. Radeke, Charles A., Benjamin J. Glasser, and Johannes G. Khinast. "Large-scale powder mixer simulations using massively parallel GPU architectures." Chemical Engineering Science 65, no. 24 (December 2010): 6435–42. http://dx.doi.org/10.1016/j.ces.2010.09.035.

11. Kadau, Kai, Timothy C. Germann, and Peter S. Lomdahl. "Large-Scale Molecular-Dynamics Simulation of 19 Billion Particles." International Journal of Modern Physics C 15, no. 01 (January 2004): 193–201. http://dx.doi.org/10.1142/s0129183104005590.

Abstract:
We have performed parallel large-scale molecular-dynamics simulations on the QSC machine at Los Alamos. The good scalability of the SPaSM code is demonstrated, together with its capability of efficient data analysis for enormous system sizes of up to 19,000,416,964 particles. Furthermore, we introduce a newly developed graphics package that renders, in a very efficient parallel way, the huge number of spheres necessary for the visualization of atomistic simulations. These abilities pave the way for future atomistic large-scale simulations of physical problems with system sizes on the μm scale.

12. Stagg, A. K., D. D. Cline, G. F. Carey, and J. N. Shadid. "Parallel, scalable parabolized Navier-Stokes solver for large-scale simulations." AIAA Journal 33, no. 1 (January 1995): 102–8. http://dx.doi.org/10.2514/3.12338.

13. MacFarland, Tom, H. M. P. Couchman, F. R. Pearce, and Jakob Pichlmeier. "A new parallel code for very large-scale cosmological simulations." New Astronomy 3, no. 8 (December 1998): 687–705. http://dx.doi.org/10.1016/s1384-1076(98)00033-5.

14. Uehara, H., S. Kawahara, N. Ohno, M. Furuichi, F. Araki, and A. Kageyama. "MovieMaker: a parallel movie-making software for large-scale simulations." Journal of Plasma Physics 72, no. 06 (December 2006): 841. http://dx.doi.org/10.1017/s0022377806004995.

15. Münkel, Christian. "Large Scale Simulations of the Kinetic Ising Model." International Journal of Modern Physics C 04, no. 06 (December 1993): 1137–45. http://dx.doi.org/10.1142/s0129183193000896.

Abstract:
We present Monte Carlo simulation results for the dynamical critical exponent z of the two- and three-dimensional kinetic Ising model. The z values were calculated from the magnetization relaxation from an ordered state into the equilibrium state at Tc for very large systems with up to (169984)^2 and (3072)^3 spins. To our knowledge, these are the largest Ising systems simulated to date. We also report the successful simulation of very large lattices on a massively parallel MIMD computer with high speedups of approximately 1000 and an efficiency of about 0.93.
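
The relaxation experiment behind these z values can be reproduced at toy scale. The sketch below is a serial single-spin-flip Metropolis version on a tiny 2D lattice (the paper's parallel dynamics may differ in detail): the lattice starts fully ordered, and the magnetization decay at Tc, from which z is extracted via m(t) ~ t^(-β/(νz)), is recorded.

```python
import numpy as np

# Toy 2D kinetic Ising relaxation from the ordered state; illustrative only.
rng = np.random.default_rng(1)
L = 64
Tc = 2.0 / np.log(1.0 + np.sqrt(2.0))        # exact 2D critical temperature

s = np.ones((L, L), dtype=int)               # ordered start, m(0) = 1

def sweep(s):
    """One Monte Carlo sweep of single-spin-flip Metropolis updates."""
    for _ in range(s.size):
        i, j = rng.integers(L), rng.integers(L)
        nb = s[(i + 1) % L, j] + s[(i - 1) % L, j] \
           + s[i, (j + 1) % L] + s[i, (j - 1) % L]
        dE = 2 * s[i, j] * nb                # energy cost of flipping s[i, j]
        if dE <= 0 or rng.random() < np.exp(-dE / Tc):
            s[i, j] = -s[i, j]

m = []
for t in range(50):
    sweep(s)
    m.append(abs(s.mean()))                  # decays as t**(-beta/(nu*z)) at Tc
```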

16. Madison, Richard, Abhinandan Jain, Christopher Lim, and Mark Maimone. "Performance Characterization of a Rover Navigation Algorithm Using Large-Scale Simulation." Scientific Programming 15, no. 2 (2007): 95–105. http://dx.doi.org/10.1155/2007/638280.

Abstract:
Autonomous rover navigation is a critical technology for robotic exploration of Mars. Simulation allows more extensive testing of such technologies than would be possible with hardware test beds alone. A large number of simulations, running in parallel, can test an algorithm under many different operating conditions to quickly identify the operational envelope of the technology and identify failure modes that were not discovered in more limited testing. GESTALT is the autonomous navigation algorithm developed for NASA's Mars rovers. ROAMS is a rover simulator developed to support the Mars program. We have integrated GESTALT into ROAMS to test closed-loop, autonomous navigation in simulation. We have developed a prototype capability to run many copies of ROAMS in parallel on a supercomputer, varying input parameters to rapidly explore GESTALT's performance across a parameter space. Using these tools, we have demonstrated that large scale simulation can identify performance limits and unexpected behaviors in an algorithm. Such parallel simulation was able to test approximately 500 parameter combinations in the time required for a single test on a hardware test bed.
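
The "many copies in parallel, varying input parameters" pattern is straightforward to reproduce with stock tooling. The sketch below is a generic stand-in (ROAMS and GESTALT are not assumed available; run_trial and its parameters are invented): each worker process runs one parameter combination, and the collected results are scanned for failures.

```python
from itertools import product
from multiprocessing import Pool

def run_trial(params):
    """Hypothetical stand-in for one closed-loop navigation simulation."""
    slope_deg, rock_density = params
    # ... launch the simulator with these inputs and score the traverse ...
    score = slope_deg * rock_density         # placeholder success metric
    return {"slope_deg": slope_deg, "rock_density": rock_density,
            "score": score}

if __name__ == "__main__":
    # Cartesian parameter space, one simulation per combination.
    grid = list(product([5, 10, 15, 20, 25], [0.1, 0.2, 0.4, 0.8]))
    with Pool(processes=8) as pool:          # many copies run in parallel
        results = pool.map(run_trial, grid)
    # Flag combinations outside the operational envelope.
    failures = [r for r in results if r["score"] > 8]
    print(len(results), "runs,", len(failures), "flagged")
```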

17. Heng, Yi, Lars Hoffmann, Sabine Griessbach, Thomas Rößler, and Olaf Stein. "Inverse transport modeling of volcanic sulfur dioxide emissions using large-scale simulations." Geoscientific Model Development 9, no. 4 (May 2, 2016): 1627–45. http://dx.doi.org/10.5194/gmd-9-1627-2016.

Abstract:
An inverse transport modeling approach based on the concepts of sequential importance resampling and parallel computing is presented to reconstruct altitude-resolved time series of volcanic emissions, which often cannot be obtained directly with current measurement techniques. A new inverse modeling and simulation system, which implements the inversion approach with the Lagrangian transport model Massive-Parallel Trajectory Calculations (MPTRAC), is developed to provide reliable transport simulations of volcanic sulfur dioxide (SO2). In the inverse modeling system, MPTRAC is used to perform two types of simulations, i.e., unit simulations for the reconstruction of volcanic emissions and final forward simulations. Both types of transport simulations are based on wind fields of the ERA-Interim meteorological reanalysis of the European Centre for Medium-Range Weather Forecasts. The reconstruction of altitude-dependent SO2 emission time series is also based on Atmospheric InfraRed Sounder (AIRS) satellite observations. A case study for the eruption of the Nabro volcano, Eritrea, in June 2011, with complex emission patterns, is considered for method validation. Meteosat Visible and InfraRed Imager (MVIRI) near-real-time imagery data are used to validate the temporal development of the reconstructed emissions. Furthermore, the altitude distributions of the emission time series are compared with top and bottom altitude measurements of aerosol layers obtained by the Cloud–Aerosol Lidar with Orthogonal Polarization (CALIOP) and the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) satellite instruments. The final forward simulations provide detailed spatial and temporal information on the SO2 distributions of the Nabro eruption. Using the critical success index (CSI), the simulation results are evaluated against the AIRS observations. Compared to the results obtained with an assumption of a constant flux of SO2 emissions, our inversion approach improves the mean CSI value from 8.1% to 21.4% and the maximum CSI value from 32.3% to 52.4%. The simulation results also agree well with those reported in other studies. Our new inverse modeling and simulation system is expected to become a useful tool for studying other volcanic eruption events as well.

18. Meng, Wanwan, Yongguang Cheng, Jiayang Wu, Zhiyan Yang, Yunxian Zhu, and Shuai Shang. "GPU Acceleration of Hydraulic Transient Simulations of Large-Scale Water Supply Systems." Applied Sciences 9, no. 1 (December 27, 2018): 91. http://dx.doi.org/10.3390/app9010091.

Abstract:
Simulating hydraulic transients in ultra-long water (oil, gas) transmission or large-scale distribution systems is time-consuming, and exploring ways to improve simulation efficiency is an essential research direction. The parallel implementation of the method of characteristics (MOC) on graphics processing unit (GPU) chips is a promising approach for accelerating these simulations, because GPUs have great parallelization ability for massive but simple computations, and the explicit and local features of MOC suit GPUs well. In this paper, we propose and verify a GPU implementation of MOC on a single chip for more efficient simulations of hydraulic transients. Details of the GPU-MOC parallel strategies are introduced, and the accuracy and efficiency of the proposed method are verified by simulating the benchmark single-pipe water hammer problem. The transient processes of a large-scale water distribution system and a long-distance water transmission system are simulated to investigate the computing capability of the proposed method. The results show that the GPU-MOC method can achieve significant performance gains, with speedup ratios reaching into the hundreds compared to the traditional method. This preliminary work demonstrates that GPU-MOC parallel computing has great prospects in practical applications with large computing loads.
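
The reason MOC maps so well onto a GPU is that each interior node at the new time level depends only on its two neighbors at the old one. Below is a minimal frictional single-pipe update in the standard Wylie–Streeter form, written in vectorized NumPy as a stand-in for a GPU kernel; the parameters are invented and the boundary conditions are simplified placeholders, not the paper's scheme.

```python
import numpy as np

# Frictional MOC interior-node update for a single pipe. Every node depends
# only on its two neighbors at the previous time level: data-parallel.
a, g, Dp, f = 1000.0, 9.81, 0.5, 0.02     # wave speed, gravity, diameter, friction
A = np.pi * Dp**2 / 4.0                   # pipe cross-sectional area
N, Lp = 101, 1000.0                       # nodes, pipe length
dx = Lp / (N - 1); dt = dx / a            # Courant condition dt = dx/a
B = a / (g * A)                           # characteristic impedance
R = f * dx / (2.0 * g * Dp * A**2)        # friction coefficient

H = np.full(N, 100.0)                     # initial head
Q = np.full(N, 0.1)                       # initial flow

for _ in range(200):
    CP = H[:-2] + Q[:-2] * (B - R * np.abs(Q[:-2]))   # C+ from node i-1
    CM = H[2:]  - Q[2:]  * (B - R * np.abs(Q[2:]))    # C- from node i+1
    H[1:-1] = 0.5 * (CP + CM)             # independent update per node
    Q[1:-1] = 0.5 * (CP - CM) / B
    H[0], Q[-1] = 100.0, 0.1              # placeholder boundary conditions
```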

19. Saied, F., and G. Mahinthakumar. "Efficient parallel multigrid based solvers for large scale groundwater flow simulations." Computers & Mathematics with Applications 35, no. 7 (April 1998): 45–54. http://dx.doi.org/10.1016/s0898-1221(98)00031-5.

20. Giacalone, Joe. "Large-Scale Hybrid Simulations of Particle Acceleration at a Parallel Shock." Astrophysical Journal 609, no. 1 (July 2004): 452–58. http://dx.doi.org/10.1086/421043.

21. Heng, Y., L. Hoffmann, S. Griessbach, T. Rößler, and O. Stein. "Inverse transport modeling of volcanic sulfur dioxide emissions using large-scale ensemble simulations." Geoscientific Model Development Discussions 8, no. 10 (October 21, 2015): 9103–46. http://dx.doi.org/10.5194/gmdd-8-9103-2015.

Abstract:
An inverse transport modeling approach based on the concepts of sequential importance resampling and parallel computing is presented to reconstruct altitude-resolved time series of volcanic emissions, which often cannot be obtained directly with current measurement techniques. A new inverse modeling and simulation system, which implements the inversion approach with the Lagrangian transport model Massive-Parallel Trajectory Calculations (MPTRAC), is developed to provide reliable transport simulations of volcanic sulfur dioxide (SO2). In the inverse modeling system, MPTRAC is used to perform two types of simulations, i.e., large-scale ensemble simulations for the reconstruction of volcanic emissions and final transport simulations. The transport simulations are based on wind fields of the ERA-Interim meteorological reanalysis of the European Centre for Medium-Range Weather Forecasts. The reconstruction of altitude-dependent SO2 emission time series is also based on Atmospheric Infrared Sounder (AIRS) satellite observations. A case study for the eruption of the Nabro volcano, Eritrea, in June 2011, with complex emission patterns, is considered for method validation. Meteosat Visible and InfraRed Imager (MVIRI) near-real-time imagery data are used to validate the temporal development of the reconstructed emissions. Furthermore, the altitude distributions of the emission time series are compared with top and bottom altitude measurements of aerosol layers obtained by the Cloud–Aerosol Lidar with Orthogonal Polarization (CALIOP) and the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) satellite instruments. The final transport simulations provide detailed spatial and temporal information on the SO2 distributions of the Nabro eruption. The SO2 column densities from the simulations are in good qualitative agreement with the AIRS observations. Our new inverse modeling and simulation system is expected to become a useful tool for studying other volcanic eruption events as well.

22. Xiang, Yue, Peng Wang, Bo Yu, and Dongliang Sun. "GPU-accelerated hydraulic simulations of large-scale natural gas pipeline networks based on a two-level parallel process." Oil & Gas Science and Technology – Revue d'IFP Energies nouvelles 75 (2020): 86. http://dx.doi.org/10.2516/ogst/2020076.

Abstract:
The numerical simulation efficiency for large-scale natural gas pipeline networks is usually unsatisfactory. In this paper, Graphics Processing Unit (GPU)-accelerated hydraulic simulations for large-scale natural gas pipeline networks are presented. First, based on the Decoupled Implicit Method for Efficient Network Simulation (DIMENS) presented in our previous study, a novel two-level parallel simulation process and the corresponding parallel numerical method for hydraulic simulations of natural gas pipeline networks are proposed. Then, the implementation of the two-level parallel simulation on the GPU is introduced in detail. Finally, numerical experiments are provided to test the performance of the proposed method. The results show that the proposed method achieves notable speedup: for five large-scale pipe networks, compared with the well-known commercial simulation software SPS, the speedup ratio is up to 57.57 with comparable calculation accuracy. More encouragingly, the proposed method adapts well to large pipeline networks: the larger the network, the larger the speedup ratio. The speedup of the GPU method depends approximately linearly on the total number of discrete points in the network.

23. Hoefler, Torsten, Timo Schneider, and Andrew Lumsdaine. "The Effect of Network Noise on Large-Scale Collective Communications." Parallel Processing Letters 19, no. 04 (December 2009): 573–93. http://dx.doi.org/10.1142/s0129626409000420.

Abstract:
The effect of operating system (OS) noise on the performance of large-scale applications is a growing concern, and ameliorating the influence of OS noise is a subject of active research. A related problem is that of network noise, which arises from the shared use of the interconnection network by parallel processes of different allocations or by other background activities. To characterize the effect of network noise on parallel applications, we conducted a series of experiments with a specially crafted benchmark and simulations. Experimental results show a decrease in the communication performance of a parallel reduction operation by a factor of 2 on 246 nodes on an InfiniBand fat tree and by several orders of magnitude on a BlueGene/P torus. Simulations show how the influence of network noise grows with system size. Although network noise is not as well studied as OS noise, our results clearly show that it is an important factor that must be considered when running and analyzing large-scale applications.

24. Zhang, Huajian, Xiao-Wei Guo, Chao Li, Qiao Liu, Hanwen Xu, and Jie Liu. "Accelerated Parallel Numerical Simulation of Large-Scale Nuclear Reactor Thermal Hydraulic Models by Renumbering Methods." Applied Sciences 12, no. 20 (October 11, 2022): 10193. http://dx.doi.org/10.3390/app122010193.

Abstract:
Numerical simulation of the thermal hydraulics of nuclear reactors is of wide interest, but large-scale fluid simulation is still prohibitive due to the complexity of the components and the huge computational effort. Some open-source CFD programs still lag well behind commercial CFD programs in terms of the comprehensiveness of their physical models, computational accuracy, and computational efficiency. It is therefore necessary to improve the computational performance of the in-house CFD software YHACT (the parallel analysis code of thermal hydraulics) so that it can handle large-scale mesh data with good parallel efficiency. In this paper, we form a unified framework of meshing and mesh renumbering for solving fluid dynamics problems with unstructured meshes. The effective Greedy, RCM (reverse Cuthill–McKee), and CQ (cell quotient) grid renumbering algorithms are integrated into YHACT. An important judgment metric, the median point average distance (MDMP), is applied as a discriminant of sparse-matrix quality to select the renumbering method with the best effect for each physical model. Finally, a parallel test of the turbulence model with 39.5 million grid volumes is performed on a pressurized water reactor engineering case with 3×3 rod bundles. The computational results before and after renumbering are also compared to verify the robustness of the program. Experiments show that the CFD framework integrated in this paper correctly simulates the thermal hydraulics of large nuclear reactors. The parallel size of the program reaches a maximum of 3072 processes, and the renumbering acceleration effect reaches its maximum, 56.72%, at a parallel scale of 1536 processes. This provides a basis for our future implementation of open-source CFD software that supports efficient large-scale parallel simulations.
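
Of the renumbering algorithms named above, reverse Cuthill–McKee is available off the shelf in SciPy, which makes its effect on sparse-matrix structure easy to demonstrate. The sketch below uses matrix bandwidth as a simple quality proxy (the paper's MDMP metric is its own construction) on an invented toy connectivity matrix:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

# Toy symmetric mesh-connectivity pattern; entries mark adjacent cells.
rows = [0, 0, 1, 1, 2, 3, 3, 4]
cols = [0, 4, 1, 2, 2, 3, 4, 4]
A = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(5, 5))
A = A + A.T                                   # symmetrize the pattern

perm = reverse_cuthill_mckee(A, symmetric_mode=True)  # new cell ordering
A_renum = A[perm][:, perm]                    # permute rows and columns

def bandwidth(M):
    """Largest distance of a nonzero entry from the diagonal."""
    r, c = M.nonzero()
    return int(np.max(np.abs(r - c)))

print(bandwidth(A), "->", bandwidth(A_renum))  # bandwidth shrinks after RCM
```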

25. Sakai, Ko, Paul Sajda, Shih-Cheng Yen, and Leif H. Finkel. "Coarse-grain parallel computing for very large scale neural simulations in the NEXUS simulation environment." Computers in Biology and Medicine 27, no. 4 (July 1997): 257–66. http://dx.doi.org/10.1016/s0010-4825(96)00029-7.

26. Wang, Joseph, Yong Cao, Raed Kafafy, and Viktor Decyk. "Electric Propulsion Plume Simulations Using Parallel Computer." Scientific Programming 15, no. 2 (2007): 83–94. http://dx.doi.org/10.1155/2007/272431.

Abstract:
A parallel, three-dimensional electrostatic PIC code is developed for large-scale electric propulsion simulations on parallel supercomputers. This code uses a newly developed immersed-finite-element particle-in-cell (IFE-PIC) algorithm designed to handle complex boundary conditions accurately while maintaining the computational speed of the standard PIC code. Domain decomposition is used in both the field solve and the particle push to divide the computation among processors. Two simulation studies are presented to demonstrate the capability of the code. The first is a full particle simulation of a near-thruster plume using the real ion-to-electron mass ratio. The second is a high-resolution simulation of multiple ion thruster plume interactions for a realistic spacecraft, using a domain enclosing the entire solar array panel. Performance benchmarks show that IFE-PIC achieves a high parallel efficiency of ≥90%.

27. Cytowski, Maciej, and Zuzanna Szymanska. "Large-Scale Parallel Simulations of 3D Cell Colony Dynamics: The Cellular Environment." Computing in Science & Engineering 17, no. 5 (September 2015): 44–48. http://dx.doi.org/10.1109/mcse.2015.66.

28. Nakajima, Kengo. "Large-scale Simulations of 3D Groundwater Flow Using Parallel Geometric Multigrid Method." Procedia Computer Science 18 (2013): 1265–74. http://dx.doi.org/10.1016/j.procs.2013.05.293.

29. Zang, Tianwu, Linglin Yu, Chong Zhang, and Jianpeng Ma. "Parallel continuous simulated tempering and its applications in large-scale molecular simulations." Journal of Chemical Physics 141, no. 4 (July 28, 2014): 044113. http://dx.doi.org/10.1063/1.4890038.

30. Wang, Kai, Sang-Bae Kim, Jun Zhang, Kengo Nakajima, and Hiroshi Okuda. "Global and localized parallel preconditioning techniques for large scale solid Earth simulations." Future Generation Computer Systems 19, no. 4 (May 2003): 443–56. http://dx.doi.org/10.1016/s0167-739x(03)00030-x.

31. Lin, Y., F. Wang, and B. Liu. "Random number generators for large-scale parallel Monte Carlo simulations on FPGA." Journal of Computational Physics 360 (May 2018): 93–103. http://dx.doi.org/10.1016/j.jcp.2018.01.029.

32. Paik, Seung Hoon, Ji Joong Moon, Seung Jo Kim, and M. Lee. "Parallel performance of large scale impact simulations on Linux cluster super computer." Computers & Structures 84, no. 10–11 (April 2006): 732–41. http://dx.doi.org/10.1016/j.compstruc.2005.11.013.

33. Stadler, J., R. Mikulla, and H. R. Trebin. "IMD: A Software Package for Molecular Dynamics Studies on Parallel Computers." International Journal of Modern Physics C 08, no. 05 (October 1997): 1131–40. http://dx.doi.org/10.1142/s0129183197000990.

Abstract:
We report on the implementation and performance of the program IMD, designed for short-range molecular dynamics simulations on massively parallel computers. After a short explanation of the cell-based algorithm, its extension to parallel computers as well as two variants of the communication scheme are discussed. We provide performance numbers for simulations of different sizes and compare them with values found in the literature. Finally, we describe two applications: a very large scale simulation with more than 1.23×10^9 atoms, to our knowledge the largest published MD simulation to date, and a simulation of a crack propagating in a two-dimensional quasicrystal.

34. Melapudi, Vikram, Balasubramaniam Shanker, Sudip Seal, and Srinivas Aluru. "A Scalable Parallel Wideband MLFMA for Efficient Electromagnetic Simulations on Large Scale Clusters." IEEE Transactions on Antennas and Propagation 59, no. 7 (July 2011): 2565–77. http://dx.doi.org/10.1109/tap.2011.2152311.

35. Uhlherr, Alfred, Stephen J. Leak, Nadia E. Adam, Per E. Nyberg, Manolis Doxastakis, Vlasis G. Mavrantzas, and Doros N. Theodorou. "Large scale atomistic polymer simulations using Monte Carlo methods for parallel vector processors." Computer Physics Communications 144, no. 1 (March 2002): 1–22. http://dx.doi.org/10.1016/s0010-4655(01)00464-7.

36. Nomura, Ken-ichi, Rajiv K. Kalia, Aiichiro Nakano, and Priya Vashishta. "A scalable parallel algorithm for large-scale reactive force-field molecular dynamics simulations." Computer Physics Communications 178, no. 2 (January 2008): 73–87. http://dx.doi.org/10.1016/j.cpc.2007.08.014.

37. Oran, Elaine S., and Jay P. Boris. "Compressible Flow Simulations on a Massively Parallel Computer." International Journal of Modern Physics C 02, no. 01 (March 1991): 430–36. http://dx.doi.org/10.1142/s0129183191000640.

Abstract:
This paper describes model development and computations of multidimensional, highly compressible, time-dependent reacting flows on a Connection Machine (CM). We briefly discuss computational timings compared to Cray YMP speeds, optimal use of the available hardware and software, treatment of boundary conditions, and the parallel solution of terms representing chemical reactions. In addition, we show the practical use of the system for large-scale reacting and nonreacting flows.

38. McMillan, W., M. Woodgate, B. E. Richards, B. J. Gribben, K. J. Badcock, C. A. Masson, and F. Cantariti. "Demonstration of cluster computing for three-dimensional CFD simulations." Aeronautical Journal 103, no. 1027 (September 1999): 443–47. http://dx.doi.org/10.1017/s0001924000028037.

Abstract:
Motivated by a lack of sufficient local and national computing facilities for computational fluid dynamics simulations, the Affordable Systems Computing Unit (ASCU) was established to investigate low-cost alternatives. The options considered have all involved cluster computing, a term which refers to the grouping of a number of components into a managed system capable of running both serial and parallel applications. The present work aims to demonstrate the utility of commodity processors for dedicated batch processing. The performance of the cluster has proved extremely cost-effective, enabling large three-dimensional flow simulations on a computer costing less than £25k sterling at current market prices. The experience gained with this system in terms of single-node performance, message passing, and parallel performance will be discussed. In particular, comparisons with the performance of other systems will be made. Several medium- to large-scale CFD simulations performed using the new cluster will be presented to demonstrate the potential of commodity-processor-based parallel computers for aerodynamic simulation.

39. Pesce, Lorenzo L., Hyong C. Lee, Mark Hereld, Sid Visser, Rick L. Stevens, Albert Wildeman, and Wim van Drongelen. "Large-Scale Modeling of Epileptic Seizures: Scaling Properties of Two Parallel Neuronal Network Simulation Algorithms." Computational and Mathematical Methods in Medicine 2013 (2013): 1–10. http://dx.doi.org/10.1155/2013/182145.

Abstract:
Our limited understanding of the relationship between the behavior of individual neurons and large neuronal networks is an important limitation in current epilepsy research and may be one of the main causes of our inadequate ability to treat it. Addressing this problem directly via experiments is impossibly complex; thus, we have been developing and studying medium-large-scale simulations of detailed neuronal networks to guide us. Flexibility in the connection schemas and a complete description of the cortical tissue seem necessary for this purpose. In this paper we examine some of the basic issues encountered in these multiscale simulations. We have determined the detailed behavior of two such simulators on parallel computer systems. The observed memory and computation-time scaling behavior for a distributed memory implementation were very good over the range studied, both in terms of network sizes (2,000 to 400,000 neurons) and processor pool sizes (1 to 256 processors). Our simulations required between a few megabytes and about 150 gigabytes of RAM and lasted between a few minutes and about a week, well within the capability of most multinode clusters. Therefore, simulations of epileptic seizures on networks with millions of cells should be feasible on current supercomputers.

40. Xia, Yong, Kuanquan Wang, and Henggui Zhang. "Parallel Optimization of 3D Cardiac Electrophysiological Model Using GPU." Computational and Mathematical Methods in Medicine 2015 (2015): 1–10. http://dx.doi.org/10.1155/2015/862735.

Abstract:
Large-scale 3D virtual heart model simulations are highly demanding in computational resources. This poses a major challenge for traditional CPU-based computing environments, which either cannot meet the full computational demand or are not easily available due to expensive costs. The GPU, as a parallel computing environment, therefore provides an alternative for solving the large-scale computational problems of whole-heart modeling. In this study, using a 3D sheep atrial model as a test bed, we developed a GPU-based simulation algorithm to simulate the conduction of electrical excitation waves in the 3D atria. In the GPU algorithm, the multicellular tissue model is split into two components: the single-cell model (ordinary differential equations) and the diffusion term of the monodomain model (partial differential equation). This decoupling enabled realization of the GPU parallel algorithm. Furthermore, several optimization strategies were proposed based on the features of the virtual heart model, enabling a 200-fold speedup compared to a CPU implementation. In conclusion, an optimized GPU algorithm has been developed that provides an economic and powerful platform for 3D whole-heart simulations.
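
The ODE/PDE decoupling described above is a standard operator-splitting pattern. The sketch below mimics it on a 2D toy grid with a FitzHugh–Nagumo-style reaction standing in for the real atrial cell model (all parameters are invented); each of the two update stages maps naturally onto its own GPU kernel, and the NumPy version only illustrates the structure.

```python
import numpy as np

# Operator-splitting sketch of one monodomain time step: an independent
# per-cell reaction ODE update, then a diffusion update. Toy parameters.
nx = ny = 64
dt, dx, D = 0.02, 0.25, 0.1
V = np.zeros((nx, ny)); V[28:36, 28:36] = 1.0   # stimulated patch
w = np.zeros_like(V)                             # recovery variable

for _ in range(500):
    # Stage 1: cell-model ODEs, embarrassingly parallel across cells.
    dV = V * (1.0 - V) * (V - 0.1) - w
    w += dt * 0.05 * (V - w)
    V += dt * dV
    # Stage 2: 5-point Laplacian for the diffusion term of the monodomain
    # PDE (periodic boundaries via roll, purely for brevity).
    lap = (np.roll(V, 1, 0) + np.roll(V, -1, 0) +
           np.roll(V, 1, 1) + np.roll(V, -1, 1) - 4.0 * V) / dx**2
    V += dt * D * lap
```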

41. Fratoni, Giulia, Brian Hamilton, and Dario D'Orazio. "Feasibility of a finite-difference time-domain model in large-scale acoustic simulations." Journal of the Acoustical Society of America 152, no. 1 (July 2022): 330–41. http://dx.doi.org/10.1121/10.0012218.

Abstract:
Wave-based techniques for room acoustics simulations are commonly applied to low-frequency analysis and small, simplified environments. The constraints are generally the inherent computational cost and the challenging implementation of proper complex boundary conditions. Nevertheless, the application field of wave-based simulation methods has been extended in recent decades. To test this potential, this work investigates the feasibility of a finite-difference time-domain (FDTD) code simulating large, non-trivial geometries over wide frequency ranges. A representative sample of large coupled-volume opera houses allowed demonstration of the capability of the selected FDTD model to tackle such composite geometries up to 4 kHz. For such a demanding task, efficient calculation schemes and frequency-dependent boundary admittances are implemented in the simulation framework. The results of in situ acoustic measurements were used as benchmarks during the calibration of the three-dimensional virtual models. In parallel, acoustic simulations performed on the same halls using standard ray-tracing techniques enabled a systematic comparison between the two numerical approaches, highlighting significant differences in terms of input data. The ability of the FDTD code to detect the typical acoustic scenarios occurring in coupled-volume halls is confirmed through multi-slope decay analysis and the spectral content of the impulse responses.

42. He, Lisha, Jianjing Zheng, Yao Zheng, Jianjun Chen, Xuan Zhou, and Zhoufang Xiao. "Parallel algorithms for moving boundary problems by local remeshing." Engineering Computations 36, no. 8 (October 7, 2019): 2887–910. http://dx.doi.org/10.1108/ec-11-2018-0545.

Abstract:
Purpose: The purpose of this paper is to develop parallel algorithms for moving boundary simulations by local remeshing and to compose them into a fully parallel simulation cycle for the solution of problems of engineering interest.
Design/methodology/approach: The moving boundary problems are solved by unsteady flow computations coupled with six-degrees-of-freedom equations of rigid body motion. Parallel algorithms are developed for both the computational fluid dynamics (CFD) solution and the grid deformation steps. Meanwhile, a novel approach is developed for parallelizing the local remeshing step. It takes a distributed mesh after deformation as input and marks low-quality elements for deletion on their respective processors. A parallel domain decomposition approach then repartitions the hole mesh and redistributes the resulting sub-meshes onto all available processors, after which the individual sub-holes are remeshed in parallel. Finally, the element distribution is rebalanced.
Findings: If only the CFD solver is parallelized while the remaining steps execute sequentially, a performance bottleneck is observed when large-scale problems are simulated. The developed parallel simulation cycle, in which all time-consuming steps have been efficiently parallelized, overcomes these bottlenecks in terms of both memory consumption and computing efficiency.
Originality/value: A fully parallel approach for moving boundary simulations by local remeshing is developed to solve large-scale problems. At the algorithm level, a novel parallel local remeshing algorithm is presented. It repartitions distributed hole elements evenly onto all available processors and always ensures the generation of a well-shaped inter-hole boundary, so the subsequent remeshing step can fix the inter-hole boundary without communication.

43. He, Wei-Jia, Xiao-Wei Huang, Ming-Lin Yang, and Xin-Qing Sheng. "Massively Parallel Multilevel Fast Multipole Algorithm for Extremely Large-Scale Electromagnetic Simulations: A Review." Progress In Electromagnetics Research 173 (2022): 37–52. http://dx.doi.org/10.2528/pier22011202.

44. Osei-Kuffuor, Daniel, and Jean-Luc Fattebert. "A Scalable $O(N)$ Algorithm for Large-Scale Parallel First-Principles Molecular Dynamics Simulations." SIAM Journal on Scientific Computing 36, no. 4 (January 2014): C353–C375. http://dx.doi.org/10.1137/140956476.

45. Wang, Kun, Hui Liu, Jia Luo, and Zhangxin Chen. "Efficient CPR-type preconditioner and its adaptive strategies for large-scale parallel reservoir simulations." Journal of Computational and Applied Mathematics 328 (January 2018): 443–68. http://dx.doi.org/10.1016/j.cam.2017.07.022.

46. Wu, Guoqing, Haifeng Song, and Deye Lin. "A scalable parallel framework for microstructure analysis of large-scale molecular dynamics simulations data." Computational Materials Science 144 (March 2018): 322–30. http://dx.doi.org/10.1016/j.commatsci.2017.12.048.

47. Popov, Konstantin, Mahmoud Rafea, Fredrik Holmgren, Per Brand, Vladimir Vlassov, and Seif Haridi. "Parallel Agent-Based Simulation on a Cluster of Workstations." Parallel Processing Letters 13, no. 04 (December 2003): 629–41. http://dx.doi.org/10.1142/s0129626403001562.

Abstract:
We discuss a parallel implementation of an agent-based simulation. Our approach allows a sequential simulator to be adapted for large-scale simulation on a cluster of workstations. We target discrete-time simulation models that capture the behavior of Web users and Web sites. Web users are connected with each other in a graph resembling a social network; Web sites are connected in a similar graph. Users are stateful entities. At each time step, they exhibit behaviors such as visiting bookmarked sites, exchanging information about Web sites in the "word-of-mouth" style, and updating bookmarks. The real-world phenomenon of emergent aggregate behavior of the Internet population is studied. The system distributes data among workstations, which allows large-scale simulations infeasible on a stand-alone computer. The model's properties cause traffic between workstations proportional to partition sizes. Network latency is hidden by the concurrent simulation of multiple users. The system is implemented in Mozart, which provides multithreading, dataflow variables, component-based software development, and network transparency. Currently we can simulate up to 10^6 Web users on 10^4 Web sites using a cluster of 16 computers, which takes a few seconds per simulation step; for a problem of this size, parallel simulation offers speedups between 11 and 14.

48. Cokyasar, Taner, Felipe de Souza, Joshua Auld, and Omer Verbas. "Dynamic Ride-Matching for Large-Scale Transportation Systems." Transportation Research Record: Journal of the Transportation Research Board 2676, no. 3 (October 18, 2021): 172–82. http://dx.doi.org/10.1177/03611981211049422.

Abstract:
Efficient dynamic ride-matching (DRM) in large-scale transportation systems is a key driver in transport simulations to yield answers to challenging problems. Although the DRM problem is simple to solve, it quickly becomes a computationally challenging problem in large-scale transportation system simulations. Therefore, this study thoroughly examines the DRM problem dynamics and proposes an optimization-based solution framework to solve the problem efficiently. To benefit from parallel computing and reduce computational times, the problem’s network is divided into clusters utilizing a commonly used unsupervised machine learning algorithm along with a linear programming model. Then, these sub-problems are solved using another linear program to finalize the ride-matching. At the clustering level, the framework allows users adjusting cluster sizes to balance the trade-off between the computational time savings and the solution quality deviation. A case study in the Chicago Metropolitan Area, U.S., illustrates that the framework can reduce the average computational time by 58% at the cost of increasing the average pick up time by 26% compared with a system optimum, that is, non-clustered, approach. Another case study in a relatively small city, Bloomington, Illinois, U.S., shows that the framework provides quite similar results to the system-optimum approach in approximately 62% less computational time.
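
The cluster-then-match decomposition described above can be sketched with standard tools: partition pickups with k-means, attach vehicles to the nearest centroid, and solve each cluster's assignment independently (and hence in parallel). The Hungarian solver below stands in for the paper's linear programs, and all data are synthetic:

```python
import numpy as np
from scipy.cluster.vq import kmeans2
from scipy.optimize import linear_sum_assignment

# Synthetic riders and vehicles on a 10x10 plane.
rng = np.random.default_rng(0)
riders = rng.uniform(0.0, 10.0, (200, 2))     # pickup coordinates
vehicles = rng.uniform(0.0, 10.0, (200, 2))   # vehicle coordinates

k = 4                                         # cluster count trades speed
centroids, r_lab = kmeans2(riders, k, minit="++")   # against match quality
v_lab = np.argmin(np.linalg.norm(vehicles[:, None] - centroids[None],
                                 axis=2), axis=1)   # nearest centroid

matches = []
for c in range(k):                            # independent sub-problems
    R = np.where(r_lab == c)[0]
    V = np.where(v_lab == c)[0]
    cost = np.linalg.norm(riders[R][:, None] - vehicles[V][None, :], axis=2)
    ri, vi = linear_sum_assignment(cost)      # assignment within the cluster
    matches.extend((R[i], V[j]) for i, j in zip(ri, vi))

print(len(matches), "rider-vehicle matches")
```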

49. Johnson, Timothy W. C., and John R. Rankin. "Performance of a Parallel Multi-Agent Simulation using Graphics Hardware." International Journal of Agent Technologies and Systems 6, no. 4 (October 2014): 72–91. http://dx.doi.org/10.4018/ijats.2014100104.

Abstract:
Large-scale Agent-Based Modelling and Simulation (ABMS) is a field of research that is becoming increasingly popular as researchers work to construct simulations at a higher level of complexity and realism than previously achieved. Such systems can be difficult and time-consuming to implement, and they can also be constrained in scope by a shortage of available processing power. This work presents solutions to both problems by demonstrating a model for ABMS that allows a developer to design their own simulation, which is then automatically converted into code capable of running on a mainstream Graphical Processing Unit (GPU). By harnessing the extra processing power afforded by the GPU, the resulting simulations are capable of running in real time with more autonomous agents than systems using traditional x86 processors allow.

50. Ameen, Muhsin M., Xiaofeng Yang, Tang-Wei Kuo, and Sibendu Som. "Parallel methodology to capture cyclic variability in motored engines." International Journal of Engine Research 18, no. 4 (August 19, 2016): 366–77. http://dx.doi.org/10.1177/1468087416662544.

Abstract:
Numerical prediction of cycle-to-cycle variability in spark ignition engines is extremely challenging for two key reasons: (1) high-fidelity methods such as large eddy simulation are required to accurately capture the in-cylinder turbulent flow field and (2) cycle-to-cycle variability is experienced over long time scales, and hence, the simulations need to be performed for hundreds of consecutive cycles. In this study, a methodology is proposed to dissociate this long time-scale problem into several shorter time-scale problems, which can considerably reduce the computational time without sacrificing the fidelity of the simulations. The strategy is to perform multiple parallel simulations, each of which encompasses two to three cycles, by effectively perturbing the simulation parameters such as the initial and boundary conditions. The proposed methodology is validated for the prediction of cycle-to-cycle variability due to gas exchange in a motored transparent combustion chamber engine by comparing with particle image velocimetry measurements. It is shown that by perturbing the initial velocity field effectively based on the intensity of the in-cylinder turbulence, the mean and variance of the in-cylinder flow field are captured reasonably well. Adding perturbations in the initial pressure field and the boundary pressure improves the predictions. It is shown that this new approach is able to give accurate predictions of the flow field statistics in considerably less time than that required for the conventional approach of simulating consecutive engine cycles.