Academic literature on the topic 'Large-scale parallel simulations'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Large-scale parallel simulations.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Large-scale parallel simulations"

1

Kwon, Sung Jin, Young Min Lee, and Se Young Im. "Parallel Computation of Large-Scale Molecular Dynamics Simulations." Key Engineering Materials 326-328 (December 2006): 341–44. http://dx.doi.org/10.4028/www.scientific.net/kem.326-328.341.

Full text
Abstract:
Large-scale parallel computation is extremely important for MD (molecular dynamics) simulation, particularly when dealing with atomistic systems of realistic size comparable to the macroscopic continuum scale. We present a new approach for the parallel computation of MD simulations. The entire system domain under consideration is divided into many Eulerian subdomains, each of which is surrounded by its own buffer layer and assigned its own processor. This leads to efficient tracking of each molecule, even when molecules move out of their subdomains. Several numerical examples are provided to demonstrate the effectiveness of this computation scheme.
APA, Harvard, Vancouver, ISO, and other styles
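The decomposition described in the abstract above lends itself to a compact illustration. The following is a minimal sketch in Python, not the authors' code: it assumes a 1D periodic box split into fixed Eulerian slabs, a hypothetical Subdomain class, and a random-walk stand-in for the real MD force update; only the buffer-layer bookkeeping is the point.

    import random

    # Hypothetical 1D Eulerian decomposition of a periodic box [0, box_len):
    # each rank owns a fixed slab plus a buffer (halo) layer of width `buffer`.
    class Subdomain:
        def __init__(self, rank, nranks, box_len, buffer):
            self.width = box_len / nranks
            self.lo = rank * self.width            # owned region: [lo, hi)
            self.hi = self.lo + self.width
            self.buffer = buffer
            self.particles = []                    # positions currently held by this rank

        def in_buffer(self, x):
            # A molecule just outside the owned slab is still tracked locally,
            # so it can leave and re-enter without a global search.
            return (self.lo - self.buffer) <= x < (self.hi + self.buffer)

        def destination(self, x):
            # Once a molecule leaves the buffer layer, hand it to the owning rank.
            return None if self.in_buffer(x) else int(x // self.width)

    # Toy usage for one "rank": integrate positions and flag migrations.
    box_len, nranks, dt = 100.0, 4, 0.1
    sub = Subdomain(rank=1, nranks=nranks, box_len=box_len, buffer=2.0)
    sub.particles = [random.uniform(sub.lo, sub.hi) for _ in range(5)]

    for step in range(100):
        kept = []
        for x in sub.particles:
            x = (x + random.gauss(0.0, 1.0) * dt) % box_len    # stand-in for the MD update
            dest = sub.destination(x)
            if dest is None:
                kept.append(x)     # still owned, or inside the buffer layer
            # else: in an MPI code, pack (position, velocity, ...) and send to rank `dest`
        sub.particles = kept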
2

Eller, Paul R., Jing-Ru C. Cheng, Hung V. Nguyen, and Robert S. Maier. "Improving parallel performance of large-scale watershed simulations." Procedia Computer Science 1, no. 1 (May 2010): 801–8. http://dx.doi.org/10.1016/j.procs.2010.04.086.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Walther, Jens H., and Ivo F. Sbalzarini. "Large‐scale parallel discrete element simulations of granular flow." Engineering Computations 26, no. 6 (August 21, 2009): 688–97. http://dx.doi.org/10.1108/02644400910975478.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Berrone, Stefano, Sandra Pieraccini, Stefano Scialò, and Fabio Vicini. "A Parallel Solver for Large Scale DFN Flow Simulations." SIAM Journal on Scientific Computing 37, no. 3 (January 2015): C285–C306. http://dx.doi.org/10.1137/140984014.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Fujimoto, Y., N. Fukuda, and T. Akabane. "Massively parallel architectures for large scale neural network simulations." IEEE Transactions on Neural Networks 3, no. 6 (1992): 876–88. http://dx.doi.org/10.1109/72.165590.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Cytowski, Maciej, and Zuzanna Szymanska. "Large-Scale Parallel Simulations of 3D Cell Colony Dynamics." Computing in Science & Engineering 16, no. 5 (September 2014): 86–95. http://dx.doi.org/10.1109/mcse.2014.2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Kurowski, Krzysztof, Tomasz Piontek, Piotr Kopta, Mariusz Mamoński, and Bartosz Bosak. "Parallel Large Scale Simulations in the PL-Grid Environment." Computational Methods in Science and Technology Special Issue, no. 1 (2010): 47–56. http://dx.doi.org/10.12921/cmst.2010.si.01.47-56.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Polizzi, Eric, and Ahmed Sameh. "Parallel Algorithms for Large-Scale Nanoelectronics Simulations Using NESSIE." Journal of Computational Electronics 3, no. 3-4 (October 2004): 363–66. http://dx.doi.org/10.1007/s10825-004-7078-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Flanigan, M., and P. Tamayo. "Parallel cluster labeling for large-scale Monte Carlo simulations." Physica A: Statistical Mechanics and its Applications 215, no. 4 (May 1995): 461–80. http://dx.doi.org/10.1016/0378-4371(95)00019-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Radeke, Charles A., Benjamin J. Glasser, and Johannes G. Khinast. "Large-scale powder mixer simulations using massively parallel GPU architectures." Chemical Engineering Science 65, no. 24 (December 2010): 6435–42. http://dx.doi.org/10.1016/j.ces.2010.09.035.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Large-scale parallel simulations"

1

Benson, Kirk C. "Adaptive Control of Large-Scale Simulations." Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/5002.

Full text
Abstract:
This thesis develops adaptive simulation control techniques that differentiate between competing system configurations. Here, a system is a real world environment under analysis. In this context, proposed modifications to a system denoted by different configurations are evaluated using large-scale hybrid simulation. Adaptive control techniques, using ranking and selection methods, compare the relative worth of competing configurations and use these comparisons to control the number of required simulation observations. Adaptive techniques necessitate embedded statistical computations suitable for the variety of data found in detailed simulations, including hybrid and agent-based simulations. These embedded statistical computations apply efficient sampling methods to collect data from simulations running on a network of workstations. The National Airspace System provides a test case for the application of these techniques to the analysis and design of complex systems, implemented here in the Reconfigurable Flight Simulator, a large-scale hybrid simulation. Implications of these techniques for the use of simulation as a design activity are also presented.
APA, Harvard, Vancouver, ISO, and other styles
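The ranking-and-selection control loop sketched in the abstract can be illustrated with a toy sequential procedure: keep drawing simulation observations for competing configurations until their difference is resolved to within an indifference zone. This is a generic, hedged sketch with invented names (run_replication, two configurations, smaller-is-better), not the thesis's implementation.

    import math
    import random
    import statistics

    def run_replication(config):
        # Stand-in for one observation (e.g., average delay) from a large-scale
        # simulation of configuration `config`; smaller is assumed to be better.
        return random.gauss(10.0 if config == "A" else 10.8, 2.0)

    def select(configs=("A", "B"), delta=0.5, z=1.96, n0=10, n_max=2000):
        # Sequentially add observations until the approximate 95% confidence
        # interval on the difference of means is narrower than the indifference
        # zone `delta`, then pick the configuration with the smaller mean.
        obs = {c: [run_replication(c) for _ in range(n0)] for c in configs}
        while sum(len(v) for v in obs.values()) < n_max:
            a, b = obs[configs[0]], obs[configs[1]]
            diff = statistics.mean(a) - statistics.mean(b)
            half_width = z * math.sqrt(statistics.variance(a) / len(a) +
                                       statistics.variance(b) / len(b))
            if half_width < delta:                       # configurations distinguished
                winner = configs[0] if diff < 0 else configs[1]
                return winner, len(a) + len(b)
            for c in configs:                            # not yet: draw one more each
                obs[c].append(run_replication(c))
        return None, n_max                               # budget exhausted, no decision

    best, used = select()
    print(f"selected configuration: {best} after {used} observations")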
2

Pulla, Gautam. "High Performance Computing Issues in Large-Scale Molecular Statics Simulations." Thesis, Virginia Tech, 1999. http://hdl.handle.net/10919/33206.

Full text
Abstract:
Successful application of parallel high performance computing to practical problems requires overcoming several challenges. These range from the need to make sequential and parallel improvements in programs to the implementation of software tools which create an environment that aids sharing of high performance hardware resources and limits losses caused by hardware and software failures. In this thesis we describe our approach to meeting these challenges in the context of a Molecular Statics code. We describe sequential and parallel optimizations made to the code and also a suite of tools constructed to facilitate the execution of the Molecular Statics program on a network of parallel machines with the aim of increasing resource sharing, fault tolerance and availability.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
3

Kamal, Tariq. "Computational Cost Analysis of Large-Scale Agent-Based Epidemic Simulations." Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/82507.

Full text
Abstract:
Agent-based epidemic simulation (ABES) is a powerful and realistic approach for studying the impacts of disease dynamics and complex interventions on the spread of an infection in the population. Among many ABES systems, EpiSimdemics comes closest to the popular agent-based epidemic simulation systems developed by Eubank, Longini, Ferguson, and Parker. EpiSimdemics is a general framework that can model many reaction-diffusion processes besides the Susceptible-Exposed-Infectious-Recovered (SEIR) models. This model allows the study of complex systems as they interact, thus enabling researchers to model and observe the socio-technical trends and forces. Pandemic planning at the world level requires simulation of over 6 billion agents, where each agent has a unique set of demographics, daily activities, and behaviors. Moreover, the stochastic nature of epidemic models, the uncertainty in the initial conditions, and the variability of reactions require the computation of several replicates of a simulation for a meaningful study. Given the hard timelines to respond, running many replicates (15-25) of several configurations (10-100) of these compute-heavy simulations is only feasible on high-performance computing (HPC) clusters. These agent-based epidemic simulations are irregular and show poor execution performance on high-performance clusters due to the evolutionary nature of their workload, large irregular communication, and load imbalance. For increased utilization of HPC clusters, the simulation needs to be scalable. Many challenges arise when improving the performance of agent-based epidemic simulations on high-performance clusters. Firstly, large-scale graph-structured computation is central to the processing of these simulations, where the star-motif quality nodes (natural graphs) create large computational imbalances and communication hotspots. Secondly, the computation is performed by classes of tasks that are separated by global synchronization. The non-overlapping computations cause idle times, which introduce the load balancing and cost estimation challenges. Thirdly, the computation is overlapped with communication, which is difficult to measure using simple methods, thus making the cost estimation very challenging. Finally, the simulations are iterative and the workload (computation and communication) may change through iterations, thereby introducing load imbalances. This dissertation focuses on developing a cost estimation model and load balancing schemes to increase the runtime efficiency of agent-based epidemic simulations on high-performance clusters. While developing the cost model and load balancing schemes, we perform the static and dynamic load analysis of such simulations. We also statically quantified the computational and communication workloads in EpiSimdemics. We designed, developed, and evaluated a cost model for estimating the execution cost of large-scale parallel agent-based epidemic simulations (and more generally for all constrained producer-consumer parallel algorithms). This cost model uses computational imbalances and communication latencies, and enables the cost estimation of those applications where the computation is performed by classes of tasks, separated by synchronization. It enables the performance analysis of parallel applications by computing their execution times on a number of partitions. Our evaluations show that the model is helpful in performance prediction, resource allocation, and the evaluation of load balancing schemes.
As part of the load balancing algorithms, we adopted the Metis library for partitioning bipartite graphs. We have also developed lower-overhead custom schemes called Colocation and MetColoc. We performed an evaluation of Metis, Colocation, and MetColoc. Our analysis showed that the MetColoc scheme gives a performance similar to Metis, but with half the partitioning overhead (runtime and memory). On the other hand, the Colocation scheme achieves a similar performance to Metis on a larger number of partitions, but at a much lower partitioning overhead. Moreover, the memory requirements of the Colocation scheme do not increase as we create more partitions. We have also performed the dynamic load analysis of agent-based epidemic simulations. For this, we studied the individual and joint effects of three disease parameters (transmissibility, infection period, and incubation period). We quantified the effects using an analytical equation with separate constants for the SIS, SIR, and SI disease models. The metric that we have developed in this work is useful for cost estimation of constrained producer-consumer algorithms; however, it has some limitations. The applicability of the metric is application-, machine-, and data-specific. In the future, we plan to extend the metric to increase its applicability to a larger set of machine architectures, applications, and datasets.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
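The cost model's core idea in the abstract above, computation organized into classes of tasks separated by global synchronization so that each phase is paced by its slowest partition, can be written down in a few lines. The sketch below is a generic back-of-the-envelope estimator with made-up numbers, not the dissertation's model.

    def estimate_iteration_cost(phase_compute, comm_latency):
        # Estimate one iteration of a workload whose computation is performed by
        # classes of tasks separated by global synchronization: each phase lasts
        # as long as its slowest partition, plus a communication term, so
        # imbalance shows up directly as idle time.
        total, idle = 0.0, 0.0
        for compute, comm in zip(phase_compute, comm_latency):
            slowest = max(compute.values())
            mean = sum(compute.values()) / len(compute)
            total += slowest + comm
            idle += slowest - mean          # average idle time caused by imbalance
        return total, idle

    # Hypothetical measurements: two task classes ("interact", "update") on 4 partitions.
    phases = [{0: 1.2, 1: 0.9, 2: 1.0, 3: 1.4},
              {0: 0.5, 1: 0.7, 2: 0.4, 3: 0.6}]
    cost, idle = estimate_iteration_cost(phases, comm_latency=[0.2, 0.1])
    print(f"estimated time per iteration: {cost:.2f} s (idle due to imbalance: {idle:.2f} s)")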
4

De Grande, Robson E. "Dynamic Load Balancing Schemes for Large-scale HLA-based Simulations." Thèse, Université d'Ottawa / University of Ottawa, 2012. http://hdl.handle.net/10393/23110.

Full text
Abstract:
Dynamic balancing of computation and communication load is vital for the execution stability and performance of distributed, parallel simulations deployed on shared, unreliable resources of large-scale environments. High Level Architecture (HLA) based simulations can experience a decrease in performance due to imbalances that are produced initially and/or during run-time. These imbalances are generated by the dynamic load changes of distributed simulations or by unknown, non-managed background processes resulting from the non-dedication of shared resources. Due to the dynamic execution characteristics of elements that compose distributed simulation applications, the computational load and interaction dependencies of each simulation entity change during run-time. These dynamic changes lead to an irregular load and communication distribution, which increases overhead of resources and execution delays. A static partitioning of load is limited to deterministic applications and is incapable of predicting the dynamic changes caused by distributed applications or by external background processes. Due to the relevance in dynamically balancing load for distributed simulations, many balancing approaches have been proposed in order to offer a sub-optimal balancing solution, but they are limited to certain simulation aspects, specific to determined applications, or unaware of HLA-based simulation characteristics. Therefore, schemes for balancing the communication and computational load during the execution of distributed simulations are devised, adopting a hierarchical architecture. First, in order to enable the development of such balancing schemes, a migration technique is also employed to perform reliable and low-latency simulation load transfers. Then, a centralized balancing scheme is designed; this scheme employs local and cluster monitoring mechanisms in order to observe the distributed load changes and identify imbalances, and it uses load reallocation policies to determine a distribution of load and minimize imbalances. As a measure to overcome the drawbacks of this scheme, such as bottlenecks, overheads, global synchronization, and single point of failure, a distributed redistribution algorithm is designed. Extensions of the distributed balancing scheme are also developed to improve the detection of and the reaction to load imbalances. These extensions introduce communication delay detection, migration latency awareness, self-adaptation, and load oscillation prediction in the load redistribution algorithm. Such developed balancing systems successfully improved the use of shared resources and increased distributed simulations' performance.
APA, Harvard, Vancouver, ISO, and other styles
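A hedged sketch of the flavor of reallocation policy the abstract describes: a monitoring snapshot of per-node load drives a greedy pass that migrates the lightest entities off the busiest node. The names and numbers are invented and the policy is deliberately simplistic; it is not the scheme developed in the thesis.

    def rebalance(node_loads, entities, threshold=0.10):
        # Greedy reallocation pass: while the busiest node exceeds the mean load
        # by more than `threshold`, move its lightest entity to the least-loaded
        # node, but only if that strictly reduces the pairwise imbalance.
        migrations = []
        while True:
            busiest = max(node_loads, key=node_loads.get)
            idlest = min(node_loads, key=node_loads.get)
            mean = sum(node_loads.values()) / len(node_loads)
            if node_loads[busiest] - mean <= threshold * mean:
                break                                   # imbalance is acceptable
            candidates = [(load, e) for e, (n, load) in entities.items() if n == busiest]
            if not candidates:
                break
            load, entity = min(candidates)              # cheapest entity to migrate
            if load >= node_loads[busiest] - node_loads[idlest]:
                break                                   # moving it would not help
            entities[entity] = (idlest, load)
            node_loads[busiest] -= load
            node_loads[idlest] += load
            migrations.append((entity, busiest, idlest))
        return migrations

    # Hypothetical monitoring snapshot: load per cluster node and per simulation entity.
    loads = {"n0": 9.0, "n1": 4.0, "n2": 5.0}
    ents = {"fed-a": ("n0", 3.0), "fed-b": ("n0", 2.0), "fed-c": ("n0", 4.0),
            "fed-d": ("n1", 4.0), "fed-e": ("n2", 5.0)}
    print(rebalance(loads, ents))   # e.g. [('fed-b', 'n0', 'n1')]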
5

Li, Qiang. "Simulations of turbulent boundary layers with heat transfer." Licentiate thesis, Stockholm : Skolan för teknikvetenskap, Kungliga Tekniska högskolan, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-11320.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Verma, Poonam Santosh. "Large Scale Computer Investigations of Non-Equilibrium Surface Growth for Surfaces From Parallel Discrete Event Simulations." MSSTATE, 2004. http://sun.library.msstate.edu/ETD-db/theses/available/etd-04192004-140532/.

Full text
Abstract:
The asymptotic scaling properties of conservative algorithms for parallel discrete-event simulations (e.g., spatially distributed parallel simulations of dynamic Monte Carlo for spin systems) of one-dimensional systems with system size L are studied. The particular case studied here is that of one or two elements assigned to each processor element. The previously studied case of one element per processor is reviewed, and the two-elements-per-processor case is presented. The key concept is a simulated time horizon, which is an evolving nonequilibrium surface specific to the particular algorithm. It is shown that the flat-substrate initial condition is responsible for the existence of an initial non-scaling regime. Various methods to deal with this non-scaling regime are documented, both the final successful method and unsuccessful attempts. The width of this time horizon relates to desynchronization in the system of processors. Universal properties of the conservative time horizon are derived by constructing a distribution of the interface width at saturation.
APA, Harvard, Vancouver, ISO, and other styles
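The simulated time horizon central to this thesis has a compact toy version: each processing element carries a local virtual time, advances it by a random increment only when it is not ahead of its neighbours, and the width of the resulting surface measures desynchronization. The following is a generic sketch of that textbook model (one element per PE, invented parameters), not the code used in the thesis.

    import random
    import statistics

    def simulate_time_horizon(num_pe=64, steps=2000, seed=1):
        # Toy conservative update rule: PE i may process its next event
        # (advancing its local virtual time by a random increment) only if it is
        # not ahead of its two ring neighbours. The local times form the horizon.
        rng = random.Random(seed)
        tau = [0.0] * num_pe                      # flat-substrate initial condition
        widths = []
        for _ in range(steps):
            for i in range(num_pe):               # sequential sweep, for simplicity
                left, right = tau[(i - 1) % num_pe], tau[(i + 1) % num_pe]
                if tau[i] <= min(left, right):
                    tau[i] += rng.expovariate(1.0)
            mean = statistics.fmean(tau)
            widths.append(statistics.fmean((t - mean) ** 2 for t in tau))
        return widths                             # squared surface width per sweep

    w = simulate_time_horizon()
    print(f"squared width after {len(w)} sweeps: {w[-1]:.2f}")   # grows, then saturates for finite num_pe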
7

Kelling, Jeffrey. "Efficient Parallel Monte-Carlo Simulations for Large-Scale Studies of Surface Growth Processes." Diss., Technische Universität Chemnitz, 2018. http://d-nb.info/121482109X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Dad, Cherifa. "Méthodologie et algorithmes pour la distribution large échelle de co-simulations de systèmes complexes : application aux réseaux électriques intelligents (Smart Grids)." Electronic Thesis or Diss., CentraleSupélec, 2018. http://www.theses.fr/2018CSUP0004.

Full text
Abstract:
The emergence of Smart Grids is causing profound changes in the electricity distribution business. These networks are seeing new uses (electric vehicles, air conditioning) and new decentralized producers (photovoltaic, wind), which makes it harder to balance electricity supply and demand and requires introducing a form of distributed intelligence among their components. Given the complexity and scale of deploying Smart Grids, they must first be simulated in order to validate their behavior. To this end, CentraleSupélec and EDF R&D (within the RISEGrid institute) have developed DACCOSIM, a co-simulation platform based on the FMI (Functional Mock-up Interface) standard that makes it possible to design and tune large Smart Grids. The key components of this platform are represented as gray boxes called FMUs (Functional Mock-up Units). In addition, the simulators of the physical systems of a Smart Grid can backtrack when an inaccuracy is suspected in their computations, unlike event-based simulators (control units), which can usually only advance in time. To make these different simulators collaborate, we designed a hybrid solution that takes the constraints of all components into account and precisely identifies the types of events the system faces. This study led to a proposed evolution of the FMI standard. Moreover, it is difficult to simulate a Smart Grid quickly and efficiently, especially at regional or national scale. To address this, we focused on the most computationally intensive part, namely the co-simulation of the physical devices, and proposed methodologies, approaches, and algorithms to distribute the FMUs quickly and efficiently over distributed architectures. The implementation of these algorithms has already allowed large business cases to be co-simulated on a cluster of multi-core PCs. The integration of these methods into DACCOSIM will enable EDF engineers to design very large Smart Grids that are more resistant to failures.
APA, Harvard, Vancouver, ISO, and other styles
9

Tarabay, Ranine. "Simulations des écoulements sanguins dans des réseaux vasculaires complexes." Thesis, Strasbourg, 2016. http://www.theses.fr/2016STRAD034/document.

Full text
Abstract:
Over the past few decades, remarkable progress has been made toward large-scale 3D computational models of physiological hemodynamics, simulating blood flow in realistic anatomical models constructed from three-dimensional medical imaging data. While accurate anatomic models are of primary importance in simulating blood flow, realistic boundary conditions are equally important in computing velocity and pressure fields. The first target of this thesis was therefore to investigate the convergence of the unknown fields for various types of boundary conditions, allowing a flexible framework with respect to the type of input data (velocity, pressure, flow rate, ...). To deal with the associated large computational cost, which requires high-performance computing, we compared the performance of two block preconditioners: the least-squares commutator (LSC) preconditioner and the pressure convection-diffusion (PCD) preconditioner; the latter was implemented in the Feel++ library as part of this thesis. To handle fluid-structure interaction, we focused on the approximation of the force exerted by the fluid on the structure, a field that is essential when setting the continuity condition ensuring the coupling of the fluid model with the structure model. Finally, to assess our numerical choices, two benchmark cases (the FDA benchmark and the Phantom benchmark) were carried out, and a comparison with experimental and numerical data was established and validated.
APA, Harvard, Vancouver, ISO, and other styles
10

Grass, Thomas. "Simulation methodologies for future large-scale parallel systems." Doctoral thesis, Universitat Politècnica de Catalunya, 2017. http://hdl.handle.net/10803/461198.

Full text
Abstract:
Since the early 2000s, computer systems have seen a transition from single-core to multi-core systems. While single-core systems included only one processor core on a chip, current multi-core processors include up to tens of cores on a single chip, a trend which is likely to continue in the future. Today, multi-core processors are ubiquitous. They are used in all classes of computing systems, ranging from low-cost mobile phones to high-end High-Performance Computing (HPC) systems. Designing future multi-core systems is a major challenge [12]. The primary design tool used by computer architects in academia and industry is architectural simulation. Simulating a computer system executing a program is typically several orders of magnitude slower than running the program on a real system. Therefore, new techniques are needed to speed up simulation and allow the exploration of large design spaces in a reasonable amount of time. One way of increasing simulation speed is sampling. Sampling reduces simulation time by simulating only a representative subset of a program in detail. In this thesis, we present a workload analysis of a set of task-based programs. We then use the insights from this study to propose TaskPoint, a sampled simulation methodology for task-based programs. Task-based programming models can reduce the synchronization costs of parallel programs on multi-core systems and are becoming increasingly important. Finally, we present MUSA, a simulation methodology for simulating applications running on thousands of cores on a hybrid, distributed shared-memory system. The simulation time required for simulation with MUSA is comparable to the time needed for native execution of the simulated program on a production HPC system. The techniques developed in the scope of this thesis permit researchers and engineers working in computer architecture to simulate large workloads, which were infeasible to simulate in the past. Our work enables architectural research in the fields of future large-scale shared-memory and hybrid, distributed shared-memory systems.
APA, Harvard, Vancouver, ISO, and other styles
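The sampling idea summarized in the abstract, simulating only a representative subset of a program in detail and extrapolating, reduces to a small estimator when instances of the same task type behave similarly. The sketch below uses made-up task types and timings and is not TaskPoint or MUSA; it only shows the extrapolation step.

    import random
    import statistics

    def sampled_estimate(instance_counts, sample_fraction=0.05, seed=7):
        # Simulate in detail only a fraction of the instances of each task type,
        # then scale the measured per-type means by the full instance counts.
        rng = random.Random(seed)
        hidden_mean = {"dgemm": 4.0, "reduce": 0.5, "halo": 1.2}   # stand-in ground truth

        def simulate_in_detail(task_type):
            # Placeholder for a slow, detailed architectural simulation of one task.
            return rng.gauss(hidden_mean[task_type], 0.1 * hidden_mean[task_type])

        estimate = 0.0
        for task_type, count in instance_counts.items():
            n_detail = max(1, int(count * sample_fraction))
            times = [simulate_in_detail(task_type) for _ in range(n_detail)]
            estimate += statistics.mean(times) * count
        return estimate

    counts = {"dgemm": 10_000, "reduce": 2_000, "halo": 5_000}
    print(f"estimated aggregate task time: {sampled_estimate(counts):.0f} s")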

Books on the topic "Large-scale parallel simulations"

1

Dinavahi, Venkata, and Ning Lin. Parallel Dynamic and Transient Simulation of Large-Scale Power Systems. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-86782-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

EUROSIM '96 (1996, Delft, Netherlands). EUROSIM '96, HPCN Challenges in Telecomp and Telecom: Parallel Simulation of Complex Systems and Large-Scale Applications: Proceedings of the EUROSIM International Conference, 10–12 June 1996, Delft, The Netherlands. Amsterdam: Elsevier, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Dekker, L., W. Smit, and J. C. Zuidervaart, eds. HPCN Challenges in Telecomp and Telecom: Parallel Simulation of Complex Systems and Large-Scale Applications. Elsevier Science Pub Co, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Lin, Ning, and Venkata Dinavahi. Parallel Dynamic and Transient Simulation of Large-Scale Power Systems: A High-Performance Computing Solution. Springer International Publishing AG, 2021.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Large-scale parallel simulations"

1

Nagel, Kai, Marcus Rickert, and Christopher L. Barrett. "Large scale traffic simulations." In Vector and Parallel Processing — VECPAR'96, 380–402. Berlin, Heidelberg: Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/3-540-62828-2_131.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Masson, Roland, Philippe Quandalle, Stéphane Requena, and Robert Scheichl. "Parallel Preconditioning for Sedimentary Basin Simulations." In Large-Scale Scientific Computing, 93–102. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-24588-9_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Kosturski, N., S. Margenov, and Y. Vutov. "Improving the Efficiency of Parallel FEM Simulations on Voxel Domains." In Large-Scale Scientific Computing, 574–81. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-29843-1_65.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Shankar, Vijaya, Adour Kabakian, Chris Rowell, and Touraj Sahely. "Large-Scale Parallel Simulations in Computational Electromagnetics." In Computational Fluid Dynamics 2000, 411–16. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/978-3-642-56535-9_61.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Antonelli, Laura, Pasqua D’Ambra, Francesco Gregoretti, Gennaro Oliva, and Paola Belardini. "A Parallel Combustion Solver within an Operator Splitting Context for Engine Simulations on Grids." In Large-Scale Scientific Computing, 167–74. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-78827-0_17.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Wu, Xingfu, Benchun Duan, and Valerie Taylor. "Parallel Earthquake Simulations on Large-Scale Multicore Supercomputers." In Handbook of Data Intensive Computing, 539–62. New York, NY: Springer New York, 2011. http://dx.doi.org/10.1007/978-1-4614-1415-5_21.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Kwon, Sung Jin, Young Min Lee, and Se Young Im. "Parallel Computation of Large-Scale Molecular Dynamics Simulations." In Experimental Mechanics in Nano and Biotechnology, 341–44. Stafa: Trans Tech Publications Ltd., 2006. http://dx.doi.org/10.4028/0-87849-415-4.341.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Cui, Yifeng, Reagan Moore, Kim Olsen, Amit Chourasia, Philip Maechling, Bernard Minster, Steven Day, et al. "Enabling Very-Large Scale Earthquake Simulations on Parallel Machines." In Computational Science – ICCS 2007, 46–53. Berlin, Heidelberg: Springer Berlin Heidelberg, 2007. http://dx.doi.org/10.1007/978-3-540-72584-8_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Silva, Rômulo M., Benaia S. J. Lima, José J. Camata, Renato N. Elias, and Alvaro L. G. A. Coutinho. "Communication–Free Parallel Mesh Multiplication for Large Scale Simulations." In High Performance Computing for Computational Science – VECPAR 2018, 3–15. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-15996-2_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Löhner, R., J. D. Baum, Ch Charman, and D. Pelessone. "Large-Scale Fluid-Structure Interaction Simulations Using Parallel Computers." In Lecture Notes in Computational Science and Engineering, 3–20. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/978-3-642-55919-8_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Large-scale parallel simulations"

1

Chen, Yan, Hui Liu, Kun Wang, Zhangxin Chen, Yanfeng He, Bo Yang, and Peng Zhang. "Large-Scale Reservoir Simulations on Parallel Computers." In 2016 IEEE 2nd International Conference on Big Data Security on Cloud (BigDataSecurity), IEEE International Conference on High Performance and Smart Computing (HPSC) and IEEE International Conference on Intelligent Data and Security (IDS). IEEE, 2016. http://dx.doi.org/10.1109/bigdatasecurity-hpsc-ids.2016.20.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

"Large-scale Reservoir Simulations on Distributed-memory Parallel Computers." In 2016 Spring Simulation Multi-Conference. Society for Modeling and Simulation International (SCS), 2016. http://dx.doi.org/10.22360/springsim.2016.hpc.034.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Kamal, Tariq, Keith R. Bisset, Ali R. Butt, Youngyun Chungbaek, and Madhav Marathe. "Load balancing in large-scale epidemiological simulations." In HPDC'13: The 22nd International Symposium on High-Performance Parallel and Distributed Computing. New York, NY, USA: ACM, 2013. http://dx.doi.org/10.1145/2462902.2462929.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Li, Xinyuan, Huang Ye, and Jian Zhang. "Large-scale Simulations of Peridynamics on Sunway Taihulight Supercomputer." In ICPP '20: 49th International Conference on Parallel Processing. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3404397.3404421.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Liu, Xing, and Edmond Chow. "Large-Scale Hydrodynamic Brownian Simulations on Multicore and Manycore Architectures." In 2014 IEEE International Parallel & Distributed Processing Symposium (IPDPS). IEEE, 2014. http://dx.doi.org/10.1109/ipdps.2014.65.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Ross, Caitlin, Christopher D. Carothers, Misbah Mubarak, Philip Carns, Robert Ross, Jianping Kelvin Li, and Kwan-Liu Ma. "Visual Data-Analytics of Large-Scale Parallel Discrete-Event Simulations." In 2016 7th International Workshop on Performance Modeling, Benchmarking and Simulation of High Performance Computer Systems (PMBS). IEEE, 2016. http://dx.doi.org/10.1109/pmbs.2016.014.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Nakano, Aiichiro. "Large-scale molecular dynamics simulations of materials on parallel computers." In ADVANCED COMPUTING AND ANALYSIS TECHNIQUES IN PHYSICS RESEARCH: VII International Workshop; ACAT 2000. AIP, 2001. http://dx.doi.org/10.1063/1.1405262.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Polizzi and Sameh. "Numerical parallel algorithms for large-scale nanoelectronics simulations using NESSIE." In Electrical Performance of Electronic Packaging. IEEE, 2004. http://dx.doi.org/10.1109/iwce.2004.1407326.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Qiu, Haozhong, Chuanfu Xu, Dali Li, Haoyu Wang, Jie Li, and Zheng Wang. "Parallelizing and Balancing Coupled DSMC/PIC for Large-scale Particle Simulations." In 2022 IEEE International Parallel and Distributed Processing Symposium (IPDPS). IEEE, 2022. http://dx.doi.org/10.1109/ipdps53621.2022.00045.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Hou, Bonan, Yiping Yao, and Shaoliang Peng. "Empirical Study on Entity Interaction Graph of Large-Scale Parallel Simulations." In 2011 ACM/IEEE/SCS 25th Workshop on Principles of Advanced and Distributed Simulation (PADS). IEEE, 2011. http://dx.doi.org/10.1109/pads.2011.5936762.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Large-scale parallel simulations"

1

Bader, Brett William, Roger Patrick Pawlowski, and Tamara Gibson Kolda. Robust large-scale parallel nonlinear solvers for simulations. Office of Scientific and Technical Information (OSTI), November 2005. http://dx.doi.org/10.2172/876345.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Hiremath, Varun, Steven R. Lantz, Haifeng Wang, and Stephen B. Pope. Large-Scale Parallel Simulations of Turbulent Combustion using Combined Dimension Reduction and Tabulation of Chemistry. Fort Belvoir, VA: Defense Technical Information Center, May 2012. http://dx.doi.org/10.21236/ada569795.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Mahinthakumar, K. Multigrid and Krylov Solvers for Large Scale Finite Element Groundwater Flow Simulations on Distributed Memory Parallel Platforms. Office of Scientific and Technical Information (OSTI), January 1997. http://dx.doi.org/10.2172/814802.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Bauer, Andrew. In situ and time. Engineer Research and Development Center (U.S.), December 2022. http://dx.doi.org/10.21079/11681/46162.

Full text
Abstract:
Large-scale HPC simulations with their inherent I/O bottleneck have made in situ visualization an essential approach for data analysis, although the idea of in situ visualization dates back to the era of coprocessing in the 1990s. In situ coupling of analysis and visualization to a live simulation circumvents writing raw data to disk for post-mortem analysis -- an approach that is already inefficient for today's very large simulation codes. Instead, with in situ visualization, data abstracts are generated that provide a much higher level of expressiveness per byte. Therefore, more details can be computed and stored for later analysis, providing more insight than traditional methods. This workshop encouraged talks on methods and workflows that have been used for large-scale parallel visualization, with a particular focus on the in situ case.
APA, Harvard, Vancouver, ISO, and other styles
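The control flow behind in situ visualization, calling analysis routines from the running simulation so that only compact data abstracts are written instead of raw fields, can be shown generically. The sketch below is a toy solver with an invented analyze callback; it is not tied to the workflows or tools discussed in the report.

    import json

    def analyze(step, field):
        # In situ "data abstract": keep a few summary statistics instead of the raw field.
        return {"step": step, "min": min(field), "max": max(field),
                "mean": sum(field) / len(field)}

    def run_simulation(nsteps=50, ncells=100_000, analysis_stride=10):
        field = [0.0] * ncells
        abstracts = []
        for step in range(nsteps):
            # Stand-in for the expensive solver update on the full field.
            field = [x + 1e-3 * (i % 7) for i, x in enumerate(field)]
            if step % analysis_stride == 0:
                # Analysis is coupled to the live run; the raw field never hits disk.
                abstracts.append(analyze(step, field))
        with open("abstracts.json", "w") as out:   # a few hundred bytes vs. raw dumps
            json.dump(abstracts, out)

    run_simulation()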
5

Carothers, Christopher, and Elsa Gonsiorowski. Modeling Large Scale Circuits Using Massively Parallel Discrete-Event Simulation. Fort Belvoir, VA: Defense Technical Information Center, June 2013. http://dx.doi.org/10.21236/ada586802.

Full text
APA, Harvard, Vancouver, ISO, and other styles