Academic literature on the topic 'Parallel and distributed multi-Level programming'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Parallel and distributed multi-Level programming.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Parallel and distributed multi-Level programming"

1

Zhunissov, N. M., A. T. Bayaly, and E. T. Satybaldy. "THE POSSIBILITIES OF USING PARALLEL PROGRAMMING USING PYTHON." Q A Iasaýı atyndaǵy Halyqaralyq qazaq-túrіk ýnıversıtetіnіń habarlary (fızıka matematıka ınformatıka serııasy) 28, no. 1 (March 30, 2024): 105–14. http://dx.doi.org/10.47526/2024-1/2524-0080.09.

Full text
Abstract:
This article explores the development of parallel software applications using the Python programming language. Parallel programming is becoming increasingly important in the information technology world as multi-core processors and distributed computing become more common. Python provides developers with a variety of tools and libraries for creating parallel applications, including threads, processes, and asynchronous programming. This topic covers the basics of parallel programming in Python, including the principles of thread and process management, error handling, synchronization mechanisms, and resource management. It also considers asynchronous programming using the asyncio library, which allows asynchronous tasks to be handled efficiently. In addition, this topic raises issues of optimization and profiling of parallel applications, and explores distributed parallel programming using third-party libraries and frameworks. It also emphasizes the importance of testing and debugging in the context of parallel programming. Research and experiments in parallel programming using Python help developers create high-performance and efficient applications that can effectively use multi-core systems and distributed computing. This article offers an in-depth study that examines how suitable Python is for teaching parallel programming to inexperienced students. The results show that there are obstacles that prevent Python from maintaining its advantages in the transition from sequential to parallel programming.
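The combination of tools the abstract surveys (processes for CPU-bound work, asyncio for I/O-bound work) can be sketched in a few lines of standard-library Python. This is an illustrative sketch, not code from the article:

```python
import asyncio
from concurrent.futures import ProcessPoolExecutor

def cpu_bound(n: int) -> int:
    # CPU-bound work runs in worker processes to sidestep the GIL.
    return sum(i * i for i in range(n))

async def io_bound(delay: float) -> float:
    # I/O-bound work is interleaved cooperatively by the event loop.
    await asyncio.sleep(delay)
    return delay

async def main() -> None:
    loop = asyncio.get_running_loop()
    with ProcessPoolExecutor() as pool:
        cpu_tasks = [loop.run_in_executor(pool, cpu_bound, 100_000) for _ in range(4)]
        io_tasks = [io_bound(0.1) for _ in range(4)]
        print(await asyncio.gather(*cpu_tasks, *io_tasks))

if __name__ == "__main__":
    asyncio.run(main())
```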
APA, Harvard, Vancouver, ISO, and other styles
2

Deshpande, Ashish, and Martin Schultz. "Efficient Parallel Programming with Linda." Scientific Programming 1, no. 2 (1992): 177–83. http://dx.doi.org/10.1155/1992/829092.

Full text
Abstract:
Linda is a coordination language invented by David Gelernter at Yale University, which when combined with a computation language (like C) yields a high-level parallel programming language for MIMD machines. Linda is based on a virtual shared associative memory containing objects called tuples. Skeptics have long claimed that Linda programs could not be efficient on distributed memory architectures. In this paper, we address this claim by discussing C-Linda's performance in solving a particular scientific computing problem, the shallow water equations, and make comparisons with alternatives available on various shared and distributed memory parallel machines.
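Linda's associative-memory model is compact enough to caricature in Python. The sketch below implements only the out (deposit) and in (blocking, destructive read) operations of a tuple space, with None acting as a wildcard; it is a single-process illustration of the semantics, not C-Linda's distributed implementation:

```python
import threading

class TupleSpace:
    """Toy Linda-style associative memory: None fields act as wildcards."""
    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def out(self, tup):                      # deposit a tuple
        with self._cond:
            self._tuples.append(tup)
            self._cond.notify_all()

    def _match(self, pattern):
        for t in self._tuples:
            if len(t) == len(pattern) and all(
                p is None or p == v for p, v in zip(pattern, t)
            ):
                return t
        return None

    def in_(self, pattern):                  # blocking, destructive read ('in' is reserved)
        with self._cond:
            while (t := self._match(pattern)) is None:
                self._cond.wait()
            self._tuples.remove(t)
            return t

space = TupleSpace()
space.out(("result", 42))
print(space.in_(("result", None)))           # -> ('result', 42)
```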
APA, Harvard, Vancouver, ISO, and other styles
3

RAUBER, THOMAS, and GUDULA RÜNGER. "A DATA RE-DISTRIBUTION LIBRARY FOR MULTI-PROCESSOR TASK PROGRAMMING." International Journal of Foundations of Computer Science 17, no. 02 (April 2006): 251–70. http://dx.doi.org/10.1142/s0129054106003814.

Full text
Abstract:
Multiprocessor task (M-task) programming is a suitable parallel programming model for coding application problems with an inherent modular structure. An M-task can be executed on a group of processors of arbitrary size, concurrently to other M-tasks of the same application program. The data of a multiprocessor task program usually include composed data structures, like vectors or arrays. For distributed memory machines or cluster platforms, those composed data structures are distributed within one or more processor groups. Thus, a concise parallel programming model for M-tasks requires a standardized distributed data format for composed data structures. Additionally, functions for data re-distribution with respect to different data distributions and different processor group layouts are needed to glue program parts together. In this paper, we present a data re-distribution library which extends the M-task programming with Tlib, a library providing operations to split processor groups and to map M-tasks to processor groups.
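The abstract's core operation, re-distributing composed data structures between differently sized processor groups, reduces to computing a transfer plan. Tlib's actual interface is not reproduced here; all names below are invented for illustration:

```python
def block_owner(block: int, group: list[int]) -> int:
    """Owner of a block under a cyclic layout over a processor group."""
    return group[block % len(group)]

def redistribution_plan(num_blocks, src_group, dst_group):
    """List the (block, source, target) transfers needed to move a
    distributed array from one processor-group layout to another;
    blocks that stay on the same processor are skipped."""
    plan = []
    for b in range(num_blocks):
        src = block_owner(b, src_group)
        dst = block_owner(b, dst_group)
        if src != dst:
            plan.append((b, src, dst))
    return plan

# Re-map 8 blocks from a 4-processor group onto a disjoint 2-processor group.
print(redistribution_plan(8, src_group=[0, 1, 2, 3], dst_group=[4, 5]))
```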
APA, Harvard, Vancouver, ISO, and other styles
4

Aversa, R., B. Di Martino, N. Mazzocca, and S. Venticinque. "A Skeleton Based Programming Paradigm for Mobile Multi-Agents on Distributed Systems and Its Realization within the MAGDA Mobile Agents Platform." Mobile Information Systems 4, no. 2 (2008): 131–46. http://dx.doi.org/10.1155/2008/745406.

Full text
Abstract:
Parallel programming effort can be reduced by using high-level constructs such as algorithmic skeletons. Within the MAGDA toolset, which supports programming and execution of mobile-agent-based distributed applications, we provide a skeleton-based parallel programming environment based on specialization of Algorithmic Skeleton Java interfaces and classes. Their implementation includes mobile agent features for execution on heterogeneous systems, such as clusters of workstations and PCs, and supports reliability and dynamic workload balancing. The user can thus develop a parallel, mobile-agent-based application by simply specialising a given set of classes and methods and using a set of added functionalities.
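The skeleton idea, in which users specialise a given set of classes while the parallel structure stays fixed, translates directly into code. Here is a minimal, hypothetical farm skeleton in Python (the MAGDA Java interfaces themselves are not shown in the abstract, so these names are illustrative):

```python
from multiprocessing import Pool

class Farm:
    """Minimal algorithmic-skeleton base class: the parallel structure is
    fixed here; users specialise only the worker() hook."""
    def worker(self, item):
        raise NotImplementedError

    def run(self, items, processes=4):
        with Pool(processes) as pool:
            return pool.map(self.worker, items)

class SquareFarm(Farm):
    def worker(self, item):              # the only method a user writes
        return item * item

if __name__ == "__main__":
    print(SquareFarm().run(range(8)))    # [0, 1, 4, 9, 16, 25, 36, 49]
```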
APA, Harvard, Vancouver, ISO, and other styles
5

Gorodnyaya, Lidia. "FUNCTIONAL PROGRAMMING FOR PARALLEL COMPUTING." Bulletin of the Novosibirsk Computing Center. Series: Computer Science, no. 45 (2021): 29–48. http://dx.doi.org/10.31144/bncc.cs.2542-1972.2021.n45.p29-48.

Full text
Abstract:
The paper is devoted to modern trends in the application of functional programming to the problems of organizing parallel computations. Functional programming is considered as a meta-paradigm for solving the problems of developing multi-threaded programs for multiprocessor complexes and distributed systems, as well as for solving the problems associated with rapid IT development. The semantic and pragmatic principles of functional programming and consequences of these principles are described. The paradigm analysis of programming languages and systems is used, which allows assessing their similarities and differences. Taking into account these features is necessary when predicting the course of application processes, as well as when planning the study and organization of program development. There are reasons to believe that functional programming is capable of improving program performance through its adaptability to modeling and prototyping. A variety of features and characteristics inherent in the development and debugging of long-lived parallel computing programs is shown. The author emphasizes the prospects of functional programming as a universal technique for solving complex problems burdened with difficult to verify and poorly compatible requirements. A brief outline of the requirements for a multiparadigm parallel programming language is given.
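One concrete reading of the functional meta-paradigm discussed here: side-effect-free functions can be mapped over data in parallel without locks, and the result is deterministic regardless of evaluation order. A minimal standard-library sketch (not from the paper):

```python
from concurrent.futures import ProcessPoolExecutor
from functools import reduce
import operator

def f(x: int) -> int:
    # A pure function: no shared state, so parallel evaluation order is irrelevant.
    return x * x + 1

if __name__ == "__main__":
    with ProcessPoolExecutor() as ex:
        mapped = list(ex.map(f, range(10)))   # parallel map over pure f
    total = reduce(operator.add, mapped)      # deterministic fold of the results
    print(mapped, total)
```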
APA, Harvard, Vancouver, ISO, and other styles
6

Spahi, Enis, and D. Altilar. "ITU-PRP: Parallel and Distributed Computing Middleware for Java Developers." International Journal of Business & Technology 3, no. 1 (November 2014): 2–13. http://dx.doi.org/10.33107/ijbte.2014.3.1.01.

Full text
Abstract:
ITU-PRP provides a parallel programming framework for Java developers with which they can adapt their sequential application code to operate in a distributed, multi-host parallel environment. Developers implement parallel models, such as Loop Parallelism, Divide and Conquer, Master-Slave, and Fork-Join, with the help of an API library provided by the framework. The resulting parallel applications are submitted to a middleware called the Parallel Running Platform (PRP), on which parallel resources are organized and parallel processing is performed. The middleware creates Task Plans (TP) according to the application's parallel model and assigns the best available resource hosts in order to perform fast parallel processing. Task Plans are created dynamically in real time according to resources' actual utilization status or availability, instead of being predefined or preconfigured. ITU-PRP achieves better efficiency in parallel processing over big data sets and distributes the divided base data to multiple hosts to be operated on with coarse-grained parallelism. According to this model, distributed parallel tasks operate independently with minimal interaction until processing ends.
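The framework's own API is not quoted in the abstract, but the parallel models it names are standard. As one example, a generic fork-join divide-and-conquer (a chunked parallel mergesort) might look like this sketch:

```python
from concurrent.futures import ProcessPoolExecutor

def merge(left, right):
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

def mergesort(xs):
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    return merge(mergesort(xs[:mid]), mergesort(xs[mid:]))

def parallel_mergesort(xs, workers=4):
    # Fork: split the data into chunks sorted by independent processes.
    chunk = max(1, len(xs) // workers)
    parts = [xs[i:i + chunk] for i in range(0, len(xs), chunk)]
    with ProcessPoolExecutor(workers) as ex:
        sorted_parts = list(ex.map(mergesort, parts))
    # Join: merge the sorted chunks back together.
    result = []
    for p in sorted_parts:
        result = merge(result, p)
    return result

if __name__ == "__main__":
    import random
    data = random.sample(range(1000), 64)
    assert parallel_mergesort(data) == sorted(data)
```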
APA, Harvard, Vancouver, ISO, and other styles
7

Городняя, Лидия Васильевна. "Perspectives of Functional Programming of Parallel Computations." Russian Digital Libraries Journal 24, no. 6 (January 26, 2022): 1090–116. http://dx.doi.org/10.26907/1562-5419-2021-24-6-1090-1116.

Full text
Abstract:
The article is devoted to the results of the analysis of modern trends in functional programming, considered as a metaparadigm for solving the problems of organizing parallel computations and multithreaded programs for multiprocessor complexes and distributed systems. Taking into account the multi-paradigm nature of parallel programming, the paradigm analysis of languages and functional programming systems is used. This makes it possible to reduce the complexity of the problems being solved by methods of decomposition of programs into autonomously developed components, and to evaluate their similarities and differences. Consideration of such features is necessary when predicting the course of application processes, as well as when planning the study and organizing the development of programs. There is reason to believe that functional programming has the ability to improve program performance. A variety of paradigmatic characteristics inherent in the preparation and debugging of long-lived parallel computing programs are shown.
APA, Harvard, Vancouver, ISO, and other styles
8

LUKE, EDWARD A., and THOMAS GEORGE. "Loci: a rule-based framework for parallel multi-disciplinary simulation synthesis." Journal of Functional Programming 15, no. 3 (May 2005): 477–502. http://dx.doi.org/10.1017/s0956796805005514.

Full text
Abstract:
We present a rule-based framework for the development of scalable parallel high-performance simulations for a broad class of scientific applications (with particular emphasis on continuum mechanics). We take a pragmatic approach to our programming abstractions by implementing structures that are used frequently and have common high-performance implementations on distributed memory architectures. The resulting framework borrows heavily from rule-based systems for relational database models, while limiting the scope to those parts that have an obvious high-performance implementation. Using our approach, we demonstrate predictable performance behavior and efficient utilization of large-scale distributed memory architectures on problems of significant complexity involving multiple disciplines.
APA, Harvard, Vancouver, ISO, and other styles
9

TRINDER, P. W. "Special Issue High Performance Parallel Functional Programming." Journal of Functional Programming 15, no. 3 (May 2005): 351–52. http://dx.doi.org/10.1017/s0956796805005496.

Full text
Abstract:
Engineering high-performance parallel programs is hard: not only must a correct, efficient and inherently-parallel algorithm be developed, but the computations must be effectively and efficiently coordinated across multiple processors. It has long been recognised that ideas and approaches drawn from functional programming may be particularly applicable to parallel and distributed computing (e.g. Wegner 1971). There are several reasons for this suitability. Concurrent stateless computations are much easier to coordinate, high-level coordination abstractions reduce programming effort, and declarative notations are amenable to reasoning, i.e. to optimising transformations, derivation and performance analysis.
APA, Harvard, Vancouver, ISO, and other styles
10

POGGI, AGOSTINO, and PAOLA TURCI. "AN AGENT BASED LANGUAGE FOR THE DEVELOPMENT OF DISTRIBUTED SOFTWARE SYSTEMS." International Journal on Artificial Intelligence Tools 05, no. 03 (September 1996): 347–66. http://dx.doi.org/10.1142/s0218213096000237.

Full text
Abstract:
This paper presents a concurrent object-oriented language, called CUBL, that seems to be suitable for the development and maintenance of multi-agent systems. This language is based on objects, called c_units, that act in parallel and communicate with each other through synchronous and asynchronous message passing, and it allows the distribution of a program, that is, of its objects, over a network of UNIX workstations. The language has been enriched with an agent architecture that offers some of the more important features for agent-oriented programming and some advantages over other implemented agent architectures. In particular, this architecture allows the development of systems where agents communicate with each other through a high-level agent communication language and can change their behavior during their life.
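CUBL's c_units, parallel objects exchanging synchronous and asynchronous messages, resemble actors. A toy Python analogue of the two messaging styles follows (illustrative names, not CUBL syntax):

```python
import queue
import threading

class Agent(threading.Thread):
    """Toy c_unit-like object: runs in parallel, reacts to messages."""
    def __init__(self, name):
        super().__init__(daemon=True)
        self.name = name
        self.mailbox = queue.Queue()

    def send(self, msg):                 # asynchronous: caller does not wait
        self.mailbox.put(msg)

    def ask(self, msg):                  # synchronous: caller waits for a reply
        reply = queue.Queue()
        self.mailbox.put((msg, reply))
        return reply.get()

    def run(self):
        while True:
            item = self.mailbox.get()
            if isinstance(item, tuple):          # synchronous request
                msg, reply = item
                reply.put(f"{self.name} handled {msg!r}")
            else:                                # asynchronous notification
                print(f"{self.name} received {item!r}")

a = Agent("agent-1")
a.start()
a.send("ping")                           # fire-and-forget
print(a.ask("status"))                   # blocks until the agent answers
```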
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Parallel and distributed multi-Level programming"

1

Djemame, Karim. "Distributed simulation of high-level algebraic Petri nets." Thesis, University of Glasgow, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.301624.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Saifi, Mohamad Maamoun El. "PMPI: uma implementação MPI multi-plataforma, multi-linguagem." Universidade de São Paulo, 2006. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-08122006-154811/.

Full text
Abstract:
This dissertation describes PMPI, an implementation of the MPI standard on heterogeneous platforms. Unlike other MPI implementations, PMPI permits an MPI computation to run on a multi-platform system. In addition, PMPI permits programs executing on different nodes to be written in different programming languages. PMPI is built on top of the Dotnet Framework. With PMPI, nodes call MPI functions that are transparently executed on the participating nodes across the network. PMPI can span multiple administrative domains distributed geographically. To programmers, the grid looks like a local MPI computation. The model of computation is indistinguishable from that of a standard MPI computation. This dissertation studies the implementation of PMPI with the Microsoft Dotnet Framework and the MONO Dotnet Framework to provide a common layer for a multi-language, multi-platform MPI library. Results obtained from tests running PMPI on a heterogeneous system are analyzed. They show that the PMPI implementation is feasible and has many advantages that can be explored further.
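PMPI's Dotnet-based API is not reproduced here; for flavor, the message-passing model it implements is the standard MPI one, shown below with the mpi4py binding:

```python
# Run with: mpiexec -n 2 python ping.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    comm.send({"payload": [1, 2, 3]}, dest=1, tag=11)   # objects are pickled automatically
    print("rank 0 sent message")
elif rank == 1:
    data = comm.recv(source=0, tag=11)
    print("rank 1 received", data)
```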
APA, Harvard, Vancouver, ISO, and other styles
3

Xirogiannis, George. "Execution of Prolog by transformations on distributed memory multi-processors." Thesis, Heriot-Watt University, 1998. http://hdl.handle.net/10399/639.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Morgadinho, Nuno Eduardo Quaresma. "Distributed multi-threading in GNU prolog." Master's thesis, Universidade de Évora, 2007. http://hdl.handle.net/10174/16496.

Full text
Abstract:
Although parallel computing has been widely researched, the process of bringing concurrency and parallel programming to the mainstream has just begun. Combining a distributed multi-threading environment like PM2 with Prolog opens the way to exploit concurrency and parallel computing using logic programming. To achieve such a purpose, we developed PM2-Prolog, a Prolog interface to the PM2 system. It allows multithreaded Prolog applications to run in multiple GNU Prolog engines in a distributed environment, thus taking advantage of the resources available on a computer network. This is especially useful for computationally intensive problems, where performance is an important factor. The system API offers thread management primitives, as well as explicit communication between threads. Preliminary test results show an almost linear speedup, when compared to a sequential version.
APA, Harvard, Vancouver, ISO, and other styles
5

Samson, Rodelyn Reyes. "A multi-agent architecture for internet distributed computing system." CSUSB ScholarWorks, 2003. https://scholarworks.lib.csusb.edu/etd-project/2408.

Full text
Abstract:
This thesis presents a taxonomy of agent-based distributed computing systems. Based on this taxonomy, the design, implementation, analysis, and distribution protocol of a multi-agent architecture for an internet-based distributed computing system were developed. A prototype of the designed architecture was implemented on Spider III using the IBM Aglets software development kit (ASDK 2.0) and the Java language.
APA, Harvard, Vancouver, ISO, and other styles
6

Ruan, Jianhua, Han-Shen Yuh, and Koping Wang. "Spider III: A multi-agent-based distributed computing system." CSUSB ScholarWorks, 2002. https://scholarworks.lib.csusb.edu/etd-project/2249.

Full text
Abstract:
The project, Spider III, presents the architecture and protocol of a multi-agent-based internet distributed computing system, which provides a convenient development and execution environment for transparent task distribution, load balancing, and fault tolerance. Spider is an ongoing distributed computing project in the Department of Computer Science, California State University, San Bernardino. It was first proposed as an object-oriented distributed system by Han-Sheng Yuh in his master's thesis in 1997. It has been further developed by Koping Wang in his master's project, where he made a large contribution and implemented the Spider II system.
APA, Harvard, Vancouver, ISO, and other styles
7

Gurhem, Jérôme. "Paradigmes de programmation répartie et parallèle utilisant des graphes de tâches pour supercalculateurs post-pétascale." Thesis, Lille, 2021. http://www.theses.fr/2021LILUI005.

Full text
Abstract:
Since the middle of the 1990s, message passing libraries have been the most used technology to implement parallel and distributed applications. However, they may not be an efficient enough solution on exascale machines, since scalability issues will appear due to the increase in computing resources. Task-based programming models can be used, for example, to avoid collective communications across all the resources, such as reductions, broadcasts or gathers, by transforming them into multiple operations on tasks. These operations can then be scheduled by the scheduler to place the data and computations in a way that optimizes and reduces the data communications. The main objective of this thesis is to study what task-based programming must be for scientific applications and to propose a specification of such distributed and parallel programming, by experimenting with several simplified representations of important scientific applications for TOTAL, and with classical dense and sparse linear methods. During the dissertation, several programming languages and paradigms are studied. Dense linear methods to solve linear systems, sequences of sparse matrix-vector products, and the Kirchhoff seismic pre-stack depth migration are studied and implemented as task-based applications. A taxonomy based on several of these languages and paradigms is proposed. Software was developed using these programming models for each simplified application. As a result of this research, a methodology for parallel task programming is proposed, optimizing data movements in general and for the targeted scientific applications in particular.
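The thesis' motivating transformation, replacing one collective reduction by many small schedulable tasks, can be sketched generically: a tree of pairwise tasks gives a scheduler freedom to place each operation near its data. No specific runtime's API is assumed in this sketch:

```python
from concurrent.futures import ThreadPoolExecutor

def tree_reduce(values, op, ex):
    """Reduce by rounds of independent pairwise tasks instead of one collective."""
    level = list(values)
    while len(level) > 1:
        pairs = [(level[i], level[i + 1]) for i in range(0, len(level) - 1, 2)]
        futures = [ex.submit(op, a, b) for a, b in pairs]   # independently schedulable tasks
        next_level = [f.result() for f in futures]
        if len(level) % 2:                                  # odd element carries over
            next_level.append(level[-1])
        level = next_level
    return level[0]

with ThreadPoolExecutor() as ex:
    print(tree_reduce(range(10), lambda a, b: a + b, ex))   # 45
```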
APA, Harvard, Vancouver, ISO, and other styles
8

Moukir, Sara. "High performance analysis for road traffic control." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASG039.

Full text
Abstract:
The need to reduce travel times and energy consumption in urban road networks is critical for improving collective well-being and environmental sustainability. Since the 1950s, traffic modeling has been a central research focus. With the rapid evolution of computing capabilities in the 21st century, sophisticated digital simulations have emerged, accurately depicting road traffic complexities. Mobility simulations are essential for assessing emerging technologies like cooperative systems and dynamic GPS navigation without disrupting real traffic.As transport systems become more complex with real-time information, simulation models must adapt. Multi-agent simulations, which analyze individual behaviors within a dynamic environment, are particularly suited for this task. These simulations help understand and manage urban traffic by representing interactions between travelers and their environment.Simulating large populations of travelers in cities, potentially millions of individuals, has historically been computationally demanding. Advanced computer technologies allowing distributed calculations across multiple computers have opened new possibilities. However, many urban mobility simulators do not fully exploit these distributed architectures, limiting their ability to model complex scenarios involving many travelers and extensive networks.The main objective of this research is to improve the algorithmic and computational performance of mobility simulators. We aim to develop and validate generic and reproducible distribution models that can be adopted by various multi-agent mobility simulators. This approach seeks to overcome technical barriers and provide a solid foundation for analyzing complex transport systems in dynamic urban environments.Our research leverages the MATSim traffic simulator due to its flexibility and open structure. MATSim is widely recognized in the literature for multi-agent traffic simulation, making it an ideal candidate to test our generic methods.Our first contribution applies the "Unite and Conquer" (UC) approach to MATSim. This method accelerates simulation speed by leveraging modern computing architectures. The multiMATSim approach involves replicating several MATSim instances across multiple computing nodes with periodic communications. Each instance runs on a separate node, utilizing MATSim's native multithreading capabilities to enhance parallelism. Periodic synchronization ensures data consistency, while fault tolerance mechanisms allow the simulation to continue smoothly even if some instances fail. This approach efficiently uses diverse computational resources based on each node's specific capabilities.The second contribution explores artificial intelligence techniques to expedite the simulation process. Specifically, we use deep neural networks to predict MATSim simulation outcomes. Initially implemented on a single node, this proof-of-concept approach efficiently uses available CPU resources. Neural networks are trained on data from previous simulations to predict key metrics like travel times and congestion levels. The outputs are compared to MATSim results to assess accuracy. This approach is designed to scale, with future plans for distributed neural network training across multiple nodes.In summary, our contributions provide new algorithmic variants and explore integrating high-performance computing and AI into multi-agent traffic simulators. 
We aim to demonstrate the impact of these models and technologies on traffic simulation, addressing the challenges and limitations of their implementation. Our work highlights the benefits of emerging architectures and new algorithmic concepts for enhancing the robustness and performance of traffic simulators, presenting promising results.
APA, Harvard, Vancouver, ISO, and other styles
9

Adornes, Daniel Couto. "A unified mapreduce programming interface for multi-core and distributed architectures." Pontifícia Universidade Católica do Rio Grande do Sul, 2015. http://tede2.pucrs.br/tede2/handle/tede/6782.

Full text
Abstract:
In order to improve the performance, simplicity and scalability of large dataset processing, Google proposed the MapReduce parallel pattern. This pattern has been implemented in several ways for different architectural levels, achieving significant results for high performance computing. However, developing optimized code with those solutions requires specialized knowledge of each framework's interface and programming language. Recently, DSL-POPP was proposed as a framework with a high-level language for pattern-oriented parallel programming, aimed at abstracting the complexities of parallel and distributed code. Inspired by DSL-POPP, this work proposes the implementation of a unified MapReduce programming interface with rules for code transformation to optimized solutions for shared-memory multi-core and distributed architectures. The evaluation demonstrates that the proposed interface is able to avoid performance losses, while also achieving a code and development cost reduction from 41.84% to 96.48%. Moreover, the construction of the code generator, compatibility with other MapReduce solutions, and the extension of DSL-POPP with the MapReduce pattern are proposed as future work.
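The dissertation's interface is not reproduced here; the sketch below only shows the general shape of a MapReduce front end whose user code is independent of the back end (a shared-memory process pool in this illustrative version):

```python
from collections import defaultdict
from itertools import chain
from multiprocessing import Pool

def word_map(line):
    return [(w, 1) for w in line.split()]

def word_reduce(key, counts):
    return key, sum(counts)

def run_mapreduce(map_fn, reduce_fn, data, processes=4):
    # Map phase: independent records processed in parallel (shared-memory back end).
    with Pool(processes) as pool:
        mapped = chain.from_iterable(pool.map(map_fn, data))
    # Shuffle: group intermediate pairs by key.
    groups = defaultdict(list)
    for k, v in mapped:
        groups[k].append(v)
    # Reduce phase.
    return dict(reduce_fn(k, vs) for k, vs in groups.items())

if __name__ == "__main__":
    lines = ["to be or not to be", "be fast"]
    print(run_mapreduce(word_map, word_reduce, lines))
```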
APA, Harvard, Vancouver, ISO, and other styles
10

McCall, Andrew James. "Multi-level Parallelism with MPI and OpenACC for CFD Applications." Thesis, Virginia Tech, 2017. http://hdl.handle.net/10919/78203.

Full text
Abstract:
High-level parallel programming approaches, such as OpenACC, have recently become popular in complex fluid dynamics research since they are cross-platform and easy to implement. OpenACC is a directive-based programming model that, unlike low-level programming models, abstracts the details of implementation on the GPU. Although OpenACC generally limits the performance of the GPU, this model significantly reduces the work required to port an existing code to any accelerator platform, including GPUs. The purpose of this research is twofold: to investigate the effectiveness of OpenACC in developing a portable and maintainable GPU-accelerated code, and to determine the capability of OpenACC to accelerate large, complex programs on the GPU. In both of these studies, the OpenACC implementation is optimized and extended to a multi-GPU implementation while maintaining a unified code base. OpenACC is shown as a viable option for GPU computing with CFD problems. In the first study, a CFD code that solves incompressible cavity flows is accelerated using OpenACC. Overlapping communication with computation improves performance for the multi-GPU implementation by up to 21%, achieving up to 400 times faster performance than a single CPU and 99% weak scalability efficiency with 32 GPUs. The second study ports the execution of a more complex CFD research code to the GPU using OpenACC. Challenges using OpenACC with modern Fortran are discussed. Three test cases are used to evaluate performance and scalability. The multi-GPU performance using 27 GPUs is up to 100 times faster than a single CPU and maintains a weak scalability efficiency of 95%.
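The overlap technique credited with the 21% improvement relies on non-blocking transfers: communication is started, independent computation proceeds, and completion is awaited afterwards. A minimal two-rank mpi4py sketch of the pattern (array names are illustrative):

```python
# Run with: mpiexec -n 2 python overlap.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
other = 1 - rank

halo_out = np.full(1000, rank, dtype="d")
halo_in = np.empty(1000, dtype="d")
interior = np.random.rand(100_000)

# Start the non-blocking halo exchange...
reqs = [comm.Isend(halo_out, dest=other), comm.Irecv(halo_in, source=other)]
# ...and do interior work while the messages are in flight.
interior_sum = interior.sum()
MPI.Request.Waitall(reqs)

print(f"rank {rank}: interior={interior_sum:.2f}, halo first={halo_in[0]}")
```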
Master of Science
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Parallel and distributed multi-Level programming"

1

Bruce Irvin, R., and Barton P. Miller. "A Performance Tool for High-Level Parallel Programming Languages." In Programming Environments for Massively Parallel Distributed Systems, 299–313. Basel: Birkhäuser Basel, 1994. http://dx.doi.org/10.1007/978-3-0348-8534-8_30.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Hofstee, H. Peter, Johan J. Lukkien, and Jan L. A. van de Snepscheut. "A distributed implementation of a task pool." In Research Directions in High-Level Parallel Programming Languages, 338–48. Berlin, Heidelberg: Springer Berlin Heidelberg, 1992. http://dx.doi.org/10.1007/3-540-55160-3_54.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Faasen, Craig. "Intermediate uniformly distributed tuple space on transputer meshes." In Research Directions in High-Level Parallel Programming Languages, 157–73. Berlin, Heidelberg: Springer Berlin Heidelberg, 1992. http://dx.doi.org/10.1007/3-540-55160-3_41.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Sakagami, Hitoshi. "Three-Dimensional Fluid Code with XcalableMP." In XcalableMP PGAS Programming Language, 165–79. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-7683-6_6.

Full text
Abstract:
In order to adapt parallel computers as general, convenient tools for computational scientists, a high-level, easy-to-use and portable parallel programming paradigm is mandatory. XcalableMP, which is proposed by the XcalableMP Specification Working Group, is a directive-based language extension for Fortran and C to easily describe parallelization in programs for distributed memory parallel computers. The Omni XcalableMP compiler, which is provided as a reference XcalableMP compiler, is currently implemented as a source-to-source translator. It converts XcalableMP programs to standard MPI programs, which can be easily compiled by the native Fortran compiler and executed on most parallel computers. A three-dimensional Eulerian fluid code written in Fortran is parallelized by XcalableMP using two different programming models with the ordinary domain decomposition method, and its performance is measured on the K computer. Programs converted by the Omni XcalableMP compiler prevent native Fortran compiler optimizations and show lower performance than that of hand-coded MPI programs. Almost the same performance is finally obtained by using specific compiler options of the native Fortran compiler in the case of the global-view programming model, but the performance degradation is not improved by specifying any native compiler options when the code is parallelized by the local-view programming model.
APA, Harvard, Vancouver, ISO, and other styles
5

Rastogi, Rajeev, Philip Bohannon, James Parker, Avi Silberschatz, S. Seshadri, and S. Sudarshan. "Distributed Multi-Level Recovery in Main-Memory Databases." In Parallel and Distributed Information Systems, 41–71. Boston, MA: Springer US, 1998. http://dx.doi.org/10.1007/978-1-4757-6132-0_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Dib, Djawida, Nikos Parlavantzas, and Christine Morin. "Towards Multi-level Adaptation for Distributed Operating Systems and Applications." In Algorithms and Architectures for Parallel Processing, 100–109. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33065-0_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Protze, Joachim, Miwako Tsuji, Christian Terboven, Thomas Dufaud, Hitoshi Murai, Serge Petiton, Nahid Emad, Matthias S. Müller, and Taisuke Boku. "MYX: Runtime Correctness Analysis for Multi-Level Parallel Programming Paradigms." In Software for Exascale Computing - SPPEXA 2016-2019, 545–67. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-47956-5_18.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Jung, Yu-Jin, and Yong-Ik Yoon. "Flexible Multi-level Regression Model for Prediction of Pedestrian Abnormal Behavior." In Advances in Parallel and Distributed Computing and Ubiquitous Services, 137–43. Singapore: Springer Singapore, 2016. http://dx.doi.org/10.1007/978-981-10-0068-3_17.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Yu, Yang, Laksono Kurnianggoro, Wahyono, and Kang-Hyun Jo. "Online Programming Design of Distributed System Based on Multi-level Storage." In Intelligent Computing Methodologies, 745–52. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-42297-8_69.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Spinelli, Stefano. "Optimal Management and Control of Smart Thermal-Energy Grids." In Special Topics in Information Technology, 15–27. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-85918-3_2.

Full text
Abstract:
This work deals with the development of novel algorithms and methodologies for the optimal management and control of thermal and electrical energy units operating in a networked configuration. The aim of the work is to foster the creation of a smart thermal-energy grid (smart-TEG) by providing supporting tools for the modeling of subsystems and their optimal control and coordination. A hierarchical scheme is proposed to optimally address the management and control issues of the smart-TEG. Different methods are adopted to deal with the features of the specific generation units involved, e.g., multi-rate MPC approaches or linear parameter-varying strategies for managing subsystem nonlinearity. An advanced scheme based on an ensemble model is also conceived for a network of homogeneous units operating in parallel. Moreover, a distributed optimization algorithm for the high-level unit commitment problem is proposed to provide a robust, flexible and scalable scheme.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Parallel and distributed multi-Level programming"

1

Steuwer, Michel, Philipp Kegel, and Sergei Gorlatch. "Towards High-Level Programming of Multi-GPU Systems Using the SkelCL Library." In 2012 26th IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW). IEEE, 2012. http://dx.doi.org/10.1109/ipdpsw.2012.229.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Yi Pan. "High-level vs low-level parallel programming for scientific computing." In Proceedings 16th International Parallel and Distributed Processing Symposium. IPDPS 2002. IEEE, 2002. http://dx.doi.org/10.1109/ipdps.2002.1016644.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Jungblut, Pascal, and Dieter Kranzlmüller. "Optimal Schedules for High-Level Programming Environments on FPGAs with Constraint Programming." In 2022 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW). IEEE, 2022. http://dx.doi.org/10.1109/ipdpsw55747.2022.00025.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Brady, T., E. Konstantinov, and A. Lastovetsky. "SmartNetSolve: high-level programming system for high performance grid computing." In Proceedings 20th IEEE International Parallel & Distributed Processing Symposium. IEEE, 2006. http://dx.doi.org/10.1109/ipdps.2006.1639660.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Isard, Michael, and Yuan Yu. "Distributed data-parallel computing using a high-level programming language." In the 35th SIGMOD international conference. New York, New York, USA: ACM Press, 2009. http://dx.doi.org/10.1145/1559845.1559962.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Chiang, Chia-Chu. "Low-level language constructs considered harmful for distributed parallel programming." In the 42nd annual Southeast regional conference. New York, New York, USA: ACM Press, 2004. http://dx.doi.org/10.1145/986537.986603.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

"Workshop on high-level parallel programming models & supportive environments." In 18th International Parallel and Distributed Processing Symposium, 2004. Proceedings. IEEE, 2004. http://dx.doi.org/10.1109/ipdps.2004.1303141.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Niculescu, Virginia, Frederic Loulergue, Darius Bufnea, and Adrian Sterca. "A Java Framework for High Level Parallel Programming Using Powerlists." In 2017 18th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT). IEEE, 2017. http://dx.doi.org/10.1109/pdcat.2017.00049.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Dennis, Jack B. "The fresh breeze project: A multi-core chip supporting composable parallel programming." In Distributed Processing Symposium (IPDPS). IEEE, 2008. http://dx.doi.org/10.1109/ipdps.2008.4536391.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Li, Dong, and Heike Jagode. "Workshop 6: HIPS High-level Parallel Programming Models and Supportive Environments." In 2020 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW). IEEE, 2020. http://dx.doi.org/10.1109/ipdpsw50202.2020.00064.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Parallel and distributed multi-Level programming"

1

Amela, R., R. Badia, S. Böhm, R. Tosi, C. Soriano, and R. Rossi. D4.2 Profiling report of the partner’s tools, complete with performance suggestions. Scipedia, 2021. http://dx.doi.org/10.23967/exaqute.2021.2.023.

Full text
Abstract:
This deliverable focuses on the profiling activities developed in the project with the partners' applications. To perform these profiling activities, a couple of benchmarks were defined in collaboration with WP5. The first benchmark is an embarrassingly parallel benchmark that performs a read and then multiple writes of the same object, with the objective of stressing the memory and storage systems and evaluating the overhead when these reads and writes are performed in parallel. A second benchmark is defined based on the Continuation Multi Level Monte Carlo (C-MLMC) algorithm. While this algorithm is normally executed using multiple levels, for the profiling and performance analysis objectives, the execution of a single level was enough, since the forthcoming levels have similar performance characteristics. Additionally, while the simulation tasks can be executed as parallel (multi-threaded) tasks, in the benchmark single-threaded tasks were executed to increase the number of simulations to be scheduled and stress the scheduling engines. A set of experiments based on these two benchmarks has been executed on the MareNostrum 4 supercomputer, using PyCOMPSs as the underlying programming model and dynamic scheduler of the tasks involved in the executions. While the first benchmark was executed several times in a single iteration, the second benchmark was executed in an iterative manner, with cycles of 1) execution and trace generation; 2) performance analysis; 3) improvements. This enabled several improvements in the benchmark and in the scheduler of PyCOMPSs. The initial iterations focused on the C-MLMC structure itself, refactoring the code to remove fine-grain and sequential tasks and merge them into larger-granularity tasks. The next iterations focused on improving the PyCOMPSs scheduler, removing existing bottlenecks and increasing its performance by making the scheduler a multithreaded engine. While the results can still be improved, we are satisfied with them, since the granularity of the simulations run in this evaluation step is much finer than the one that will be used for the real scenarios. The deliverable finishes with some recommendations that should be followed along the project in order to obtain good performance in the execution of the project codes.
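PyCOMPSs' decorator-based API is not reproduced here; as a neutral stand-in, the report's single-level Monte Carlo structure, i.e. many independent single-threaded simulation tasks followed by a statistics step, looks roughly like this sketch:

```python
import random
from concurrent.futures import ProcessPoolExecutor

def simulate(seed: int) -> float:
    # One single-threaded "simulation" task: estimate pi by rejection sampling.
    rng = random.Random(seed)
    hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(100_000))
    return 4.0 * hits / 100_000

if __name__ == "__main__":
    n_tasks = 64                       # many small tasks stress the scheduler
    with ProcessPoolExecutor() as ex:
        samples = list(ex.map(simulate, range(n_tasks)))
    mean = sum(samples) / n_tasks
    var = sum((s - mean) ** 2 for s in samples) / (n_tasks - 1)
    print(f"estimate={mean:.5f}, sample variance={var:.2e}")
```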
APA, Harvard, Vancouver, ISO, and other styles