Selected scientific literature on the topic "Parallel and distributed multi-Level programming"
Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles
Consult the list of current articles, books, theses, conference proceedings, and other scientific sources relevant to the topic "Parallel and distributed multi-Level programming".
Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online, if it is available in the metadata.
Journal articles on the topic "Parallel and distributed multi-Level programming"
Zhunissov, N. M., A. T. Bayaly, and E. T. Satybaldy. "THE POSSIBILITIES OF USING PARALLEL PROGRAMMING USING PYTHON". Q A Iasaýı atyndaǵy Halyqaralyq qazaq-túrіk ýnıversıtetіnіń habarlary (fızıka matematıka ınformatıka serııasy) 28, no. 1 (March 30, 2024): 105–14. http://dx.doi.org/10.47526/2024-1/2524-0080.09.
Deshpande, Ashish, and Martin Schultz. "Efficient Parallel Programming with Linda". Scientific Programming 1, no. 2 (1992): 177–83. http://dx.doi.org/10.1155/1992/829092.
RAUBER, THOMAS, and GUDULA RÜNGER. "A DATA RE-DISTRIBUTION LIBRARY FOR MULTI-PROCESSOR TASK PROGRAMMING". International Journal of Foundations of Computer Science 17, no. 02 (April 2006): 251–70. http://dx.doi.org/10.1142/s0129054106003814.
Aversa, R., B. Di Martino, N. Mazzocca, and S. Venticinque. "A Skeleton Based Programming Paradigm for Mobile Multi-Agents on Distributed Systems and Its Realization within the MAGDA Mobile Agents Platform". Mobile Information Systems 4, no. 2 (2008): 131–46. http://dx.doi.org/10.1155/2008/745406.
Gorodnyaya, Lidia. "FUNCTIONAL PROGRAMMING FOR PARALLEL COMPUTING". Bulletin of the Novosibirsk Computing Center. Series: Computer Science, no. 45 (2021): 29–48. http://dx.doi.org/10.31144/bncc.cs.2542-1972.2021.n45.p29-48.
Spahi, Enis, and D. Altilar. "ITU-PRP: Parallel and Distributed Computing Middleware for Java Developers". International Journal of Business & Technology 3, no. 1 (November 2014): 2–13. http://dx.doi.org/10.33107/ijbte.2014.3.1.01.
Городняя, Лидия Васильевна. "Perspectives of Functional Programming of Parallel Computations". Russian Digital Libraries Journal 24, no. 6 (January 26, 2022): 1090–116. http://dx.doi.org/10.26907/1562-5419-2021-24-6-1090-1116.
LUKE, EDWARD A., and THOMAS GEORGE. "Loci: a rule-based framework for parallel multi-disciplinary simulation synthesis". Journal of Functional Programming 15, no. 3 (May 2005): 477–502. http://dx.doi.org/10.1017/s0956796805005514.
TRINDER, P. W. "Special Issue High Performance Parallel Functional Programming". Journal of Functional Programming 15, no. 3 (May 2005): 351–52. http://dx.doi.org/10.1017/s0956796805005496.
POGGI, AGOSTINO, and PAOLA TURCI. "AN AGENT BASED LANGUAGE FOR THE DEVELOPMENT OF DISTRIBUTED SOFTWARE SYSTEMS". International Journal on Artificial Intelligence Tools 05, no. 03 (September 1996): 347–66. http://dx.doi.org/10.1142/s0218213096000237.
Theses / dissertations on the topic "Parallel and distributed multi-Level programming"
Djemame, Karim. "Distributed simulation of high-level algebraic Petri nets". Thesis, University of Glasgow, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.301624.
Saifi, Mohamad Maamoun El. "PMPI: uma implementação MPI multi-plataforma, multi-linguagem". Universidade de São Paulo, 2006. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-08122006-154811/.
This dissertation describes PMPI, an implementation of the MPI standard on a heterogeneous platform. Unlike other MPI implementations, PMPI permits MPI computation to run on a multi-platform system. In addition, PMPI permits programs executing on different nodes to be written in different programming languages. PMPI is built on top of the .NET framework. With PMPI, nodes call MPI functions that are transparently executed on the participating nodes across the network. PMPI can span multiple administrative domains distributed geographically; to programmers, the grid looks like a local MPI computation, and the model of computation is indistinguishable from that of standard MPI. This dissertation studies the implementation of PMPI with the Microsoft .NET framework and the Mono framework to provide a common layer for a multi-language, multi-platform MPI library. Results obtained from tests running PMPI on a heterogeneous system are analyzed; they show that the PMPI implementation is feasible and has many advantages that can be explored.
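The programming model this abstract describes, in which each rank calls send/receive operations without knowing where its peers actually run, can be illustrated with a toy communicator. This is a minimal sketch using threads and in-memory queues; the `Comm` and `worker` names are hypothetical illustrations, not PMPI's actual API.

```python
import threading
import queue

class Comm:
    """Toy message-passing communicator over in-memory queues.

    Illustrative only: each rank sends and receives messages through the
    communicator, unaware of where the other ranks execute.
    """

    def __init__(self, size):
        self.size = size
        self._mailbox = [queue.Queue() for _ in range(size)]

    def send(self, payload, dest):
        # deliver a message into the destination rank's mailbox
        self._mailbox[dest].put(payload)

    def recv(self, rank):
        # block until a message arrives for this rank
        return self._mailbox[rank].get()


def worker(comm, rank, chunk, out):
    """Each rank sums its own chunk; rank 0 gathers the partial sums."""
    local = sum(chunk)
    if rank == 0:
        total = local
        for _ in range(comm.size - 1):
            total += comm.recv(0)
        out["total"] = total
    else:
        comm.send(local, 0)


if __name__ == "__main__":
    data = [[1, 2], [3, 4], [5, 6]]   # one chunk per rank
    comm, out = Comm(len(data)), {}
    threads = [threading.Thread(target=worker, args=(comm, r, data[r], out))
               for r in range(comm.size)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(out["total"])  # 21
```

In a real MPI implementation the mailboxes would be network channels between processes on different machines, but the calling code would look the same.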
Xirogiannis, George. "Execution of Prolog by transformations on distributed memory multi-processors". Thesis, Heriot-Watt University, 1998. http://hdl.handle.net/10399/639.
Morgadinho, Nuno Eduardo Quaresma. "Distributed multi-threading in GNU prolog". Master's thesis, Universidade de Évora, 2007. http://hdl.handle.net/10174/16496.
Samson, Rodelyn Reyes. "A multi-agent architecture for internet distributed computing system". CSUSB ScholarWorks, 2003. https://scholarworks.lib.csusb.edu/etd-project/2408.
Ruan, Jianhua, Han-Shen Yuh, and Koping Wang. "Spider III: A multi-agent-based distributed computing system". CSUSB ScholarWorks, 2002. https://scholarworks.lib.csusb.edu/etd-project/2249.
Gurhem, Jérôme. "Paradigmes de programmation répartie et parallèle utilisant des graphes de tâches pour supercalculateurs post-pétascale". Thesis, Lille, 2021. http://www.theses.fr/2021LILUI005.
Since the middle of the 1990s, message-passing libraries have been the most widely used technology for implementing parallel and distributed applications. However, they may not be an efficient enough solution on exascale machines, since scalability issues will appear as computing resources increase. Task-based programming models can be used, for example, to avoid collective communications across all resources, such as reductions, broadcasts, or gathers, by transforming them into multiple operations on tasks. These operations can then be scheduled so that data and computations are placed in a way that optimizes and reduces data communication. The main objective of this thesis is to study what task-based programming should be for scientific applications, and to propose a specification for such distributed and parallel programming by experimenting with several simplified representations of scientific applications important to TOTAL, as well as classical dense and sparse linear methods. Several programming languages and paradigms are studied in the dissertation. Dense linear methods for solving linear systems, sequences of sparse matrix-vector products, and the Kirchhoff seismic pre-stack depth migration are studied and implemented as task-based applications, and a taxonomy based on several of these languages and paradigms is proposed. Software was developed using these programming models for each simplified application. As a result of this research, a methodology for parallel task programming is proposed that optimizes data movement, in general, and for the targeted scientific applications, in particular.
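The transformation described in the abstract above, replacing a single collective reduction with many small schedulable operations, can be sketched as a binary tree of pairwise combines. This is an illustrative sketch of the general idea, not the thesis's actual runtime or API.

```python
def tree_reduce(values, op):
    """Reduce a list via a binary tree of independent pairwise tasks.

    Each pairwise combine is an independent unit of work; a task-based
    runtime could schedule the combines at each tree level concurrently
    and place every combine near its two operands, instead of
    synchronizing all resources in one collective call.
    """
    level = list(values)
    while len(level) > 1:
        nxt = []
        # one "task" per adjacent pair at this level of the tree
        for i in range(0, len(level) - 1, 2):
            nxt.append(op(level[i], level[i + 1]))
        # an odd element is carried up to the next level unchanged
        if len(level) % 2:
            nxt.append(level[-1])
        level = nxt
    return level[0]


print(tree_reduce([1, 2, 3, 4, 5], lambda a, b: a + b))  # 15
```

The same shape works for any associative operator (sum, max, concatenation), which is why reductions, broadcasts, and gathers can all be recast as task graphs.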
Moukir, Sara. "High performance analysis for road traffic control". Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASG039.
The need to reduce travel times and energy consumption in urban road networks is critical for improving collective well-being and environmental sustainability. Since the 1950s, traffic modeling has been a central research focus. With the rapid evolution of computing capabilities in the 21st century, sophisticated digital simulations have emerged that accurately depict the complexities of road traffic. Mobility simulations are essential for assessing emerging technologies such as cooperative systems and dynamic GPS navigation without disrupting real traffic. As transport systems become more complex with real-time information, simulation models must adapt. Multi-agent simulations, which analyze individual behaviors within a dynamic environment, are particularly suited for this task. These simulations help understand and manage urban traffic by representing interactions between travelers and their environment. Simulating large populations of travelers in cities, potentially millions of individuals, has historically been computationally demanding. Advanced computer technologies allowing distributed calculations across multiple computers have opened new possibilities. However, many urban mobility simulators do not fully exploit these distributed architectures, limiting their ability to model complex scenarios involving many travelers and extensive networks. The main objective of this research is to improve the algorithmic and computational performance of mobility simulators. We aim to develop and validate generic and reproducible distribution models that can be adopted by various multi-agent mobility simulators. This approach seeks to overcome technical barriers and provide a solid foundation for analyzing complex transport systems in dynamic urban environments. Our research leverages the MATSim traffic simulator for its flexibility and open structure.
MATSim is widely recognized in the literature for multi-agent traffic simulation, making it an ideal candidate for testing our generic methods. Our first contribution applies the "Unite and Conquer" (UC) approach to MATSim. This method accelerates simulation speed by leveraging modern computing architectures. The multiMATSim approach replicates several MATSim instances across multiple computing nodes with periodic communications. Each instance runs on a separate node, using MATSim's native multithreading to enhance parallelism. Periodic synchronization ensures data consistency, while fault-tolerance mechanisms allow the simulation to continue smoothly even if some instances fail. This approach efficiently uses diverse computational resources based on each node's specific capabilities. The second contribution explores artificial intelligence techniques to speed up the simulation process. Specifically, we use deep neural networks to predict MATSim simulation outcomes. Initially implemented on a single node, this proof-of-concept approach makes efficient use of available CPU resources. The neural networks are trained on data from previous simulations to predict key metrics such as travel times and congestion levels, and their outputs are compared to MATSim results to assess accuracy. The approach is designed to scale, with future plans for distributed neural-network training across multiple nodes. In summary, our contributions provide new algorithmic variants and explore the integration of high-performance computing and AI into multi-agent traffic simulators. We aim to demonstrate the impact of these models and technologies on traffic simulation, addressing the challenges and limitations of their implementation. Our work highlights the benefits of emerging architectures and new algorithmic concepts for enhancing the robustness and performance of traffic simulators, presenting promising results.
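The replication-with-periodic-synchronization scheme described in the abstract above can be sketched in a few lines. This is a toy model under stated assumptions: `run_replicas` and its scalar "cost" states are hypothetical stand-ins for full MATSim instances, not actual MATSim code.

```python
import random

def run_replicas(n_replicas, iters, sync_every, seed=0):
    """Toy unite-and-conquer loop: several instances improve independent
    states, and every `sync_every` iterations all instances adopt the
    best state found so far (here, the lowest cost)."""
    rng = random.Random(seed)          # fixed seed keeps the sketch deterministic
    costs = [100.0] * n_replicas       # one scalar "state" per replicated instance
    for it in range(1, iters + 1):
        # each replica performs one independent improvement step
        # (one simulation iteration per instance in the real system)
        for r in range(n_replicas):
            costs[r] -= rng.random()
        # periodic synchronization: unite on the best state, then continue
        if it % sync_every == 0:
            best = min(costs)
            costs = [best] * n_replicas
    return costs
```

In the real setting each replica would run on its own node with its own multithreading, and a crashed replica could simply be dropped from the synchronization step, which is the fault-tolerance property the abstract mentions.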
Adornes, Daniel Couto. "A unified mapreduce programming interface for multi-core and distributed architectures". Pontifícia Universidade Católica do Rio Grande do Sul, 2015. http://tede2.pucrs.br/tede2/handle/tede/6782.
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
In order to improve the performance, simplicity, and scalability of large-dataset processing, Google proposed the MapReduce parallel pattern. This pattern has been implemented in several ways for different architectural levels, achieving significant results in high-performance computing. However, developing optimized code with those solutions requires specialized knowledge of each framework's interface and programming language. Recently, DSL-POPP was proposed as a framework with a high-level language for pattern-oriented parallel programming, aimed at abstracting the complexities of parallel and distributed code. Inspired by DSL-POPP, this work proposes the implementation of a unified MapReduce programming interface with rules for transforming code into optimized solutions for shared-memory multi-core and distributed architectures. The evaluation demonstrates that the proposed interface avoids performance losses while achieving a reduction in code and development cost of 41.84% to 96.48%. The construction of the code generator, compatibility with other MapReduce solutions, and the extension of DSL-POPP with the MapReduce pattern are proposed as future work.
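The MapReduce pattern that this abstract builds on can be sketched as a small engine: map the records in parallel, group the emitted pairs by key, then reduce each group. This is an illustrative sketch of the pattern itself, not the dissertation's unified interface or its code-transformation rules.

```python
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

def map_reduce(records, mapper, reducer, workers=4):
    """Minimal MapReduce engine.

    mapper(record)  -> list of (key, value) pairs
    reducer(key, values) -> reduced value for that key
    The map phase runs in a thread pool; shuffle and reduce are sequential.
    """
    # map phase: apply the mapper to every record in parallel
    with ThreadPoolExecutor(max_workers=workers) as pool:
        mapped = [kv for part in pool.map(mapper, records) for kv in part]
    # shuffle phase: group all emitted values by key
    groups = defaultdict(list)
    for key, value in mapped:
        groups[key].append(value)
    # reduce phase: collapse each key's values to one result
    return {key: reducer(key, values) for key, values in groups.items()}


# classic word-count example
counts = map_reduce(
    ["to be or not to be"],
    mapper=lambda line: [(w, 1) for w in line.split()],
    reducer=lambda w, ones: sum(ones),
)
print(counts["to"])  # 2
```

A unified interface in the spirit of the abstract would keep this user-facing shape while generating different optimized back-ends (shared-memory multi-core versus distributed) from the same mapper/reducer pair.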
McCall, Andrew James. "Multi-level Parallelism with MPI and OpenACC for CFD Applications". Thesis, Virginia Tech, 2017. http://hdl.handle.net/10919/78203.
Master of Science
Book chapters on the topic "Parallel and distributed multi-Level programming"
Irvin, R. Bruce, and Barton P. Miller. "A Performance Tool for High-Level Parallel Programming Languages". In Programming Environments for Massively Parallel Distributed Systems, 299–313. Basel: Birkhäuser Basel, 1994. http://dx.doi.org/10.1007/978-3-0348-8534-8_30.
Hofstee, H. Peter, Johan J. Lukkien, and Jan L. A. Snepscheut. "A distributed implementation of a task pool". In Research Directions in High-Level Parallel Programming Languages, 338–48. Berlin, Heidelberg: Springer Berlin Heidelberg, 1992. http://dx.doi.org/10.1007/3-540-55160-3_54.
Faasen, Craig. "Intermediate uniformly distributed tuple space on transputer meshes". In Research Directions in High-Level Parallel Programming Languages, 157–73. Berlin, Heidelberg: Springer Berlin Heidelberg, 1992. http://dx.doi.org/10.1007/3-540-55160-3_41.
Sakagami, Hitoshi. "Three-Dimensional Fluid Code with XcalableMP". In XcalableMP PGAS Programming Language, 165–79. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-7683-6_6.
Rastogi, Rajeev, Philip Bohannon, James Parker, Avi Silberschatz, S. Seshadri, and S. Sudarshan. "Distributed Multi-Level Recovery in Main-Memory Databases". In Parallel and Distributed Information Systems, 41–71. Boston, MA: Springer US, 1998. http://dx.doi.org/10.1007/978-1-4757-6132-0_3.
Dib, Djawida, Nikos Parlavantzas, and Christine Morin. "Towards Multi-level Adaptation for Distributed Operating Systems and Applications". In Algorithms and Architectures for Parallel Processing, 100–109. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33065-0_11.
Protze, Joachim, Miwako Tsuji, Christian Terboven, Thomas Dufaud, Hitoshi Murai, Serge Petiton, Nahid Emad, Matthias S. Müller, and Taisuke Boku. "MYX: Runtime Correctness Analysis for Multi-Level Parallel Programming Paradigms". In Software for Exascale Computing - SPPEXA 2016-2019, 545–67. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-47956-5_18.
Jung, Yu-Jin, and Yong-Ik Yoon. "Flexible Multi-level Regression Model for Prediction of Pedestrian Abnormal Behavior". In Advances in Parallel and Distributed Computing and Ubiquitous Services, 137–43. Singapore: Springer Singapore, 2016. http://dx.doi.org/10.1007/978-981-10-0068-3_17.
Yu, Yang, Laksono Kurnianggoro, Wahyono, and Kang-Hyun Jo. "Online Programming Design of Distributed System Based on Multi-level Storage". In Intelligent Computing Methodologies, 745–52. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-42297-8_69.
Spinelli, Stefano. "Optimal Management and Control of Smart Thermal-Energy Grids". In Special Topics in Information Technology, 15–27. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-85918-3_2.
Texto completo da fonteTrabalhos de conferências sobre o assunto "Parallel and distributed multi-Level programming"
Steuwer, Michel, Philipp Kegel, and Sergei Gorlatch. "Towards High-Level Programming of Multi-GPU Systems Using the SkelCL Library". In 2012 26th IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW). IEEE, 2012. http://dx.doi.org/10.1109/ipdpsw.2012.229.
Yi Pan. "High-level vs low-level parallel programming for scientific computing". In Proceedings 16th International Parallel and Distributed Processing Symposium. IPDPS 2002. IEEE, 2002. http://dx.doi.org/10.1109/ipdps.2002.1016644.
Jungblut, Pascal, and Dieter Kranzlmuller. "Optimal Schedules for High-Level Programming Environments on FPGAs with Constraint Programming". In 2022 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW). IEEE, 2022. http://dx.doi.org/10.1109/ipdpsw55747.2022.00025.
Brady, T., E. Konstantinov, and A. Lastovetsky. "SmartNetSolve: high-level programming system for high performance grid computing". In Proceedings 20th IEEE International Parallel & Distributed Processing Symposium. IEEE, 2006. http://dx.doi.org/10.1109/ipdps.2006.1639660.
Isard, Michael, and Yuan Yu. "Distributed data-parallel computing using a high-level programming language". In the 35th SIGMOD international conference. New York, New York, USA: ACM Press, 2009. http://dx.doi.org/10.1145/1559845.1559962.
Chiang, Chia-Chu. "Low-level language constructs considered harmful for distributed parallel programming". In the 42nd annual Southeast regional conference. New York, New York, USA: ACM Press, 2004. http://dx.doi.org/10.1145/986537.986603.
"Workshop on high-level parallel programming models & supportive environments". In 18th International Parallel and Distributed Processing Symposium, 2004. Proceedings. IEEE, 2004. http://dx.doi.org/10.1109/ipdps.2004.1303141.
Niculescu, Virginia, Frederic Loulergue, Darius Bufnea, and Adrian Sterca. "A Java Framework for High Level Parallel Programming Using Powerlists". In 2017 18th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT). IEEE, 2017. http://dx.doi.org/10.1109/pdcat.2017.00049.
Dennis, Jack B. "The fresh breeze project: A multi-core chip supporting composable parallel programming". In 2008 IEEE International Symposium on Parallel and Distributed Processing (IPDPS). IEEE, 2008. http://dx.doi.org/10.1109/ipdps.2008.4536391.
Li, Dong, and Heike Jagode. "Workshop 6: HIPS High-level Parallel Programming Models and Supportive Environments". In 2020 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW). IEEE, 2020. http://dx.doi.org/10.1109/ipdpsw50202.2020.00064.
Texto completo da fonteRelatórios de organizações sobre o assunto "Parallel and distributed multi-Level programming"
Amela, R., R. Badia, S. Böhm, R. Tosi, C. Soriano, and R. Rossi. D4.2 Profiling report of the partner’s tools, complete with performance suggestions. Scipedia, 2021. http://dx.doi.org/10.23967/exaqute.2021.2.023.