Contents
Selection of scholarly literature on the topic "Application Distribuée Parallèle"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Browse the lists of current articles, books, theses, reports, and other scholarly sources on the topic "Application Distribuée Parallèle".
Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the publication as a PDF and read its online abstract whenever the relevant parameters are available in the metadata.
Journal articles on the topic "Application Distribuée Parallèle"
Spahi, Enis, and D. Altilar. "ITU-PRP: Parallel and Distributed Computing Middleware for Java Developers". International Journal of Business & Technology 3, no. 1 (November 2014): 2–13. http://dx.doi.org/10.33107/ijbte.2014.3.1.01.
César, Eduardo, Anna Morajko, Tomàs Margalef, Joan Sorribes, Antonio Espinosa, and Emilio Luque. "Dynamic Performance Tuning Supported by Program Specification". Scientific Programming 10, no. 1 (2002): 35–44. http://dx.doi.org/10.1155/2002/549617.
Vadhiyar, Sathish S., and Jack J. Dongarra. "SRS: A Framework for Developing Malleable and Migratable Parallel Applications for Distributed Systems". Parallel Processing Letters 13, no. 02 (June 2003): 291–312. http://dx.doi.org/10.1142/s0129626403001288.
Morajko, Anna, Oleg Morajko, Josep Jorba, Tomàs Margalef, and Emilio Luque. "Automatic Performance Analysis and Dynamic Tuning of Distributed Applications". Parallel Processing Letters 13, no. 02 (June 2003): 169–87. http://dx.doi.org/10.1142/s0129626403001227.
Sikora, Andrzej, and Ewa Niewiadomska-Szynkiewicz. "Parallel and Distributed Simulation of Ad Hoc Networks". Journal of Telecommunications and Information Technology, no. 3 (26 June 2023): 76–84. http://dx.doi.org/10.26636/jtit.2009.3.943.
Baude, Francoise, Denis Caromel, and David Sagnol. "Distributed Objects for Parallel Numerical Applications". ESAIM: Mathematical Modelling and Numerical Analysis 36, no. 5 (September 2002): 837–61. http://dx.doi.org/10.1051/m2an:2002039.
Ioannidis, Sotiris, Umit Rencuzogullari, Robert Stets, and Sandhya Dwarkadas. "CRAUL: Compiler and Run-Time Integration for Adaptation under Load". Scientific Programming 7, no. 3-4 (1999): 261–73. http://dx.doi.org/10.1155/1999/603478.
Chen, Wenjie, Qiliang Yang, Ziyan Jiang, Jianchun Xing, Shuo Zhao, Qizhen Zhou, Deshuai Han, and Bowei Feng. "SwarmL: A Language for Programming Fully Distributed Intelligent Building Systems". Buildings 13, no. 2 (12 February 2023): 499. http://dx.doi.org/10.3390/buildings13020499.
Collins, C., and M. Duffy. "Distributed (parallel) inductor design for VRM applications". IEEE Transactions on Magnetics 41, no. 10 (October 2005): 4000–4002. http://dx.doi.org/10.1109/tmag.2005.855163.
Alléon, G., S. Champagneux, G. Chevalier, L. Giraud, and G. Sylvand. "Parallel distributed numerical simulations in aeronautic applications". Applied Mathematical Modelling 30, no. 8 (August 2006): 714–30. http://dx.doi.org/10.1016/j.apm.2005.06.014.
Dissertations on the topic "Application Distribuée Parallèle"
Lavallée, Ivan. "Contribution à l'algoritmique parallèle et distribuée : application à l'optimisation combinatoire". Paris 11, 1986. http://www.theses.fr/1986PA112275.
Tourancheau, Bernard. "Algorithmique parallèle pour les machines à mémoire distribuée : application aux algorithmes matriciels". Grenoble INPG, 1989. http://tel.archives-ouvertes.fr/tel-00332663/.
Gamom Ngounou Ewo, Roland Christian. "Déploiement d'applications parallèles sur une architecture distribuée matériellement reconfigurable". Thesis, Cergy-Pontoise, 2015. http://www.theses.fr/2015CERG0773/document.
Der volle Inhalt der QuelleAmong the architectural targets that could be buid a system on chip (SoC), dynamically reconfigurable architectures (DRA) offer interesting potential for flexibility and dynamicity . However this potential is still difficult to use in massively parallel on chip applications. In our work we identified and analyzed the solutions currently proposed to use DRA and found their limitations including: the use of a particular technology or proprietary architecture, the lack of parallel applications consideration, the difficult scalability, the lack of a common language adopted by the community to use the flexibility of DRA ...In our work we propose a solution for deployment on an DRA of a parallel application using standard SoC design flows. This solution is called MATIP ( textit {MPI Application Platform Task Integreation}) and uses primitives of MPI standard Version 2 to make communications and to reconfigure the MP-RSoC architecture . MATIP is a Platform-Based Design (PBD) level solution.The MATIP platform is modeled in three layers: interconnection, communication and application. Each layer is designed to satisfies the requirements of heterogeneity and dynamicity of parallel applications. For this, MATIP uses a distributed memory architecture and utilizes the message passing parallel programming paradigm to enhance scalability of the platform.MATIP frees the designer of all the details related to interconnection, communication between tasks and management of dynamic reconfiguration of the hardware target. A demonstrator of MATIP was performed on Xilinx FPGA through the implementation of an application consisting of two static and two dynamic hardware tasks. MATIP offers a bandwidth of 2.4 Gb / s and latency of 3.43 microseconds for the transfer of a byte. Compared to other MPI platforms (TMD-MPI, SOC-MPI MPI HAL), MATIP is in the state of the art
Philippe, Jean-Laurent. "Programmation de calculateurs massivement parallèles : application à la factorisation d'entiers". Grenoble INPG, 1990. http://tel.archives-ouvertes.fr/tel-00338193.
Desprez, Frédéric. "Procédures de base pour le calcul scientifique sur machines parallèles à mémoire distribuée". PhD thesis, Grenoble INPG, 1994. http://tel.archives-ouvertes.fr/tel-00344993.
Jeatsa Toulepi, Armel. "Optimisation de l'allocation de la mémoire cache CPU pour les fonctions cloud et les applications haute performance". Electronic Thesis or Diss., Université de Toulouse (2023-....), 2024. http://www.theses.fr/2024TLSEP089.
Der volle Inhalt der QuelleContemporary IT services are mainly based on two major paradigms: cluster computing and cloud computing. The former involves the distribution of computing tasks between different nodes that work together as a single system, while the latter is based on the virtualization of computing infrastructure, enabling it to be provided on demand. In this thesis, our focus is on last-level cache (LLC) allocation in the context of these two paradigms, concentrating specifically on distributed parallel applications and FaaS functions. The LLC is a shared memory space used by all processor cores on a NUMA socket. As a shared resource, it is subject to contention, which can have a significant impact on performance. To alleviate this problem, Intel has implemented a technology in its processors that enables partitioning and allocation of cache memory: Cache Allocation Technology (CAT).In this work, using CAT, we first examine the impact of LLC contention on the performance of FaaS functions. Then, we study how this contention in a subset of nodes in a cluster affects the overall performance of a running distributed application. From these studies, we propose CASY and CADiA, intelligent LLC allocation systems for FaaS functions and distributed applications respectively. CASY uses supervised machine learning to predict the cache requirements of a FaaS function based on the size of the input file, while CADiA dynamically constructs the cache usage profile of a distributed application and performs harmonized allocation across all nodes according to this profile. These two solutions enabled us to achieve performance gains of up to around 11% for CASY, and 13% for CADiA
Bougé, Luc. "Modularité et symétrie pour les systèmes répartis; application au langage CSP". PhD thesis, Université Paris-Diderot - Paris VII, 1987. http://tel.archives-ouvertes.fr/tel-00416184.
Der volle Inhalt der QuelleLa modularité exprime que les processeurs du système n'ont initialement aucune connaissance concernant globalement le réseau dans lequel ils sont plongés. La symétrie exprime que les processeurs avec des positions topologiquement équivalentes dans le réseau ont aussi des rôles équivalents dans les calculs.
Nous définissons ces propriétés dans le cadre du langage CSP des processus séquentiels communicants de Hoare. Nous proposons une définition syntaxique pour la modularité. Nous montrons qu'une définition syntaxique de la symétrie n'est pas suffisante. Nous en proposons une définition sémantique. Cette définition se réfère implicitement à une sémantique partiellement ordonnée de CSP.
Nous étudions l'existence d'algorithmes de diffusion et d'élection dans les réseaux de processus communicants, qui soient modulaires et symétriques. Nous obtenons de nombreux résultats positifs et négatifs. Ceci conduit en particulier à une évaluation précise du pouvoir expressif de CSP. Nous montrons par exemple qu'il n'existe pas d'implantation des gardes d'émission par des gardes de réception seulement, si la symétrie doit être préservée.
Ces résultats sont enfin utilisés pour proposer une solution modulaire, symétrique et bornée au problème de la détection de la terminaison répartie proposé par Francez.
Dad, Cherifa. "Méthodologie et algorithmes pour la distribution large échelle de co-simulations de systèmes complexes : application aux réseaux électriques intelligents (Smart Grids)". Electronic Thesis or Diss., CentraleSupélec, 2018. http://www.theses.fr/2018CSUP0004.
Der volle Inhalt der QuelleThe emergence of Smart Grids is causing profound changes in the electricity distribution business. Indeed, these networks are seeing new uses (electric vehicles, air conditioning) and new decentralized producers (photovoltaic, wind), which make it more difficult to ensure a balance between electricity supply and demand, and imposes to introduce a form of distributed intelligence between their different components. Considering its complexity and the extent of its implementation, it is necessary to co-simulate it in order to validate its performances. In the RISEGrid institute, CentraleSupélec and EDF R&D have developed a co-simulation platform based on the FMI2 (Functional Mock-up Interface) standard called DACCOSIM, permitting to design and develop Smart Grids. The key components of this platform are represented as gray boxes called FMUs (Functional Mock-up Unit). In addition, simulators of the physical systems of Smart Grids can make backtracking when an inaccuracy is suspected in FMU computations, unlike discrete simulators (control units) that often can only advance in time. In order these different simulators collaborate, we designed a hybrid solution that takes into account the constraints of all the components, and precisely identifies the types of the events that system is facing. This study has led to a FMI standard change proposal. Moreover, it is difficult to rapidly design an efficient Smart Grid simulation, especially when the problem has a national or even a regional scale.To fill this gap,we have focused on the most computationally intensive part, which is the simulation of physical devices. We have therefore proposed methodologies, approaches and algorithms to quickly and efficiently distribute these different FMUs on distributed architectures. The implementation of these algorithms has already allowed simulating large-scale business cases on a multi-core PC cluster. The integration of these methods into DACCOSIM will enable EDF engineers to design « large scale Smart Grids » which will be more resistant to breakdowns
Mosli, Bouksiaa Mohamed Said. „Performance variation considered helpful“. Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLL001/document.
Der volle Inhalt der QuelleUnderstanding the performance of a multi-threaded application is difficult. The threads interfere when they access the same resource, which slows their execution down. Unfortunately, current profiling tools focus on identifying the interference causes, not their effects.The developer can thus not know if optimizing the interference reported by a profiling tool can lead to better performance. In this thesis, we propose to complete the profiling toolbox with an effect-oriented profiling tool able to indicate how much interference impacts performance, regardless of the interference cause. With an evaluation of 27 applications, we show that our tool successfully identifies 12 performance bottlenecks caused by 6 different kinds of interference
Bounaim, Aïcha. "Méthodes de décomposition de domaine : application à la résolution de problèmes de contrôle optimal". PhD thesis, Université Joseph Fourier (Grenoble), 1999. http://tel.archives-ouvertes.fr/tel-00004809.
Books on the topic "Application Distribuée Parallèle"
Stojmenovic, Ivan, Ruppa K. Thulasiram, Laurence T. Yang, Weijia Jia, Minyi Guo, and Rodrigo Fernandes de Mello, eds. Parallel and Distributed Processing and Applications. Berlin, Heidelberg: Springer Berlin Heidelberg, 2007. http://dx.doi.org/10.1007/978-3-540-74742-0.
Guo, Minyi, and Laurence Tianruo Yang, eds. Parallel and Distributed Processing and Applications. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/3-540-37619-4.
Guo, Minyi, Laurence T. Yang, Beniamino Di Martino, Hans P. Zima, Jack Dongarra, and Feilong Tang, eds. Parallel and Distributed Processing and Applications. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11946441.
Pan, Yi, Daoxu Chen, Minyi Guo, Jiannong Cao, and Jack Dongarra, eds. Parallel and Distributed Processing and Applications. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11576235.
Cao, Jiannong, Laurence T. Yang, Minyi Guo, and Francis Lau, eds. Parallel and Distributed Processing and Applications. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/b104574.
Institute of Electrical and Electronics Engineers. IEEE Parallel & Distributed Technology: Systems & Applications. Los Alamitos, CA: IEEE Computer Society, 1993.
Shen, Hong, Yingpeng Sang, Yong Zhang, Nong Xiao, Hamid R. Arabnia, Geoffrey Fox, Ajay Gupta, and Manu Malek, eds. Parallel and Distributed Computing, Applications and Technologies. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-96772-7.
Xie, Guoqi, Gang Zeng, Renfa Li, and Keqin Li. Scheduling Parallel Applications on Heterogeneous Distributed Systems. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-6557-7.
Park, Jong Hyuk, Hong Shen, Yunsick Sung, and Hui Tian, eds. Parallel and Distributed Computing, Applications and Technologies. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-5907-1.
Liew, Kim-Meow, Hong Shen, Simon See, Wentong Cai, Pingzhi Fan, and Susumu Horiguchi, eds. Parallel and Distributed Computing: Applications and Technologies. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/b103538.
Book chapters on the topic "Application Distribuée Parallèle"
Chung, Minh Thanh, Josef Weidendorfer, Karl Fürlinger, and Dieter Kranzlmüller. "Proactive Task Offloading for Load Balancing in Iterative Applications". In Parallel Processing and Applied Mathematics, 263–75. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-30442-2_20.
Krawczyk, Henryk, and Bogdan Wiszniewski. "Quality of Distributed Applications". In Distributed and Parallel Systems, 33–36. Boston, MA: Springer US, 2000. http://dx.doi.org/10.1007/978-1-4615-4489-0_4.
Dvořák, Václav, and Rudolf Čejka. "Prototyping Cluster-Based Distributed Applications". In Distributed and Parallel Systems, 225–28. Boston, MA: Springer US, 2000. http://dx.doi.org/10.1007/978-1-4615-4489-0_28.
Kovacs, Jozsef, and Peter Kacsuk. "Server Based Migration of Parallel Applications". In Distributed and Parallel Systems, 30–37. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/978-1-4615-1167-0_4.
Lovas, Róbert, Péter Kacsuk, Ákos Horváth, and Ándrás Horányi. "Application of P-Grade Development Environment in Meteorology". In Distributed and Parallel Systems, 109–16. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/978-1-4615-1167-0_13.
Lee, DongWoo, and R. S. Ramakrishna. "Inter-round Scheduling for Divisible Workload Applications". In Distributed and Parallel Computing, 225–31. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11564621_25.
Xu, Shiming, Wenguang Chen, Weimin Zheng, Tao Wang, and Yimin Zhang. "Hierarchical Parallel Simulated Annealing and Its Applications". In Distributed and Parallel Computing, 293–300. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11564621_33.
Farkas, Zoltán, Zoltán Balaton, and Péter Kacsuk. "Supporting MPI applications in P-GRADE Portal". In Distributed and Parallel Systems, 55–64. Boston, MA: Springer US, 2007. http://dx.doi.org/10.1007/978-0-387-69858-8_6.
Tóth, Márton László, Norbert Podhorszki, and Peter Kacsuk. "Load Balancing for P-Grade Parallel Applications". In Distributed and Parallel Systems, 12–20. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/978-1-4615-1167-0_2.
Yu, Shui, and Wanlei Zhou. "An Efficient Reliable Architecture for Application Layer Anycast Service". In Distributed and Parallel Computing, 376–85. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11564621_44.
Conference papers on the topic "Application Distribuée Parallèle"
Li, Yusen, Wentong Cai, and Xueyan Tang. "Application Layer Multicast in P2P Distributed Interactive Applications". In 2013 International Conference on Parallel and Distributed Systems (ICPADS). IEEE, 2013. http://dx.doi.org/10.1109/icpads.2013.62.
Faber, Clayton J., and Roger D. Chamberlain. "Application of Network Calculus Models to Heterogeneous Streaming Applications". In 2024 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW). IEEE, 2024. http://dx.doi.org/10.1109/ipdpsw63119.2024.00057.
Popescu, Diana Andreea, Eliana-Dina Tirsa, Mugurel Ionut Andreica, and Valentin Cristea. "An Application-Assisted Checkpoint-Restart Mechanism for Java Applications". In 2013 IEEE 12th International Symposium on Parallel and Distributed Computing (ISPDC). IEEE, 2013. http://dx.doi.org/10.1109/ispdc.2013.33.
Sudarsan, R., and C. J. Ribbens. "Scheduling resizable parallel applications". In 2009 IEEE International Symposium on Parallel & Distributed Processing (IPDPS). IEEE, 2009. http://dx.doi.org/10.1109/ipdps.2009.5161077.
Fricke, Florian, Andre Werner, Keyvan Shahin, Florian Werner, and Michael Hubner. "Automatic Tool-Flow for Mapping Applications to an Application-Specific CGRA Architecture". In 2019 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW). IEEE, 2019. http://dx.doi.org/10.1109/ipdpsw.2019.00033.
Gankevich, I., I. Petriakov, A. Gavrikov, D. Tereshchenko, and G. Mozhaiskii. "Verifiable Application-Level Checkpoint and Restart Framework for Parallel Computing". In 9th International Conference "Distributed Computing and Grid Technologies in Science and Education". Crossref, 2021. http://dx.doi.org/10.54546/mlit.2021.45.84.001.
Xie, Jing, Wenxue Cheng, Tong Zhang, Qingkai Meng, Xuesong Li, Rong Li, and Fengyuan Ren. "Active and Adaptive Application-Level Flow Control for Latency Sensitive RPC Applications". In 2019 IEEE 25th International Conference on Parallel and Distributed Systems (ICPADS). IEEE, 2019. http://dx.doi.org/10.1109/icpads47876.2019.00056.
Freire de Souza, Jaime, Hermes Senger, and Fabricio A. B. Silva. "Escalabilidade de Aplicações Bag-of-Tasks em Plataformas Heterogêneas". In XXXVII Simpósio Brasileiro de Redes de Computadores e Sistemas Distribuídos. Sociedade Brasileira de Computação - SBC, 2019. http://dx.doi.org/10.5753/sbrc.2019.7394.
Fernando, Milinda, Dmitry Duplyakin, and Hari Sundar. "Machine and Application Aware Partitioning for Adaptive Mesh Refinement Applications". In HPDC '17: The 26th International Symposium on High-Performance Parallel and Distributed Computing. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3078597.3078610.
Khan, Gulista, and Zabeeulla Zabeeulla. "The Smart Application Development in Real Time Parallel Applications with Industrial Automation". In 2023 International Conference on Distributed Computing and Electrical Circuits and Electronics (ICDCECE). IEEE, 2023. http://dx.doi.org/10.1109/icdcece57866.2023.10150756.