
Journal articles on the topic "Application Distribuée Parallèle"

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles


See the 50 best journal articles for research on the topic "Application Distribuée Parallèle".

Next to each source in the reference list there is an "Add to bibliography" button. Click it and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read the abstract of the work online, if it is present in the metadata.

Browse journal articles from a wide range of scholarly fields and compile an accurate bibliography.

1

Spahi, Enis, and D. Altilar. "ITU-PRP: Parallel and Distributed Computing Middleware for Java Developers". International Journal of Business & Technology 3, no. 1 (November 2014): 2–13. http://dx.doi.org/10.33107/ijbte.2014.3.1.01.

Abstract:
ITU-PRP provides a parallel programming framework for Java developers, with which they can adapt sequential application code to run in a distributed, multi-host parallel environment. Developers implement parallel models such as Loop Parallelism, Divide and Conquer, Master-Slave and Fork-Join with the help of an API library provided by the framework. The resulting parallel applications are submitted to a middleware called the Parallel Running Platform (PRP), on which the resources for parallel processing are organized and managed. The middleware creates Task Plans (TPs) according to the application's parallel model and assigns the best available resource hosts in order to achieve fast parallel processing. Task Plans are created dynamically in real time according to the actual utilization status or availability of resources, rather than being predefined or preconfigured. ITU-PRP achieves better efficiency in parallel processing over big data sets, distributing the partitioned base data to multiple hosts to be operated on with coarse-grained parallelism. Under this model, distributed parallel tasks operate independently, with minimal interaction, until processing ends.
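The coarse-grained, fork-join style of data parallelism the abstract describes (split the base data, process the slices independently, then merge) can be sketched in Python. This is an illustrative analogue, not ITU-PRP's actual Java API; the worker count stands in for the platform's host assignment.

```python
from concurrent.futures import ProcessPoolExecutor

def chunk(data, n):
    """Split data into n roughly equal contiguous slices."""
    k, r = divmod(len(data), n)
    out, start = [], 0
    for i in range(n):
        end = start + k + (1 if i < r else 0)
        out.append(data[start:end])
        start = end
    return out

def partial_sum(xs):
    # Worker task: operates independently on its slice (coarse-grained).
    return sum(x * x for x in xs)

def parallel_sum_of_squares(data, hosts=4):
    # Fork: one task per "host"; Join: combine the partial results.
    with ProcessPoolExecutor(max_workers=hosts) as pool:
        return sum(pool.map(partial_sum, chunk(data, hosts)))

if __name__ == "__main__":
    print(parallel_sum_of_squares(list(range(10))))  # 285
```

Because the slices share no state until the final join, the tasks interact only at the end, which is the property the abstract highlights.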
2

César, Eduardo, Anna Morajko, Tomàs Margalef, Joan Sorribes, Antonio Espinosa, and Emilio Luque. "Dynamic Performance Tuning Supported by Program Specification". Scientific Programming 10, no. 1 (2002): 35–44. http://dx.doi.org/10.1155/2002/549617.

Abstract:
Performance analysis and tuning of parallel/distributed applications are very difficult tasks for non-expert programmers. It is therefore necessary to provide tools that carry out these tasks automatically. These can be static tools that perform the analysis in a post-mortem phase, or tools that tune the application on the fly. Both kinds of tools have their target applications: static automatic analysis tools are suitable for stable applications, while dynamic tuning tools are more appropriate for applications with dynamic behaviour. In this paper, we describe KappaPi as an example of a static automatic performance analysis tool, as well as a general environment based on parallel patterns for developing and dynamically tuning parallel/distributed applications.
3

VADHIYAR, SATHISH S., and JACK J. DONGARRA. "SRS: A FRAMEWORK FOR DEVELOPING MALLEABLE AND MIGRATABLE PARALLEL APPLICATIONS FOR DISTRIBUTED SYSTEMS". Parallel Processing Letters 13, no. 02 (June 2003): 291–312. http://dx.doi.org/10.1142/s0129626403001288.

Abstract:
The ability to produce malleable parallel applications that can be stopped and reconfigured during execution can offer attractive benefits for both the system and the applications. The reconfiguration can involve varying the parallelism of the application, changing the data distributions during execution, or dynamically changing the software components involved in the application's execution. In distributed and Grid computing systems, migration and reconfiguration of such malleable applications across distributed heterogeneous sites that do not share common file systems provide flexibility for scheduling and resource management in such distributed environments. Existing reconfiguration systems do not support migration of parallel applications to distributed locations. In this paper, we discuss a framework for developing malleable and migratable MPI message-passing parallel applications for distributed systems. The framework includes a user-level checkpointing library called SRS and a runtime support system that manages the checkpointed data for distribution to distributed locations. Our experiments and results indicate that parallel applications instrumented with the SRS library were able to achieve reconfigurability while incurring about 15-35% overhead.
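The user-level checkpointing idea underlying a library like SRS can be sketched as follows: save restartable state at safe points so the run can be stopped and resumed, possibly elsewhere. The file name and state layout here are hypothetical; the real SRS manages MPI data distributions, not a pickle file.

```python
import os
import pickle

CKPT = "state.ckpt"  # hypothetical checkpoint file name

def restore_or_init():
    # On (re)start, resume from the checkpoint if one exists.
    if os.path.exists(CKPT):
        with open(CKPT, "rb") as f:
            return pickle.load(f)
    return {"iteration": 0, "total": 0}

def checkpoint(state):
    # Write atomically so a crash mid-write cannot corrupt the checkpoint.
    tmp = CKPT + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, CKPT)

def run(max_iter=10):
    """Toy computation that can be killed and re-run; it always resumes
    from the last completed iteration."""
    state = restore_or_init()
    while state["iteration"] < max_iter:
        state["total"] += state["iteration"]   # the "computation"
        state["iteration"] += 1
        checkpoint(state)                      # stop/restart-safe point
    return state["total"]
```

Restarting the same program with a larger `max_iter` continues from the saved iteration rather than recomputing from zero, which is the malleability property the abstract describes.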
4

MORAJKO, ANNA, OLEG MORAJKO, JOSEP JORBA, TOMÀS MARGALEF, and EMILIO LUQUE. "AUTOMATIC PERFORMANCE ANALYSIS AND DYNAMIC TUNING OF DISTRIBUTED APPLICATIONS". Parallel Processing Letters 13, no. 02 (June 2003): 169–87. http://dx.doi.org/10.1142/s0129626403001227.

Abstract:
The classical way of tuning parallel/distributed applications is based on the analysis of the monitoring information obtained from an execution of the application. However, this "measure and modify" approach is not feasible when the applications have a dynamic behavior. In this case, another approach is required to accomplish performance expectations. This paper presents a solution based on the dynamic tuning approach that addresses these issues. In this approach, an application is monitored, its performance bottlenecks are detected and the application is modified automatically during the execution, without stopping, recompiling or re-running it. The introduced modifications adapt the behavior of the application to dynamic variations.
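The monitor/detect/modify cycle of dynamic tuning can be illustrated with a toy loop that re-tunes a batch-size parameter from live throughput measurements while the computation keeps running. The workload model and the tuning rule are invented for illustration and are not taken from the paper.

```python
import time

def process(n, overhead=0.001, cost_per_item=0.0001):
    # Stand-in for real work: a fixed per-batch overhead plus a per-item
    # cost, so larger batches amortize the overhead.
    time.sleep(overhead + cost_per_item * n)
    return n

def tuned_run(items=1000, batch=50):
    """Measure-and-modify on the fly: monitor throughput after each batch
    and enlarge the batch size, without stopping or restarting the run."""
    done, best_rate = 0, 0.0
    while done < items:
        n = min(batch, items - done)
        t0 = time.perf_counter()
        done += process(n)
        rate = n / (time.perf_counter() - t0)          # monitor
        if rate > best_rate:                           # detect improvement
            best_rate, batch = rate, min(batch * 2, 800)  # tune in place
    return done, batch
```

The key point mirrored from the abstract is that the parameter changes during execution, based on measurements, rather than between post-mortem runs.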
5

Sikora, Andrzej, and Ewa Niewiadomska-Szynkiewicz. "Parallel and Distributed Simulation of Ad Hoc Networks". Journal of Telecommunications and Information Technology, no. 3 (26 June 2023): 76–84. http://dx.doi.org/10.26636/jtit.2009.3.943.

Abstract:
Modeling and simulation are traditional methods used to evaluate wireless network design. This paper addresses issues associated with the application of parallel discrete event simulation to mobile ad hoc network design and analysis. The basic characteristics and major issues pertaining to ad hoc network modeling and simulation are introduced. The focus is on wireless transmission and mobility models. Particular attention is paid to the MobASim system, a Java-based software environment for parallel and distributed simulation of mobile ad hoc networks. We describe the design, performance and possible applications of the presented simulation software.
6

Baude, Francoise, Denis Caromel, and David Sagnol. "Distributed Objects for Parallel Numerical Applications". ESAIM: Mathematical Modelling and Numerical Analysis 36, no. 5 (September 2002): 837–61. http://dx.doi.org/10.1051/m2an:2002039.

7

Ioannidis, Sotiris, Umit Rencuzogullari, Robert Stets, and Sandhya Dwarkadas. "CRAUL: Compiler and Run-Time Integration for Adaptation under Load". Scientific Programming 7, no. 3-4 (1999): 261–73. http://dx.doi.org/10.1155/1999/603478.

Abstract:
Clusters of workstations provide a cost-effective, high performance parallel computing environment. These environments, however, are often shared by multiple users, or may consist of heterogeneous machines. As a result, parallel applications executing in these environments must operate despite unequal computational resources. For maximum performance, applications should automatically adapt execution to maximize use of the available resources. Ideally, this adaptation should be transparent to the application programmer. In this paper, we present CRAUL (Compiler and Run-Time Integration for Adaptation Under Load), a system that dynamically balances computational load in a parallel application. Our target run-time is software-based distributed shared memory (SDSM). SDSM is a good target for parallelizing compilers since it reduces compile-time complexity by providing data caching and other support for dynamic load balancing. CRAUL combines compile-time support to identify data access patterns with a run-time system that uses the access information to intelligently distribute the parallel workload in loop-based programs. The distribution is chosen according to the relative power of the processors and so as to minimize SDSM overhead and maximize locality. We have evaluated the resulting load distribution in the presence of different types of load – computational, computational and memory intensive, and network load. CRAUL performs within 5–23% of ideal in the presence of load, and is able to improve on naive compiler-based work distribution that does not take locality into account even in the absence of load.
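The locality-preserving, speed-proportional loop distribution that the abstract describes can be sketched as follows. The function and its rounding rule are a hypothetical illustration, not CRAUL's implementation.

```python
def distribute_iterations(n_iters, speeds):
    """Split a loop's iterations proportionally to relative processor power,
    keeping each share a contiguous block to preserve locality."""
    total = sum(speeds)
    shares = [int(n_iters * s / total) for s in speeds]
    # Hand leftover iterations (from rounding down) to the fastest processors.
    leftover = n_iters - sum(shares)
    for i in sorted(range(len(speeds)), key=lambda i: -speeds[i])[:leftover]:
        shares[i] += 1
    # Convert shares to contiguous (start, end) iteration ranges.
    bounds, start = [], 0
    for share in shares:
        bounds.append((start, start + share))
        start += share
    return bounds
```

A processor twice as fast receives a block twice as large, so under steady load all processors finish their blocks at roughly the same time.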
8

Chen, Wenjie, Qiliang Yang, Ziyan Jiang, Jianchun Xing, Shuo Zhao, Qizhen Zhou, Deshuai Han, and Bowei Feng. "SwarmL: A Language for Programming Fully Distributed Intelligent Building Systems". Buildings 13, no. 2 (12 February 2023): 499. http://dx.doi.org/10.3390/buildings13020499.

Abstract:
Fully distributed intelligent building systems can be used to effectively reduce the complexity of building automation systems and improve the efficiency of operation and maintenance management because of their self-organization, flexibility, and robustness. However, the parallel computing mode, dynamic network topology, and complex node interaction logic make application development complex, time-consuming, and challenging. To address the development difficulties of fully distributed intelligent building system applications, this paper proposes a user-friendly programming language called SwarmL. Concretely, SwarmL (1) establishes a language model, an overall framework, and an abstract syntax that intuitively describes the static physical objects and dynamic execution mechanisms of a fully distributed intelligent building system, (2) proposes a physical field-oriented variable that adapts the programming model to the distributed architectures by employing a serial programming style in accordance with human thinking to program parallel applications of fully distributed intelligent building systems for reducing programming difficulty, (3) designs a computational scope-based communication mechanism that separates the computational logic from the node interaction logic, thus adapting to dynamically changing network topologies and supporting the generalized development of the fully distributed intelligent building system applications, and (4) implements an integrated development tool that supports program editing and object code generation. To validate SwarmL, an example application of a real scenario and a subject-based experiment are explored. The results demonstrate that SwarmL can effectively reduce the programming difficulty and improve the development efficiency of fully distributed intelligent building system applications.
SwarmL enables building users to quickly understand and master the development methods of application tasks in fully distributed intelligent building systems, and supports the intuitive description and generalized, efficient development of application tasks. The created SwarmL support tool supports the downloading and deployment of applications for fully distributed intelligent building systems, which can improve the efficiency of building control management and promote the application and popularization of new intelligent building systems.
9

Collins, C., and M. Duffy. "Distributed (parallel) inductor design for VRM applications". IEEE Transactions on Magnetics 41, no. 10 (October 2005): 4000–4002. http://dx.doi.org/10.1109/tmag.2005.855163.

10

Alléon, G., S. Champagneux, G. Chevalier, L. Giraud, and G. Sylvand. "Parallel distributed numerical simulations in aeronautic applications". Applied Mathematical Modelling 30, no. 8 (August 2006): 714–30. http://dx.doi.org/10.1016/j.apm.2005.06.014.

11

Carretero, Jesus, and Laurence T. Yang. "Parallel and Distributed Processing with Applications: Preface". International Journal of Parallel Programming 42, no. 3 (4 June 2013): 405–7. http://dx.doi.org/10.1007/s10766-013-0254-9.

12

Guo, Minyi. "Editorial: Parallel and Distributed Processing with Applications". Journal of Supercomputing 29, no. 2 (August 2004): 123. http://dx.doi.org/10.1023/b:supe.0000026865.54649.ba.

13

Li, Kenli, Lipo Wang, and Yong Liu. "Applications in heterogeneous parallel and distributed environment". Concurrency and Computation: Practice and Experience 29, no. 20 (22 August 2017): e4285. http://dx.doi.org/10.1002/cpe.4285.

14

Janssen, Curtis L., Helgi Adalsteinsson, Scott Cranford, Joseph P. Kenny, Ali Pinar, David A. Evensky, and Jackson Mayo. "A Simulator for Large-Scale Parallel Computer Architectures". International Journal of Distributed Systems and Technologies 1, no. 2 (April 2010): 57–73. http://dx.doi.org/10.4018/jdst.2010040104.

Abstract:
Efficient design of hardware and software for large-scale parallel execution requires detailed understanding of the interactions between the application, computer, and network. The authors have developed a macro-scale simulator (SST/macro) that permits the coarse-grained study of distributed-memory applications. In the presented work, applications using the Message Passing Interface (MPI) are simulated; however, the simulator is designed to allow inclusion of other programming models. The simulator is driven from either a trace file or a skeleton application. Trace files can be either a standard format (Open Trace Format) or a more detailed custom format (DUMPI). The simulator architecture is modular, allowing it to easily be extended with additional network models, trace file formats, and more detailed processor models. This paper describes the design of the simulator, provides performance results, and presents studies showing how application performance is affected by machine characteristics.
15

Mubarak, Misbah, Seegyoung Seol, Qiukai Lu, and Mark S. Shephard. "A Parallel Ghosting Algorithm for The Flexible Distributed Mesh Database". Scientific Programming 21, no. 1-2 (2013): 17–42. http://dx.doi.org/10.1155/2013/654971.

Abstract:
Critical to the scalability of parallel adaptive simulations are parallel control functions including load balancing, reduced inter-process communication and optimal data decomposition. In distributed meshes, many mesh-based applications frequently access neighborhood information for computational purposes, which must be transmitted efficiently to avoid parallel performance degradation when the neighbors are on different processors. This article presents a parallel algorithm for creating and deleting data copies, referred to as ghost copies, which localize neighborhood data for computation purposes while minimizing inter-process communication. The key characteristics of the algorithm are: (1) It can create ghost copies of any permissible topological order in a 1D, 2D or 3D mesh based on selected adjacencies. (2) It exploits neighborhood communication patterns during the ghost creation process, thus eliminating all-to-all communication. (3) For applications that need neighbors of neighbors, the algorithm can create n ghost layers, up to the point where the whole partitioned mesh is ghosted. Strong and weak scaling results are presented for the IBM BG/P and Cray XE6 architectures up to a core count of 32,768. The algorithm also leads to scalable results when used in a parallel super-convergent patch recovery error estimator, an application that frequently accesses neighborhood data to carry out computation.
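For a 1D mesh the ghost-copy idea reduces to copying a few boundary elements from each neighboring partition. This toy sketch is hypothetical and far simpler than the article's adjacency-based 3D algorithm; it shows ghost layers obtained purely from neighbor data, with no all-to-all exchange.

```python
def add_ghost_layers(partitions, layers=1):
    """Given contiguous 1D mesh partitions (lists of element ids, one list
    per process), return per-process copies extended with `layers` ghost
    elements taken from each neighboring partition only."""
    ghosted = []
    for rank, own in enumerate(partitions):
        left = partitions[rank - 1][-layers:] if rank > 0 else []
        right = partitions[rank + 1][:layers] if rank < len(partitions) - 1 else []
        # Ghosts are read-only copies; the owner remains the neighbor rank.
        ghosted.append(left + own + right)
    return ghosted
```

With the ghosts in place, a stencil over a partition's own elements never needs a remote lookup during computation, which is the performance point the abstract makes.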
16

Anjum, Asma, and Asma Parveen. "Optimized load balancing mechanism in parallel computing for workflow in cloud computing environment". International Journal of Reconfigurable and Embedded Systems (IJRES) 12, no. 2 (1 July 2023): 276. http://dx.doi.org/10.11591/ijres.v12.i2.pp276-286.

Abstract:
Cloud computing gives on-demand access to computing resources in a metered and dynamically adapted way; it empowers the client to access fast and flexible resources through virtualization and is widely adaptable to various applications. Further, to guarantee productive computation, task scheduling is very important in a cloud infrastructure environment. The main aim of task execution is to reduce execution time and reserve infrastructure; for large applications, workflow scheduling has drawn considerable attention in business as well as scientific areas. Hence, in this research work, we design and develop an optimized load balancing in parallel computing (OLBP) mechanism to distribute the load; first, different parameters of the workload are computed, and then the loads are distributed. The OLBP mechanism considers makespan time and energy as constraints, and task offloading is done considering the server speed. This balances the workflow; the OLBP mechanism is evaluated using the CyberShake workflow dataset and outperforms existing workflow mechanisms.
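The goal of distributing workflow tasks to reduce makespan can be illustrated with a classic greedy heuristic (largest task to the currently least-loaded server). This is a standard textbook technique shown for illustration, not the OLBP algorithm itself.

```python
import heapq

def balance(tasks, n_servers):
    """Greedy longest-processing-time assignment: sort tasks by decreasing
    cost and always give the next task to the least-loaded server, which
    approximately minimizes the makespan."""
    heap = [(0.0, s) for s in range(n_servers)]   # (current load, server id)
    heapq.heapify(heap)
    assignment = {s: [] for s in range(n_servers)}
    for cost in sorted(tasks, reverse=True):
        load, s = heapq.heappop(heap)             # least-loaded server
        assignment[s].append(cost)
        heapq.heappush(heap, (load + cost, s))
    makespan = max(load for load, _ in heap)
    return assignment, makespan
```

Extending the load metric to include energy or server speed, as the abstract suggests, only changes the key pushed onto the heap.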
17

Agrawal, Amrit, and Pranay Chaudhuri. "An Algorithm for Task Scheduling in Heterogeneous Distributed Systems Using Task Duplication". International Journal of Grid and High Performance Computing 3, no. 1 (January 2011): 89–97. http://dx.doi.org/10.4018/jghpc.2011010105.

Abstract:
Task scheduling in heterogeneous parallel and distributed computing environment is a challenging problem. Applications identified by parallel tasks can be represented by directed-acyclic graphs (DAGs). Scheduling refers to the assignment of these parallel tasks on a set of bounded heterogeneous processors connected by high speed networks. Since task assignment is an NP-complete problem, instead of finding an exact solution, scheduling algorithms are developed based on heuristics, with the primary goal of minimizing the overall execution time of the application or schedule length. In this paper, the overall execution time (schedule length) of the tasks is reduced using task duplication on top of the Critical-Path-On-a-Processor (CPOP) algorithm.
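List schedulers in the CPOP family prioritize tasks by ranks computed over the DAG. A simplified upward-rank computation, ignoring the communication costs that CPOP includes as averages, might look like the following sketch (illustrative, not the paper's algorithm):

```python
def upward_rank(tasks, succ, comp):
    """Priority used by CPOP/HEFT-style list schedulers: a task's upward
    rank is its computation cost plus the longest downstream path to an
    exit task (communication costs omitted for brevity)."""
    rank = {}
    def rec(t):
        if t not in rank:
            rank[t] = comp[t] + max((rec(s) for s in succ.get(t, [])), default=0)
        return rank[t]
    for t in tasks:
        rec(t)
    return rank
```

Scheduling tasks in decreasing upward rank guarantees that every task is considered only after all of its predecessors; duplication-based variants then copy critical parents onto the same processor to remove communication delays.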
18

Vijayaraj, M., R. Malar Vizhi, P. Chandrakala, Laith H. Alzubaidi, Khasanov Muzaffar, and R. Senthilkumar. "Parallel and Distributed Computing for High-Performance Applications". E3S Web of Conferences 399 (2023): 04039. http://dx.doi.org/10.1051/e3sconf/202339904039.

Abstract:
The study of parallel and distributed computing has become an important area in computer science because it makes it possible to create high-performance software that can effectively handle challenging computational tasks. In terms of their use in the world of high-performance applications, parallel and distributed computing techniques are given a thorough introduction in this study. The partitioning of computational processes into smaller subtasks that may be completed concurrently on numerous processors or computers is the core idea underpinning parallel and distributed computing. This strategy enables quicker execution times and enhanced performance in general. Parallel and distributed computing are essential for high-performance applications like scientific simulations, data analysis, and artificial intelligence since they frequently call for significant computational resources. High-performance apps are able to effectively handle computationally demanding tasks thanks in large part to parallel and distributed computing. This article offers a thorough review of the theories, methods, difficulties, and developments in parallel and distributed computing for high-performance applications. Researchers and practitioners may fully utilize the potential of parallel and distributed computing to open up new vistas in computational science and engineering by comprehending the underlying concepts and utilizing the most recent breakthroughs.
19

Li, Jinghong, Guoqi Xie, Keqin Li, and Zhuo Tang. "Enhanced Parallel Application Scheduling Algorithm with Energy Consumption Constraint in Heterogeneous Distributed Systems". Journal of Circuits, Systems and Computers 28, no. 11 (October 2019): 1950190. http://dx.doi.org/10.1142/s0218126619501901.

Abstract:
Energy consumption has always been one of the main design problems in heterogeneous distributed systems, whether for large cluster computer systems or small handheld terminal devices. As energy consumption grows with increasing performance demands, much current work focuses on minimizing the schedule length of parallel applications that meet energy consumption constraints. In prior studies, a pre-allocation method based on dynamic voltage and frequency scaling (DVFS) technology allocates unassigned tasks with minimal energy consumption. However, this approach does not necessarily result in minimal schedule length. In this paper, we propose an enhanced scheduling algorithm, which allocates the same energy consumption to each task by selecting a relatively intermediate value among the unequal allocations. Based on two real-world applications (Fast Fourier transform and Gaussian elimination) and randomly generated parallel applications, experiments show that the proposed algorithm not only achieves better schedule length while meeting the energy consumption constraints, but also performs better than existing parallel algorithms.
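The DVFS energy model behind such per-task energy allocation can be sketched as follows. The constants and the equal-share policy are illustrative assumptions, not the paper's enhanced algorithm: with dynamic energy E_i = k * w_i * f_i^2 and runtime w_i / f_i, an equal energy share per task determines each task's frequency.

```python
def equal_energy_frequencies(work, e_budget, k=1.0, f_max=2.0):
    """Give every task the same energy share and derive its DVFS frequency.
    An equal share e = e_budget/n yields f_i = sqrt(e / (k * w_i)),
    capped at the hardware maximum f_max."""
    e = e_budget / len(work)
    freqs = [min(f_max, (e / (k * w)) ** 0.5) for w in work]
    times = [w / f for w, f in zip(work, freqs)]          # per-task runtime
    energy = sum(k * w * f ** 2 for w, f in zip(work, freqs))
    return freqs, times, energy
```

Because capping a frequency only lowers that task's energy, the total never exceeds the budget; heavier tasks receive lower frequencies, which is exactly the tension the paper's intermediate-value selection addresses.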
20

DECKER, K. M., C. JAYEWARDENA, and R. REHMANN. "Libraries and Development Environments for Monte Carlo Simulations of Lattice Gauge Theories on Parallel Computers". International Journal of Modern Physics C 02, no. 01 (March 1991): 316–21. http://dx.doi.org/10.1142/s0129183191000408.

Abstract:
We describe the library lgtlib, and lgttool, the corresponding development environment for Monte Carlo simulations of lattice gauge theory on multiprocessor vector computers with shared memory. We explain why distributed memory parallel processor (DMPP) architectures are particularly appealing for compute-intensive scientific applications, and introduce the design of a general application and program development environment system for scientific applications on DMPP architectures.
21

Chassin de Kergommeaux, J., and B. de Oliveira Stein. "Flexible performance visualization of parallel and distributed applications". Future Generation Computer Systems 19, no. 5 (July 2003): 735–47. http://dx.doi.org/10.1016/s0167-739x(02)00181-4.

22

Luksch, Peter. "Parallel and distributed implementation of large industrial applications". Future Generation Computer Systems 16, no. 6 (April 2000): 649–63. http://dx.doi.org/10.1016/s0167-739x(99)00077-1.

23

Shaffer, E., D. A. Reed, S. Whitmore, and B. Schaeffer. "Virtue: performance visualization of parallel and distributed applications". Computer 32, no. 12 (1999): 44–51. http://dx.doi.org/10.1109/2.809250.

24

CHOPARD, BASTIEN, OLIVIER PICTET, and MARCO TOMASSINI. "PARALLEL AND DISTRIBUTED EVOLUTIONARY COMPUTATION FOR FINANCIAL APPLICATIONS". Parallel Algorithms and Applications 15, no. 1-2 (June 2000): 15–36. http://dx.doi.org/10.1080/01495730008947348.

25

Senger, Hermes, and Claudio Geyer. "Parallel and distributed computing for Big Data applications". Concurrency and Computation: Practice and Experience 28, no. 8 (5 March 2016): 2412–15. http://dx.doi.org/10.1002/cpe.3813.

26

Shen, Hong, Hui Tian, and Yingpeng Sang. "Guest editorial - Parallel and distributed computing and applications". Computer Science and Information Systems 20, no. 1 (2023): ix. http://dx.doi.org/10.2298/csis230100ixs.

27

DUTOT, PIERRE-FRANÇOIS, LIONEL EYRAUD, GRÉGORY MOUNIÉ, and DENIS TRYSTRAM. "SCHEDULING ON LARGE SCALE DISTRIBUTED PLATFORMS: FROM MODELS TO IMPLEMENTATIONS". International Journal of Foundations of Computer Science 16, no. 02 (April 2005): 217–37. http://dx.doi.org/10.1142/s0129054105002954.

Abstract:
Today, large scale parallel systems are available at low cost, and many such powerful systems have been installed all over the world; the number of users is always increasing. The difficulty of using them efficiently is growing with the complexity of the interactions between more and more architectural constraints and the diversity of the applications. The design of efficient parallel algorithms has to be reconsidered under the influence of new parameters of such platforms (namely, cluster, grid and global computing), which are characterized by a larger number of heterogeneous processors, often organized in several hierarchical sub-systems. At each step of the evolution of the parallel processing field, researchers designed adequate computational models whose objective was to abstract the real world in order to be able to analyze the behavior of algorithms. In this paper, we will investigate two complementary computational models that have been proposed recently: Parallel Task (PT) and Divisible Load (DL). The Parallel Task (i.e. tasks that require more than one processor for their execution) model is a promising alternative for scheduling parallel applications, especially in the case of slow communication media. The basic idea is to consider the application at a coarse level of granularity. Another way of looking at the problem (which is somehow a dual view) is the Divisible Load model, where an application is considered as a collection of a large number of elementary, sequential computing units that will be distributed among the available resources. Unlike the PT model, the DL model corresponds to a fine level of granularity. We will focus on the PT model, and discuss how to mix it with simple Divisible Load scheduling.
As the main difficulty for distributing the load among the processors (usually known as the scheduling problem) in actual systems comes from handling efficiently the communications, these two models of the problem allow us to consider them implicitly or to mask them, thus leading to more tractable problems. We will show that in spite of the enormous complexity of the general scheduling problem on new platforms, it is still useful to study theoretical models. We will focus on the links between models and actual implementations on a regional grid with more than 500 processors.
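Ignoring communication costs, the Divisible Load model admits a closed-form split: give each worker a share proportional to its speed, so that all workers finish simultaneously. The following sketch is an illustration of that textbook result, not the paper's grid experiments.

```python
def divisible_load_shares(total_load, speeds):
    """Divisible Load model without communication: the share that makes all
    workers finish at the same time is share_i = L * s_i / sum(s)."""
    total_speed = sum(speeds)
    shares = [total_load * s / total_speed for s in speeds]
    finish_times = [share / s for share, s in zip(shares, speeds)]
    return shares, finish_times
```

Equal finish times are the optimality condition: if any worker finished earlier than another, moving a sliver of load to it would shorten the makespan.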
28

Sato, Yuji, and Mikiko Sato. "Parallelization and sustainability of distributed genetic algorithms on many-core processors". International Journal of Intelligent Computing and Cybernetics 7, no. 1 (4 March 2014): 2–23. http://dx.doi.org/10.1108/ijicc-06-2013-0033.

Abstract:
Purpose – The purpose of this paper is to propose a fault-tolerant technology for increasing the durability of application programs when evolutionary computation is performed by fast parallel processing on many-core processors such as graphics processing units (GPUs) and multi-core processors (MCPs).
Design/methodology/approach – For distributed genetic algorithm (GA) models, the paper proposes a method where an island's ID number is added to the header of data transferred by this island, for use in fault detection.
Findings – The paper shows that the processing time of the proposed idea is practically negligible in applications, that an optimal solution can be obtained even with a single stuck-at fault or a transient fault, and that increasing the number of parallel threads makes the system less susceptible to faults.
Originality/value – The study described in this paper is a new approach to increasing the sustainability of application programs using distributed GAs on GPUs and MCPs.
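The fault-detection idea summarized above (tag migrating individuals with the sender island's ID and discard data carrying an unknown ID) can be sketched in Python. The message format and the toy GA step are hypothetical illustrations, not the paper's GPU implementation.

```python
import random

def make_message(island_id, individuals):
    # The proposed idea: tag migrants with the sending island's ID.
    return {"island_id": island_id, "migrants": individuals}

def receive(message, expected_ids):
    # A corrupted or faulty header fails the ID check; drop the payload.
    if message["island_id"] not in expected_ids:
        return []
    return message["migrants"]

def island_step(pop, fitness, migrants, rng):
    """One generation of a toy island GA (minimizing fitness): absorb the
    migrants, keep the best individuals, then replace the worst with a
    mutated child of two tournament-selected parents."""
    pop = sorted(pop + migrants, key=fitness)[: len(pop)]
    child = min(rng.sample(pop, 2), key=fitness) + rng.gauss(0, 0.1)
    pop[-1] = child
    return pop
```

Because each island validates the header before merging migrants, a stuck-at or transient fault on one island corrupts at most one rejected message rather than the whole population.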
29

SAADI, Ramzi, and Mohamed Yacine HAMMOUDI. "Parallel Distributed Compensation for Three-Phase pulse width modulation converter". Electrotehnica, Electronica, Automatica 71, no. 2 (15 May 2023): 38–45. http://dx.doi.org/10.46904/eea.23.71.2.1108005.

Abstract:
In applications such as industrial drive systems and power supply systems, the use of a three-phase voltage-source rectifier pulse width modulation converter is becoming increasingly common; examples include power factor correction and harmonic reduction. The critical control requirements of such an application are to achieve a unity power factor and to regulate the dc voltage: the unity power factor guarantees the highest possible efficiency, while the dc voltage regulation guarantees that the converter operates as desired. In this study, a novel control method for stabilizing a rectifier-inverter pulse width modulation converter is designed and simulated to reach higher levels of stability while also improving dynamic performance. In the first step, the nonlinear equation system of the converter is transformed into a polytopic form with the help of the sector nonlinearity transformation. After that, a Takagi-Sugeno fuzzy controller that uses the parallel distributed compensation operating concept is applied. The design of the control system takes into account practical control requirements, such as a fast voltage tracking strategy and line-current regulation. To obtain the controller gains, a series of linear matrix inequalities must be solved. Simulations performed using Matlab/Simulink show clearly that the proposed method achieves excellent performance in terms of both voltage tracking control and unity power factor regulation.
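The parallel distributed compensation (PDC) concept the abstract applies can be stated compactly: the controller blends one state-feedback gain per fuzzy rule, using the same normalized memberships h_i(z) as the Takagi-Sugeno model, with the gains F_i obtained by solving LMIs, as the abstract states. A standard formulation (general PDC, not this paper's specific converter model):

```latex
% Takagi-Sugeno model (r rules, h_i(z) >= 0, sum_i h_i(z) = 1)
\dot{x} = \sum_{i=1}^{r} h_i(z)\,\bigl(A_i x + B_i u\bigr),
\qquad
u = -\sum_{i=1}^{r} h_i(z)\, F_i x
```

Because the controller reuses the model's membership functions, each local linear gain F_i is active exactly where its local model A_i, B_i is valid, which is what makes the LMI-based stability certification tractable.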
30

Ogle, D. M., K. Schwan, and R. Snodgrass. "Application-dependent dynamic monitoring of distributed and parallel systems". IEEE Transactions on Parallel and Distributed Systems 4, no. 7 (July 1993): 762–78. http://dx.doi.org/10.1109/71.238299.

31

Čiegis, Raimondas, Vadimas Starikovičius, Natalija Tumanova, and Minvydas Ragulskis. "Application of distributed parallel computing for dynamic visual cryptography". Journal of Supercomputing 72, no. 11 (4 May 2016): 4204–20. http://dx.doi.org/10.1007/s11227-016-1733-8.

Full text
32

Noor, Fazal, and Hatem ElBoghdadi. "Neural Nets Distributed on Microcontrollers using Metaheuristic Parallel Optimization Algorithm". Annals of Emerging Technologies in Computing 4, no. 4 (October 1, 2020): 28–38. http://dx.doi.org/10.33166/aetic.2020.04.004.

Full text
Abstract:
Metaheuristic algorithms are powerful methods for solving compute-intensive problems, and neural networks, when trained well, are great at prediction and classification problems. Backpropagation is the most popular method used to obtain the weights of neural nets, though it has the limitations of slow convergence and getting stuck in local minima. To overcome these limitations, this paper proposes a hybrid method that combines the parallel distributed bat algorithm with backpropagation to compute the weights of the neural nets. The aim is to use the hybrid method in applications of a distributed nature. Our study uses the Matlab® software and Arduino® microcontrollers as a testbed. To test the performance of the testbed, an application in the area of speech recognition is carried out. Due to the resource limitations of Arduino microcontrollers, the core speech pre-processing of LPC (linear predictive coding) feature extraction is done in Matlab®, and only the LPC parameters are passed to the neural nets, which are implemented on the Arduino microcontrollers. The experimental results show that the proposed scheme produces promising results.
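As an illustration of the metaheuristic half of such a hybrid, here is a deliberately simplified bat-algorithm loop in Python. All names and parameters are illustrative, not the paper's: the loss here is a stand-in for a network's training error, and in the hybrid scheme the best position found would seed backpropagation for local refinement.

```python
import random

def bat_minimize(loss, dim, n_bats=10, iters=200, fmax=2.0, seed=7):
    """Simplified bat algorithm: frequency-tuned velocity updates plus a
    greedy local random walk around the best solution (a stand-in for
    the loudness/pulse-rate machinery of the full algorithm)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-1.0, 1.0) for _ in range(dim)] for _ in range(n_bats)]
    vel = [[0.0] * dim for _ in range(n_bats)]
    best = min(pos, key=loss)[:]
    for _ in range(iters):
        for i in range(n_bats):
            f = fmax * rng.random()                  # random pulse frequency
            for d in range(dim):
                vel[i][d] += (pos[i][d] - best[d]) * f
                pos[i][d] += vel[i][d]
            if loss(pos[i]) < loss(best):            # greedy acceptance
                best = pos[i][:]
        # small random walk around the current best, kept only if better
        cand = [b + 0.1 * rng.gauss(0.0, 1.0) for b in best]
        if loss(cand) < loss(best):
            best = cand
    return best
```

In the paper's hybrid, the loss would evaluate the neural net's error over a candidate weight vector, and backpropagation would then refine the returned best weights.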
33

Prylli, L., and B. Tourancheau. "Execution-Driven Simulation of Parallel Applications". Parallel Processing Letters 08, no. 01 (March 1998): 95–109. http://dx.doi.org/10.1142/s0129626498000122.

Full text
Abstract:
This paper presents our work on the simulation of distributed-memory parallel computers. We design a distributed simulator that takes as input an application written for a MIMD computer and runs it on a workstation cluster with just a recompilation of the code. The hardware of the target machine is simulated so that the behavior of the application is identical to a native run on the simulated computer, with virtual timings and a trace file. Moreover, our analysis establishes the conditions required to achieve a good speedup as a function of the number of simulation hosts, the network latency, and the granularity of the application.
34

Lastovetsky, Alexey. "Parallel testing of distributed software". Information and Software Technology 47, no. 10 (July 2005): 657–62. http://dx.doi.org/10.1016/j.infsof.2004.11.006.

Full text
35

Chi, Song. "Application of MATLAB Parallel Programming Technology". Applied Mechanics and Materials 602-605 (August 2014): 3787–90. http://dx.doi.org/10.4028/www.scientific.net/amm.602-605.3787.

Full text
Abstract:
The parallel application development technology and process are analyzed based on the MATLAB parallel and distributed computing toolbox. On this basis, serial and parallel computing are compared through computer simulations, and a method for designing and developing parallel computing programs is proposed. The simulation results show that parallel computing offers clear advantages for compute-intensive calculations and that MATLAB makes parallel application development convenient.
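The serial-versus-parallel comparison described above rests on a loop whose iterations are independent, which is exactly what MATLAB's parfor exploits. A rough Python analogue of the same experiment (the function names and the toy workload are illustrative) is:

```python
from concurrent.futures import ThreadPoolExecutor

def simulate(x):
    # stand-in for one compute-intensive model evaluation
    return x * x + 1

def run_serial(inputs):
    return [simulate(x) for x in inputs]

def run_parallel(inputs, workers=4):
    # iterations are independent, so the loop parallelizes trivially,
    # mirroring a parfor over independent loop bodies
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(simulate, inputs))
```

The two runs must produce identical results; only the wall-clock time differs, and only when the loop body is genuinely compute-intensive does the parallel version pay off.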
36

Bhatt, Sandeep, Marina Chen, James Cowie, Cheng-Yee Lin, and Pangfeng Liu. "Object-Oriented Support for Adaptive Methods on Parallel Machines". Scientific Programming 2, no. 4 (1993): 179–92. http://dx.doi.org/10.1155/1993/474972.

Full text
Abstract:
This article reports on experiments from our ongoing project whose goal is to develop a C++ library which supports adaptive and irregular data structures on distributed memory supercomputers. We demonstrate the use of our abstractions in implementing "tree codes" for large-scale N-body simulations. These algorithms require dynamically evolving treelike data structures, as well as load-balancing, both of which are widely believed to make the application difficult and cumbersome to program for distributed-memory machines. The ease of writing the application code on top of our C++ library abstractions (which themselves are application independent), and the low overhead of the resulting C++ code (over hand-crafted C code) supports our belief that object-oriented approaches are eminently suited to programming distributed-memory machines in a manner that (to the applications programmer) is architecture-independent. Our contribution in parallel programming methodology is to identify and encapsulate general classes of communication and load-balancing strategies useful across applications and MIMD architectures. This article reports experimental results from simulations of half a million particles using multiple methods.
37

Ishikawa, Masatoshi. "High-Speed Vision and its Applications Toward High-Speed Intelligent Systems". Journal of Robotics and Mechatronics 34, no. 5 (October 20, 2022): 912–35. http://dx.doi.org/10.20965/jrm.2022.p0912.

Full text
Abstract:
Currently, high-speed vision based on parallel processing exists, and various applications of it as high-speed intelligent systems have been proposed and implemented. The basic goal of high-speed vision is to realize vision capabilities and systems that operate at the speeds necessary for intelligent systems, in which intelligence operates at the speed inherently required by the application system. This paper describes vision-chip and parallel image-processing architectures; outlines system architectures, image-processing algorithms, and related peripheral technologies; presents the concepts required to configure high-speed intelligent systems, such as hierarchical parallel distributed architecture, parallel decomposition, orthogonal decomposition, dynamics matching, latency minimization, high-speed 3D shape measurement, active vision, tracking vision, dynamic compensation, and dynamic projection mapping; and discusses a wide range of application systems in a systematic manner.
38

Shahzad, Faisal, Markus Wittmann, Moritz Kreutzer, Thomas Zeiser, Georg Hager, and Gerhard Wellein. "A Survey of Checkpoint/Restart Techniques on Distributed Memory Systems". Parallel Processing Letters 23, no. 04 (December 2013): 1340011. http://dx.doi.org/10.1142/s0129626413400112.

Full text
Abstract:
The road to exascale computing poses many challenges for the High Performance Computing (HPC) community. Each step on the exascale path is mainly the result of a higher level of parallelism of the basic building blocks (i.e., CPUs, memory units, networking components, etc.). The reliability of each of these basic components does not increase at the same rate as the rate of hardware parallelism. This results in a reduction of the mean time to failure (MTTF) of the whole system. A fault tolerance environment is thus indispensable to run large applications on such clusters. Checkpoint/Restart (C/R) is the classic and most popular method to minimize failure damage. Its ease of implementation makes it useful, but typically it introduces significant overhead to the application. Several efforts have been made to reduce the C/R overhead. In this paper we compare various C/R techniques for their overheads by implementing them on two different categories of applications. These approaches are based on parallel-file-system (PFS)-level checkpoints (synchronous/asynchronous) and node-level checkpoints. We utilize the Scalable Checkpoint/Restart (SCR) library for the comparison of node-level checkpoints. For asynchronous PFS-level checkpoints, we use the Damaris library, the SCR asynchronous feature, and application-based checkpointing via dedicated threads. Our baseline for overhead comparison is the naïve application-based synchronous PFS-level checkpointing method. A 3D lattice-Boltzmann (LBM) flow solver and a Lanczos eigenvalue solver are used as prototypical applications in which all the techniques considered here may be applied.
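The survey's baseline — naïve synchronous application-level checkpointing to a file system — amounts to periodically serializing the solver state and resuming from the newest checkpoint after a failure. A minimal Python sketch of that baseline (the file name, state layout, and toy "solver step" are all illustrative assumptions):

```python
import json
import os

def checkpoint(state, path):
    # synchronous checkpoint with write-then-rename, so a crash
    # mid-write can never corrupt the last complete checkpoint
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
    os.replace(tmp, path)

def run(n_steps, path, interval=10):
    # restart path: resume from the newest checkpoint if one exists
    if os.path.exists(path):
        with open(path) as f:
            state = json.load(f)
    else:
        state = {"step": 0, "value": 0.0}
    while state["step"] < n_steps:
        state["value"] += 1.0          # stand-in for one solver iteration
        state["step"] += 1
        if state["step"] % interval == 0:
            checkpoint(state, path)    # blocking I/O: the overhead measured
    return state
```

The asynchronous variants compared in the paper (Damaris, SCR's asynchronous feature, dedicated checkpointing threads) aim to take this blocking write off the application's critical path.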
39

Martínez, Andrea, Anna Sikora, Eduardo César, and Joan Sorribes. "ELASTIC: A Large Scale Dynamic Tuning Environment". Scientific Programming 22, no. 4 (2014): 261–71. http://dx.doi.org/10.1155/2014/403695.

Full text
Abstract:
The spectacular growth in the number of cores in current supercomputers poses design challenges for the development of performance analysis and tuning tools. To be effective, such analysis and tuning tools must be scalable and able to manage the dynamic behaviour of parallel applications. In this work, we present ELASTIC, an environment for dynamic tuning of large-scale parallel applications. To be scalable, the architecture of ELASTIC takes the form of a hierarchical tuning network of nodes that perform a distributed analysis and tuning process. Moreover, the tuning network topology can be configured to adapt itself to the size of the parallel application. To guide the dynamic tuning process, ELASTIC supports a plugin architecture. These plugins, called ELASTIC packages, allow the integration of different tuning strategies into ELASTIC. We also present experimental tests conducted using ELASTIC, showing its effectiveness in improving the performance of large-scale parallel applications.
40

Shi, Xian Cheng, Yu Cheng Feng, Jin Song Zeng, and Ke Fu Chen. "Distributed Cross-Direction Control System Based on CAN-Bus". Applied Mechanics and Materials 705 (December 2014): 345–49. http://dx.doi.org/10.4028/www.scientific.net/amm.705.345.

Full text
Abstract:
A distributed cross-direction control system based on CAN-bus was designed to solve the large-scale cross-direction control problem for paper machine. Multiple Master-Slave CAN networks were constructed in parallel to coordinate and control multiple array actuators, and the Ethernet/CAN converter was used to connect the monitoring station with the field intelligent actuators. The monitoring station accesses measured data and field data via OPC technology. A CAN application layer protocol and corresponding underlying communication software were designed according to the characteristics of the cross-direction control processes. Finally, the feasibility and reliability of the system were verified with practical applications.
41

Ejarque, Jorge, Marc Domínguez, and Rosa M. Badia. "A hierarchic task-based programming model for distributed heterogeneous computing". International Journal of High Performance Computing Applications 33, no. 5 (May 2019): 987–97. http://dx.doi.org/10.1177/1094342019845438.

Full text
Abstract:
Distributed computing platforms are evolving into heterogeneous ecosystems, with Clusters, Grids and Clouds introducing into their computing nodes processors with different core architectures, accelerators (i.e. GPUs, FPGAs), as well as different memories and storage devices, in order to achieve better performance with lower energy consumption. As a consequence of this heterogeneity, programming applications for these distributed heterogeneous platforms becomes a complex task. In addition to the complexity of developing an application for distributed platforms, developers must now also deal with the complexity of the different computing devices inside each node. In this article, we present a programming model that aims to facilitate the development and execution of applications on current and future distributed heterogeneous parallel architectures. This programming model is based on the hierarchical composition of the COMP Superscalar and Omp Superscalar programming models, which allows developers to implement infrastructure-agnostic applications. The underlying runtime enables applications to adapt to the infrastructure without the need to maintain different versions of the code. Our programming model proposal has been evaluated on real platforms in terms of heterogeneous resource usage, performance and adaptation.
42

Han, Jaeung, Seungheun Jeon, Young-ri Choi, and Jaehyuk Huh. "Interference Management for Distributed Parallel Applications in Consolidated Clusters". ACM SIGARCH Computer Architecture News 44, no. 2 (July 29, 2016): 443–56. http://dx.doi.org/10.1145/2980024.2872388.

Full text
43

Yero, Eduardo Javier Huerta, and Marco Aurélio Amaral Henriques. "Contention-sensitive static performance prediction for parallel distributed applications". Performance Evaluation 63, no. 4-5 (May 2006): 265–77. http://dx.doi.org/10.1016/j.peva.2005.01.008.

Full text
44

Benitez, Noe Lopez, M. Jahed Djomehri, and Rupak Biswas. "Task assignment heuristics for parallel and distributed CFD applications". International Journal of Computational Science and Engineering 3, no. 2 (2007): 155. http://dx.doi.org/10.1504/ijcse.2007.015745.

Full text
45

Han, Jaeung, Seungheun Jeon, Young-ri Choi, and Jaehyuk Huh. "Interference Management for Distributed Parallel Applications in Consolidated Clusters". ACM SIGPLAN Notices 51, no. 4 (June 9, 2016): 443–56. http://dx.doi.org/10.1145/2954679.2872388.

Full text
46

Han, Jaeung, Seungheun Jeon, Young-ri Choi, and Jaehyuk Huh. "Interference Management for Distributed Parallel Applications in Consolidated Clusters". ACM SIGOPS Operating Systems Review 50, no. 2 (March 25, 2016): 443–56. http://dx.doi.org/10.1145/2954680.2872388.

Full text
47

van Heiningen, Willem, Steve MacDonald, and Tim Brecht. "Babylon: middleware for distributed, parallel, and mobile Java applications". Concurrency and Computation: Practice and Experience 20, no. 10 (July 2008): 1195–224. http://dx.doi.org/10.1002/cpe.1264.

Full text
48

Drost, Niels, Rob V. van Nieuwpoort, Jason Maassen, Frank Seinstra, and Henri E. Bal. "JEL: unified resource tracking for parallel and distributed applications". Concurrency and Computation: Practice and Experience 23, no. 1 (December 16, 2010): 17–37. http://dx.doi.org/10.1002/cpe.1592.

Full text
49

Aversa, R., B. Di Martino, N. Mazzocca, and S. Venticinque. "A Skeleton Based Programming Paradigm for Mobile Multi-Agents on Distributed Systems and Its Realization within the MAGDA Mobile Agents Platform". Mobile Information Systems 4, no. 2 (2008): 131–46. http://dx.doi.org/10.1155/2008/745406.

Full text
Abstract:
Parallel programming effort can be reduced by using high-level constructs such as algorithmic skeletons. Within the MAGDA toolset, which supports programming and execution of mobile-agent-based distributed applications, we provide a skeleton-based parallel programming environment based on specialization of algorithmic-skeleton Java interfaces and classes. Their implementations include mobile agent features for execution on heterogeneous systems, such as clusters of WSs and PCs, and support reliability and dynamic workload balancing. The user can thus develop a parallel, mobile-agent-based application by simply specializing a given set of classes and methods and using a set of added functionalities.
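The appeal of algorithmic skeletons is that the skeleton owns the coordination structure while the user supplies only problem-specific hooks. A language-neutral sketch of a divide-and-conquer skeleton in that spirit (MAGDA's actual interfaces are Java classes; everything below, written in Python, is illustrative):

```python
def divide_and_conquer(problem, is_base, solve_base, divide, combine):
    """Generic divide-and-conquer skeleton: the recursion (and, in a
    mobile-agent setting, the placement of subproblems) belongs to the
    skeleton; users only specialize the four hooks."""
    if is_base(problem):
        return solve_base(problem)
    subresults = [divide_and_conquer(p, is_base, solve_base, divide, combine)
                  for p in divide(problem)]
    return combine(subresults)
```

A reduction over a list, for instance, specializes the hooks as: base case when the list has at most one element, divide = split in half, combine = sum of the partial results.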
50

Han, Jun, and Shu Fang Zhang. "A Scientific Computing Resource Sharing Management in Internet-Oriented Environment". Advanced Materials Research 662 (February 2013): 892–95. http://dx.doi.org/10.4028/www.scientific.net/amr.662.892.

Full text
Abstract:
In this paper, we introduce a scientific computing environment for Internet-oriented computing resource sharing, abbreviated ISCEs, which is a high-performance computing environment that allows users to write and evaluate parallel distributed applications for different hardware and software configurations using a web interface. We describe the software architecture of ISCEs, emphasizing the application editor, the application scheduling components, and the application execution/runtime modules. ISCEs is efficient, which is strongly supported by its time-measurement scheduling policies, and it offers convenient scientific activity, task-oriented scheduling with algorithm prediction, scalability, and efficiency.