Academic literature on the topic 'Cluster OpenMP implementations'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Cluster OpenMP implementations.'
Next to every source in the list of references there is an 'Add to bibliography' button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "Cluster OpenMP implementations"
Saeed, Firas Mahmood, Salwa M. Ali, and Mohammed W. Al-Neama. "A parallel time series algorithm for searching similar sub-sequences." Indonesian Journal of Electrical Engineering and Computer Science 25, no. 3 (March 1, 2022): 1652. http://dx.doi.org/10.11591/ijeecs.v25.i3.pp1652-1661.
Al-Neama, Mohammed W., Naglaa M. Reda, and Fayed F. M. Ghaleb. "An Improved Distance Matrix Computation Algorithm for Multicore Clusters." BioMed Research International 2014 (2014): 1–12. http://dx.doi.org/10.1155/2014/406178.
Thomas, Nathan, Steven Saunders, Tim Smith, Gabriel Tanase, and Lawrence Rauchwerger. "ARMI: A High Level Communication Library for STAPL." Parallel Processing Letters 16, no. 02 (June 2006): 261–80. http://dx.doi.org/10.1142/s0129626406002617.
Schubert, Gerald, Holger Fehske, Georg Hager, and Gerhard Wellein. "Hybrid-Parallel Sparse Matrix-Vector Multiplication with Explicit Communication Overlap on Current Multicore-Based Systems." Parallel Processing Letters 21, no. 03 (September 2011): 339–58. http://dx.doi.org/10.1142/s0129626411000254.
Речкалов, Т. В., and М. Л. Цымблер. "A parallel data clustering algorithm for Intel MIC accelerators." Numerical Methods and Programming (Vychislitel'nye Metody i Programmirovanie), no. 2 (March 28, 2019): 104–15. http://dx.doi.org/10.26089/nummet.v20r211.
Osthoff, Carla, Francieli Zanon Boito, Rodrigo Virote Kassick, Laércio Lima Pilla, Philippe O. A. Navaux, Claudio Schepke, Jairo Panetta, et al. "Atmospheric models hybrid OpenMP/MPI implementation multicore cluster evaluation." International Journal of Information Technology, Communications and Convergence 2, no. 3 (2012): 212. http://dx.doi.org/10.1504/ijitcc.2012.050411.
Mahinthakumar, G., and F. Saied. "A Hybrid MPI-OpenMP Implementation of an Implicit Finite-Element Code on Parallel Architectures." International Journal of High Performance Computing Applications 16, no. 4 (November 2002): 371–93. http://dx.doi.org/10.1177/109434200201600402.
Smith, Lorna, and Mark Bull. "Development of Mixed Mode MPI / OpenMP Applications." Scientific Programming 9, no. 2-3 (2001): 83–98. http://dx.doi.org/10.1155/2001/450503.
Huang, Lei, Barbara Chapman, and Zhenying Liu. "Towards a more efficient implementation of OpenMP for clusters via translation to global arrays." Parallel Computing 31, no. 10-12 (October 2005): 1114–39. http://dx.doi.org/10.1016/j.parco.2005.03.015.
Li, Hua Zhong, Yong Sheng Liang, Tao He, and Yi Li. "AOI Multi-Core Parallel System for TFT-LCD Defect Detection." Advanced Materials Research 472-475 (February 2012): 2325–31. http://dx.doi.org/10.4028/www.scientific.net/amr.472-475.2325.
Full textDissertations / Theses on the topic "Cluster OpenMP implementations"
Tran, Van Long. "Optimization of checkpointing and execution model for an implementation of OpenMP on distributed memory architectures." Thesis, Evry, Institut national des télécommunications, 2018. http://www.theses.fr/2018TELE0017/document.
OpenMP and MPI have become the standard tools for developing parallel programs on shared-memory and distributed-memory architectures respectively. Compared to MPI, OpenMP is easier to use, because its directives, clauses, and runtime functions automatically parallelize code and synchronize results, whereas MPI requires programmers to do all of this manually. Several efforts have therefore been made to port OpenMP to distributed-memory architectures. However, excluding CAPE, no solution has successfully met both requirements: 1) full compliance with the OpenMP standard and 2) high performance. CAPE stands for Checkpointing-Aided Parallel Execution. It is a framework that automatically translates OpenMP programs and provides the runtime functions needed to execute them on distributed-memory architectures, based on checkpointing techniques. To execute an OpenMP program on a distributed-memory system, CAPE uses a set of templates to translate the OpenMP source code into CAPE source code, which is then compiled by a C/C++ compiler and executed on the distributed-memory system with the support of the CAPE framework. The basic idea of CAPE is the following: the program first runs on a set of nodes, each node executing it as a process. Whenever the program reaches a parallel section, the master distributes the jobs to the slave processes using Discontinuous Incremental Checkpoints (DICKPT). After sending the checkpoints, the master waits for the results returned by the slaves, then receives and merges the resulting checkpoints before injecting them into its memory. The slave nodes each receive a different checkpoint, inject it into their memory, and compute their share of the job; the results are sent back to the master using DICKPTs.
At the end of the parallel region, the master sends the resulting checkpoint to every slave to synchronize the memory space of the program as a whole. In experiments, CAPE has shown very high performance on distributed-memory systems and is a viable, fully OpenMP-compatible solution. However, CAPE is still under development: its checkpointing mechanism and execution model need to be optimized to improve its performance, capability, and reliability. This thesis presents approaches proposed to optimize and improve checkpointing, to design and implement a new execution model, and to extend the capabilities of CAPE. First, we propose an arithmetic on checkpoints, which models the checkpoint data structure and its operations. This modeling helps optimize checkpoint size, reduces merging time, and improves checkpoint capability. Second, we develop TICKPT, which stands for Time-stamp Incremental Checkpointing, as an instance of the arithmetic on checkpoints. TICKPT improves on DICKPT by adding a timestamp to checkpoints to identify their order. Analysis and experiments comparing it to DICKPT show that TICKPT not only produces smaller checkpoints but also has less impact on the performance of the program being checkpointed. Third, we design and implement a new execution model and new prototypes for CAPE based on TICKPT. The new execution model allows CAPE to use resources efficiently, avoid the risk of bottlenecks, and overcome the requirement of matching Bernstein's conditions. Together, these approaches improve the performance, capability, and reliability of CAPE. Fourth, OpenMP data-sharing attributes are implemented in CAPE based on the arithmetic on checkpoints and TICKPT. This demonstrates the soundness of our approach and makes CAPE more complete.
Cai, Jie. "Region-based techniques for modeling and enhancing cluster OpenMP performance." Phd thesis, 2011. http://hdl.handle.net/1885/8865.
Book chapters on the topic "Cluster OpenMP implementations"
Wong, H. J., J. Cai, A. P. Rendell, and P. Strazdins. "Micro-benchmarks for Cluster OpenMP Implementations: Memory Consistency Costs." In OpenMP in a New Era of Parallelism, 60–70. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-79561-2_6.
Eachempati, Deepak, Lei Huang, and Barbara Chapman. "Strategies and Implementation for Translating OpenMP Code for Clusters." In High Performance Computing and Communications, 420–31. Berlin, Heidelberg: Springer Berlin Heidelberg, 2007. http://dx.doi.org/10.1007/978-3-540-75444-2_42.
Liu, Zhenying, Lei Huang, Barbara Chapman, and Tien-Hsiung Weng. "Efficient Implementation of OpenMP for Clusters with Implicit Data Distribution." In Lecture Notes in Computer Science, 121–36. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/978-3-540-31832-3_11.
Briguglio, Sergio, Beniamino Di Martino, Giuliana Fogaccia, and Gregorio Vlad. "Hierarchical MPI+OpenMP Implementation of Parallel PIC Applications on Clusters of Symmetric MultiProcessors." In Recent Advances in Parallel Virtual Machine and Message Passing Interface, 180–87. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-39924-7_27.
Cabral, Frederico, Carla Osthoff, Roberto Pinto Souto, Gabriel P. Costa, Sanderson L. Gonzaga de Oliveira, Diego N. Brandão, and Mauricio Kischinhevsky. "An Improved OpenMP Implementation of the TVD–Hopmoc Method Based on a Cluster of Points." In High Performance Computing for Computational Science – VECPAR 2018, 132–45. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-15996-2_10.
Takahashi, Daisuke. "A Hybrid MPI/OpenMP Implementation of a Parallel 3-D FFT on SMP Clusters." In Parallel Processing and Applied Mathematics, 970–77. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11752578_117.
Pavão, Pedro Nuno Rebelo, João Pedro Almeida Couto, and Maria Manuela Santos Natário. "A Tale of Different Realities." In The Role of Knowledge Transfer in Open Innovation, 262–80. IGI Global, 2019. http://dx.doi.org/10.4018/978-1-5225-5849-1.ch013.
Ntseane, Peggy Gabo, and Idowu Biao. "Learning Cities." In Advances in Electronic Government, Digital Divide, and Regional Development, 73–93. IGI Global, 2019. http://dx.doi.org/10.4018/978-1-5225-8134-5.ch004.
Willis, James S., Matthieu Schaller, Pedro Gonnet, and John C. Helly. "A Hybrid MPI+Threads Approach to Particle Group Finding Using Union-Find." In Parallel Computing: Technology Trends. IOS Press, 2020. http://dx.doi.org/10.3233/apc200050.
Teodoro, George. "Efficient Execution of Dataflows on Parallel and Heterogeneous Environments." In Advances in Systems Analysis, Software Engineering, and High Performance Computing, 1–17. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-2533-4.ch001.
Conference papers on the topic "Cluster OpenMP implementations"
Cai, Jie, Alistair P. Rendell, Peter E. Strazdins, and H'sien Jin Wong. "Performance models for Cluster-enabled OpenMP implementations." In 2008 13th Asia-Pacific Computer Systems Architecture Conference (ACSAC). IEEE, 2008. http://dx.doi.org/10.1109/apcsac.2008.4625433.
Santander-Jimenez, Sergio, and Miguel A. Vega-Rodriguez. "Applying OpenMP-based parallel implementations of NSGA-II and SPEA2 to study phylogenetic relationships." In 2014 IEEE International Conference On Cluster Computing (CLUSTER). IEEE, 2014. http://dx.doi.org/10.1109/cluster.2014.6968779.
Noor, Nor Rizuan Mat, and Tanya Vladimirova. "Parallel implementation of lossless clustered integer KLT using OpenMP." In 2012 NASA/ESA Conference on Adaptive Hardware and Systems (AHS). IEEE, 2012. http://dx.doi.org/10.1109/ahs.2012.6268639.
Tran, Van Long, Eric Renault, and Viet Hai Ha. "Optimization of Checkpoints and Execution Model for an Implementation of OpenMP on Distributed Memory Architectures." In 2017 17th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGRID). IEEE, 2017. http://dx.doi.org/10.1109/ccgrid.2017.119.
Xuan, Huailiang, Weiqin Tong, Zhixun Gong, and Youwen Lan. "Implementation and performance analysis of hybrid MPI+OpenMP programming for parallel MLFMA on SMP cluster." In 2012 Third International Conference on Intelligent Control and Information Processing (ICICIP). IEEE, 2012. http://dx.doi.org/10.1109/icicip.2012.6391557.
Andreev, Vyacheslav Viktorovich, Olga Vyacheslavovna Andreeva, and Vasiliy Evgenievich Gai. "Computer Modelling Based on the Percolation Theory of the Third Stage of Cracks Formation and Development on the Steel Microstructures Surfaces." In 32nd International Conference on Computer Graphics and Vision. Keldysh Institute of Applied Mathematics, 2022. http://dx.doi.org/10.20948/graphicon-2022-1059-1064.
Darmawan, B. "The Implementation of Hybrid Parallel Computation for Complex and Fine Reservoir Model Using Cluster Technology." In Indonesian Petroleum Association 44th Annual Convention and Exhibition. Indonesian Petroleum Association, 2021. http://dx.doi.org/10.29118/ipa21-e-53.
Clauberg, Jan, Michael Leistner, and Heinz Ulbrich. "Hybrid-Parallel Calculation of Jacobians in Multi-Body Dynamics." In ASME 2013 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/detc2013-12245.
Miniello, G., and M. La Salandra. "High Resolution Image Processing and Land Cover Classification for Hydro-Geomorphological High-Risk Area Monitoring." In 9th International Conference "Distributed Computing and Grid Technologies in Science and Education". Crossref, 2021. http://dx.doi.org/10.54546/mlit.2021.12.40.001.
Lisboa, Flávio Gomes da Silva. "A scalable distributed system based on microservices for collecting pod logs from a Kubernetes cluster." In Congresso Latino-Americano de Software Livre e Tecnologias Abertas. Sociedade Brasileira de Computação - SBC, 2021. http://dx.doi.org/10.5753/latinoware.2021.19916.