A selection of scientific literature on the topic "Cluster OpenMP implementations"
Format your citation in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Cluster OpenMP implementations."
Next to each work in the bibliography there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication as a .pdf file and read its abstract online, where such data are available in the metadata.
Journal articles on the topic "Cluster OpenMP implementations"
Saeed, Firas Mahmood, Salwa M. Ali, and Mohammed W. Al-Neama. "A parallel time series algorithm for searching similar sub-sequences." Indonesian Journal of Electrical Engineering and Computer Science 25, no. 3 (March 1, 2022): 1652. http://dx.doi.org/10.11591/ijeecs.v25.i3.pp1652-1661.
Al-Neama, Mohammed W., Naglaa M. Reda, and Fayed F. M. Ghaleb. "An Improved Distance Matrix Computation Algorithm for Multicore Clusters." BioMed Research International 2014 (2014): 1–12. http://dx.doi.org/10.1155/2014/406178.
Thomas, Nathan, Steven Saunders, Tim Smith, Gabriel Tanase, and Lawrence Rauchwerger. "ARMI: A High Level Communication Library for STAPL." Parallel Processing Letters 16, no. 02 (June 2006): 261–80. http://dx.doi.org/10.1142/s0129626406002617.
Schubert, Gerald, Holger Fehske, Georg Hager, and Gerhard Wellein. "Hybrid-Parallel Sparse Matrix-Vector Multiplication with Explicit Communication Overlap on Current Multicore-Based Systems." Parallel Processing Letters 21, no. 03 (September 2011): 339–58. http://dx.doi.org/10.1142/s0129626411000254.
Rechkalov, T. V., and M. L. Zymbler. "A parallel data clustering algorithm for Intel MIC accelerators." Numerical Methods and Programming (Vychislitel'nye Metody i Programmirovanie), no. 2 (March 28, 2019): 104–15. http://dx.doi.org/10.26089/nummet.v20r211.
Osthoff, Carla, Francieli Zanon Boito, Rodrigo Virote Kassick, Laércio Lima Pilla, Philippe O. A. Navaux, Claudio Schepke, Jairo Panetta, et al. "Atmospheric models hybrid OpenMP/MPI implementation multicore cluster evaluation." International Journal of Information Technology, Communications and Convergence 2, no. 3 (2012): 212. http://dx.doi.org/10.1504/ijitcc.2012.050411.
Mahinthakumar, G., and F. Saied. "A Hybrid Mpi-Openmp Implementation of an Implicit Finite-Element Code on Parallel Architectures." International Journal of High Performance Computing Applications 16, no. 4 (November 2002): 371–93. http://dx.doi.org/10.1177/109434200201600402.
Smith, Lorna, and Mark Bull. "Development of Mixed Mode MPI / OpenMP Applications." Scientific Programming 9, no. 2-3 (2001): 83–98. http://dx.doi.org/10.1155/2001/450503.
Huang, Lei, Barbara Chapman, and Zhenying Liu. "Towards a more efficient implementation of OpenMP for clusters via translation to global arrays." Parallel Computing 31, no. 10-12 (October 2005): 1114–39. http://dx.doi.org/10.1016/j.parco.2005.03.015.
Li, Hua Zhong, Yong Sheng Liang, Tao He, and Yi Li. "AOI Multi-Core Parallel System for TFT-LCD Defect Detection." Advanced Materials Research 472-475 (February 2012): 2325–31. http://dx.doi.org/10.4028/www.scientific.net/amr.472-475.2325.
Повний текст джерелаДисертації з теми "Cluster OpenMP implementations"
Tran, Van Long. "Optimization of checkpointing and execution model for an implementation of OpenMP on distributed memory architectures." Thesis, Evry, Institut national des télécommunications, 2018. http://www.theses.fr/2018TELE0017/document.
OpenMP and MPI have become the standard tools for developing parallel programs on shared-memory and distributed-memory architectures respectively. Compared to MPI, OpenMP is easier to use, because it parallelizes code and synchronizes results automatically through its directives, clauses, and runtime functions, while MPI requires programmers to do all of this manually. Therefore, several efforts have been made to port OpenMP to distributed-memory architectures. However, apart from CAPE, no solution has met both requirements: 1) full compliance with the OpenMP standard and 2) high performance. CAPE stands for Checkpointing-Aided Parallel Execution. It is a framework that automatically translates OpenMP programs and provides the runtime functions to execute them on distributed-memory architectures using checkpointing techniques. To execute an OpenMP program on a distributed-memory system, CAPE uses a set of templates to translate the OpenMP source code into CAPE source code, which is then compiled by a C/C++ compiler and run on the distributed-memory system with the support of the CAPE framework. Basically, the idea of CAPE is the following: the program first runs on a set of nodes of the system, each node executing it as a process. Whenever the program reaches a parallel section, the master distributes the jobs to the slave processes using Discontinuous Incremental Checkpoints (DICKPT). After sending the checkpoints, the master waits for the results returned by the slaves, then receives and merges the resulting checkpoints before injecting them into its memory. Each slave node receives a different checkpoint, injects it into its memory, computes its share of the job, and sends the result back to the master as a DICKPT.
At the end of the parallel region, the master sends the resulting checkpoint to every slave to synchronize the memory space of the program as a whole. In experiments, CAPE has shown very high performance on distributed-memory systems and has proved to be a viable solution that is fully compatible with OpenMP. However, CAPE is still under development: its checkpointing mechanism and execution model need to be optimized to improve performance, capability, and reliability. This thesis presents the approaches proposed to optimize and improve checkpoints, to design and implement a new execution model, and to extend the capabilities of CAPE. First, we proposed an arithmetic on checkpoints, which models the checkpoint data structure and its operations. This modeling helps optimize checkpoint size, reduce merging time, and extend what checkpoints can express. Second, we developed TICKPT (Time-stamp Incremental Checkpointing) as an instance of the arithmetic on checkpoints. TICKPT improves on DICKPT by adding a timestamp to each checkpoint to identify the order of checkpoints. Analysis and experiments comparing it to DICKPT show that TICKPT not only produces smaller checkpoints but also has less impact on the performance of programs that use checkpointing. Third, we designed and implemented a new execution model and new prototypes for CAPE based on TICKPT. The new execution model allows CAPE to use resources efficiently, avoid the risk of bottlenecks, and relax the requirement that Bernstein's conditions be satisfied. As a result, these approaches improve CAPE's performance, capability, and reliability. Fourth, OpenMP data-sharing attributes are implemented in CAPE based on the arithmetic on checkpoints and TICKPT. This also demonstrates that the direction we took is the right one, and it makes CAPE more complete.
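The abstract above describes CAPE's cycle for a parallel region: each slave works on a private copy of the program state, only the memory locations it modified are captured as an incremental checkpoint, and the master merges the returned checkpoints back into its own memory image. The following is a minimal sketch of that diff-and-merge idea, not CAPE's actual implementation: plain Python dictionaries stand in for process memory, and all names (`take_checkpoint`, `merge_checkpoints`, the toy workload) are illustrative assumptions.

```python
def take_checkpoint(before, after):
    """Incremental checkpoint: record only the locations whose value changed."""
    return {addr: val for addr, val in after.items()
            if before.get(addr) != val}

def merge_checkpoints(master_memory, checkpoints):
    """Master merges the slaves' checkpoints into its own memory image."""
    merged = dict(master_memory)
    for ckpt in checkpoints:
        merged.update(ckpt)
    return merged

# Toy "parallel region": two slaves each double their slice of an array.
memory = {("a", i): i for i in range(8)}   # master's memory image
slices = [range(0, 4), range(4, 8)]        # work division between 2 slaves

checkpoints = []
for sl in slices:                          # each iteration plays one slave
    local = dict(memory)                   # slave starts from the master state
    for i in sl:
        local[("a", i)] = 2 * local[("a", i)]   # the divided job
    checkpoints.append(take_checkpoint(memory, local))

# End of the parallel region: master injects the merged checkpoints.
memory = merge_checkpoints(memory, checkpoints)
print(sorted(memory[("a", i)] for i in range(8)))  # → [0, 2, 4, 6, 8, 10, 12, 14]
```

Because each checkpoint carries only the modified locations (here, each slave's own slice), merging is cheap and the slaves' disjoint writes combine without conflicts; this is the property the thesis's checkpoint arithmetic and TICKPT timestamps are designed to preserve and optimize.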
Cai, Jie. "Region-based techniques for modeling and enhancing cluster OpenMP performance." PhD thesis, 2011. http://hdl.handle.net/1885/8865.
Book chapters on the topic "Cluster OpenMP implementations"
Wong, H. J., J. Cai, A. P. Rendell, and P. Strazdins. "Micro-benchmarks for Cluster OpenMP Implementations: Memory Consistency Costs." In OpenMP in a New Era of Parallelism, 60–70. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-79561-2_6.
Eachempati, Deepak, Lei Huang, and Barbara Chapman. "Strategies and Implementation for Translating OpenMP Code for Clusters." In High Performance Computing and Communications, 420–31. Berlin, Heidelberg: Springer Berlin Heidelberg, 2007. http://dx.doi.org/10.1007/978-3-540-75444-2_42.
Liu, Zhenying, Lei Huang, Barbara Chapman, and Tien-Hsiung Weng. "Efficient Implementation of OpenMP for Clusters with Implicit Data Distribution." In Lecture Notes in Computer Science, 121–36. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/978-3-540-31832-3_11.
Briguglio, Sergio, Beniamino Di Martino, Giuliana Fogaccia, and Gregorio Vlad. "Hierarchical MPI+OpenMP Implementation of Parallel PIC Applications on Clusters of Symmetric MultiProcessors." In Recent Advances in Parallel Virtual Machine and Message Passing Interface, 180–87. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-39924-7_27.
Cabral, Frederico, Carla Osthoff, Roberto Pinto Souto, Gabriel P. Costa, Sanderson L. Gonzaga de Oliveira, Diego N. Brandão, and Mauricio Kischinhevsky. "An Improved OpenMP Implementation of the TVD–Hopmoc Method Based on a Cluster of Points." In High Performance Computing for Computational Science – VECPAR 2018, 132–45. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-15996-2_10.
Takahashi, Daisuke. "A Hybrid MPI/OpenMP Implementation of a Parallel 3-D FFT on SMP Clusters." In Parallel Processing and Applied Mathematics, 970–77. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11752578_117.
Pavão, Pedro Nuno Rebelo, João Pedro Almeida Couto, and Maria Manuela Santos Natário. "A Tale of Different Realities." In The Role of Knowledge Transfer in Open Innovation, 262–80. IGI Global, 2019. http://dx.doi.org/10.4018/978-1-5225-5849-1.ch013.
Ntseane, Peggy Gabo, and Idowu Biao. "Learning Cities." In Advances in Electronic Government, Digital Divide, and Regional Development, 73–93. IGI Global, 2019. http://dx.doi.org/10.4018/978-1-5225-8134-5.ch004.
Willis, James S., Matthieu Schaller, Pedro Gonnet, and John C. Helly. "A Hybrid MPI+Threads Approach to Particle Group Finding Using Union-Find." In Parallel Computing: Technology Trends. IOS Press, 2020. http://dx.doi.org/10.3233/apc200050.
Teodoro, George. "Efficient Execution of Dataflows on Parallel and Heterogeneous Environments." In Advances in Systems Analysis, Software Engineering, and High Performance Computing, 1–17. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-2533-4.ch001.
Повний текст джерелаТези доповідей конференцій з теми "Cluster OpenMP implementations"
Cai, Jie, Alistair P. Rendell, Peter E. Strazdins, and H'sien Jin Wong. "Performance models for Cluster-enabled OpenMP implementations." In 2008 13th Asia-Pacific Computer Systems Architecture Conference (ACSAC). IEEE, 2008. http://dx.doi.org/10.1109/apcsac.2008.4625433.
Santander-Jimenez, Sergio, and Miguel A. Vega-Rodriguez. "Applying OpenMP-based parallel implementations of NSGA-II and SPEA2 to study phylogenetic relationships." In 2014 IEEE International Conference On Cluster Computing (CLUSTER). IEEE, 2014. http://dx.doi.org/10.1109/cluster.2014.6968779.
Noor, Nor Rizuan Mat, and Tanya Vladimirova. "Parallel implementation of lossless clustered integer KLT using OpenMP." In 2012 NASA/ESA Conference on Adaptive Hardware and Systems (AHS). IEEE, 2012. http://dx.doi.org/10.1109/ahs.2012.6268639.
Tran, Van Long, Eric Renault, and Viet Hai Ha. "Optimization of Checkpoints and Execution Model for an Implementation of OpenMP on Distributed Memory Architectures." In 2017 17th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGRID). IEEE, 2017. http://dx.doi.org/10.1109/ccgrid.2017.119.
Xuan, Huailiang, Weiqin Tong, Zhixun Gong, and Youwen Lan. "Implementation and performance analysis of hybrid MPI+OpenMP programming for parallel MLFMA on SMP cluster." In 2012 Third International Conference on Intelligent Control and Information Processing (ICICIP). IEEE, 2012. http://dx.doi.org/10.1109/icicip.2012.6391557.
Andreev, Vyacheslav Viktorovich, Olga Vyacheslavovna Andreeva, and Vasiliy Evgenievich Gai. "Computer Modelling Based on the Percolation Theory of the Third Stage of Cracks Formation and Development on the Steel Microstructures Surfaces." In 32nd International Conference on Computer Graphics and Vision. Keldysh Institute of Applied Mathematics, 2022. http://dx.doi.org/10.20948/graphicon-2022-1059-1064.
Darmawan, B. "The Implementation of Hybrid Parallel Computation for Complex and Fine Reservoir Model Using Cluster Technology." In Indonesian Petroleum Association 44th Annual Convention and Exhibition. Indonesian Petroleum Association, 2021. http://dx.doi.org/10.29118/ipa21-e-53.
Clauberg, Jan, Michael Leistner, and Heinz Ulbrich. "Hybrid-Parallel Calculation of Jacobians in Multi-Body Dynamics." In ASME 2013 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/detc2013-12245.
Miniello, G., and M. La Salandra. "HIGH RESOLUTION IMAGE PROCESSING AND LAND COVER CLASSIFICATION FOR HYDRO-GEOMORPHOLOGICAL HIGH-RISK AREA MONITORING." In 9th International Conference "Distributed Computing and Grid Technologies in Science and Education". Crossref, 2021. http://dx.doi.org/10.54546/mlit.2021.12.40.001.
Lisboa, Flávio Gomes da Silva. "A scalable distributed system based on microservices for collecting pod logs from a Kubernetes cluster." In Congresso Latino-Americano de Software Livre e Tecnologias Abertas. Sociedade Brasileira de Computação - SBC, 2021. http://dx.doi.org/10.5753/latinoware.2021.19916.