Journal articles on the topic 'Heterogeneous programming'

To see the other types of publications on this topic, follow the link: Heterogeneous programming.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Heterogeneous programming.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Kirgizov, G. V., and I. A. Kirilenko. "Heterogeneous Architectures Programming Library." Proceedings of the Institute for System Programming of the RAS 30, no. 4 (2018): 45–62. http://dx.doi.org/10.15514/ispras-2018-30(4)-3.

2

Viñas, Moisés, Zeki Bozkus, and Basilio B. Fraguela. "Exploiting heterogeneous parallelism with the Heterogeneous Programming Library." Journal of Parallel and Distributed Computing 73, no. 12 (December 2013): 1627–38. http://dx.doi.org/10.1016/j.jpdc.2013.07.013.

3

Sampson, Adrian, Kathryn S. McKinley, and Todd Mytkowicz. "Static stages for heterogeneous programming." Proceedings of the ACM on Programming Languages 1, OOPSLA (October 12, 2017): 1–27. http://dx.doi.org/10.1145/3133895.

4

Chiang, Chia-Chu. "Implicit heterogeneous and parallel programming." ACM SIGSOFT Software Engineering Notes 30, no. 3 (May 2005): 1–6. http://dx.doi.org/10.1145/1061874.1061887.

5

Eckhardt, Jason, Roumen Kaiabachev, Emir Pasalic, Kedar Swadi, and Walid Taha. "Implicitly Heterogeneous Multi-Stage Programming." New Generation Computing 25, no. 3 (May 2007): 305–36. http://dx.doi.org/10.1007/s00354-007-0020-x.

6

Kunzman, David M., and Laxmikant V. Kalé. "Programming Heterogeneous Clusters with Accelerators Using Object-Based Programming." Scientific Programming 19, no. 1 (2011): 47–62. http://dx.doi.org/10.1155/2011/525717.

Abstract:
Heterogeneous clusters that include accelerators have become more common in the realm of high performance computing because of the high GFlop/s rates such clusters are capable of achieving. However, heterogeneous clusters are typically considered hard to program as they usually require programmers to interleave architecture-specific code within application code. We have extended the Charm++ programming model and runtime system to support heterogeneous clusters (with host cores that differ in their architecture) that include accelerators. We are currently focusing on clusters that include commodity processors, Cell processors, and Larrabee devices. When our extensions are used to develop code, the resulting code is portable between various homogeneous and heterogeneous clusters that may or may not include accelerators. Using a simple example molecular dynamics (MD) code, we demonstrate our programming model extensions and runtime system modifications on a heterogeneous cluster comprised of Xeon and Cell processors. Even though there is no architecture-specific code in the example MD program, it is able to successfully make use of three core types, each with a different ISA (Xeon, PPE, SPE), three SIMD instruction extensions (SSE, AltiVec/VMX and the SPE's SIMD instructions), and two memory models (cache hierarchies and scratchpad memories) in a single execution. Our programming model extensions abstract away hardware complexities while our runtime system modifications automatically adjust application data to account for architectural differences between the various cores.
7

Watts, Gordon. "hep_tables: Heterogeneous Array Programming for HEP." EPJ Web of Conferences 251 (2021): 03061. http://dx.doi.org/10.1051/epjconf/202125103061.

Abstract:
Array operations are one of the most concise ways of expressing common filtering and simple aggregation operations that are the hallmark of a particle physics analysis: selection, filtering, basic vector operations, and filling histograms. The High Luminosity run of the Large Hadron Collider (HL-LHC), scheduled to start in 2026, will require physicists to regularly skim datasets that are over a PB in size, and repeatedly run over datasets that are hundreds of TBs – too big to fit in memory. Declarative programming techniques are a way of separating the intent of the physicist from the mechanics of finding the data and using distributed computing to process and make histograms. This paper describes a library that implements a declarative distributed framework based on array programming. This prototype library provides a framework for different sub-systems to cooperate in producing plots via plug-ins. The prototype has a ServiceX data-delivery sub-system and an awkward array sub-system cooperating to generate requested data or plots. The ServiceX system runs against ATLAS xAOD data and flat ROOT TTrees, and awkward runs on the columnar data produced by ServiceX.
8

Chen, Cheng, Wenxiang Yang, Fang Wang, Dan Zhao, Yang Liu, Liang Deng, and Canqun Yang. "Reverse Offload Programming on Heterogeneous Systems." IEEE Access 7 (2019): 10787–97. http://dx.doi.org/10.1109/access.2019.2891740.

9

Bisiani, R., and A. Forin. "Multilanguage parallel programming of heterogeneous machines." IEEE Transactions on Computers 37, no. 8 (August 1988): 930–45. http://dx.doi.org/10.1109/12.2245.

10

Song, Changxu. "Analysis on Heterogeneous Computing." Journal of Physics: Conference Series 2031, no. 1 (September 1, 2021): 012049. http://dx.doi.org/10.1088/1742-6596/2031/1/012049.

Abstract:
In the Internet industry, with the spread of informatization and the rapid increase in data volume, there are new requirements for storage space. At the same time, computer applications such as artificial intelligence and big data have rapidly increased the demand for computing power and diversified the application scenarios. Heterogeneous computing has therefore become a focus of research. This article introduces the choice of architecture for heterogeneous computing systems and the programming languages used for heterogeneous computing. Some typical techniques of heterogeneous computing are illustrated, including data communication and access, and task division and mapping between processors. These systems also bring difficulties: hybrid parallel computing faces challenges such as programming difficulty, poor algorithm portability, complex data access, and unbalanced resource load. Studies have shown that there are many ways to improve the status quo and solve these problems, including the development of a unified programming method, a good programming model, the integration of storage and computing, intelligent task allocation, and better packaging technologies. Finally, the application prospects and broad market potential of heterogeneous computing systems are discussed. In the next ten years, the advantages of heterogeneous computing systems are expected to stimulate innovation in more fields, and heterogeneous computing systems will shine in artificial intelligence applications such as smart self-service equipment, smart robots, and self-driving cars. Moreover, this emerging technology will bring new industries and new jobs, thereby driving economic prosperity and social development and benefiting society as a whole.
11

Saha, Bratin, Xiaocheng Zhou, Hu Chen, Ying Gao, Shoumeng Yan, Mohan Rajagopalan, Jesse Fang, Peinan Zhang, Ronny Ronen, and Avi Mendelson. "Programming model for a heterogeneous x86 platform." ACM SIGPLAN Notices 44, no. 6 (May 28, 2009): 431–40. http://dx.doi.org/10.1145/1543135.1542525.

12

Tsoi, Kuen Hung, Anson H. T. Tse, Peter Pietzuch, and Wayne Luk. "Programming framework for clusters with heterogeneous accelerators." ACM SIGARCH Computer Architecture News 38, no. 4 (September 14, 2010): 53–59. http://dx.doi.org/10.1145/1926367.1926377.

13

BANSAL, ARVIND K. "A FRAMEWORK FOR HETEROGENEOUS ASSOCIATIVE LOGIC PROGRAMMING." International Journal on Artificial Intelligence Tools 04, no. 01n02 (June 1995): 33–53. http://dx.doi.org/10.1142/s0218213095000036.

Abstract:
Associative computation is characterized by seamless intertwining of search-by-content and data parallel computation. The search-by-content paradigm is natural to scalable high performance heterogeneous computing since the use of tagged data avoids the need for explicit addressing mechanisms. In this paper, the author presents an algebra for associative logic programming, an associative resolution scheme, and a generic framework of an associative abstract instruction set. The model is based on the integration of data alignment and the use of two types of bags: data element bags and filter bags of Boolean values to select and restrict computation on data elements. The use of filter bags integrated with data alignment reduces computation and data transfer overhead, and the use of tagged data reduces overhead of preparing data before data transmission. The abstract instruction set has been illustrated by an example. Performance results are presented for a simulation in a homogeneous address space.
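To make the bag terminology above concrete, here is a small illustrative C++ sketch (not the paper's abstract instruction set; the type and function names are invented for the example) in which a filter bag of Booleans restricts a data-parallel operation to selected elements of a data element bag.

```cpp
#include <vector>
#include <iostream>

// Hypothetical illustration of the "data bag + filter bag" idea:
// a bag of data elements paired with a bag of Booleans that selects
// which elements take part in a computation.
struct FilteredBag {
    std::vector<int>  data;    // data element bag
    std::vector<bool> filter;  // filter bag of Boolean values

    // Apply an operation only where the filter is true,
    // leaving the other elements untouched.
    template <typename Op>
    void apply(Op op) {
        for (std::size_t i = 0; i < data.size(); ++i)
            if (filter[i]) data[i] = op(data[i]);
    }
};

int main() {
    FilteredBag bag{{1, 2, 3, 4, 5}, {true, false, true, false, true}};
    bag.apply([](int x) { return x * 10; });       // only selected elements change
    for (int x : bag.data) std::cout << x << ' ';  // prints: 10 2 30 4 50
    std::cout << '\n';
}
```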
14

SARC European Project. "Parallel Programming Models for Heterogeneous Multicore Architectures." IEEE Micro 30, no. 5 (September 2010): 42–53. http://dx.doi.org/10.1109/mm.2010.94.

15

Paulino, Hervé, and Eduardo Marques. "Heterogeneous programming with Single Operation Multiple Data." Journal of Computer and System Sciences 81, no. 1 (February 2015): 16–37. http://dx.doi.org/10.1016/j.jcss.2014.06.021.

16

Ejarque, Jorge, Marc Domínguez, and Rosa M. Badia. "A hierarchic task-based programming model for distributed heterogeneous computing." International Journal of High Performance Computing Applications 33, no. 5 (May 2019): 987–97. http://dx.doi.org/10.1177/1094342019845438.

Abstract:
Distributed computing platforms are evolving into heterogeneous ecosystems, with Clusters, Grids and Clouds introducing into their computing nodes processors with different core architectures, accelerators (e.g. GPUs, FPGAs), and different memories and storage devices in order to achieve better performance with lower energy consumption. As a consequence of this heterogeneity, programming applications for these distributed heterogeneous platforms becomes a complex task. In addition to the complexity of developing an application for distributed platforms, developers must now also deal with the complexity of the different computing devices inside the node. In this article, we present a programming model that aims to facilitate the development and execution of applications on current and future distributed heterogeneous parallel architectures. This programming model is based on the hierarchical composition of the COMP Superscalar and Omp Superscalar programming models, which allows developers to implement infrastructure-agnostic applications. The underlying runtime enables applications to adapt to the infrastructure without the need to maintain different versions of the code. Our programming model proposal has been evaluated on real platforms in terms of heterogeneous resource usage, performance and adaptation.
17

ABEL, ANDREAS. "Implementing a normalizer using sized heterogeneous types." Journal of Functional Programming 19, no. 3-4 (July 2009): 287–310. http://dx.doi.org/10.1017/s0956796809007266.

Abstract:
In the simply typed λ-calculus, a hereditary substitution replaces a free variable in a normal form r by another normal form s of type a, removing freshly created redexes on the fly. It can be defined by lexicographic induction on a and r, thus giving rise to a structurally recursive normalizer for the simply typed λ-calculus. We implement hereditary substitutions in a functional programming language with sized heterogeneous inductive types, F̂, arriving at an interpreter whose termination can be tracked by the type system of its host programming language.
18

McCaskey, Alexander, Thien Nguyen, Anthony Santana, Daniel Claudino, Tyler Kharazi, and Hal Finkel. "Extending C++ for Heterogeneous Quantum-Classical Computing." ACM Transactions on Quantum Computing 2, no. 2 (July 2021): 1–36. http://dx.doi.org/10.1145/3462670.

Abstract:
We present qcor—a language extension to C++ and compiler implementation that enables heterogeneous quantum-classical programming, compilation, and execution in a single-source context. Our work provides a first-of-its-kind C++ compiler enabling high-level quantum kernel (function) expression in a quantum-language agnostic manner, as well as a hardware-agnostic, retargetable compiler workflow targeting a number of physical and virtual quantum computing backends. qcor leverages novel Clang plugin interfaces and builds upon the XACC system-level quantum programming framework to provide a state-of-the-art integration mechanism for quantum-classical compilation that leverages the best from the community at-large. qcor translates quantum kernels ultimately to the XACC intermediate representation, and provides user-extensible hooks for quantum compilation routines like circuit optimization, analysis, and placement. This work details the overall architecture and compiler workflow for qcor, and provides a number of illuminating programming examples demonstrating its utility for near-term variational tasks, quantum algorithm expression, and feed-forward error correction schemes.
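As a flavour of the single-source style the abstract describes, the sketch below follows published qcor examples (the `__qpu__` kernel annotation, `qreg` type, and `qalloc` helper); treat it as an approximation to be checked against the paper and the qcor documentation, not as the authors' exact API.

```cpp
// Approximate qcor-style single-source program; assumes compilation with the
// qcor compiler wrapper, which supplies the runtime declarations.

// Quantum kernel: prepares a Bell state and measures both qubits.
__qpu__ void bell(qreg q) {
  H(q[0]);          // put qubit 0 in superposition
  CX(q[0], q[1]);   // entangle qubit 1 with qubit 0
  Measure(q[0]);
  Measure(q[1]);
}

int main() {
  auto q = qalloc(2);  // allocate a 2-qubit register
  bell(q);             // lowered to the intermediate representation and run on the selected backend
  q.print();           // classical post-processing of the measurement counts
}
```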
19

Viñas, Moisés, Basilio B. Fraguela, Zeki Bozkus, and Diego Andrade. "Improving OpenCL Programmability with the Heterogeneous Programming Library." Procedia Computer Science 51 (2015): 110–19. http://dx.doi.org/10.1016/j.procs.2015.05.208.

20

Pu, Jing, Steven Bell, Xuan Yang, Jeff Setter, Stephen Richardson, Jonathan Ragan-Kelley, and Mark Horowitz. "Programming Heterogeneous Systems from an Image Processing DSL." ACM Transactions on Architecture and Code Optimization 14, no. 3 (September 6, 2017): 1–25. http://dx.doi.org/10.1145/3107953.

21

Bodin, Francois, and Stephane Bihan. "Heterogeneous Multicore Parallel Programming for Graphics Processing Units." Scientific Programming 17, no. 4 (2009): 325–36. http://dx.doi.org/10.1155/2009/784893.

Abstract:
Hybrid parallel multicore architectures based on graphics processing units (GPUs) can provide tremendous computing power. Current NVIDIA and AMD Graphics Product Group hardware displays a peak performance of hundreds of gigaflops. However, exploiting GPUs from existing applications is a difficult task that requires non-portable rewriting of the code. In this paper, we present HMPP, a Heterogeneous Multicore Parallel Programming workbench with compilers, developed by CAPS entreprise, that allows the integration of heterogeneous hardware accelerators in an unintrusive manner while preserving legacy code.
22

ÖNAL, HAYRI, and BRUCE A. McCARL. "Aggregation of heterogeneous firms in mathematical programming models." European Review of Agricultural Economics 16, no. 4 (1989): 499–513. http://dx.doi.org/10.1093/erae/16.4.499.

23

Zhou, Kai, Xi Chen, Zhijiang Shao, Wei Wan, and Lorenz T. Biegler. "Heterogeneous parallel method for mixed integer nonlinear programming." Computers & Chemical Engineering 66 (July 2014): 290–300. http://dx.doi.org/10.1016/j.compchemeng.2013.11.009.

24

Garcia, J. Daniel, and Diego R. Llanos. "High-level parallel programming in a heterogeneous world." Concurrency and Computation: Practice and Experience 31, no. 5 (October 30, 2018): e5052. http://dx.doi.org/10.1002/cpe.5052.

25

Chen, S., M. M. Eshaghian, R. F. Freund, J. L. Potter, and Y. C. Wu. "Evaluation of Two Programming Paradigms for Heterogeneous Computing." Journal of Parallel and Distributed Computing 31, no. 1 (November 1995): 41–55. http://dx.doi.org/10.1006/jpdc.1995.1143.

26

García-Blas, Javier, and Christopher Brown. "High-level programming for heterogeneous and hierarchical parallel systems." International Journal of High Performance Computing Applications 32, no. 6 (November 2018): 804–6. http://dx.doi.org/10.1177/1094342018807840.

Abstract:
High-Level Heterogeneous and Hierarchical Parallel Systems (HLPGPU) aims to bring together researchers and practitioners to present new results and ongoing work on those aspects of high-level programming relevant, or specific to general-purpose computing on graphics processing units (GPGPUs) and new architectures. The 2016 HLPGPU symposium was an event co-located with the HiPEAC conference in Prague, Czech Republic. HLPGPU is targeted at high-level parallel techniques, including programming models, libraries and languages, algorithmic skeletons, refactoring tools and techniques for parallel patterns, tools and systems to aid parallel programming, heterogeneous computing, timing analysis and statistical performance models.
27

Amela, Ramon, Cristian Ramon-Cortes, Jorge Ejarque, Javier Conejero, and Rosa M. Badia. "Executing linear algebra kernels in heterogeneous distributed infrastructures with PyCOMPSs." Oil & Gas Science and Technology – Revue d’IFP Energies nouvelles 73 (2018): 47. http://dx.doi.org/10.2516/ogst/2018047.

Abstract:
Python is a popular programming language due to the simplicity of its syntax, while still achieving good performance even though it is an interpreted language. Its adoption by multiple scientific communities has led to the emergence of a large number of libraries and modules, which has helped put Python at the top of the list of programming languages [1]. Task-based programming has been proposed in recent years as an alternative parallel programming model. PyCOMPSs follows this approach for Python, and this paper presents its extensions to combine task-based parallelism and thread-level parallelism. We also present how PyCOMPSs has been adapted to support heterogeneous architectures, including Xeon Phi and GPUs. Results obtained with linear algebra benchmarks demonstrate that significant performance can be obtained with a few lines of Python.
28

Mitrovic, Dejan, Mirjana Ivanovic, Zoran Budimac, and Milan Vidakovic. "Supporting heterogeneous agent mobility with ALAS." Computer Science and Information Systems 9, no. 3 (2012): 1203–29. http://dx.doi.org/10.2298/csis120102025m.

Abstract:
Networks of multi-agent systems are considered to be heterogeneous if they include systems with different sets of APIs, running on different virtual machines. Developing an agent that can operate in this kind of setting is a difficult task, because the process requires regeneration of the agent's executable code, as well as modifications in the way it communicates with the environment. With the main goal of providing an effective solution to the heterogeneous agent mobility problem, a novel agent-oriented programming language, named ALAS, is proposed. The new language also provides a set of programming constructs that effectively hide the complexity of the overall agent development process. The design of the ALAS platform and an experiment presented in this paper show that an agent written in ALAS is able to work in truly heterogeneous networks of multi-agent systems.
29

Yanakova, E. S., G. T. Macharadze, L.G. Gagarina, and A. A. Shvachko. "Parallel-Pipelined Video Processing in Multicore Heterogeneous Systems on Chip." Proceedings of Universities. Electronics 26, no. 2 (April 2021): 172–83. http://dx.doi.org/10.24151/1561-5405-2021-26-2-172-183.

Abstract:
A turn from homogeneous to heterogeneous architectures makes it possible to gain advantages in efficiency, size, weight and power consumption, which is especially important for embedded solutions. However, the development of parallel software for heterogeneous computer systems is a rather complex task due to the requirements of high efficiency, easy programming and ease of scaling. In the paper, the efficiency of parallel-pipelined processing of video information in multiprocessor heterogeneous systems on a chip (SoC) containing DSP, GPU, ISP, VDP, VPU and other units has been investigated. A typical scheme of parallel-pipelined processing of video data using various accelerators has been presented, and a scheme of parallel-pipelined video data processing on the heterogeneous SoC 1892VM248 has been developed. Methods for efficient parallel-pipelined processing of video data in heterogeneous SoCs, comprising the operating-system level, the programming-technologies level and the application level, have been proposed. A comparative analysis of the most common programming technologies, such as OpenCL, OpenMP, MPI and OpenAMP, has been performed. The analysis has shown that, depending on the final purpose of the device, two programming paradigms should be applied: one based on OpenCL technology (for the embedded system) and one based on MPI technology (for inter-cell and inter-processor interaction). The results obtained for parallel-pipelined processing within a face-recognition application have confirmed the effectiveness of the chosen solutions.
30

Syschikov, A. Yu, B. N. Sedov, and Yu E. Sheynin. "INTEGRATED DOMAIN-SPECIFIC PROGRAMMING ENVIRONMENT FOR HETEROGENEOUS MANYCORE PLATFORMS." Issues of radio electronics, no. 8 (August 20, 2018): 133–44. http://dx.doi.org/10.21778/2218-5453-2018-8-133-144.

Abstract:
The different classes of problems and needs of the embedded systems market drive manufacturers of embedded systems to design heterogeneous many/multicore hardware platforms. Such platforms include dozens of different cores: CPU, GPU, DSP, FPGA, etc. That makes them incredibly hard to program, especially when domain experts are involved in the development process. Usually, a domain expert has knowledge of his domain area but does not fully understand the specifics of programming for heterogeneous manycore platforms. In this article, we propose a technology and tools that allow domain experts to be involved in software development for embedded systems. The proposed technology has various aspects and capabilities that can be used to build verifiable and portable software for a wide range of heterogeneous embedded platforms.
31

Wang, Junchang, Shaojin Cheng, and Xiong Fu. "SDN Programming for Heterogeneous Switches with Flow Table Pipelining." Scientific Programming 2018 (November 21, 2018): 1–13. http://dx.doi.org/10.1155/2018/2848232.

Abstract:
High-level programming is one of the critical building blocks of the effective use of software-defined networking (SDN). Existing solutions, however, either (1) cannot utilize the state-of-the-art switches with flow table pipelining, a key technique to prevent flow rule set explosion or (2) force programmers to manually organize and manage hardware flow table pipelines, which is time-consuming and error-prone. This paper presents a high-level SDN programming framework to address these issues. The framework can automatically (1) generate rule sets for heterogeneous switches with different flow table pipelining designs and (2) update installed rules when the network state changes. As a result, the framework can not only generate efficient rule sets for switches but also provide programmers a centralized, intuitive, and hence easy-to-use programming API. Experiments show that the framework can generate compact rule sets that are 29–116 times smaller than those generated by other open-source SDN controllers. Besides, the framework is 5 times faster to recover from network link failures in comparison to other controllers.
32

Syschikov, Alexey, Yuriy Sheynin, Boris Sedov, and Vera Ivanova. "Domain-Specific Programming Environment for Heterogeneous Multicore Embedded Systems." International Journal of Embedded and Real-Time Communication Systems 5, no. 4 (October 2014): 1–23. http://dx.doi.org/10.4018/ijertcs.2014100101.

Abstract:
Nowadays embedded systems are used in a broad range of domains such as avionics, space, automotive, mobile, and domestic appliances. Sophisticated software determines the quality of embedded systems and requires highly qualified experts for software development. Software has become the main asset of embedded systems, one that is valuable to retain as computing platforms change over an embedded system's evolution. Computing platforms for embedded systems are now multicore processors and SoCs, and they can change within the embedded system's lifetime, which can be long (a dozen years for an automobile or airplane). This makes porting software to new platforms a regular process. Many tools and approaches allow domain-area experts to develop software, but mainly for general-purpose computing systems. In this paper, the authors present a technology and tools that allow domain experts to be involved in software development for embedded systems. The proposed technology has various aspects and capabilities that can be used to build verifiable and portable software for a wide range of embedded platforms.
33

DURAN, ALEJANDRO, EDUARD AYGUADÉ, ROSA M. BADIA, JESÚS LABARTA, LUIS MARTINELL, XAVIER MARTORELL, and JUDIT PLANAS. "OmpSs: A PROPOSAL FOR PROGRAMMING HETEROGENEOUS MULTI-CORE ARCHITECTURES." Parallel Processing Letters 21, no. 02 (June 2011): 173–93. http://dx.doi.org/10.1142/s0129626411000151.

Abstract:
In this paper, we present OmpSs, a programming model based on OpenMP and StarSs, that can also incorporate the use of OpenCL or CUDA kernels. We evaluate the proposal on different architectures, SMP, GPUs, and hybrid SMP/GPU environments, showing the wide usefulness of the approach. The evaluation is done with six different benchmarks, Matrix Multiply, BlackScholes, Perlin Noise, Julia Set, PBPI and FixedGrid. We compare the results obtained with the execution of the same benchmarks written in OpenCL or OpenMP, on the same architectures. The results show that OmpSs greatly outperforms both environments. With the use of OmpSs the programming environment is more flexible than traditional approaches to exploit multiple accelerators, and due to the simplicity of the annotations, it increases programmer's productivity.
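For readers unfamiliar with the annotation style mentioned above, the sketch below shows an OmpSs-like task with data dependences; the clause and array-section syntax ([start;size]) is an approximation based on the OmpSs documentation and varies between versions, so it is not code from the paper.

```cpp
// OmpSs-style task annotations (approximate syntax; requires the OmpSs toolchain).
// The runtime builds a dependence graph from the in/out clauses and can schedule
// tasks on SMP cores or accelerator devices.

#pragma omp task in(a[0;n], b[0;n]) out(c[0;n])
void vec_add(const float *a, const float *b, float *c, int n) {
    for (int i = 0; i < n; ++i)
        c[i] = a[i] + b[i];
}

int main() {
    const int n = 1024;
    float a[n], b[n], c[n];
    for (int i = 0; i < n; ++i) { a[i] = float(i); b[i] = 2.0f * i; }

    vec_add(a, b, c, n);   // each call is spawned as a task; dependences are tracked
    #pragma omp taskwait   // wait for all outstanding tasks before using c
    return 0;
}
```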
34

Sbîrlea, Alina, Yi Zou, Zoran Budimlić, Jason Cong, and Vivek Sarkar. "Mapping a data-flow programming model onto heterogeneous platforms." ACM SIGPLAN Notices 47, no. 5 (May 18, 2012): 61–70. http://dx.doi.org/10.1145/2345141.2248428.

35

Ko, Bongsuk, Seunghun Han, Yongjun Park, Moongu Jeon, and Byeongcheol Lee. "A Comparative Study of Programming Environments Exploiting Heterogeneous Systems." IEEE Access 5 (2017): 10081–92. http://dx.doi.org/10.1109/access.2017.2708738.

36

Stone, John E., David Gohara, and Guochun Shi. "OpenCL: A Parallel Programming Standard for Heterogeneous Computing Systems." Computing in Science & Engineering 12, no. 3 (May 2010): 66–73. http://dx.doi.org/10.1109/mcse.2010.69.

37

Grasso, Ivan, Simone Pellegrini, Biagio Cosenza, and Thomas Fahringer. "A uniform approach for programming distributed heterogeneous computing systems." Journal of Parallel and Distributed Computing 74, no. 12 (December 2014): 3228–39. http://dx.doi.org/10.1016/j.jpdc.2014.08.002.

38

Demiriz, Ayhan, Nader Bagherzadeh, and Ozcan Ozturk. "Voltage island based heterogeneous NoC design through constraint programming." Computers & Electrical Engineering 40, no. 8 (November 2014): 307–16. http://dx.doi.org/10.1016/j.compeleceng.2014.08.005.

39

Fenton, Michael, David Lynch, Stepan Kucera, Holger Claussen, and Michael O'Neill. "Multilayer Optimization of Heterogeneous Networks Using Grammatical Genetic Programming." IEEE Transactions on Cybernetics 47, no. 9 (September 2017): 2938–50. http://dx.doi.org/10.1109/tcyb.2017.2688280.

40

Bisiani, Roberto, and Alessandro Forin. "Architectural support for multilanguage parallel programming on heterogeneous systems." ACM SIGARCH Computer Architecture News 15, no. 5 (November 1987): 21–30. http://dx.doi.org/10.1145/36177.36180.

41

Bisiani, Roberto, and Alessandro Forin. "Architectural support for multilanguage parallel programming on heterogeneous systems." ACM SIGOPS Operating Systems Review 21, no. 4 (October 1987): 21–30. http://dx.doi.org/10.1145/36204.36180.

42

Bisiani, Roberto, and Alessandro Forin. "Architectural support for multilanguage parallel programming on heterogeneous systems." ACM SIGPLAN Notices 22, no. 10 (October 1987): 21–30. http://dx.doi.org/10.1145/36205.36180.

43

Nedeljkovic, Nenad, and Michael J. Quinn. "Data-parallel programming on a network of heterogeneous workstations." Concurrency: Practice and Experience 5, no. 4 (June 1993): 257–68. http://dx.doi.org/10.1002/cpe.4330050404.

44

Cohen, Noy. "Programming the equilibrium swelling response of heterogeneous polymeric gels." International Journal of Solids and Structures 178-179 (December 2019): 81–90. http://dx.doi.org/10.1016/j.ijsolstr.2019.06.023.

45

Swatman, S. N., A. Krasznahorkay, and P. Gessinger. "Managing heterogeneous device memory using C++17 memory resources." Journal of Physics: Conference Series 2438, no. 1 (February 1, 2023): 012050. http://dx.doi.org/10.1088/1742-6596/2438/1/012050.

Abstract:
Programmers using the C++ programming language are increasingly taught to manage memory implicitly through containers provided by the C++ standard library. However, heterogeneous programming platforms often require explicit allocation and deallocation of memory. This discrepancy in memory management strategies can be daunting and problematic for C++ developers who are not already familiar with heterogeneous programming. The C++17 standard introduces the concept of memory resources, which allow the user to control how standard library containers allocate memory; we believe that this addition to the C++17 standard is a powerful tool towards the unification of memory management for heterogeneous systems with best-practice C++ development. In this paper, we present vecmem, a library of memory resources which allows efficient and user-friendly allocation of memory on CUDA, HIP, and SYCL devices through standard C++ containers. We investigate the design and use cases of such a library, the potential performance gains over naive memory allocation, and the limitations of this memory allocation model.
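The C++17 mechanism the abstract builds on can be shown without vecmem itself: a polymorphic-allocator container is handed a memory resource that decides where its storage comes from. The sketch below uses only standard C++17; a vecmem device-memory resource would be plugged into the container in the same way, which is the pattern the library generalises to CUDA, HIP, and SYCL devices.

```cpp
#include <cstddef>
#include <iostream>
#include <memory_resource>
#include <vector>

int main() {
    // A fixed buffer on the stack, wrapped in a monotonic resource:
    // allocations are carved out of the buffer and released all at once.
    std::byte buffer[1024];
    std::pmr::monotonic_buffer_resource pool{buffer, sizeof(buffer)};

    // The container's interface is unchanged; only the allocation strategy
    // differs. A device-side memory resource would be passed in the same way.
    std::pmr::vector<int> v{&pool};
    for (int i = 0; i < 10; ++i) v.push_back(i);

    std::cout << "size = " << v.size() << '\n';
}
```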
46

Nozal, Raúl, and Jose Luis Bosque. "Straightforward Heterogeneous Computing with the oneAPI Coexecutor Runtime." Electronics 10, no. 19 (September 29, 2021): 2386. http://dx.doi.org/10.3390/electronics10192386.

Abstract:
Heterogeneous systems are the core architecture of most computing systems, from high-performance computing nodes to embedded devices, due to their excellent performance and energy efficiency. Efficiently programming these systems has become a major challenge due to the complexity of their architectures and the efforts required to provide them with co-execution capabilities that can fully exploit the applications. There are many proposals to simplify the programming and management of acceleration devices and multi-core CPUs. However, in many cases, portability and ease of use compromise the efficiency of different devices—even more so when co-executing. Intel oneAPI, a new and powerful standards-based unified programming model, built on top of SYCL, addresses these issues. In this paper, oneAPI is provided with co-execution strategies to run the same kernel between different devices, enabling the exploitation of static and dynamic policies. This work evaluates the performance and energy efficiency for a well-known set of regular and irregular HPC benchmarks, using two heterogeneous systems composed of an integrated GPU and CPU. Static and dynamic load balancers are integrated and evaluated, highlighting single and co-execution strategies and the most significant key points of this promising technology. Experimental results show that co-execution is worthwhile when using dynamic algorithms and improves the efficiency even further when using unified shared memory.
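To make the co-execution idea concrete, here is a hedged SYCL 2020 sketch that statically splits one kernel's iteration space between a CPU queue and a GPU queue. It only illustrates the general pattern and is not the Coexecutor Runtime's API; a real co-executor would balance the split dynamically and avoid serialising on whole-buffer accessors (e.g. via sub-buffers or unified shared memory).

```cpp
#include <sycl/sycl.hpp>
#include <cstddef>
#include <vector>

int main() {
    constexpr std::size_t n = 1 << 20;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    sycl::queue cpu_q{sycl::cpu_selector_v};
    sycl::queue gpu_q{sycl::gpu_selector_v};

    const std::size_t split = n / 2;  // static 50/50 split of the iteration space
    {
        sycl::buffer<float> A(a.data(), sycl::range<1>(n));
        sycl::buffer<float> B(b.data(), sycl::range<1>(n));
        sycl::buffer<float> C(c.data(), sycl::range<1>(n));

        // Submit the same vector-add kernel to a queue over [begin, begin + count).
        auto submit = [&](sycl::queue &q, std::size_t begin, std::size_t count) {
            q.submit([&](sycl::handler &h) {
                sycl::accessor pa(A, h, sycl::read_only);
                sycl::accessor pb(B, h, sycl::read_only);
                sycl::accessor pc(C, h, sycl::write_only);
                h.parallel_for(sycl::range<1>(count), [=](sycl::id<1> i) {
                    const std::size_t j = begin + i[0];
                    pc[j] = pa[j] + pb[j];
                });
            });
        };
        submit(cpu_q, 0, split);          // first half on the CPU
        submit(gpu_q, split, n - split);  // second half on the GPU
        cpu_q.wait();
        gpu_q.wait();
    }  // buffer destruction copies the results back into the host vectors
    return 0;
}
```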
47

Wu, Linghong. "Research on the Development and Application of Parallel Programming Technology in Heterogeneous Systems." Journal of Physics: Conference Series 2173, no. 1 (January 1, 2022): 012042. http://dx.doi.org/10.1088/1742-6596/2173/1/012042.

Abstract:
In recent years, high-performance computing hardware platforms have continued to improve and high-performance applications have spread to a wide range of fields, yet traditional parallel programming environments and tools cannot meet the growing demand for high-performance applications, leaving the parallel software development environment in a relatively backward state. Heterogeneous systems built around new coprocessors with high performance-to-power ratios can accelerate tasks with high-performance computing needs. As a result, heterogeneous systems are becoming a new trend in high performance computing. However, with the introduction of new coprocessors and the expansion of system scale, the difficulty of programming heterogeneous systems is also increasingly prominent. In view of this problem, this paper uses abstraction techniques to raise the granularity of parallel programming and designs a structured parallel programming prototype system that can achieve implicit parallelism at different levels.
48

RYAN, STEPHEN W., and ARVIND K. BANSAL. "A SCALABLE DISTRIBUTED MULTIMEDIA KNOWLEDGE RETRIEVAL SYSTEM ON A CLUSTER OF HETEROGENEOUS HIGH PERFORMANCE ARCHITECTURES." International Journal on Artificial Intelligence Tools 09, no. 03 (September 2000): 343–67. http://dx.doi.org/10.1142/s0218213000000227.

Abstract:
This paper describes a system to distribute and retrieve multimedia knowledge on a cluster of heterogeneous high performance architectures distributed over the Internet. The knowledge is represented using facts and rules in an associative logic-programming model. Associative computation facilitates distribution of facts and rules, and exploits coarse-grain data-parallel computation. Associative logic programming uses a flat data model that can be easily mapped onto heterogeneous architectures. The paper describes an abstract instruction set for the distributed version of associative logic programming and the corresponding implementation. The implementation uses a message-passing library for architecture independence within a cluster, uses object-oriented programming for modularity and portability, and uses Java as a front-end interface to provide a graphical user interface, multimedia capability, and remote access via the Internet. Performance results on a cluster of IBM RS 6000 workstations are presented. The results show that distribution of data improves the performance almost linearly for a small number of processors in a cluster.
49

Marongiu, Andrea, Alessandro Capotondi, Giuseppe Tagliavini, and Luca Benini. "Simplifying Many-Core-Based Heterogeneous SoC Programming With Offload Directives." IEEE Transactions on Industrial Informatics 11, no. 4 (August 2015): 957–67. http://dx.doi.org/10.1109/tii.2015.2449994.

50

Bridi, Thomas, Andrea Bartolini, Michele Lombardi, Michela Milano, and Luca Benini. "A Constraint Programming Scheduler for Heterogeneous High-Performance Computing Machines." IEEE Transactions on Parallel and Distributed Systems 27, no. 10 (October 1, 2016): 2781–94. http://dx.doi.org/10.1109/tpds.2016.2516997.
