Journal articles on the topic 'Parallel programming (Computer science)'

Consult the top 50 journal articles on the topic 'Parallel programming (Computer science)' for your research.

1. Shiau, Liejune. "Exploring Quasi-Concurrency in Introductory Computer Science." Journal of Educational Computing Research 15, no. 1 (July 1996): 53–66. http://dx.doi.org/10.2190/7ldf-va2r-vk66-qq8d.

Abstract: Most programming courses taught today focus on batch-oriented problems, primarily because parallel computers are not commonly available, so problems of a concurrent nature cannot be explored. As a consequence, students are underprepared for the challenges of modern multi-process computing technologies. This article demonstrates an easy way to implement concurrent programming projects in computer labs, one that requires neither special hardware support nor special programming languages. The goal is to provide a means of introducing the concept and usefulness of multi-process software systems early in the computer science curriculum. We also include detailed descriptions of a few creative and interesting concurrent examples that illustrate the idea.
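The abstract does not spell out the paper's mechanism, but one standard way to obtain such "quasi-concurrency" without special hardware or languages is cooperative multitasking, in which a scheduler interleaves logical processes on a single processor. A minimal Python sketch under that assumption (worker and round_robin are illustrative names of ours, not the article's):

```
from collections import deque

def worker(name, steps):
    """A logical process that yields control after each unit of work."""
    for i in range(steps):
        print(f"{name}: step {i}")
        yield  # hand control back to the scheduler

def round_robin(tasks):
    """Resume each task in turn until all have finished."""
    queue = deque(tasks)
    while queue:
        task = queue.popleft()
        try:
            next(task)          # run one step of the task
            queue.append(task)  # not finished: reschedule it
        except StopIteration:
            pass                # task completed

round_robin([worker("A", 3), worker("B", 2), worker("C", 1)])
```

Each yield acts as a context switch, so students can observe interleaving and fairness without any parallel machine.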
2. Graham, John R. "Integrating parallel programming techniques into traditional computer science curricula." ACM SIGCSE Bulletin 39, no. 4 (December 2007): 75–78. http://dx.doi.org/10.1145/1345375.1345419.

3. MORIARTY, K. J. M., and T. TRAPPENBERG. "PROGRAMMING TOOLS FOR PARALLEL COMPUTERS." International Journal of Modern Physics C 04, no. 06 (December 1993): 1285–94. http://dx.doi.org/10.1142/s0129183193001002.

Abstract: Although software tools already have a place on serial and vector computers, they are becoming increasingly important for parallel computing. Message-passing libraries, parallel operating systems, and high-level parallel languages are the basic software tools necessary to implement a parallel processing program. Up to now these tools have been specific to each parallel computer system, and a short survey of them is given. Another class of software tools for parallel computers aims to help in writing or rewriting application programs. Because automatic parallelization tools are not very successful, an interactive component has to be incorporated. We concentrate here on the discussion of SPEFY, a parallel program development facility.
4. Clark, K. L. "Parallel Logic Programming." Computer Journal 33, no. 6 (June 1, 1990): 482–93. http://dx.doi.org/10.1093/comjnl/33.6.482.

5. Blelloch, Guy E. "Programming parallel algorithms." Communications of the ACM 39, no. 3 (March 1996): 85–97. http://dx.doi.org/10.1145/227234.227246.

6. Ben Ahmed, M. "Parallel programming." Information and Software Technology 32, no. 6 (July 1990): 445. http://dx.doi.org/10.1016/0950-5849(90)90136-f.

7. Anthes, Gary. "Researchers simplify parallel programming." Communications of the ACM 57, no. 11 (October 27, 2014): 13–15. http://dx.doi.org/10.1145/2667109.

8. Ben Ahmed, M. "Programming parallel processors." Information and Software Technology 32, no. 9 (November 1990): 638–39. http://dx.doi.org/10.1016/0950-5849(90)90213-b.

9. SHEERAN, MARY. "Functional and dynamic programming in the design of parallel prefix networks." Journal of Functional Programming 21, no. 1 (December 6, 2010): 59–114. http://dx.doi.org/10.1017/s0956796810000304.

Abstract: A parallel prefix network of width n takes n inputs a_1, a_2, ..., a_n and computes each y_i = a_1 ○ a_2 ○ ... ○ a_i for 1 ≤ i ≤ n, for an associative operator ○. This is one of the fundamental problems in computer science, because it gives insight into how parallel computation can be used to solve an apparently sequential problem. As parallel programming becomes the dominant programming paradigm, parallel prefix or scan is proving to be a very important building block of parallel algorithms and applications. There are many different parallel prefix networks, with different properties such as the number of operators, the depth, and the allowed fan-out from the operators. In this paper, ideas from functional programming are combined with search to enable a deep exploration of parallel prefix network design. Networks that improve on the best known previous results are generated. It is argued that precise modelling in a functional programming language, together with simple visualization of the networks, gives a new, more experimental approach to parallel prefix network design, improving on the manual techniques typically employed in the literature. The programming idiom that marries search with higher-order functions may well have wider application than the network generation described here.
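To make the object of study concrete, here is a sketch, in Python rather than the paper's functional setting, of the prefix specification alongside Sklansky's classic divide-and-conquer network, one of the standard designs in this literature (function names are ours):

```
from operator import add

def seq_scan(op, xs):
    """Sequential inclusive prefix: y[i] = x[0] op ... op x[i]."""
    ys, acc = [], None
    for x in xs:
        acc = x if acc is None else op(acc, x)
        ys.append(acc)
    return ys

def sklansky(op, xs):
    """Sklansky's network: scan both halves (conceptually in parallel),
    then combine the last element of the left half with every element
    of the right half. Depth is O(log n), at the cost of high fan-out."""
    n = len(xs)
    if n <= 1:
        return list(xs)
    left = sklansky(op, xs[: n // 2])
    right = sklansky(op, xs[n // 2 :])
    return left + [op(left[-1], y) for y in right]

assert sklansky(add, [1, 2, 3, 4, 5]) == seq_scan(add, [1, 2, 3, 4, 5])
```

Different networks trade operator count, depth, and fan-out against one another; the paper's search explores that design space automatically.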
10. Nowicki, Marek, Magdalena Ryczkowska, Łukasz Gorski, Michał Szynkiewicz, and Piotr Bała. "PCJ - a Java Library for Heterogenous Parallel Computing." WSEAS TRANSACTIONS ON COMPUTERS 21 (March 23, 2022): 81–87. http://dx.doi.org/10.37394/23205.2022.21.12.

Abstract: With the wide adoption of multicore and multiprocessor systems, parallel programming has become a very important element of computer science. Programming multicore systems is still complicated and far from easy. The difficulties are caused, among other things, by parallel tools, libraries, and programming models that are not easy to use, especially for an inexperienced programmer. In this paper, we present PCJ, a Java library for parallel programming of heterogeneous multicore systems. PCJ adopts the Partitioned Global Address Space paradigm, which makes programming easy. We present the basic functionality of the PCJ library and its usage for the parallelization of selected applications. The scalability of a genetic algorithm implementation is presented. The parallelization of an N-body algorithm implementation with PCJ is also described.
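PCJ itself is a Java library, so the following is only a language-neutral toy illustration of the Partitioned Global Address Space idea it adopts: every process sees one global array, writes only the partition it owns, and synchronizes at barriers. All names here are ours, not PCJ's API.

```
from multiprocessing import Array, Barrier, Process

N_WORKERS, CHUNK = 4, 5

def worker(rank, shared, barrier):
    lo = rank * CHUNK
    # Phase 1: write only to the locally owned partition.
    for i in range(lo, lo + CHUNK):
        shared[i] = rank
    barrier.wait()  # superstep boundary: all writes are now visible
    # Phase 2: read remote partitions through the same global address space.
    total = sum(shared[:])
    if rank == 0:
        print("sum over global array:", total)

if __name__ == "__main__":
    shared = Array("i", N_WORKERS * CHUNK)   # the "global" array
    barrier = Barrier(N_WORKERS)
    ps = [Process(target=worker, args=(r, shared, barrier))
          for r in range(N_WORKERS)]
    for p in ps: p.start()
    for p in ps: p.join()
```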
11. POLYCHRONOPOULOS, C. D. "PARALLEL PROGRAMMING ISSUES." International Journal of High Speed Computing 05, no. 03 (September 1993): 413–73. http://dx.doi.org/10.1142/s0129053393000189.

12. Falcou, Joel. "Parallel Programming with Skeletons." Computing in Science & Engineering 11, no. 3 (May 2009): 58–63. http://dx.doi.org/10.1109/mcse.2009.57.

13. Mattson, Timothy G. "How Good is OpenMP." Scientific Programming 11, no. 2 (2003): 81–93. http://dx.doi.org/10.1155/2003/124373.

Abstract: The OpenMP standard defines an Application Programming Interface (API) for shared-memory computers. Since its introduction in 1997, it has grown to become one of the most commonly used APIs for parallel programming. But success in the market doesn't necessarily imply successful computer science. Is OpenMP a "good" programming environment? What does it even mean to call a programming environment good? And finally, once we understand how good or bad OpenMP is, what can we do to make it even better? In this paper, we will address these questions.
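OpenMP is an API for C, C++, and Fortran, so no Python rendering is faithful; still, a rough analogue of its signature construct, the parallel-for loop over independent iterations, may help fix ideas (a sketch under that caveat):

```
from concurrent.futures import ThreadPoolExecutor
import math

data = list(range(1_000))

def body(x):
    # The loop body: no dependences between iterations.
    return math.sqrt(x) * 2.0

# Roughly analogous to '#pragma omp parallel for num_threads(4)':
# the iteration space is split across a pool of threads. (In CPython
# the GIL limits actual speedup; the point is the loop-splitting idiom.)
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(body, data))
```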
14. Assenmacher, H., T. Breitbach, P. Buhler, V. Hübsch, H. Peine, and R. Schwarz. "Parallel programming in Panda." Journal of Supercomputing 9, no. 1-2 (March 1995): 71–90. http://dx.doi.org/10.1007/bf01245398.

15. GAVA, FRÉDÉRIC. "A MODULAR IMPLEMENTATION OF DATA STRUCTURES IN BULK-SYNCHRONOUS PARALLEL ML." Parallel Processing Letters 18, no. 01 (March 2008): 39–53. http://dx.doi.org/10.1142/s0129626408003211.

Abstract: A functional data-parallel language called BSML has been designed for programming Bulk-Synchronous Parallel algorithms. Many sequential algorithms do not have parallel counterparts, and many researchers outside computer science do not want to deal with parallel programming. In sequential programming environments, common data structures are often provided through reusable libraries to simplify the development of applications. A parallel representation of such data structures is thus a way to write parallel programs without suffering the disadvantages of a full parallel language's feature set. In this paper we describe a modular implementation in BSML of some data structures and show how those data types can address the needs of many potential users of parallel machines who have so far been deterred by the complexity of parallelizing code.
16. Hains, G. "Parallel Functional Programming with Arrays." Computer Journal 36, no. 3 (March 1, 1993): 238–45. http://dx.doi.org/10.1093/comjnl/36.3.238.

17. Drepper, Ulrich. "Parallel Programming with Transactional Memory." Queue 6, no. 5 (September 2008): 38–45. http://dx.doi.org/10.1145/1454456.1454464.

18. Drepper, Ulrich. "Parallel programming with transactional memory." Communications of the ACM 52, no. 2 (February 2009): 38–43. http://dx.doi.org/10.1145/1461928.1461943.

19. Nickolls, John, Ian Buck, Michael Garland, and Kevin Skadron. "Scalable Parallel Programming with CUDA." Queue 6, no. 2 (March 2008): 40–53. http://dx.doi.org/10.1145/1365490.1365500.

20. Sitsylitsyn, Yuriy. "Methods and tools for teaching parallel and distributed computing in universities: a systematic review of the literature." SHS Web of Conferences 75 (2020): 04017. http://dx.doi.org/10.1051/shsconf/20207504017.

Abstract: As computer hardware becomes more and more parallel, there is a need for software engineers who are experienced in developing parallel programs, not merely in "parallelizing" sequential designs. Teaching students parallelism in introductory computer science courses is a very important step towards building the competencies of future software engineers. We surveyed publications on "teaching parallel and distributed computing" and "parallel programming" in the Scopus database, published in English between 2008 and 2019. After quality assessment, 26 articles were included in the analysis. The review finds that the main tool for teaching parallel and distributed computing is a lab course using the C++ programming language and the MPI library.
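For flavor, here is the kind of first exercise such MPI lab courses typically assign, transliterated into Python with mpi4py (the reviewed courses use C++; the partitioning scheme below is our illustration, not taken from any surveyed course):

```
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Each rank computes a partial sum of 1..100, then rank 0 combines them.
lo = rank * 100 // size + 1
hi = (rank + 1) * 100 // size
partial = sum(range(lo, hi + 1))
total = comm.reduce(partial, op=MPI.SUM, root=0)

if rank == 0:
    print("sum 1..100 =", total)  # run with e.g.: mpiexec -n 4 python this_script.py
```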
21. Sitsylitsyn, Yuriy. "A systematic review of the literature on methods and technologies for teaching parallel and distributed computing in universities." Ukrainian Journal of Educational Studies and Information Technology 11, no. 2 (June 30, 2023): 111–21. http://dx.doi.org/10.32919/uesit.2023.02.04.

Abstract: There is a growing demand for software developers who have experience writing parallel programs rather than just "parallelizing" sequential systems, as computer hardware becomes more and more parallel. To develop the skills of future software engineers, it is crucial to teach students parallelism in introductory computer science courses. We searched the Scopus database for articles on "teaching parallel and distributed computing" and "parallel programming" published in English between 2008 and 2019. After quality review, 26 papers were included in the study. The review finds that a lab course using the C++ programming language and the MPI library serves as the primary vehicle for teaching parallel and distributed computing.
22. Gay, David, Joel Galenson, Mayur Naik, and Kathy Yelick. "Yada: Straightforward parallel programming." Parallel Computing 37, no. 9 (September 2011): 592–609. http://dx.doi.org/10.1016/j.parco.2011.02.005.

23. DeLozier, Christian, and James Shey. "Using Visual Programming Games to Study Novice Programmers." International Journal of Serious Games 10, no. 2 (June 7, 2023): 115–36. http://dx.doi.org/10.17083/ijsg.v10i2.577.

Abstract: Enabling programmers to write correct and efficient parallel code remains an important challenge, and the prevalence of on-chip accelerators exacerbates this challenge. Novice programmers, especially those in disciplines outside of Computer Science and Computer Engineering, need to be able to write code that exploits parallelism and heterogeneity, but the frameworks for writing parallel and heterogeneous programs expect expert knowledge and experience. More effort must be put into understanding how novice programmers solve parallel problems. Unfortunately, novice programmers are difficult to study because they are, by definition, novices. We have designed a visual programming language and game-based framework for studying how novice programmers solve parallel problems. This tool was used to conduct an initial study of 95 undergraduate students with little to no prior programming experience. 71% of all volunteer participants completed the study in 48 minutes on average. This study demonstrated that novice programmers could solve parallel problems, and this framework can be used to conduct more thorough studies of how novice programmers approach parallel code.
24. Rolfe, Timothy J. "A specimen of parallel programming." ACM Inroads 1, no. 4 (December 2010): 72–79. http://dx.doi.org/10.1145/1869746.1869767.

25. Luszczek, Piotr. "Parallel Programming in MATLAB." International Journal of High Performance Computing Applications 23, no. 3 (June 16, 2009): 277–83. http://dx.doi.org/10.1177/1094342009106194.

Abstract: A visit to the neighborhood PC retail store provides ample proof that we are in the multi-core era. The key differentiator among manufacturers today is the number of cores that they pack onto a single chip. The clock frequency of commodity processors has reached its limit, however, and is likely to stay below 4 GHz for years to come. As a result, adding cores is not synonymous with increasing computational power. To take full advantage of the performance enhancements offered by the new multi-core hardware, a corresponding shift must take place in the software infrastructure — a shift to parallel computing.
26. Powers, Franklin E., and Gita Alaghband. "The Hydra Parallel Programming System." Concurrency and Computation: Practice and Experience 20, no. 1 (2007): 1–27. http://dx.doi.org/10.1002/cpe.1205.

27. Huang, Miaoqing, Chenggang Lai, Xuan Shi, Zhijun Hao, and Haihang You. "Study of parallel programming models on computer clusters with Intel MIC coprocessors." International Journal of High Performance Computing Applications 31, no. 4 (April 13, 2015): 303–15. http://dx.doi.org/10.1177/1094342015580864.

Abstract: Coprocessors based on the Intel Many Integrated Core (MIC) Architecture have been adopted in many high-performance computer clusters. Typical parallel programming models, such as MPI and OpenMP, are supported on MIC processors to achieve parallelism. In this work, we conduct a detailed study of the performance and scalability of the MIC processors under different programming models using the Beacon computer cluster. Our findings are as follows. (1) The native MPI programming model on the MIC processors is typically better than the offload programming model, which offloads the workload to MIC cores using OpenMP. (2) On top of the native MPI programming model, multithreading inside each MPI process can further improve the performance of parallel applications on computer clusters with MIC coprocessors. (3) Given a fixed number of MPI processes, it is a good strategy to schedule these MPI processes on as few MIC processors as possible to reduce cross-processor communication overhead. (4) The hybrid MPI programming model, in which data processing is distributed to both MIC cores and CPU cores, can outperform the native MPI programming model.
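Finding (2), MPI across processors plus multithreading within each MPI process, can be sketched as follows. This uses mpi4py with a thread pool standing in for OpenMP threads; it is our illustration of the model's shape, not the paper's benchmark code, and the MIC/Beacon specifics are the paper's alone.

```
from concurrent.futures import ThreadPoolExecutor
from mpi4py import MPI

THREADS_PER_RANK = 4

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Coarse-grained decomposition: distribute elements across MPI ranks.
chunk = [x for x in range(1_000_000) if x % size == rank]

def piece(sub):
    return sum(v * v for v in sub)

# Fine-grained parallelism inside each rank via threads (the GIL limits
# CPython speedup here; the structure is what the model prescribes).
parts = [chunk[i::THREADS_PER_RANK] for i in range(THREADS_PER_RANK)]
with ThreadPoolExecutor(max_workers=THREADS_PER_RANK) as pool:
    local = sum(pool.map(piece, parts))

total = comm.reduce(local, op=MPI.SUM, root=0)
if rank == 0:
    print("global sum of squares:", total)
```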
28. Peyton Jones, S. L. "Parallel Implementations of Functional Programming Languages." Computer Journal 32, no. 2 (February 1, 1989): 175–86. http://dx.doi.org/10.1093/comjnl/32.2.175.

29. Skillicorn, D. B. "Practical Concurrent Programming for Parallel Machines." Computer Journal 34, no. 4 (April 1, 1991): 302–10. http://dx.doi.org/10.1093/comjnl/34.4.302.

30. Canetti, R., L. P. Fertig, S. A. Kravitz, D. Malki, R. Y. Pinter, S. Porat, and A. Teperman. "The parallel C (pC) programming language." IBM Journal of Research and Development 35, no. 5.6 (September 1991): 727–41. http://dx.doi.org/10.1147/rd.355.0727.

31. McKenney, Paul E. "Selecting locking primitives for parallel programming." Communications of the ACM 39, no. 10 (October 1996): 75–82. http://dx.doi.org/10.1145/236156.236174.

32. TALIA, DOMENICO. "MODELS AND TRENDS IN PARALLEL PROGRAMMING." Parallel Algorithms and Applications 16, no. 2 (January 2001): 145–80. http://dx.doi.org/10.1080/01495730108935270.

33. Carvalho, Tiago, Hugo Silva, and Luís Miguel Pinho. "A Real-Time Parallel Programming Approach for Rust." ACM SIGAda Ada Letters 43, no. 2 (June 6, 2024): 57–61. http://dx.doi.org/10.1145/3672359.3672366.

Abstract: The development of real-time systems is one of the most relevant areas in computer science, and the number of critical systems has increased significantly. These systems consist of several applications running concurrently, and inside each application code may be parallelized to improve performance and to control the priority of each parallelizable task. Several efforts have been made in different programming languages to provide real-time systems with parallel programming models, whether by code extensions or annotations, or with specific features in the language core.
34. Hains, Gaétan, Frédéric Loulergue, and John Mullins. "Concrete data structures and functional parallel programming." Theoretical Computer Science 258, no. 1-2 (May 2001): 233–67. http://dx.doi.org/10.1016/s0304-3975(00)00010-4.

35. Xu, Zhiwei, Yongqiang He, Wei Lin, and Li Zha. "Four styles of parallel and net programming." Frontiers of Computer Science in China 3, no. 3 (May 16, 2009): 290–301. http://dx.doi.org/10.1007/s11704-009-0028-0.

36. Eisenbach, Susan, and Chris Sadler. "Parallel architecture for functional programming." Information and Software Technology 30, no. 6 (July 1988): 355–64. http://dx.doi.org/10.1016/0950-5849(88)90013-4.

37. Traxler, J. "Parallel programming: a new approach." Information and Software Technology 35, no. 11-12 (November 1993): 702–3. http://dx.doi.org/10.1016/0950-5849(93)90093-i.

38. Parallax. "The bards on parallel programming." International Journal of Parallel Programming 15, no. 3 (June 1986): 277. http://dx.doi.org/10.1007/bf01414557.

39. Arbab, F., P. Ciancarini, and C. Hankin. "Coordination languages for parallel programming." Parallel Computing 24, no. 7 (July 1998): 989–1004. http://dx.doi.org/10.1016/s0167-8191(98)00039-8.

40. Fisher, Allan L., and Thomas Gross. "Teaching the programming of parallel computers." ACM SIGCSE Bulletin 23, no. 1 (March 1991): 102–7. http://dx.doi.org/10.1145/107005.107024.

41. ESHAGHIAN, MARY M., and MUHAMMAD E. SHAABAN. "CLUSTER-M PARALLEL PROGRAMMING PARADIGM." International Journal of High Speed Computing 06, no. 02 (June 1994): 287–309. http://dx.doi.org/10.1142/s0129053394000147.

42. Seevers, Bradley K., Michael J. Quinn, and Philip J. Hatcher. "A parallel programming environment supporting multiple data-parallel modules." International Journal of Parallel Programming 21, no. 5 (October 1992): 363–86. http://dx.doi.org/10.1007/bf01407837.

43. Rosing, Matthew, Robert B. Schnabel, and Robert P. Weaver. "The DINO parallel programming language." Journal of Parallel and Distributed Computing 13, no. 1 (September 1991): 30–42. http://dx.doi.org/10.1016/0743-7315(91)90107-k.

44. Anisimov, A. V., and P. P. Kulyabko. "Programming of parallel processors in control spaces." Cybernetics 20, no. 3 (1985): 404–18. http://dx.doi.org/10.1007/bf01068474.

45. Trevisan, L. "Parallel Approximation Algorithms by Positive Linear Programming." Algorithmica 21, no. 1 (May 1998): 72–88. http://dx.doi.org/10.1007/pl00009209.

46. KRUSCHE, PETER. "EXPERIMENTAL EVALUATION OF BSP PROGRAMMING LIBRARIES." Parallel Processing Letters 18, no. 01 (March 2008): 7–21. http://dx.doi.org/10.1142/s0129626408003193.

Abstract: The model of bulk-synchronous parallel computation (BSP) helps to implement portable general-purpose algorithms while maintaining predictable performance on different parallel computers. Nevertheless, when programming in 'BSP style', the running time of the implementation of an algorithm can depend strongly on the underlying communication library. In this study, an overview of existing approaches to practical BSP programming in C/C++ or Fortran is given, and benchmarks are run for the two main BSP-like communication libraries, the Oxford BSP Toolset and PUB. Furthermore, a memory-efficient matrix multiplication algorithm is implemented and used to compare their performance on different parallel computers and to evaluate compliance with theoretical predictions.
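The libraries benchmarked here target C/C++ and Fortran, but the BSP structure they implement, supersteps of local computation and communication separated by global barriers, can be shown with a toy Python sketch (all names are ours; this is not either library's API):

```
from multiprocessing import Barrier, Process, Queue

P = 4  # number of BSP processes

def bsp_worker(pid, inboxes, barrier):
    # Superstep 1: local computation.
    value = pid * pid
    # Communication phase: send to the right-hand neighbour.
    inboxes[(pid + 1) % P].put(value)
    barrier.wait()  # the barrier marks the end of the superstep
    # Superstep 2: consume the value received from the left neighbour.
    received = inboxes[pid].get()
    print(f"process {pid} received {received}")

if __name__ == "__main__":
    inboxes = [Queue() for _ in range(P)]
    barrier = Barrier(P)
    ps = [Process(target=bsp_worker, args=(i, inboxes, barrier))
          for i in range(P)]
    for p in ps: p.start()
    for p in ps: p.join()
```

The cost model the BSP literature builds on (computation plus g-scaled communication volume plus barrier latency l per superstep) is exactly what makes the library choice matter, which is the effect the paper measures.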
47. Mattson, Timothy G., Todd A. Anderson, Giorgis Georgakoudis, Konrad Hinsen, and Anshu Dubey. "PyOMP: Multithreaded Parallel Programming in Python." Computing in Science & Engineering 23, no. 6 (November 1, 2021): 77–80. http://dx.doi.org/10.1109/mcse.2021.3128806.

48. Carpenter, B., and G. Fox. "HPJava: A data parallel programming alternative." Computing in Science & Engineering 5, no. 3 (May 2003): 60–64. http://dx.doi.org/10.1109/mcise.2003.1196308.

49. Adamo, J. M., and L. Trejo. "Programming Environment for Phase-Reconfigurable Parallel Programming on SuperNode." Journal of Parallel and Distributed Computing 23, no. 3 (December 1994): 278–92. http://dx.doi.org/10.1006/jpdc.1994.1140.

50. Pan, Lei, Ming Kin Lai, Koji Noguchi, Javid J. Huseynov, Lubomir F. Bic, and Michael B. Dillencourt. "Distributed Parallel Computing Using Navigational Programming." International Journal of Parallel Programming 32, no. 1 (February 2004): 1–37. http://dx.doi.org/10.1023/b:ijpp.0000015563.36375.17.
