Academic literature on the topic 'Parallel programming (Computer science)'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Parallel programming (Computer science).'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Parallel programming (Computer science)"
Shiau, Liejune. "Exploring Quasi-Concurrency in Introductory Computer Science." Journal of Educational Computing Research 15, no. 1 (July 1996): 53–66. http://dx.doi.org/10.2190/7ldf-va2r-vk66-qq8d.
Graham, John R. "Integrating parallel programming techniques into traditional computer science curricula." ACM SIGCSE Bulletin 39, no. 4 (December 2007): 75–78. http://dx.doi.org/10.1145/1345375.1345419.
MORIARTY, K. J. M., and T. TRAPPENBERG. "PROGRAMMING TOOLS FOR PARALLEL COMPUTERS." International Journal of Modern Physics C 04, no. 06 (December 1993): 1285–94. http://dx.doi.org/10.1142/s0129183193001002.
Clark, K. L. "Parallel Logic Programming." Computer Journal 33, no. 6 (June 1, 1990): 482–93. http://dx.doi.org/10.1093/comjnl/33.6.482.
Blelloch, Guy E. "Programming parallel algorithms." Communications of the ACM 39, no. 3 (March 1996): 85–97. http://dx.doi.org/10.1145/227234.227246.
Ben Ahmed, M. "Parallel programming." Information and Software Technology 32, no. 6 (July 1990): 445. http://dx.doi.org/10.1016/0950-5849(90)90136-f.
Anthes, Gary. "Researchers simplify parallel programming." Communications of the ACM 57, no. 11 (October 27, 2014): 13–15. http://dx.doi.org/10.1145/2667109.
Ben Ahmed, M. "Programming parallel processors." Information and Software Technology 32, no. 9 (November 1990): 638–39. http://dx.doi.org/10.1016/0950-5849(90)90213-b.
SHEERAN, MARY. "Functional and dynamic programming in the design of parallel prefix networks." Journal of Functional Programming 21, no. 1 (December 6, 2010): 59–114. http://dx.doi.org/10.1017/s0956796810000304.
Nowicki, Marek, Magdalena Ryczkowska, Łukasz Gorski, Michał Szynkiewicz, and Piotr Bała. "PCJ - a Java Library for Heterogenous Parallel Computing." WSEAS TRANSACTIONS ON COMPUTERS 21 (March 23, 2022): 81–87. http://dx.doi.org/10.37394/23205.2022.21.12.
Dissertations / Theses on the topic "Parallel programming (Computer science)"
Gamble, James Graham. "Explicit parallel programming." Thesis, This resource online, 1990. http://scholar.lib.vt.edu/theses/available/etd-06082009-171019/.
Roe, Paul. "Parallel programming using functional languages." Thesis, Connect to e-thesis, 1991. http://theses.gla.ac.uk/1052.
Handler, Caroline. "Parallel process placement." Thesis, Rhodes University, 1989. http://hdl.handle.net/10962/d1002033.
Bergstrom, Lars. "Parallel functional programming with mutable state." Thesis, The University of Chicago, 2013. http://pqdtopen.proquest.com/#viewpdf?dispub=3568360.
Full textImmutability greatly simplifies the implementation of parallel languages. In the absence of mutable state the language implementation is free to perform parallel operations with fewer locks and fewer restrictions on scheduling and data replication. In the Manticore project, we have achieved nearly perfect speedups across both Intel and AMD manycore machines on a variety of benchmarks using this approach.
There are parallel stateful algorithms, however, that exhibit significantly better performance than the corresponding parallel algorithm without mutable state. For example, in many search problems, the same problem configuration can be reached through multiple execution paths. Parallel stateful algorithms share the results of evaluating the same configuration across threads, but parallel mutation-free algorithms are required to either duplicate work or thread their state through a sequential store. Additionally, in algorithms where each parallel task mutates an independent portion of the data, non-conflicting mutations can be performed in parallel. The parallel state-free algorithm will have to merge each of those changes individually, which is a sequential operation at each step.
In this dissertation, we extend Manticore with two techniques that address these problems while preserving its current scalability. Memoization, also known as function caching, is a technique that stores previously returned values from functions, making them available to parallel threads of execution that call the same function with the same values. We have taken this deterministic technique and combined it with a high-performance implementation of a dynamically sized, parallel hash table to provide scalable performance. We have also added mutable state along with two execution models, one of which is deterministic, that allow the user to share arbitrary results across parallel threads; both models preserve the ability to reason locally about the behavior of code.
For both of these techniques, we present a detailed description of their implementations, examine a set of relevant benchmarks, and specify their semantics.
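The core idea behind the memoization technique summarized above is that parallel tasks share previously computed results through a single table, so work reached via multiple execution paths is done only once. The following Python sketch is illustrative only: the `memo_fib` function, the lock-guarded dictionary standing in for the dissertation's dynamically sized parallel hash table, and the Fibonacci workload are all assumptions for demonstration, not code from the thesis.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

# Shared memo table guarded by a lock: a simple stand-in for the
# dynamically sized, parallel hash table described in the abstract.
memo = {}
memo_lock = threading.Lock()

def memo_fib(n):
    """Memoized Fibonacci: parallel callers reuse each other's results."""
    with memo_lock:
        if n in memo:
            return memo[n]
    # Lock is released during the recursive computation itself.
    result = n if n < 2 else memo_fib(n - 1) + memo_fib(n - 2)
    with memo_lock:
        memo.setdefault(n, result)
    return result

with ThreadPoolExecutor(max_workers=4) as pool:
    # Many tasks reach the same subproblems; the shared table lets
    # threads share work instead of duplicating it.
    results = list(pool.map(memo_fib, [25, 30, 30, 35]))

print(results)  # [75025, 832040, 832040, 9227465]
```

The same configuration (here, the same `n`) reached from different tasks is computed once and then served from the table, which is exactly the advantage the abstract claims stateful parallel algorithms have over mutation-free ones in search problems.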
Lee, I.-Ting Angelina. "Memory abstractions for parallel programming." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/75636.
A memory abstraction is an abstraction layer between the program execution and the memory that provides a different "view" of a memory location depending on the execution context in which the memory access is made. Properly designed memory abstractions help ease the task of parallel programming by mitigating the complexity of synchronization or admitting more efficient use of resources. This dissertation describes five memory abstractions for parallel programming: (i) cactus stacks that interoperate with linear stacks, (ii) efficient reducers, (iii) reducer arrays, (iv) ownership-aware transactions, and (v) location-based memory fences. To demonstrate the utility of memory abstractions, my collaborators and I developed Cilk-M, a dynamically multithreaded concurrency platform which embodies the first three memory abstractions. Many dynamic multithreaded concurrency platforms incorporate cactus stacks to support multiple stack views for all the active children simultaneously. The use of cactus stacks, albeit essential, forces concurrency platforms to trade off between performance, memory consumption, and interoperability with serial code due to their incompatibility with linear stacks. This dissertation proposes a new strategy to build a cactus stack using thread-local memory mapping (or TLMM), which enables Cilk-M to satisfy all three criteria simultaneously. A reducer hyperobject allows different branches of a dynamic multithreaded program to maintain coordinated local views of the same nonlocal variable. With reducers, one can use nonlocal variables in a parallel computation without restructuring the code or introducing races. This dissertation introduces memory-mapped reducers, which admit much more efficient access than existing implementations. When used in large quantities, however, reducers incur unnecessarily high overhead in execution time and space consumption.
This dissertation describes support for reducer arrays, which offer the same functionality as an array of reducers with significantly less overhead. Transactional memory is a high-level synchronization mechanism, designed to be easier to use and more composable than fine-grained locking. This dissertation presents ownership-aware transactions, the first transactional memory design that provides provable safety guarantees for "open-nested" transactions. On architectures that implement memory models weaker than sequential consistency, programs communicating via shared memory must employ memory fences to ensure correct execution. This dissertation examines the concept of location-based memory fences, which, unlike traditional memory fences, incur latency only when synchronization is necessary.
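A reducer hyperobject, as described in this abstract, gives each parallel branch its own local view of a shared variable and combines the views with an associative operation, so no restructuring or locking of the shared total is needed. The Python sketch below illustrates only that general idea; the `SumReducer` class and the worker setup are assumptions for demonstration and are not Cilk-M's actual interface.

```python
import threading

class SumReducer:
    """Toy reducer: each thread accumulates into its own private view,
    and the views are combined with an associative operation (+),
    so no data races occur on a shared total."""
    def __init__(self):
        self._views = {}  # thread id -> local partial sum

    def add(self, value):
        tid = threading.get_ident()
        # Each thread only ever touches its own view.
        self._views[tid] = self._views.get(tid, 0) + value

    def reduce(self):
        # Combine all local views into the final value.
        return sum(self._views.values())

reducer = SumReducer()

def worker(chunk):
    for x in chunk:
        reducer.add(x)  # no lock needed: updates stay thread-local

data = list(range(1, 101))
threads = [threading.Thread(target=worker, args=(data[i::4],))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(reducer.reduce())  # 5050
```

Because addition is associative, the final result is the same regardless of how the work was divided among threads, which is what lets reducers preserve serial semantics without introducing races.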
Child, Christopher H. T. "Approximate dynamic programming with parallel stochastic planning operators." Thesis, City University London, 2011. http://openaccess.city.ac.uk/1109/.
Lewis, E. Christopher. "Achieving robust performance in parallel programming languages." Thesis, Connect to this title online; UW restricted, 2001. http://hdl.handle.net/1773/6996.
Ding, Weiren. "Selsyn-C: a self-synchronizing parallel programming language." Thesis, McGill University, 1992. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=22494.
We outline our approach by presenting: (1) our motivation, (2) an overview of the extensions to C that form the SELSYN-C programming language, and (3) the development of a new scheduling mechanism that can be used to effectively compile SELSYN-C programs for a real parallel processor, the BBN Butterfly GP-1000. Different scheduling strategies for this mechanism were studied in several experiments, and the results are reported.
A source-to-source compiler supporting the SELSYN-C language has been implemented. Included in this thesis is a description of both the compiler and associated run-time environment.
Vaudin, John. "A unified programming system for a multi-paradigm parallel architecture." Thesis, University of Warwick, 1991. http://wrap.warwick.ac.uk/108849/.
Dazzi, Patrizio. "Tools and models for high level parallel and Grid programming." Thesis, IMT Alti Studi Lucca, 2008. http://e-theses.imtlucca.it/12/1/Dazzi_phdthesis.pdf.
Books on the topic "Parallel programming (Computer science)"
Baker, Louis. Parallel programming. New York: McGraw-Hill, 1996.
Bauer, Barr E. Practical parallel programming. San Diego: Academic Press, 1992.
Perrott, Ronald H. Parallel programming. Wokingham, England: Addison-Wesley Pub. Co., 1987.
Polychronopoulos, C. D. Parallel Programming and Compilers. Boston, MA: Springer US, 1988.
Sanders, Beverly A., and Berna Massingill, eds. Patterns for parallel programming. Boston: Addison-Wesley, 2005.
Babb, Robert G., ed. Programming parallel processors. Reading, Mass: Addison-Wesley Pub. Co., 1988.
Wilson, Greg. Practical parallel programming. Cambridge, Mass: MIT Press, 1995.
Brawer, Steven. Introduction to parallel programming. Boston: Academic Press, 1989.
Wilson, Greg, and Paul Lu, eds. Parallel programming using C++. Cambridge, Mass: MIT Press, 1996.
Michael, Allen C., ed. Parallel programming: Techniques and applications using networked workstations and parallel computers. Upper Saddle River, N.J: Prentice Hall, 1999.
Book chapters on the topic "Parallel programming (Computer science)"
Takeuchi, Akikazu, and Koichi Furukawa. "Parallel logic programming languages." In Lecture Notes in Computer Science, 242–54. Berlin, Heidelberg: Springer Berlin Heidelberg, 1986. http://dx.doi.org/10.1007/3-540-16492-8_79.
Veldhorst, Marinus. "Parallel dynamic programming algorithms." In Lecture Notes in Computer Science, 393–402. Berlin, Heidelberg: Springer Berlin Heidelberg, 1986. http://dx.doi.org/10.1007/3-540-16811-7_194.
Downey, Carlton, and Mengjie Zhang. "Parallel Linear Genetic Programming." In Lecture Notes in Computer Science, 178–89. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-20407-4_16.
Bal, Henri E. "Languages for parallel programming." In Lecture Notes in Computer Science, 1–23. Berlin, Heidelberg: Springer Berlin Heidelberg, 1991. http://dx.doi.org/10.1007/3-540-54132-2_47.
Frachtenberg, Eitan, Kei Davis, Fabrizio Petrini, Juan Fernandez, and José Carlos Sancho. "Designing Parallel Operating Systems via Parallel Programming." In Lecture Notes in Computer Science, 689–96. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-27866-5_90.
Darlington, J., A. J. Field, P. G. Harrison, P. H. J. Kelly, D. W. N. Sharp, Q. Wu, and R. L. While. "Parallel programming using skeleton functions." In Lecture Notes in Computer Science, 146–60. Berlin, Heidelberg: Springer Berlin Heidelberg, 1993. http://dx.doi.org/10.1007/3-540-56891-3_12.
Bo, Cheng, Guo Zhenyu, Bai Zhifeng, and Cao Binggang. "Parallel Chaos Immune Evolutionary Programming." In Lecture Notes in Computer Science, 224–32. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11941439_26.
Hale, Roger, and Ben Moszkowski. "Parallel programming in Temporal Logic." In Lecture Notes in Computer Science, 277–96. Berlin, Heidelberg: Springer Berlin Heidelberg, 1987. http://dx.doi.org/10.1007/3-540-17945-3_16.
Kasim, Henry, Verdi March, Rita Zhang, and Simon See. "Survey on Parallel Programming Model." In Lecture Notes in Computer Science, 266–75. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-88140-7_24.
Cole, Murray. "Why Structured Parallel Programming Matters." In Lecture Notes in Computer Science, 37. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-27866-5_4.
Conference papers on the topic "Parallel programming (Computer science)"
Conte, Davi Jose, Paulo Sergio Lopes de Souza, Guilherme Martins, and Sarita Mazzini Bruschi. "Teaching Parallel Programming for Beginners in Computer Science." In 2020 IEEE Frontiers in Education Conference (FIE). IEEE, 2020. http://dx.doi.org/10.1109/fie44824.2020.9274155.
Ciesko, Jan, David Poliakoff, Daisy S. Hollman, Christian C. Trott, and Damien Lebrun-Grandie. "Towards Generic Parallel Programming in Computer Science Education with Kokkos." In 2020 IEEE/ACM Workshop on Education for High-Performance Computing (EduHPC). IEEE, 2020. http://dx.doi.org/10.1109/eduhpc51895.2020.00010.
Ciesko, Jan. "Towards Generic Parallel Programming in Computer Science Education with Kokkos." In Proposed for presentation at the EduHPC @ SC20 held November 9-19, 2020. US DOE, 2020. http://dx.doi.org/10.2172/1831033.
Yin Qiu, Xiang Ding, and Dong Shao. "Productive parallel programming with Pomelo." In 2011 International Conference on Computer Science and Service System (CSSS). IEEE, 2011. http://dx.doi.org/10.1109/csss.2011.5974814.
Vasconcelos, Leonardo B. A., Felipe A. L. Soares, Pedro Henrique M. M. Penna, Max V. Machado, Luis Fabricio W. Goes, Carlos Augusto P. S. Martins, and Henrique C. Freitas. "Teaching Parallel Programming to Freshmen in an Undergraduate Computer Science Program." In 2019 IEEE Frontiers in Education Conference (FIE). IEEE, 2019. http://dx.doi.org/10.1109/fie43999.2019.9028566.
Chen, Yinong, and Gennaro De Luca. "VIPLE: Visual IoT/Robotics Programming Language Environment for Computer Science Education." In 2016 IEEE International Parallel and Distributed Processing Symposium: Workshops (IPDPSW). IEEE, 2016. http://dx.doi.org/10.1109/ipdpsw.2016.55.
Hou, Ke, Jing Zhang, and Jun-huai Li. "Review of data-parallel programming model." In 2012 7th International Conference on Computer Science & Education (ICCSE 2012). IEEE, 2012. http://dx.doi.org/10.1109/iccse.2012.6295154.
Luo, Zuomin, Quanfa Zheng, Xinhong Hei, and Nasser Giacaman. "Parallel Programming Based on Microsoft.NET TPL." In 2nd International Conference on Computer Science and Electronics Engineering (ICCSEE 2013). Paris, France: Atlantis Press, 2013. http://dx.doi.org/10.2991/iccsee.2013.129.
Zhou, Bei, Lei Wang, and Yong-Zhong Huang. "TSPI: A Tuplespace Based Parallel Programming Library." In 2008 International Conference on Computer Science and Information Technology. IEEE, 2008. http://dx.doi.org/10.1109/iccsit.2008.71.
Goldweber, Mikey. "Session details: Recursion and parallel programming." In ITiCSE '08: 13th Annual Conference on Innovation and Technology in Computer Science Education. New York, NY, USA: ACM, 2008. http://dx.doi.org/10.1145/3255586.
Reports on the topic "Parallel programming (Computer science)"
Markova, Oksana M., Serhiy O. Semerikov, Andrii M. Striuk, Hanna M. Shalatska, Pavlo P. Nechypurenko, and Vitaliy V. Tron. Implementation of cloud service models in training of future information technology specialists. [n.p.], September 2019. http://dx.doi.org/10.31812/123456789/3270.
Anderson, Loren James, and Marion Kei Davis. Functional Programming in Computer Science. Office of Scientific and Technical Information (OSTI), January 2016. http://dx.doi.org/10.2172/1237221.
Bailey, David, James Demmel, Khaled Ibrahim, Alex Kaiser, Alice Koniges, Kamesh Madduri, John Shalf, Erich Strohmaier, and Samuel Williams. A Testbed of Parallel Kernels for Computer Science Research. Office of Scientific and Technical Information (OSTI), April 2010. http://dx.doi.org/10.2172/983273.
Perumalla, Kalyan, Maximilian Bremer, Kevin Brown, Cy Chan, Stephan Eidenbenz, K. Scott Hemmert, Adolfy Hoisie, et al. Computer Science Research Needs for Parallel Discrete Event Simulation (PDES). Office of Scientific and Technical Information (OSTI), May 2022. http://dx.doi.org/10.2172/1855247.
Perumalla, K., M. Bremer, K. Brown, C. Chan, S. Eidenbenz, K. Hemmert, A. Hoisie, et al. Computer Science Research Needs for Parallel Discrete Event Simulation (PDES). Office of Scientific and Technical Information (OSTI), September 2022. http://dx.doi.org/10.2172/1889525.
Proskura, Svitlana L., and Svitlana H. Lytvynova. The approaches to Web-based education of computer science bachelors in higher education institutions. [n.p.], July 2020. http://dx.doi.org/10.31812/123456789/3892.
Hlushak, Oksana M., Volodymyr V. Proshkin, and Oksana S. Lytvyn. Using the e-learning course "Analytic Geometry" in the process of training students majoring in Computer Science and Information Technology. [n.p.], September 2019. http://dx.doi.org/10.31812/123456789/3268.
Tkachuk, Viktoriia V., Vadym P. Shchokin, and Vitaliy V. Tron. The Model of Use of Mobile Information and Communication Technologies in Learning Computer Sciences to Future Professionals in Engineering Pedagogy. [n.p.], November 2018. http://dx.doi.org/10.31812/123456789/2668.
Modlo, Yevhenii O., Serhiy O. Semerikov, Pavlo P. Nechypurenko, Stanislav L. Bondarevskyi, Olena M. Bondarevska, and Stanislav T. Tolmachev. The use of mobile Internet devices in the formation of ICT component of bachelors in electromechanics competency in modeling of technical objects. [n.p.], September 2019. http://dx.doi.org/10.31812/123456789/3264.
Balyk, Nadiia, Svitlana Leshchuk, and Dariia Yatsenyak. Developing a Mini Smart House model. [n.p.], February 2020. http://dx.doi.org/10.31812/123456789/3741.