Academic literature on the topic 'Parallel programming; Sequential'


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Parallel programming; Sequential.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Parallel programming; Sequential"

1

Cheang, Sin Man, Kwong Sak Leung, and Kin Hong Lee. "Genetic Parallel Programming: Design and Implementation." Evolutionary Computation 14, no. 2 (June 2006): 129–56. http://dx.doi.org/10.1162/evco.2006.14.2.129.

Full text
Abstract:
This paper presents a novel Genetic Parallel Programming (GPP) paradigm for evolving parallel programs running on a Multi-Arithmetic-Logic-Unit (Multi-ALU) Processor (MAP). The MAP is a Multiple Instruction-streams, Multiple Data-streams (MIMD), general-purpose register machine that can be implemented on modern Very Large-Scale Integrated Circuits (VLSIs) in order to evaluate genetic programs at high speed. For human programmers, writing parallel programs is more difficult than writing sequential programs. However, experimental results show that GPP evolves parallel programs with less computational effort than their sequential counterparts. It creates a new approach of evolving a feasible problem solution in parallel program form and then serializing it into a sequential program if required. The effectiveness and efficiency of GPP are investigated using a suite of 14 well-studied benchmark problems. Experimental results show that GPP speeds up evolution substantially.
APA, Harvard, Vancouver, ISO, and other styles
2

Baravykaite, M., and R. Šablinskas. "THE TEMPLATE PROGRAMMING OF PARALLEL ALGORITHMS." Mathematical Modelling and Analysis 7, no. 1 (June 30, 2002): 11–20. http://dx.doi.org/10.3846/13926292.2002.9637173.

Full text
Abstract:
Parallel programming tools and packages are evolving rapidly. However, the complexity of parallel thinking prevents many algorithms from being implemented for the end user. In most cases only expert programmers risk involving themselves in parallel programming and program debugging. In this paper we extend the ideas from [3] of template programming for a certain class of problems which can be solved using the general master-slave paradigm. The template is suitable for solving problem sets of coarse-grain and middle-grain granularity. In fact, it can be applied to solve any problem P which is decomposable into a set of tasks P = ⋃_{i=0}^{N} t_i. The most effective application cases are obtained for problems where all t_i are independent. Template programming places some requirements on the sequential version of the user program: the main program must comprise several code blocks (data initialization, computation of one task t_i, and processing of the result), and the user has to define the data structures for the initial data, the data of one task, and the result data. These requirements do not force the user to rewrite the existing sequential code, only to organize it into logical parts. Once these requirements (and naming conventions) are fulfilled, the parallel version of the code is obtained automatically by compiling and linking the code with the Master-Slave Template library. In this paper we introduce the idea of template programming and describe the layer structure of the Master-Slave Template library. We show how the user has to adjust the sequential code to obtain a valid parallel version of the initial program. We also give examples of the prime number search problem and the Mandelbrot set calculation problem.
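The structure the abstract requires, three user-supplied blocks (data initialization, computation of one task t_i, processing of the result) that the template can run either sequentially or in parallel, can be sketched as follows. All names are hypothetical, and a Python thread pool stands in for the Master-Slave Template library:

```python
from concurrent.futures import ThreadPoolExecutor

def init_tasks(n):
    # data initialization: decompose problem P into independent tasks t_i
    return list(range(n))

def compute_task(t):
    # computation of one task t_i (a stand-in workload for illustration)
    return t * t

def process_results(results):
    # processing of the result
    return sum(results)

def run_sequential(n):
    # the unmodified sequential structure of the user program
    return process_results([compute_task(t) for t in init_tasks(n)])

def run_master_slave(n, workers=4):
    # the parallel version reuses the same three blocks unchanged:
    # the "master" distributes tasks t_i to a pool of "slave" workers
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return process_results(list(pool.map(compute_task, init_tasks(n))))
```

Because all t_i are independent here, both versions compute the same result, which is exactly the property the template exploits.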
APA, Harvard, Vancouver, ISO, and other styles
3

del Rio Astorga, David, Manuel F. Dolz, Luis Miguel Sánchez, J. Daniel García, Marco Danelutto, and Massimo Torquati. "Finding parallel patterns through static analysis in C++ applications." International Journal of High Performance Computing Applications 32, no. 6 (March 9, 2017): 779–88. http://dx.doi.org/10.1177/1094342017695639.

Full text
Abstract:
Since the ‘free lunch’ of processor performance is over, parallelism has become the new trend in hardware and architecture design. However, parallel resources deployed in data centers are underused in many cases, given that sequential programming is still deeply rooted in current software development. To address this problem, new methodologies and techniques for parallel programming have been progressively developed. For instance, parallel frameworks, offering programming patterns, allow expressing concurrency in applications to better exploit parallel hardware. Nevertheless, a large portion of production software, from a broad range of scientific and industrial areas, is still developed sequentially. Considering that these software modules contain thousands, or even millions, of lines of code, an extremely large amount of effort is needed to identify parallel regions. To pave the way in this area, this paper presents Parallel Pattern Analyzer Tool, a software component that aids the discovery and annotation of parallel patterns in source codes. This tool simplifies the transformation of sequential source code to parallel. Specifically, we provide support for identifying Map, Farm, and Pipeline parallel patterns and evaluate the quality of the detection for a set of different C++ applications.
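The kind of transformation the abstract describes, from a sequential loop with independent iterations to a Map pattern, might look like this minimal sketch (illustrative only; this is not output of the Parallel Pattern Analyzer Tool, and the function names are invented):

```python
from concurrent.futures import ThreadPoolExecutor

def scale_sequential(xs, k):
    # the shape a static analyzer can recognize as a Map pattern:
    # no loop-carried dependence, each iteration reads only its own element
    out = []
    for x in xs:
        out.append(k * x)
    return out

def scale_map(xs, k, workers=4):
    # the same computation rewritten as a Map: iterations run in parallel
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda x: k * x, xs))
```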
APA, Harvard, Vancouver, ISO, and other styles
4

GAVA, FRÉDÉRIC. "A MODULAR IMPLEMENTATION OF DATA STRUCTURES IN BULK-SYNCHRONOUS PARALLEL ML." Parallel Processing Letters 18, no. 01 (March 2008): 39–53. http://dx.doi.org/10.1142/s0129626408003211.

Full text
Abstract:
A functional data-parallel language called BSML has been designed for programming Bulk-Synchronous Parallel algorithms. Many sequential algorithms do not have parallel counterparts and many non-computer science researchers do not want to deal with parallel programming. In sequential programming environments, common data structures are often provided through reusable libraries to simplify the development of applications. A parallel representation of such data structures is thus a solution for writing parallel programs without suffering from disadvantages of all the features of a parallel language. In this paper we describe a modular implementation in BSML of some data structures and show how those data types can address the needs of many potential users of parallel machines who have so far been deterred by the complexity of parallelizing code.
APA, Harvard, Vancouver, ISO, and other styles
5

LOOGEN, RITA, YOLANDA ORTEGA-MALLÉN, and RICARDO PEÑA-MARÍ. "Parallel functional programming in Eden." Journal of Functional Programming 15, no. 3 (May 2005): 431–75. http://dx.doi.org/10.1017/s0956796805005526.

Full text
Abstract:
Eden extends the non-strict functional language Haskell with constructs to control parallel evaluation of processes. Although processes are defined explicitly, communication and synchronisation issues are handled in a way transparent to the programmer. In order to offer effective support for parallel evaluation, Eden's coordination constructs override the inherently sequential demand-driven (lazy) evaluation strategy of its computation language Haskell. Eden is a general-purpose parallel functional language suitable for developing sophisticated skeletons – which simplify parallel programming immensely – as well as for exploiting more irregular parallelism that cannot easily be captured by a predefined skeleton. The paper gives a comprehensive description of Eden, its semantics, its skeleton-based programming methodology – which is applied in three case studies – its implementation and performance. Furthermore it points at many additional results that have been achieved in the context of the Eden project.
APA, Harvard, Vancouver, ISO, and other styles
6

Li, Xiang, Fei Li, and Chang Hao Wang. "Research of Parallel Processing Technology Based on Multi-Core." Applied Mechanics and Materials 182-183 (June 2012): 639–43. http://dx.doi.org/10.4028/www.scientific.net/amm.182-183.639.

Full text
Abstract:
In this paper, five typical multi-core processors are compared with respect to threading, cache, inter-core interconnect, etc. Two kinds of multi-core programming environments and some new programming languages are introduced. Thread-level speculation (TLS) and transactional memory (TM) are introduced to address the parallelization of sequential programs. TLS automatically analyzes a sequential program, speculates on the parts that can be executed in parallel, and then automatically generates parallel code. TM systems provide an efficient and easy mechanism for parallel programming on multi-core processors. Typical TM systems such as TCC, UTM, LogTM, LogTM-SE and SigTM are introduced. Combining TLS and TM can further improve sequential programs running on multi-core processors. Typical TM systems extended to support TLS, such as TCC, TTM, PTT and STMlite, are introduced.
APA, Harvard, Vancouver, ISO, and other styles
7

Jézéquel, J. M., F. Bergheul, and F. André. "Programming massively parallel architectures with sequential object oriented languages." Future Generation Computer Systems 10, no. 1 (April 1994): 59–70. http://dx.doi.org/10.1016/0167-739x(94)90051-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Chen, Zhong. "Parallel Iterative Methods for Nonlinear Programming Problems." Advanced Materials Research 159 (December 2010): 105–10. http://dx.doi.org/10.4028/www.scientific.net/amr.159.105.

Full text
Abstract:
In this paper, we present two parallel multiplicative algorithms for convex programming. If the objective function is differentiable and convex on the positive orthant of , has compact level sets, and has a locally Lipschitz continuous gradient, we prove that these algorithms converge to a solution of the minimization problem. The proofs essentially use the results on sequential methods given by Eggermont [1].
APA, Harvard, Vancouver, ISO, and other styles
9

Boekkooi-Timminga, Ellen. "The Construction of Parallel Tests From IRT-Based Item Banks." Journal of Educational Statistics 15, no. 2 (June 1990): 129–45. http://dx.doi.org/10.3102/10769986015002129.

Full text
Abstract:
The construction of parallel tests from IRT-based item banks is discussed. Tests are considered to be parallel whenever their information functions are identical. Simultaneous and sequential parallel test construction methods based on the use of 0–1 programming are examined for the Rasch and 3-parameter logistic model. Sequential methods construct the tests one after another; simultaneous methods construct them all at the same time. A heuristic procedure is used for solving the 0–1 programming problems. Satisfactory results are obtained, both in terms of the CPU-time needed and differences between the information functions of the parallel tests selected.
APA, Harvard, Vancouver, ISO, and other styles
10

Qawasmeh, Ahmad, Salah Taamneh, Ashraf H. Aljammal, Nabhan Hamadneh, Mustafa Banikhalaf, and Mohammad Kharabsheh. "Parallelism exploration in sequential algorithms via animation tool." Multiagent and Grid Systems 17, no. 2 (August 23, 2021): 145–58. http://dx.doi.org/10.3233/mgs-210347.

Full text
Abstract:
Different high performance techniques, such as profiling, tracing, and instrumentation, have been used to tune and enhance the performance of parallel applications. However, these techniques do not show how to explore the potential of parallelism in a given application. Animating and visualizing the execution process of a sequential algorithm provides a thorough understanding of its usage and functionality. In this work, an interactive web-based educational animation tool was developed to assist users in analyzing sequential algorithms to detect parallel regions, regardless of the parallel programming model used. The tool simplifies the learning of algorithms and helps students analyze programs efficiently. Our statistical t-test study on a sample of students showed a significant improvement in their perception of the mechanism and parallelism of applications and an increase in their willingness to learn algorithms and parallel programming.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Parallel programming; Sequential"

1

Thornley, John William. "A parallel programming model with sequential semantics." Diss., Pasadena, Calif. : California Institute of Technology, 1996. http://resolver.caltech.edu/CaltechETD:etd-01042008-085720. (Advisor: K. Mani Chandy.)

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Chen, Y. "Formal methods for global synchrony." Thesis, University of Oxford, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.365295.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Manthey, Norbert. "Towards Next Generation Sequential and Parallel SAT Solvers." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-158672.

Full text
Abstract:
This thesis focuses on improving the SAT solving technology. The improvements focus on two major subjects: sequential SAT solving and parallel SAT solving. To better understand sequential SAT algorithms, the abstract reduction system Generic CDCL is introduced. With Generic CDCL, the soundness of solving techniques can be modeled. Next, the conflict driven clause learning algorithm is extended with the three techniques local look-ahead, local probing and all UIP learning that allow more global reasoning during search. These techniques improve the performance of the sequential SAT solver Riss. Then, the formula simplification techniques bounded variable addition, covered literal elimination and an advanced cardinality constraint extraction are introduced. By using these techniques, the reasoning of the overall SAT solving tool chain becomes stronger than plain resolution. When using these three techniques in the formula simplification tool Coprocessor before using Riss to solve a formula, the performance can be improved further. Due to the increasing number of cores in CPUs, the scalable parallel SAT solving approach iterative partitioning has been implemented in Pcasso for the multi-core architecture. Related work on parallel SAT solving has been studied to extract main ideas that can improve Pcasso. Besides parallel formula simplification with bounded variable elimination, the major extension is the extended clause sharing level based clause tagging, which builds the basis for conflict driven node killing. The latter allows to better identify unsatisfiable search space partitions. Another improvement is to combine scattering and look-ahead as a superior search space partitioning function. In combination with Coprocessor, the introduced extensions increase the performance of the parallel solver Pcasso. The implemented system turns out to be scalable for the multi-core architecture. Hence iterative partitioning is interesting for future parallel SAT solvers. 
The implemented solvers participated in international SAT competitions. In 2013 and 2014 Pcasso showed a good performance. Riss in combination with Coprocessor won several first, second and third prizes, including two Kurt-Gödel-Medals. Hence, the introduced algorithms improved modern SAT solving technology.
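The iterative-partitioning idea, splitting the search space and letting independent workers explore the partitions, can be sketched with a toy brute-force solver (illustrative only, not Pcasso; the partitioning here simply fixes the first variable):

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import product

def satisfiable(clauses, n_vars, fixed):
    # brute-force check of one partition: the first variable is fixed to
    # `fixed`, so the worker explores only its own part of the search space.
    # A literal l > 0 means "variable l is true"; l < 0 means it is false.
    for rest in product([False, True], repeat=n_vars - 1):
        assign = (fixed,) + rest
        if all(any(assign[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return True
    return False  # this partition of the search space is unsatisfiable

def solve_partitioned(clauses, n_vars):
    # one worker per partition; a partition reported unsatisfiable is the
    # toy analogue of a "killed" search space partition
    with ThreadPoolExecutor(max_workers=2) as pool:
        results = pool.map(lambda f: satisfiable(clauses, n_vars, f),
                           [False, True])
    return any(results)
```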
APA, Harvard, Vancouver, ISO, and other styles
4

Jouvelot, Pierre. "Parallélisation sémantique : une approche dénotationnelle non-standard pour la parallélisation de programmes impératifs séquentiels." Paris 6, 1986. http://www.theses.fr/1986PA066559.

Full text
Abstract:
Our principle is to view the program transformations introduced by parallelization as defining non-standard denotational semantics of the programming language. We show how to use this concept to detect parallelizable complex statements in a simplified imperative language, ALL, to recognize reductions, and to handle certain programs with indirections.
APA, Harvard, Vancouver, ISO, and other styles
5

Thornley, John William. "A parallel programming model with sequential semantics." Thesis, 1996. https://thesis.library.caltech.edu/26/1/Thornley_jw_1996.pdf.

Full text
Abstract:
Parallel programming is more difficult than sequential programming in part because of the complexity of reasoning, testing, and debugging in the context of concurrency. In this thesis, we present and investigate a parallel programming model that provides direct control of parallelism in a notation with sequential semantics. Our model consists of a standard sequential imperative programming notation extended with the following three pragmas: 1. The parallelizable sequence of statements pragma indicates that a sequence of statements can be executed as parallel threads. 2. The parallelizable for-loop statement pragma indicates that the iterations of a for-loop statement can be executed as parallel threads. 3. The single-assignment type pragma indicates that variables of a given type are assigned at most once and that ordinary assignment and evaluation operations can be used as implicit communication and synchronization operations between parallel threads. In our model, a parallel program is simply an equivalent sequential program with added pragmas. The placement of the pragmas is subject to a small set of restrictions that ensure the equivalence of the parallel and sequential semantics. We prove that if standard sequential execution of a program (by ignoring the pragmas) satisfies a given specification and the pragmas are used correctly, parallel execution of the program (as directed by the pragmas) is guaranteed to satisfy the same specification. Our model allows parallel programs to be developed using sequential reasoning, testing, and debugging techniques, prior to parallel execution for performance. Since parallelism is specified directly, sophisticated analysis and compilation techniques are not required to extract parallelism from programs. However, it is important that parallel performance issues such as granularity, load balancing, and locality be considered throughout algorithm and program development. 
We describe a series of programming experiments performed on up to 32 processors of a shared-memory multiprocessor system. These experiments indicate that for a wide range of problems: 1. Our model can express sophisticated parallel algorithms with significantly less complication than traditional explicit parallel programming models. 2. Parallel programs in our model execute as efficiently as sequential programs on one processor and deliver good speedups on multiple processors. 3. Program development with our model is less difficult than with traditional explicit parallel programming models because reasoning, testing, and debugging are performed using sequential methods. We believe that our model provides the basis of the method of choice for a large number of moderate-scale, medium-grained parallel programming applications.
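The pragma model can be mimicked in a short sketch where a flag plays the role of the parallelizable for-loop pragma: ignoring it gives standard sequential execution, honoring it runs the iterations as threads, and both executions satisfy the same specification. This is an illustration in Python with invented names, not the thesis notation:

```python
from concurrent.futures import ThreadPoolExecutor

def fill(n, body, parallel=False):
    # a "parallelizable for-loop": each iteration writes only its own slot,
    # the restriction that makes parallel and sequential semantics equivalent
    out = [None] * n

    def iteration(i):
        out[i] = body(i)  # disjoint writes: safe to run as parallel threads

    if parallel:
        # "pragma" honored: iterations executed as parallel threads
        with ThreadPoolExecutor() as pool:
            list(pool.map(iteration, range(n)))
    else:
        # "pragma" ignored: standard sequential execution
        for i in range(n):
            iteration(i)
    return out
```

A program developed and tested with `parallel=False` can later be run with `parallel=True` for performance, mirroring the model's development style.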
APA, Harvard, Vancouver, ISO, and other styles
6

"Supervisory Control Optimization with Sequential Quadratic Programming for Parallel Hybrid Vehicle with Synchronous Power Sources." Master's thesis, 2017. http://hdl.handle.net/2286/R.I.44420.

Full text
Abstract:
The thesis covers the development and modeling of a supervisory hybrid controller using two different methods to achieve real-world optimization and power split of a parallel hybrid vehicle with a fixed shaft connecting the Internal Combustion Engine (ICE) and Electric Motor (EM). The first strategy uses a rule-based controller to determine the modes the vehicle should operate in. This approach is well suited for real-world applications. The second approach uses Sequential Quadratic Programming (SQP) in conjunction with an Equivalent Consumption Minimization Strategy (ECMS) to keep the vehicle in the most efficient operating regions. This latter method is able to operate the vehicle in various drive cycles while maintaining the SOC within the allowed charge-sustaining (CS) limits. Further, the overall efficiency of the vehicle for all drive cycles is increased. The limitation here is that the process is computationally expensive; however, with the advent of low-cost high-performance hardware, this method can be used for hybrid vehicle control.
Dissertation/Thesis
Master's Thesis, Engineering, 2017
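The ECMS power-split step can be sketched with a toy model; the fuel model, equivalence factor, and the grid search standing in for the SQP solver are all invented for illustration and are not from the thesis:

```python
def fuel_rate(p_ice):
    # toy convex engine fuel model (hypothetical, not the thesis model)
    return 0.1 * p_ice + 0.002 * p_ice ** 2

def ecms_split(p_demand, equiv_factor=0.12, steps=201):
    # ECMS idea: minimize fuel cost plus an equivalence-weighted cost for
    # the electrical energy, so battery use is traded off against fuel.
    # A grid search stands in here for the SQP step of the thesis.
    best_cost, best_pem = float("inf"), 0.0
    for i in range(steps):
        p_em = p_demand * i / (steps - 1)   # motor share of the demand
        p_ice = p_demand - p_em             # engine supplies the rest
        cost = fuel_rate(p_ice) + equiv_factor * p_em
        if cost < best_cost:
            best_cost, best_pem = cost, p_em
    return best_pem, p_demand - best_pem
```

With this convex toy model, the optimum sits where the marginal fuel cost of the engine equals the equivalence factor, so the split lands in the engine's efficient region rather than at either extreme.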
APA, Harvard, Vancouver, ISO, and other styles
7

Manthey, Norbert. "Towards Next Generation Sequential and Parallel SAT Solvers." Doctoral thesis, 2014. https://tud.qucosa.de/id/qucosa%3A28471.

Full text
Abstract:
This thesis focuses on improving the SAT solving technology. The improvements focus on two major subjects: sequential SAT solving and parallel SAT solving. To better understand sequential SAT algorithms, the abstract reduction system Generic CDCL is introduced. With Generic CDCL, the soundness of solving techniques can be modeled. Next, the conflict driven clause learning algorithm is extended with the three techniques local look-ahead, local probing and all UIP learning that allow more global reasoning during search. These techniques improve the performance of the sequential SAT solver Riss. Then, the formula simplification techniques bounded variable addition, covered literal elimination and an advanced cardinality constraint extraction are introduced. By using these techniques, the reasoning of the overall SAT solving tool chain becomes stronger than plain resolution. When using these three techniques in the formula simplification tool Coprocessor before using Riss to solve a formula, the performance can be improved further. Due to the increasing number of cores in CPUs, the scalable parallel SAT solving approach iterative partitioning has been implemented in Pcasso for the multi-core architecture. Related work on parallel SAT solving has been studied to extract main ideas that can improve Pcasso. Besides parallel formula simplification with bounded variable elimination, the major extension is the extended clause sharing level based clause tagging, which builds the basis for conflict driven node killing. The latter allows to better identify unsatisfiable search space partitions. Another improvement is to combine scattering and look-ahead as a superior search space partitioning function. In combination with Coprocessor, the introduced extensions increase the performance of the parallel solver Pcasso. The implemented system turns out to be scalable for the multi-core architecture. Hence iterative partitioning is interesting for future parallel SAT solvers. 
The implemented solvers participated in international SAT competitions. In 2013 and 2014 Pcasso showed a good performance. Riss in combination with Coprocessor won several first, second and third prizes, including two Kurt-Gödel-Medals. Hence, the introduced algorithms improved modern SAT solving technology.
APA, Harvard, Vancouver, ISO, and other styles
8

Srinivasa, Murthy Karthik 1983. "A proposed memory consistency model for Chapel." Thesis, 2010. http://hdl.handle.net/2152/ETD-UT-2010-08-1990.

Full text
Abstract:
A memory consistency model for a language defines the order of memory operations performed by each thread in a parallel execution. Such a constraint is necessary to prevent the compiler and hardware optimizations from reordering certain memory operations, since such reordering might lead to unintuitive results. In this thesis, we propose a memory consistency model for Chapel, a parallel programming language from Cray Inc. Our memory model for Chapel is based on the idea of multiresolution and aims to provide a migration path from a program that is easy to reason about to a program that has better performance efficiency. Our model allows a programmer to write a parallel program with sequential consistency semantics, and then migrate to a performance-oriented version by increasingly changing different parts of the program to follow relaxed semantics. Sequential semantics helps in reasoning about the correctness of the parallel program and is provided by the strict sequential consistency model in our proposed memory model. The performance-oriented versions can be obtained either by using the compiler sequential consistency model, which maintains the sequential semantics, or by the relaxed consistency model, which maintains consistency only at global synchronization points. Our proposed memory model for Chapel thus combines strict sequential consistency model, compiler sequential consistency model and relaxed consistency model. We analyze the performance of the three consistency models by implementing three applications: Barnes-Hut, FFT and Random-Access in Chapel, and the hybrid model of MPI and Pthread. 
We conclude the following: The strict sequential consistency model is the best model to determine algorithmic errors in the applications, though it leads to the worst performance; the relaxed consistency model gives the best performance among the three models, but relies on the programmer to enforce synchronization correctly; the performance of the compiler sequential model depends on accuracy of the dependence analysis performed by the compiler; the relative performance of the consistency models across Chapel and the hybrid programming model of MPI and Pthread are the same. This shows that our model is not tightly bound to Chapel and can be applied on other programming models/languages.
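The contrast between the strict and relaxed ends of the proposed model can be illustrated with a toy sketch (plain Python, not Chapel): the strict version globally orders every shared access, while the relaxed version keeps thread-local state and reconciles it only at a global synchronization point.

```python
import threading

def count_strict(n_threads, n_ops):
    # "strict sequential consistency" analogue: every shared access is
    # ordered through one lock, easy to reason about but slowest
    total = 0
    lock = threading.Lock()

    def worker():
        nonlocal total
        for _ in range(n_ops):
            with lock:
                total += 1

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads: t.start()
    for t in threads: t.join()
    return total

def count_relaxed(n_threads, n_ops):
    # "relaxed consistency" analogue: each thread updates only its own
    # slot with no ordering between threads; consistency is established
    # only at the global synchronization point (the joins below)
    partials = [0] * n_threads

    def worker(tid):
        for _ in range(n_ops):
            partials[tid] += 1

    threads = [threading.Thread(target=worker, args=(i,))
               for i in range(n_threads)]
    for t in threads: t.start()
    for t in threads: t.join()   # global synchronization point
    return sum(partials)         # state reconciled only here
```

Both return the same count, but only because the relaxed version confines its unsynchronized writes to disjoint slots, which mirrors the burden the abstract places on the programmer under the relaxed model.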
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Parallel programming; Sequential"

1

Miller, Russ. Algorithms, sequential & parallel: A unified approach. Upper Saddle River, N.J: Prentice Hall, 2000.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Best, Eike. Semantics of sequential and parallel programs. London: Prentice Hall, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Laurence, Boxer, ed. Algorithms sequential and parallel: A unified approach. 2nd ed. Hingham, MA: Charles River Media, 2005.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Communicating sequential processes. Englewood Cliffs, N.J: Prentice/Hall International, 1985.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Communicating sequential processes. Englewood Cliffs, N.J: Prentice-Hall, 1985.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Computational complexity of sequential and parallel algorithms. Chichester: Wiley, 1987.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Kronsjö, Lydia I. Computational complexity of sequential and parallel algorithms. Chichester: Wiley, 1986.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Algorithms Sequential & Parallel: A Unified Approach. Cengage Learning, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Boxer, Laurence, and Russ Miller. Algorithms Sequential and Parallel: A Unified Approach. Pearson Education, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Boxer, Laurence, and Russ Miller. Algorithms Sequential and Parallel: A Unified Approach. Pearson Education, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Parallel programming; Sequential"

1

Patriksson, Michael. "Parallel and sequential decomposition CA algorithms." In Nonlinear Programming and Variational Inequality Problems, 211–51. Boston, MA: Springer US, 1999. http://dx.doi.org/10.1007/978-1-4757-2991-7_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Vollmer, Heribert. "Relations among parallel and sequential computation models." In Concurrency and Parallelism, Programming, Networking, and Security, 23–32. Berlin, Heidelberg: Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/bfb0027776.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Pratt, Vaughan. "Time and information in sequential and concurrent computation." In Theory and Practice of Parallel Programming, 1–24. Berlin, Heidelberg: Springer Berlin Heidelberg, 1995. http://dx.doi.org/10.1007/bfb0026562.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Perret, Jérôme, and Philippe Gautron. "SIMCDL: Simulating parallel programming in a sequential environment." In PARLE '92 Parallel Architectures and Languages Europe, 977–78. Berlin, Heidelberg: Springer Berlin Heidelberg, 1992. http://dx.doi.org/10.1007/3-540-55599-4_150.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Breitinger, Silvia, Ulrike Klusik, and Rita Loogen. "From (sequential) Haskell to (parallel) Eden: An implementation point of view." In Principles of Declarative Programming, 318–34. Berlin, Heidelberg: Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/bfb0056623.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Jézéquel, J. M., F. Bergheul, and F. André. "Programming massively parallel architectures with sequential object oriented languages." In PARLE '92 Parallel Architectures and Languages Europe, 329–44. Berlin, Heidelberg: Springer Berlin Heidelberg, 1992. http://dx.doi.org/10.1007/3-540-55599-4_97.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Rümmer, Philipp. "Sequential, Parallel, and Quantified Updates of First-Order Structures." In Logic for Programming, Artificial Intelligence, and Reasoning, 422–36. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11916277_29.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Archibald, Blair, Fraser Dunlop, Ruth Hoffmann, Ciaran McCreesh, Patrick Prosser, and James Trimble. "Sequential and Parallel Solution-Biased Search for Subgraph Algorithms." In Integration of Constraint Programming, Artificial Intelligence, and Operations Research, 20–38. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-19212-9_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Kubota, Atsushi, Tomohiro Matsushita, and Naohisa Happo. "Parallelization of Atomic Image Reconstruction from X-ray Fluorescence Holograms with XcalableMP." In XcalableMP PGAS Programming Language, 205–18. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-7683-6_8.

Full text
Abstract:
X-ray fluorescence holography is a three-dimensional middle-range local structure analysis method, which can provide three-dimensional atomic images around specific elements within a radius of a few nanometers. Three-dimensional atomic images are reconstructed by applying the discrete Fourier transform (DFT) to hologram data. Presently, this DFT takes a long time to process. In this study, the DFT program is parallelized using the parallel programming language XcalableMP. The DFT process, whose input is 21 holograms of 179 × 360 points and whose output is a three-dimensional atomic image of 192³ points, is executed on a PC cluster consisting of 8 nodes of Intel Xeon X5660 processors, 96 cores in total, and we confirmed that the parallelized DFT execution is 94 times faster than the sequential execution.
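The parallelization rests on the fact that DFT output points are mutually independent, so they can be distributed across workers just as the atomic-image grid points are distributed across cluster nodes. A minimal sketch of that structure, with Python threads standing in for XcalableMP on the cluster:

```python
import cmath
from concurrent.futures import ThreadPoolExecutor

def dft_point(xs, k):
    # one output point of a naive 1-D DFT; depends only on the input
    # and on k, never on other output points
    n = len(xs)
    return sum(x * cmath.exp(-2j * cmath.pi * k * i / n)
               for i, x in enumerate(xs))

def dft_sequential(xs):
    return [dft_point(xs, k) for k in range(len(xs))]

def dft_parallel(xs, workers=4):
    # independence of the output points makes the distribution trivial:
    # each worker computes its own subset of k values
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda k: dft_point(xs, k), range(len(xs))))
```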
APA, Harvard, Vancouver, ISO, and other styles
10

Van Renterghem, Patrick. "The Bridge from Present (Sequential) Systems to Future (Parallel) Systems: The Parallel Programming Environments Express and CSTools." In Scientific Computing on Supercomputers II, 175–94. Boston, MA: Springer US, 1990. http://dx.doi.org/10.1007/978-1-4613-0659-7_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Parallel programming; Sequential"

1

Raman, Arun, Jae W. Lee, and David I. August. "From sequential programming to flexible parallel execution." In the 2012 international conference. New York, New York, USA: ACM Press, 2012. http://dx.doi.org/10.1145/2380403.2380417.

2

Hollman, David S., Janine C. Bennett, Hemanth Kolla, Jonathan Lifflander, Nicole Slattengren, and Jeremiah Wilke. "Metaprogramming-Enabled Parallel Execution of Apparently Sequential C++ Code." In 2016 Second International Workshop on Extreme Scale Programming Models and Middleware (ESPM2). IEEE, 2016. http://dx.doi.org/10.1109/espm2.2016.009.

3

Schmidt-Schauß, Manfred, David Sabel, and Nils Dallmeyer. "Sequential and Parallel Improvements in a Concurrent Functional Programming Language." In PPDP '18: The 20th International Symposium on Principles and Practice of Declarative Programming. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3236950.3236952.

4

del Rio Astorga, David, Manuel F. Dolz, Luis Miguel Sanchez, and J. Daniel García. "Discovering Pipeline Parallel Patterns in Sequential Legacy C++ Codes." In PPoPP '16: 21st ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming. New York, NY, USA: ACM, 2016. http://dx.doi.org/10.1145/2883404.2883411.

5

Koval, Nikita, Dan Alistarh, and Roman Elizarov. "Lock-free channels for programming via communicating sequential processes." In PPoPP '19: 24th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3293883.3297000.

6

Matsuzaki, Kiminori, Hideya Iwasaki, Kento Emoto, and Zhenjiang Hu. "A library of constructive skeletons for sequential style of parallel programming." In the 1st international conference. New York, New York, USA: ACM Press, 2006. http://dx.doi.org/10.1145/1146847.1146860.

7

Jiang, Peng, and Gagan Agrawal. "Revealing parallel scans and reductions in sequential loops through function reconstruction." In PPoPP '18: 23rd ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3178487.3178523.

8

Liu, Ke, Sven Löffler, and Petra Hofstedt. "Solving the Social Golfers Problems by Constraint Programming in Sequential and Parallel." In 11th International Conference on Agents and Artificial Intelligence. SCITEPRESS - Science and Technology Publications, 2019. http://dx.doi.org/10.5220/0007252300290039.

9

Chandrashekhar, B. N., and H. A. Sanjay. "Performance Analysis of Sequential and Parallel Programming Paradigms on CPU-GPUs Cluster." In 2021 Third International Conference on Intelligent Communication Technologies and Virtual Mobile Networks (ICICV). IEEE, 2021. http://dx.doi.org/10.1109/icicv50876.2021.9388469.

10

Gu, Xuejun, Hyun K. Kim, James Masciotti, and Andreas H. Hielscher. "A parallel reduced-space sequential-quadratic programming algorithm for frequency-domain small animal optical tomography." In SPIE BiOS: Biomedical Optics, edited by Bruce J. Tromberg, Arjun G. Yodh, Mamoru Tamura, Eva M. Sevick-Muraca, and Robert R. Alfano. SPIE, 2009. http://dx.doi.org/10.1117/12.809484.


Reports on the topic "Parallel programming; Sequential"

1

Amela, R., R. Badia, S. Böhm, R. Tosi, C. Soriano, and R. Rossi. D4.2 Profiling report of the partner’s tools, complete with performance suggestions. Scipedia, 2021. http://dx.doi.org/10.23967/exaqute.2021.2.023.

Abstract:
This deliverable focuses on the profiling activities developed in the project with the partners' applications. To perform these profiling activities, a couple of benchmarks were defined in collaboration with WP5. The first benchmark is an embarrassingly parallel benchmark that performs a read and then multiple writes of the same object, with the objective of stressing the memory and storage systems and evaluating the overhead when these reads and writes are performed in parallel. A second benchmark is defined based on the Continuation Multi Level Monte Carlo (C-MLMC) algorithm. While this algorithm is normally executed using multiple levels, for the profiling and performance analysis objectives the execution of a single level was enough, since the forthcoming levels have similar performance characteristics. Additionally, while the simulation tasks can be executed in parallel (as multi-threaded tasks), in the benchmark single-threaded tasks were executed to increase the number of simulations to be scheduled and stress the scheduling engines. A set of experiments based on these two benchmarks has been executed on the MareNostrum 4 supercomputer, using PyCOMPSs as the underlying programming model and dynamic scheduler of the tasks involved in the executions. While the first benchmark was executed several times in a single iteration, the second benchmark was executed in an iterative manner, with cycles of 1) execution and trace generation; 2) performance analysis; 3) improvements. This enabled several improvements in the benchmark and in the scheduler of PyCOMPSs. The initial iterations focused on the C-MLMC structure itself, refactoring the code to remove fine-grain and sequential tasks and merge them into larger-granularity tasks. The next iterations focused on improving the PyCOMPSs scheduler, removing existing bottlenecks and increasing its performance by making the scheduler a multithreaded engine.
While the results can still be improved, we are satisfied with them, since the granularity of the simulations run in this evaluation step is much finer than that which will be used for the real scenarios. The deliverable finishes with some recommendations that should be followed throughout the project in order to obtain good performance in the execution of the project codes.
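The granularity refactor this abstract describes, merging fine-grain sequential tasks into larger tasks to relieve the scheduler, can be sketched in plain Python. The PyCOMPSs task decorators are replaced here by ordinary functions, so the names and sizes below are illustrative only.

```python
from concurrent.futures import ThreadPoolExecutor

def simulate(seed):
    # Stand-in for one fine-grain Monte Carlo sample.
    x = (seed * 1103515245 + 12345) % 2**31
    return x / 2**31

def simulate_chunk(seeds):
    # Coarser task: one scheduled unit covers many samples, so the
    # scheduler handles far fewer tasks for the same total work.
    return sum(simulate(s) for s in seeds)

def run(n_samples, chunk_size):
    # Partition the sample indices into chunks and schedule one task per chunk.
    chunks = [range(i, min(i + chunk_size, n_samples))
              for i in range(0, n_samples, chunk_size)]
    with ThreadPoolExecutor() as pool:
        partials = pool.map(simulate_chunk, chunks)
    return sum(partials) / n_samples
```

With `chunk_size=1`, every sample is its own task (the fine-grain baseline); raising `chunk_size` mirrors the deliverable's refactor, amortizing per-task scheduling overhead while leaving the computed result unchanged.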
