Academic literature on the topic 'Parallelisation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Parallelisation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Parallelisation"

1

Ierotheou, C. S., S. P. Johnson, P. F. Leggett, M. Cross, E. W. Evans, H. Jin, M. Frumkin, and J. Yan. "The Semi-Automatic Parallelisation of Scientific Application Codes Using a Computer Aided Parallelisation Toolkit." Scientific Programming 9, no. 2-3 (2001): 163–73. http://dx.doi.org/10.1155/2001/327048.

Full text
Abstract:
The shared-memory programming model can be an effective way to achieve parallelism on shared memory parallel computers. Historically, however, the lack of a programming standard using directives and the limited scalability have affected its take-up. Recent advances in hardware and software technologies have resulted in improvements to both the performance of parallel programs with compiler directives and the issue of portability with the introduction of OpenMP. In this study, the Computer Aided Parallelisation Toolkit has been extended to automatically generate OpenMP-based parallel programs with nominal user assistance. We categorize the different loop types and show how efficient directives can be placed using the toolkit's in-depth interprocedural analysis. Examples are taken from the NAS parallel benchmarks and a number of real-world application codes. This demonstrates the great potential of using the toolkit to quickly parallelise serial programs as well as the good performance achievable on up to 300 processors for hybrid message passing-directive parallelisations.
APA, Harvard, Vancouver, ISO, and other styles
2

Wolffs, Zef, Patrick Bos, Lydia Brenner, Wouter Verkerke, and Ivo van Vulpen. "Efficient Parallelization of RooFit Computations for Accelerated Higgs Combination Fits." EPJ Web of Conferences 295 (2024): 06007. http://dx.doi.org/10.1051/epjconf/202429506007.

Full text
Abstract:
In the context of High Energy Physics (HEP) analyses the advent of large-scale combination fits forms an increasing computational challenge for the underlying software frameworks on which these fits rely. RooFit, being the central tool for HEP statistical model creation and fitting, intends to address this challenge through an efficient and versatile parallelisation framework on top of which two parallel implementations were developed in the present research. The first implementation, the parallelisation of the gradient, shows good scaling behaviour and is sufficiently robust to consistently minimize real large-scale fits. The second, the parallelisation of the line search, is still work in progress for some specific likelihood components but shows promising results in realistic test cases. Enabling just gradient parallelisation speeds up the full fit of a recently published Higgs combination from the ATLAS experiment by a factor of 4.6 with sixteen workers. As the improvements presented in this research are currently publicly available in ROOT 6.28, we invite users to enable at least gradient parallelisation for robust accelerated fitting with RooFit.
APA, Harvard, Vancouver, ISO, and other styles
3

Kriauzienė, Rima, Andrej Bugajev, and Raimondas Čiegis. "A THREE-LEVEL PARALLELISATION SCHEME AND APPLICATION TO THE NELDER-MEAD ALGORITHM." Mathematical Modelling and Analysis 25, no. 4 (October 13, 2020): 584–607. http://dx.doi.org/10.3846/mma.2020.12139.

Full text
Abstract:
We consider a three-level parallelisation scheme. The second and third levels define a classical two-level parallelisation scheme and some load balancing algorithm is used to distribute tasks among processes. It is well-known that for many applications the efficiency of parallel algorithms of these two levels starts to drop down after some critical parallelisation degree is reached. This weakness of the two-level template is addressed by introduction of one additional parallelisation level. As an alternative to the basic solver some new or modified algorithms are considered on this level. The idea of the proposed methodology is to increase the parallelisation degree by using possibly less efficient algorithms in comparison with the basic solver. As an example we investigate two modified Nelder-Mead methods. For the selected application, a Schrödinger equation is solved numerically on the second level, and on the third level the parallel Wang's algorithm is used to solve systems of linear equations with tridiagonal matrices. A greedy workload balancing heuristic is proposed, which is oriented to the case of a large number of available processors. The complexity estimates of the computational tasks are model-based, i.e. they use empirical computational data.
APA, Harvard, Vancouver, ISO, and other styles
4

Kaber, Sidi-Mahmoud, Amine Loumi, and Philippe Parnaudeau. "Parallel Solution of Linear Systems." East Asian Journal on Applied Mathematics 6, no. 3 (July 20, 2016): 278–89. http://dx.doi.org/10.4208/eajam.210715.250316a.

Full text
Abstract:
Computational scientists generally seek more accurate results in shorter times, and to achieve this a knowledge of evolving programming paradigms and hardware is important. In particular, optimising solvers for linear systems is a major challenge in scientific computation, and numerical algorithms must be modified or new ones created to fully use the parallel architecture of new computers. Parallel space discretisation solvers for Partial Differential Equations (PDE) such as Domain Decomposition Methods (DDM) are efficient and well documented. At first glance, parallelisation seems to be inconsistent with inherently sequential time evolution, but parallelisation is not limited to space directions. In this article, we present a new and simple method for time parallelisation, based on partial fraction decomposition of the inverse of some special matrices. We discuss its application to the heat equation, along with some limitations, in associated numerical experiments.
APA, Harvard, Vancouver, ISO, and other styles
5

Janssen, R., M. Dracopoulos, K. Parrott, E. Slessor, P. Alotto, P. Molfino, M. Nervi, and J. Simkin. "Parallelisation of electromagnetic simulation codes." IEEE Transactions on Magnetics 34, no. 5 (1998): 3423–26. http://dx.doi.org/10.1109/20.717806.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Hershey, J. E., K. Molnar, and A. Hassan. "Parallelisation of suboptimal spectrum search." Electronics Letters 28, no. 18 (1992): 1721. http://dx.doi.org/10.1049/el:19921094.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

AL-Sudani, Asaad A. M. "SYSTEM PARALLELISATION FOR COMPUTER VISION." Journal of Engineering 9, no. 02 (June 1, 2003): 249–64. http://dx.doi.org/10.31026/j.eng.2003.02.08.

Full text
Abstract:
This paper delineates the parallelisation of a computer vision system. It presents the system proposal and the relevant design phases of a laboratory-based model. This model involves special-purpose hardware implementing the early stages of processing with a very high data rate. It incorporates facilities enabling the user to capture, retain, retrieve, compare, and analyse video images. The output of this hardware is to be processed by software running on a parallel processor. The latter is a VMEbus-based multiprocessing machine that accommodates the system hardware and ensures better flexibility. It also contributes to a reasonable distribution of the system processing power. The kernel philosophy here depends on the concept of modularisation to attain a higher degree of design consistency. It rests on the premise that the spatiotemporal pixel variation of two adjacent video frames carries sufficient information to detect movement. This implies pixel encoding and motion parameter estimation. The system software is based on a data-compression technique (Strip Encoding of Adjacent Frames) to solve the bottleneck problem in the whole system throughput. The research attempts to attain a match in the degree of sophistication between the system hardware and software structures, so that the system processing power better meets the requirements of its applications. The research investigates the above design phases along with their logical, functional, technical, and modular specifications. The research is adequate for development in a wide range of applications (requiring parallel architectures for image processing), such as: Artificial Intelligence, Feature Extraction and Pattern Recognition, Expert Systems, Computer Vision and Robotic Vision, Industrial Control, and other civil and military applications.
APA, Harvard, Vancouver, ISO, and other styles
8

Padalko, Mikhail Alexandrovich, Yuriy Andreevich Shevchenko, Vitalii Yurievich Kapitan, and Konstantin Valentinovich Nefedev. "Parallel Computing of Edwards—Anderson Model." Algorithms 15, no. 1 (December 27, 2021): 13. http://dx.doi.org/10.3390/a15010013.

Full text
Abstract:
A scheme for parallel computation of the two-dimensional Edwards—Anderson model based on the transfer matrix approach is proposed. Free boundary conditions are considered. The method may find application in calculations related to spin glasses and in quantum simulators. Performance data are given. The scheme of parallelisation for various numbers of threads is tested. Application to a quantum computer simulator is considered in detail; in particular, a parallelisation scheme for the operation of a quantum computer simulator is presented.
APA, Harvard, Vancouver, ISO, and other styles
9

Liu, Jun-Jie, Kang-Too Tsang, and Yu-Hui Deng. "A variant RSA acceleration with parallelisation." International Journal of Parallel, Emergent and Distributed Systems 37, no. 3 (January 12, 2022): 318–32. http://dx.doi.org/10.1080/17445760.2021.2024535.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Boukerram, Abdallah, and Samira Ait Kaci Azzou. "Parallelisation of Algorithms of Mathematical Morphology." Journal of Computer Science 2, no. 8 (August 1, 2006): 615–18. http://dx.doi.org/10.3844/jcssp.2006.615.618.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Parallelisation"

1

Schuilenburg, Alexander Marius. "Parallelisation of algorithms." Master's thesis, University of Cape Town, 1990. http://hdl.handle.net/11427/22211.

Full text
Abstract:
Most numerical software involves performing an extremely large volume of algebraic computations. This is both costly and time-consuming in terms of computer resources and, for large problems, supercomputer power is often required in order for results to be obtained in a reasonable amount of time. One method whereby both the cost and time can be reduced is to use the principle "Many hands make light work", or rather, allow several computers to operate simultaneously on the code, working towards a common goal, and hopefully obtaining the required results in a fraction of the time and cost normally used. This can be achieved through the modification of the costly, time-consuming code, breaking it up into separate individual code segments which may be executed concurrently on different processors. This is termed parallelisation of code. This document describes communication between sequential processes, protocols, message routing and parallelisation of algorithms. In particular, it deals with these aspects with reference to the Transputer as developed by INMOS and includes two parallelisation examples, namely parallelisation of code to study airflow and of code to determine far field patterns of antennas. This document also reports on the practical experiences with programming in parallel.
APA, Harvard, Vancouver, ISO, and other styles
2

Calinescu, Radu C. "Architecture-independent loop parallelisation." Thesis, University of Oxford, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.299406.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Orange, David Joseph. "Parallelisation on MIMD architectures." Thesis, University of Liverpool, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.317211.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Nagy, Lesleis. "Parallelisation of micromagnetic simulations." Thesis, University of Edinburgh, 2016. http://hdl.handle.net/1842/20433.

Full text
Abstract:
The field of paleomagnetism attempts to understand in detail the processes of the Earth by studying naturally occurring magnetic samples. These samples are quite unlike those fabricated in the laboratory. They have irregular shapes; they have been squeezed and stretched, heated and cooled and subjected to oxidation. However micromagnetic modelling allows us to simulate such samples and gain some understanding of how a paleomagnetic signal is acquired and how it is retained. Micromagnetics provides a theory for understanding how the domain structure of a magnetic sample alters subject to what it is made from and the environment that it is in. It furnishes the mathematics that describe the energy of a given domain structure and how that domain structure evolves in time. Combining micromagnetics and ever increasing computer power, it has been possible to produce simulations of small to medium size grains within the so-called single to pseudo single domain state range. However processors are no longer built with increasing speed but with increasing parallelism and it is this that must be exploited to model larger and larger paleomagnetic samples. The purpose of the work presented here is twofold. Firstly a micromagnetics code that is parallel and scalable is presented. This code is based on FEniCS, an existing finite element framework, and is shown to run on ARCHER, the UK's national supercomputing service. The strategy of using existing libraries and frameworks allows future extension and inclusion of new science in the code base. In order to achieve scalability, a spatial mapping technique is used to calculate the demagnetising field - the most computationally intensive part of micromagnetic calculations. This allows grain geometries to be partitioned in such a way that no global communication is required between parallel processes - the source of favourable scaling behaviour.
The second part of the thesis presents an exploration of domain state evolution in increasing sizes of magnetite grains. This simulation, whilst a first approximation that excludes magneto-elastic effects, is the first attempt to map out the transition from pseudo-single domain states to multi domain states using a full micromagnetic simulation.
APA, Harvard, Vancouver, ISO, and other styles
5

Zhou, Ruoyu. "Guided automatic binary parallelisation." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/274753.

Full text
Abstract:
For decades, the software industry has amassed a vast repository of pre-compiled libraries and executables which are still valuable and actively in use. However, for a significant fraction of these binaries, most of the source code is absent or is written in old languages, making it practically impossible to recompile them for new generations of hardware. As the number of cores in chip multi-processors (CMPs) continues to scale, the performance of this legacy software becomes increasingly sub-optimal. Rewriting new optimised and parallel software would be a time-consuming and expensive task. Without source code, existing automatic performance enhancing and parallelisation techniques are not applicable for legacy software or parts of new applications linked with legacy libraries. In this dissertation, three tools are presented to address the challenge of optimising legacy binaries. The first, GBR (Guided Binary Recompilation), is a tool that recompiles stripped application binaries without the need for the source code or relocation information. GBR performs static binary analysis to determine how recompilation should be undertaken, and produces a domain-specific hint program. This hint program is loaded and interpreted by the GBR dynamic runtime, which is built on top of the open-source dynamic binary translator, DynamoRIO. In this manner, complicated recompilation of the target binary is carried out to achieve optimised execution on a real system. The problem of limited dataflow and type information is addressed through cooperation between the hint program and JIT optimisation. The utility of GBR is demonstrated by software prefetch and vectorisation optimisations that achieve performance improvements compared to the original native execution. The second tool is called BEEP (Binary Emulator for Estimating Parallelism), an extension to GBR for binary instrumentation.
BEEP is used to identify potential thread-level parallelism through static binary analysis and binary instrumentation. BEEP performs preliminary static analysis on binaries and encodes all statically-undecided questions into a hint program. The hint program is interpreted by GBR so that on-demand binary instrumentation codes are inserted to answer the questions from runtime information. BEEP incorporates a few parallel cost models to evaluate identified parallelism under different parallelisation paradigms. The third tool is named GABP (Guided Automatic Binary Parallelisation), an extension to GBR for parallelisation. GABP focuses on loops from sequential application binaries and automatically extracts thread-level parallelism from them on-the-fly, under the direction of the hint program, for efficient parallel execution. It employs a range of runtime schemes, such as thread-level speculation and synchronisation, to handle runtime data dependences. GABP achieves a geometric mean speedup of 1.91x on binaries from SPEC CPU2006 on a real x86-64 eight-core system compared to native sequential execution. Performance is obtained for SPEC CPU2006 executables compiled from a variety of source languages and by different compilers.
APA, Harvard, Vancouver, ISO, and other styles
6

Jouvelot, Pierre. "Parallelisation semantique : une approche denotationnelle non-standard pour la parallelisation de programmes imperatifs sequentiels." Paris 6, 1986. http://www.theses.fr/1986PA066559.

Full text
Abstract:
Our principle is to view the program transformations introduced by parallelisation as defining non-standard denotational semantics of the programming language. We show how to use this concept to detect parallelisable complex statements in ALL, a simplified imperative language, to recognise reductions, and to handle certain programs with indirections.
APA, Harvard, Vancouver, ISO, and other styles
7

Botinčan, Matko. "Formal verification-driven parallelisation synthesis." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/274136.

Full text
Abstract:
Concurrency is often an optimisation, rather than intrinsic to the functional behaviour of a program, i.e., a concurrent program is often intended to achieve the same effect as a simpler sequential counterpart, just faster. Error-free concurrent programming remains a tricky problem, beyond the capabilities of most programmers. Consequently, an attractive alternative to manually developing a concurrent program is to automatically synthesise one. This dissertation presents two novel formal verification-based methods for safely transforming a sequential program into a concurrent one. The first method---an instance of proof-directed synthesis---takes as the input a sequential program and its safety proof, as well as annotations on where to parallelise, and produces a correctly-synchronised parallelised program, along with a proof of that program. The method uses the sequential proof to guide the insertion of synchronisation barriers to ensure that the parallelised program has the same behaviour as the original sequential version. The sequential proof, written in separation logic, need only represent shape properties, meaning we can parallelise complex heap-manipulating programs without verifying every aspect of their behaviour. The second method proposes specification-directed synthesis: given a sequential program, we extract a rich, stateful specification compactly summarising program behaviour, and use that specification for parallelisation. At the heart of the method is a learning algorithm which combines dynamic and static analysis. In particular, dynamic symbolic execution and the computational learning technique grammar induction are used to conjecture input-output specifications, and counterexample-guided abstraction refinement to confirm or refute the equivalence between the conjectured specification and the original program.
Once equivalence checking succeeds, from the inferred specifications we synthesise code that executes speculatively in parallel---enabling automated parallelisation of irregular loops that are not necessarily polyhedral, disjoint or with a static pipeline structure.
APA, Harvard, Vancouver, ISO, and other styles
8

Boulet, Pierre. "Outils pour la parallelisation automatique." Lyon, École normale supérieure (sciences), 1996. http://www.theses.fr/1996ENSL0013.

Full text
Abstract:
Automatic parallelisation is one of the approaches aimed at making parallel computers easier to use. Parallelisation consists of taking a program written for a sequential machine (which has only one processor) and adapting it to a parallel machine. The advantage of having this parallelisation performed automatically by a program called a paralleliser is that all the code already written in Fortran for sequential machines could then be reused, after parallelisation, on parallel machines. We are not there yet, but we are getting closer. My work is set in this context. Roughly half of my thesis is devoted to the development of a software tool that automatically parallelises a restricted class of programs (uniform loop nests that access data arrays through translations) into HPF (High Performance Fortran). I focus in particular on HPF code generation, which is the most novel part of this program. Besides the development of Bouclettes, my contribution to the field is also theoretical, with a study of a data partitioning technique called tiling by parallelepipeds and a study of the optimisation of array expression evaluation in the High Performance Fortran language. Tiling is a technique for optimising the size of the tasks distributed to the processors in order to reduce the time spent in communications. Array expression evaluation is an optimisation stage of the parallel compiler (the program that translates parallel code written in a high-level language such as HPF into machine code directly executable by the parallel computer).
APA, Harvard, Vancouver, ISO, and other styles
9

Tournavitis, Georgios. "Profile-driven parallelisation of sequential programs." Thesis, University of Edinburgh, 2011. http://hdl.handle.net/1842/5287.

Full text
Abstract:
Traditional parallelism detection in compilers is performed by means of static analysis and more specifically data and control dependence analysis. The information that is available at compile time, however, is inherently limited and therefore restricts the parallelisation opportunities. Furthermore, applications written in C – which represent the majority of today's scientific, embedded and system software – utilise many low-level features and an intricate programming style that forces the compiler to make even more conservative assumptions. Despite the numerous proposals to handle this uncertainty at compile time using speculative optimisation and parallelisation, the software industry still lacks any pragmatic approaches that extract coarse-grain parallelism to exploit the multiple processing units of modern commodity hardware. This thesis introduces a novel approach for extracting and exploiting multiple forms of coarse-grain parallelism from sequential applications written in C. We utilise profiling information to overcome the limitations of static data and control-flow analysis enabling more aggressive parallelisation. Profiling is performed using an instrumentation scheme operating at the Intermediate Representation (IR) level of the compiler. In contrast to existing approaches that depend on low-level binary tools and debugging information, IR profiling provides precise and direct correlation of profiling information back to the IR structures of the compiler. Additionally, our approach is orthogonal to existing automatic parallelisation approaches and additional fine-grain parallelism may be exploited. We demonstrate the applicability and versatility of the proposed methodology using two studies that target different forms of parallelism. First, we focus on the exploitation of loop-level parallelism that is abundant in many scientific and embedded applications.
We evaluate our parallelisation strategy against the NAS and SPEC FP benchmarks and two different multi-core platforms (a shared-memory Intel Xeon SMP and a heterogeneous distributed-memory IBM Cell blade). Empirical evaluation shows that our approach not only yields significant improvements when compared with state-of-the-art parallelising compilers, but comes close to and sometimes exceeds the performance of manually parallelised codes. On average, our methodology achieves 96% of the performance of the hand-tuned parallel benchmarks on the Intel Xeon platform, and a significant speedup for the Cell platform. The second study addresses the problem of partially sequential loops, typically found in implementations of multimedia codecs. We develop a more powerful whole-program representation based on the Program Dependence Graph (PDG) that supports profiling, partitioning and code generation for pipeline parallelism. In addition we demonstrate how this enhances conventional pipeline parallelisation by incorporating support for multi-level loops and pipeline stage replication in a uniform and automatic way. Experimental results using a set of complex multimedia and stream processing benchmarks confirm the effectiveness of the proposed methodology that yields speedups up to 4.7 on an eight-core Intel Xeon machine.
APA, Harvard, Vancouver, ISO, and other styles
10

Bratvold, Tore Andreas. "Skeleton-based parallelisation of functional programs." Thesis, Heriot-Watt University, 1994. http://hdl.handle.net/10399/1355.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Parallelisation"

1

Calinescu, R. C. Architecture-Independent Loop Parallelisation. London: Springer London, 2000. http://dx.doi.org/10.1007/978-1-4471-0763-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Calinescu, R. C. Architecture-Independent Loop Parallelisation. London: Springer London, 2000.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Calinescu, Radu C. Architecture-Independent Loop Parallelisation. Springer, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Architecture-Independent Loop Parallelisation. Springer, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Calinescu, Radu C. Architecture-Independent Loop Parallelisation (Distinguished Dissertations). Springer, 2000.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Größlinger, Armin. Challenges of Non-linear Parameters and Variables in Automatic Loop Parallelisation. Lulu Press, Inc., 2010.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Parallelisation"

1

Calinescu, R. C. "Template-Matching Parallelisation." In Architecture-Independent Loop Parallelisation, 43–83. London: Springer London, 2000. http://dx.doi.org/10.1007/978-1-4471-0763-7_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Calinescu, R. C. "Generic Loop Nest Parallelisation." In Architecture-Independent Loop Parallelisation, 85–107. London: Springer London, 2000. http://dx.doi.org/10.1007/978-1-4471-0763-7_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Calinescu, R. C. "Introduction." In Architecture-Independent Loop Parallelisation, 1–4. London: Springer London, 2000. http://dx.doi.org/10.1007/978-1-4471-0763-7_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Calinescu, R. C. "The Bulk-Synchronous Parallel Model." In Architecture-Independent Loop Parallelisation, 5–12. London: Springer London, 2000. http://dx.doi.org/10.1007/978-1-4471-0763-7_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Calinescu, Radu C. "Data Dependence Analysis and Code Transformation." In Architecture-Independent Loop Parallelisation, 13–22. London: Springer London, 2000. http://dx.doi.org/10.1007/978-1-4471-0763-7_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Calinescu, Radu C. "Communication Overheads in Loop Nest Scheduling." In Architecture-Independent Loop Parallelisation, 23–42. London: Springer London, 2000. http://dx.doi.org/10.1007/978-1-4471-0763-7_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Calinescu, R. C. "A Strategy and a Tool for Architecture-Independent Loop Parallelisation." In Architecture-Independent Loop Parallelisation, 109–23. London: Springer London, 2000. http://dx.doi.org/10.1007/978-1-4471-0763-7_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Calinescu, Radu C. "The Effectiveness of Architecture-Independent Loop Parallelisation." In Architecture-Independent Loop Parallelisation, 125–35. London: Springer London, 2000. http://dx.doi.org/10.1007/978-1-4471-0763-7_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Calinescu, R. C. "Conclusions." In Architecture-Independent Loop Parallelisation, 139–43. London: Springer London, 2000. http://dx.doi.org/10.1007/978-1-4471-0763-7_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Winterstein, Felix. "Heap Partitioning and Parallelisation." In Separation Logic for High-level Synthesis, 57–84. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-53222-6_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Parallelisation"

1

Gonçalves, Rui C., and João L. Sobral. "Pluggable parallelisation." In the 18th ACM international symposium. New York, New York, USA: ACM Press, 2009. http://dx.doi.org/10.1145/1551609.1551614.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Mak, Jonathan, and Alan Mycroft. "Critical-Path-Guided Interactive Parallelisation." In 2011 International Conference on Parallel Processing Workshops (ICPPW). IEEE, 2011. http://dx.doi.org/10.1109/icppw.2011.26.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Saxby, B. A., J. Simkin, and S. P. Malton. "Parallelisation of Coil Field Calculations." In 9th IET International Conference on Computation in Electromagnetics (CEM 2014). Institution of Engineering and Technology, 2014. http://dx.doi.org/10.1049/cp.2014.0170.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Khoshnevisan, Hessam, and Mohamad Afshar. "Mechanical parallelisation of database applications." In the 1994 ACM symposium. New York, New York, USA: ACM Press, 1994. http://dx.doi.org/10.1145/326619.326804.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

"Adaptive MCMC parallelisation in Stan." In 25th International Congress on Modelling and Simulation. Modelling and Simulation Society of Australia and New Zealand, 2023. http://dx.doi.org/10.36334/modsim.2023.stenborg.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Zhao, Jisheng, I. Rogers, C. Kirkham, and I. Watson. "Loop Parallelisation for the Jikes RVM." In Sixth International Conference on Parallel and Distributed Computing Applications and Technologies (PDCAT'05). IEEE, 2005. http://dx.doi.org/10.1109/pdcat.2005.164.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Jackson, Adrian, Adam Carter, Joachim Hein, Jan Westerholm, Mats Aspnäs, Matti Ropo, and Alejandro Soba. "EUFORIA HPC: Massive Parallelisation for Fusion Community." In 2010 18th Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP). IEEE, 2010. http://dx.doi.org/10.1109/pdp.2010.79.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Mukherjee, N., and J. R. Gurd. "A comparative analysis of four parallelisation schemes." In the 13th international conference. New York, New York, USA: ACM Press, 1999. http://dx.doi.org/10.1145/305138.305204.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Gurunathan, Karthik, Kaustubh Kartikey, TSB Sudarshan, and KN Divyaprabha. "Case for Dynamic Parallelisation using Learning Techniques." In 2020 IEEE 9th International Conference on Communication Systems and Network Technologies (CSNT). IEEE, 2020. http://dx.doi.org/10.1109/csnt48778.2020.9115757.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Anandhanarayanan, Karuppanasamy, Markandan Nagarathinam, and Suresh Deshpande. "Parallelisation of a Gridfree Kinetic Upwind Solver." In 17th AIAA Computational Fluid Dynamics Conference. Reston, Virginia: American Institute of Aeronautics and Astronautics, 2005. http://dx.doi.org/10.2514/6.2005-4628.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Parallelisation"

1

Ayoul-Guilmard, Q., S. Ganesh, F. Nobile, R. Rossi, and C. Soriano. D6.3 Report on stochastic optimisation for simple problems. Scipedia, 2021. http://dx.doi.org/10.23967/exaqute.2021.2.001.

Full text
Abstract:
This report addresses the general matter of optimisation under uncertainties, following a previous report on stochastic sensitivities (deliverable 6.2). It describes several theoretical methods, as well as their application in implementable algorithms. The specific case of the conditional value at risk, chosen as the risk measure, is prominently discussed, along with its challenges. In particular, the issue of smoothness, or the lack thereof, is addressed through several possible approaches. The whole report is written in the context of high-performance computing, with concern for parallelisation and cost-efficiency.
APA, Harvard, Vancouver, ISO, and other styles
