A selection of scholarly literature on the topic "Parallelisation in time"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Choose a source type:

Browse the lists of current articles, books, theses, reports, and other scholarly sources on the topic "Parallelisation in time".

Next to every work in the list, an "Add to bibliography" option is available. Use it, and your bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the publication as a PDF and read an online abstract of the work whenever the relevant parameters are available in the metadata.

Journal articles on the topic "Parallelisation in time"

1

Ajtonyi, István, and Gábor Terstyánszky. "Real-Time Requirements and Parallelisation in Fault Diagnosis." IFAC Proceedings Volumes 28, no. 5 (May 1995): 471–77. http://dx.doi.org/10.1016/s1474-6670(17)47268-5.

2

Kaber, Sidi-Mahmoud, Amine Loumi, and Philippe Parnaudeau. "Parallel Solution of Linear Systems." East Asian Journal on Applied Mathematics 6, no. 3 (July 20, 2016): 278–89. http://dx.doi.org/10.4208/eajam.210715.250316a.

Annotation:
Computational scientists generally seek more accurate results in shorter times, and to achieve this a knowledge of evolving programming paradigms and hardware is important. In particular, optimising solvers for linear systems is a major challenge in scientific computation, and numerical algorithms must be modified, or new ones created, to fully use the parallel architecture of new computers. Parallel space-discretisation solvers for Partial Differential Equations (PDE), such as Domain Decomposition Methods (DDM), are efficient and well documented. At first glance, parallelisation seems to be inconsistent with inherently sequential time evolution, but parallelisation is not limited to the space directions. In this article, we present a new and simple method for time parallelisation, based on a partial fraction decomposition of the inverse of some special matrices. We discuss its application to the heat equation, and some limitations, in associated numerical experiments.
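The core trick the abstract alludes to can be illustrated with the generic partial-fraction identity: if a time discretisation leads to a system matrix of the form p(M) with p having distinct roots, the single solve splits into independent shifted solves that can run concurrently. The sketch below shows only this generic identity, not the authors' specific construction for the heat equation; the matrix, roots, and sizes are invented for the demonstration.

```python
# Generic identity (illustrative, not the paper's exact scheme):
# if p(x) = prod_i (x - a_i) has distinct roots, then
#   p(M)^{-1} b = sum_i c_i (M - a_i I)^{-1} b,   c_i = 1 / prod_{j != i} (a_i - a_j),
# and the shifted solves are mutually independent, hence parallelisable.
import numpy as np

def partial_fraction_solve(M, b, roots):
    """Solve p(M) x = b through independent shifted solves."""
    n = M.shape[0]
    x = np.zeros(n)
    for i, a in enumerate(roots):
        c = 1.0 / np.prod([a - aj for j, aj in enumerate(roots) if j != i])
        x = x + c * np.linalg.solve(M - a * np.eye(n), b)  # each solve could run on its own worker
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))       # hypothetical system matrix
b = rng.standard_normal(50)
roots = [1.5, 2.5, 4.0]                 # hypothetical distinct roots of p
pM = np.linalg.multi_dot([M - a * np.eye(50) for a in roots])
print(np.linalg.norm(partial_fraction_solve(M, b, roots) - np.linalg.solve(pM, b)))  # tiny residual
```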
3

Drysdale, Timothy David, and Tomasz P. Stefanski. "Parallelisation of Implicit Time Domain Methods: Progress with ADI-FDTD." PIERS Online 5, no. 2 (2009): 117–20. http://dx.doi.org/10.2529/piers080905063810.

4

Poulhaon, Fabien, Francisco Chinesta, and Adrien Leygue. "A first step toward a PGD-based time parallelisation strategy." European Journal of Computational Mechanics 21, no. 3-6 (August 30, 2012): 300–311. http://dx.doi.org/10.1080/17797179.2012.714985.

5

Dodson, S. J., S. P. Walker, and M. J. Bluck. "Parallelisation issues for high speed time domain integral equation analysis." Parallel Computing 25, no. 8 (September 1999): 925–42. http://dx.doi.org/10.1016/s0167-8191(99)00031-9.

6

Niculescu, Virginia, and Robert Manuel Ştefănică. "Tries-Based Parallel Solutions for Generating Perfect Crosswords Grids." Algorithms 15, no. 1 (January 13, 2022): 22. http://dx.doi.org/10.3390/a15010022.

Annotation:
Generating a general crossword grid is considered an NP-complete problem, and in theory it could be a good candidate for use in cryptography algorithms. In this article, we propose a new algorithm for generating perfect crossword grids (with no black boxes) that relies on trie data structures, which are very important for reducing the time needed to find solutions and also offer good opportunities for parallelisation. The algorithm uses a special trie representation and is very efficient, and through parallelisation its performance improves to a level that allows solutions to be obtained extremely fast. The experiments were conducted using a dictionary of almost 700,000 words, and the parallelised version obtained solutions with execution times on the order of minutes. We demonstrate that perfect crossword grids can be found faster than previously estimated if tries are used as supporting data structures together with parallelisation. Still, if the size of the dictionary is increased substantially (e.g., by considering a set of dictionaries for different languages rather than just one), or if the problem is generalised to 3D or higher-dimensional grids, it could still be investigated for possible use in cryptography.
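For readers unfamiliar with the data structure, the following is a minimal, hypothetical sketch of the trie-plus-prefix-pruning idea the abstract refers to; the paper's actual grid search and its parallel decomposition are far more elaborate. After each placed row, every column prefix is checked against the trie so that dead branches are cut early, and the candidate words for the first row can be partitioned across workers.

```python
# Minimal trie with prefix lookup, plus a column-feasibility check used to prune
# the grid search early (sketch only; a parallel version would split the
# candidate first rows across worker processes).
class Trie:
    def __init__(self):
        self.children, self.is_word = {}, False

    def insert(self, word):
        node = self
        for ch in word:
            node = node.children.setdefault(ch, Trie())
        node.is_word = True

    def has_prefix(self, prefix):
        node = self
        for ch in prefix:
            node = node.children.get(ch)
            if node is None:
                return False
        return True

def columns_still_feasible(placed_rows, trie, size):
    """True if every column prefix of the partially filled grid can still become a word."""
    return all(trie.has_prefix("".join(row[c] for row in placed_rows)) for c in range(size))

trie = Trie()
for w in ["cat", "car", "con", "act", "arc", "ton"]:        # toy dictionary
    trie.insert(w)
print(columns_still_feasible(["cat", "act"], trie, 3))       # False: column "tt" is a dead end
```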
7

Iman Fitri Ismail, Akmal Nizam Mohammed, Bambang Basuno, Siti Aisyah Alimuddin, and Mustafa Alas. "Evaluation of CFD Computing Performance on Multi-Core Processors for Flow Simulations." Journal of Advanced Research in Applied Sciences and Engineering Technology 28, no. 1 (September 11, 2022): 67–80. http://dx.doi.org/10.37934/araset.28.1.6780.

Annotation:
Previous parallel computing implementations for Computational Fluid Dynamics (CFD) focused extensively on Complex Instruction Set Computer (CISC) architectures. Parallel programming was incorporated into the previous generation of the Raspberry Pi Reduced Instruction Set Computer (RISC), but it yielded poor computing performance due to the processing power limits of the time. This research uses two Raspberry Pi 3 B+ boards, which offer increased processing capability compared to the previous generation, to tackle fluid flow problems using numerical analysis and CFD. Parallel computing elements such as Secure Shell (SSH) and the Message Passing Interface (MPI) protocol were implemented for Advanced RISC Machine (ARM) processors, and the parallel network was validated by a processor call attempt and a core execution test. Parallelisation of the processors enables the study of fluid flow and CFD problems, such as validation against the NACA 0012 airfoil and an additional case of the Laplace equation for computing a temperature distribution on the parallel system. The parallel system was validated against experimental NACA 0012 data, demonstrating that it can simulate the airfoil's physics. Each core was enabled and tested to determine the system's performance in parallelising the execution of various algorithms, such as a pi calculation. A comparison of the execution times for the NACA 0012 validation case yielded a parallelisation efficiency above 50%. The case studies confirmed successful parallelisation on the Raspberry Pi 3 B+ independent of external software and machines, making it a compact, self-sustaining demonstration cluster of parallel computers for CFD.
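As a flavour of the kind of MPI test described above, here is a minimal mpi4py sketch of the classic parallel pi estimate; the script name, host file, and rank count are illustrative assumptions, not taken from the paper.

```python
# pi_mpi.py - distribute a midpoint-rule estimate of pi across MPI ranks
# (illustrative sketch, not the authors' code).
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n = 10_000_000                                   # total number of intervals
h = 1.0 / n
local = h * sum(4.0 / (1.0 + ((i + 0.5) * h) ** 2) for i in range(rank, n, size))
pi = comm.reduce(local, op=MPI.SUM, root=0)      # gather the partial sums on rank 0
if rank == 0:
    print(f"pi ~= {pi:.10f} using {size} ranks")

# Example launch across two boards listed in a host file:
#   mpiexec -hostfile hosts -n 8 python pi_mpi.py
```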
8

Hammersley, Andrew. "Parallelisation of a 2-D Fast Fourier Transform Algorithm." International Journal of Modern Physics C 2, no. 1 (March 1991): 363–66. http://dx.doi.org/10.1142/s0129183191000494.

Annotation:
The calculation of two- and higher-dimensional Fast Fourier Transforms (FFTs) is of great importance in many areas of data analysis and computational physics. The two-dimensional FFT is implemented for a parallel network using a master-slave approach. In-place performance is good, but the use of this technique as an “accelerator” is limited by the communication time between the host and the network. The total time is reduced by performing the host-master communications in parallel with the master-slave communications. Results for the calculation of the two-dimensional FFT of real-valued datasets are presented.
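The row-column decomposition behind a parallel 2-D FFT can be sketched in a few lines; this shows only the generic idea, with a process pool standing in for the master-slave transputer network, and is not the paper's implementation.

```python
# Row-column 2-D FFT: independent 1-D FFTs over the rows (farmed out to workers),
# then 1-D FFTs over the columns of the result.  Sketch only.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def fft2_row_column(a, workers=4):
    with ProcessPoolExecutor(max_workers=workers) as pool:
        rows = np.array(list(pool.map(np.fft.fft, a)))      # row transforms, independent of each other
    return np.fft.fft(rows, axis=0)                          # column transforms

if __name__ == "__main__":
    a = np.random.rand(64, 64)                               # real-valued dataset
    print(np.allclose(fft2_row_column(a), np.fft.fft2(a)))   # True
```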
9

McAvaney, Christopher, and Andrzej Goscinski. "Automatic Parallelisation and Execution of Applications on Clusters." Journal of Interconnection Networks 2, no. 3 (September 2001): 331–43. http://dx.doi.org/10.1142/s0219265901000427.

Annotation:
Parallel execution is a very efficient means of processing vast amounts of data in a small amount of time. Creating parallel applications has never been easy and requires much knowledge of the task and of the execution environment used to run the parallel processes. The process of creating parallel applications can be made easier by using a compiler that automatically parallelises a supplied application. Executing the parallel application is also simplified when a well-designed execution environment is used, since such an environment transparently provides very powerful operations to the programmer. The aim of this research is to combine a parallelising compiler and an execution environment into a fully automated parallelisation and execution tool. The advantage of such a fully automated tool is that the user does not need to provide any additional input to gain the benefits of parallel execution. This report presents the tool and shows how it transparently supports programmers in creating parallel applications and in executing them.
10

Taygan, Ugur, and Adnan Ozsoy. "Performance analysis and GPU parallelisation of ECO object tracking algorithm." New Trends and Issues Proceedings on Advances in Pure and Applied Sciences, no. 12 (April 30, 2020): 109–18. http://dx.doi.org/10.18844/gjpaas.v0i12.4991.

Annotation:
The classification and tracking of objects have gained popularity in recent years due to the variety and importance of their application areas. Although object classification does not necessarily have to run in real time, object tracking is often intended to be carried out in real time. When an object tracking algorithm focuses mainly on robustness and accuracy, its speed may degrade significantly. Because such algorithms are highly parallelisable, the use of GPUs and other parallel programming tools in object tracking applications is increasing. In this paper, we run experiments on the Efficient Convolution Operators (ECO) object tracking algorithm in order to detect its time-consuming parts, which are the bottlenecks of the algorithm, and investigate the possibility of GPU parallelisation of these bottlenecks to improve the algorithm's speed. Finally, the candidate methods are implemented and parallelised using the Compute Unified Device Architecture (CUDA). Keywords: object tracking, parallel programming.

Dissertations on the topic "Parallelisation in time"

1

Didier, Keryan. "Contributions to the safe and efficient parallelisation of hard real-time systems." Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS485.

Annotation:
The implementation of hard real-time systems involves many steps that are traditionally manual. The growing complexity of such systems, and of the hardware platforms on which they execute, makes it increasingly difficult to ensure the correctness of those steps, in particular for the timing properties of the system on multi-core platforms. This leads to the need to automate the whole implementation process. In this thesis, we provide a method for the automatic parallel implementation of real-time systems. The method bridges the gap between real-time systems implementation and compilation by integrating parallelisation, scheduling, memory allocation, and code generation around a precise timing model and analysis that rely on strong hypotheses about the execution platform and the form of the generated code. The thesis also provides an implementation model for dataflow multithreaded software. Using the same formal ground as the first contribution, the dataflow synchronous formalisms, the model represents multithreaded implementations in a Lustre-like language extended with mapping annotations. This model allows formal reasoning about the correctness of all the mapping decisions used to build the implementation, and we propose an approach toward proving the correctness of the implementation's functionality with respect to the functional specifications.
2

Bolis, Alessandro. "Fourier spectral/hp element method: investigation of time-stepping and parallelisation strategies." Thesis, Imperial College London, 2013. http://hdl.handle.net/10044/1/25140.

Annotation:
As computer hardware has evolved, the time required to perform numerical simulations has reduced, allowing investigations of a wide range of new problems. This thesis focuses on algorithm optimization, to minimize run-time, when solving the incompressible Navier-Stokes equations. Aspects affecting performance related to the discretisation and algorithm parallelization are investigated in the context of high-order methods. The roles played by numerical approximations and computational strategies are highlighted and it is recognized that a versatile implementation provides additional benefits, allowing an ad-hoc selection of techniques to fit the needs of heterogeneous computing environments. We initially describe the building blocks of a spectral/hp element and pure spectral method and how they can be encapsulated and combined to create a 3D discretisation, the Fourier spectral/hp element method. Time-stepping strategies are also described and encapsulated in a flexible framework based on the General Linear Method. After implementing and validating an incompressible Navier-Stokes solver, two canonical turbulent flows are analyzed. Afterward a 2D hyperbolic equation is considered to investigate the efficiency of low- and high-order methods when discretising the spatial and temporal derivatives. We perform parametric studies, monitoring accuracy and CPU-time for different numerical approximations. We identify optimal discretisations, demonstrating that high-order methods are the computationally fastest approach to attain a desired accuracy for this problem. Following the same philosophy, we investigate the benefits of using a hybrid parallel implementation. The message passing model is introduced to parallelize different kernels of an incompressible Navier-Stokes solver. Monitoring the parallel performance of these strategies the most efficient approach is highlighted. We also demonstrate that hybrid parallel solutions can be used to significantly extend the strong scalability limit and support greater parallelism.
3

Hassine, Khaled. "Contribution à l’élaboration d’une approche de décomposition des traitements itératifs sur des architectures MIMD : application aux traitements de séquences d’images sur réseau de transputers." Valenciennes, 1994. https://ged.uphf.fr/nuxeo/site/esupversions/6059845e-087a-474f-94a6-09cb96fbf246.

Annotation:
The aim of this thesis is to study parallelisation models that meet the computing-power needs of two tools, one for measuring gaze direction and one for gesture analysis. Because the input consists of image sequences, severe timing constraints are introduced. An algorithmic study of image-sequence processing and of our applications identified how they can be adapted to parallelism. Parallel design is then addressed, and different parallelisation techniques are described. The SPMD (same program, multiple data) model, the best suited to our processing, is detailed. This model does not explicitly take into account the overheads inherent in parallelism. To remedy these shortcomings, a decomposition approach based on the SPMD model is proposed. The approach restructures the parallelism inherent in an application while optimising the various overheads due to parallelism. Transparency with respect to the target machine is ensured, and any timing constraints associated with the applications are taken into account. The approach is applied to the processing required by the two applications on a network of transputers. The experimental results are analysed with respect to the imposed timing constraints, and ideas for extensions concerning both application development and the parallelisation approach are proposed.
4

Marcin, Vladimír. "GPU-akcelerovná syntéza pravděpodobnostních programů." Master's thesis, Vysoké učení technické v Brně, Fakulta informačních technologií, 2021. http://www.nusl.cz/ntk/nusl-445566.

Annotation:
In this thesis we address the problem of automated synthesis of probabilistic programs: given a finite family of candidate programs, we want to efficiently identify a program that satisfies a given specification. Even the simplest synthesis problems are NP-hard in practice. Progress in this area has come from the tool Paynt, which uses a novel integrated method for the synthesis of probabilistic programs. Although this approach can deal efficiently with the exponential growth of the families of candidate solutions, a problem remains that is caused by the exponential growth of the individual members of these families. To address this problem as well, we implemented GPU-oriented algorithms for verifying the candidate programs (models) that parallelise this task at the state level of the probabilistic models. Under certain conditions, the overall speed-up achieved by this approach comes close to the theoretical limit of the possible acceleration of the synthesis process.
5

Uhl, Claude. "Architecture de machine pour la simulation d’objets physiques en temps réel." Grenoble INPG, 1996. http://www.theses.fr/1996INPG0107.

Annotation:
The simulation algorithms at the heart of the CORDIS-ANIMA physical object modeller and simulator demand particular performance levels and a particular architecture from the machine that executes them. Designing and building the hardware conditions for such simulation is a major line of work at ACROE. The goal of this work was to develop an efficient machine architecture enabling multisensory, real-time interaction between an operator and an object simulated by a physical model. The constraints associated with this type of simulation are multiple: simulation frequencies vary from 1 kHz to 44 kHz depending on the nature of the modelled object; the algorithms must execute synchronously and in constant time; and good rendering of the simulated objects requires computing power of 500 MFLOPS to 1 GFLOPS. Moreover, given the high degree of interconnection between modules in most CORDIS-ANIMA physical objects, only a parallel architecture in which each node provides very high computing power can address real-time physical simulation. After studying various processor options, we chose a compromise combining performance, portability, ease of programming, and cost: the MIPS R8000 processor. The compute processors are connected to the output transducers (screen, force-feedback gestural keyboard, and loudspeaker) by dedicated boards for gesture, sound, and image. Real-time synchronisation between the different elements of the implementation is achieved by a single external clock. This architecture made it possible to develop several example simulations that demonstrate, at 1 kHz, the great richness of the simulated models.
6

Guibert, David. "Analyse de méthodes de résolution parallèles d’EDO/EDA raides." Thesis, Lyon 1, 2009. http://www.theses.fr/2009LYO10138/document.

Annotation:
This PhD thesis deals with the development of parallel numerical methods for solving stiff ordinary and algebraic differential equations (ODEs and DAEs), which commonly arise when modelling complex dynamical phenomena. We first show that parallelisation across the method is limited by the number of stages of the Runge-Kutta or DIMSIM method. We then introduce the Schur complement into the linearised systems arising inside the time integrators: an automatic framework builds a mask defining the relationships between the variables, and the Schur complement is coupled with Jacobian-free Newton-Krylov methods. As for decomposition in time, global resolution of the time steps can be handled by parallel nonlinear solvers (such as fixed point, Newton, and Steffensen acceleration). Two-level shooting methods in time (Parareal, Pita, ...) are developed with a new definition of their grids in order to solve stiff problems. Global error estimates, especially Richardson extrapolation, are used to compute a good approximation on the second grid. Finally, we propose a parallel deferred correction method.
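Since Parareal is one of the central methods here, a compact serial sketch of the iteration may help: explicit Euler serves as both coarse and fine propagator and the scalar test problem is chosen purely for illustration; in practice the fine solves in the first list comprehension are the part distributed across processors.

```python
# Minimal serial Parareal sketch (illustrative only).
import numpy as np

def parareal(f, y0, t0, t1, n_sub, coarse_steps, fine_steps, iters):
    """Parareal for y' = f(t, y); explicit Euler as coarse (G) and fine (F) propagator."""
    def euler(y, ta, tb, n):
        h = (tb - ta) / n
        for k in range(n):
            y = y + h * f(ta + k * h, y)
        return y

    T = np.linspace(t0, t1, n_sub + 1)
    U = [np.asarray(y0, dtype=float)]
    for i in range(n_sub):                                   # initial guess: one coarse sweep
        U.append(euler(U[-1], T[i], T[i + 1], coarse_steps))

    for _ in range(iters):
        F = [euler(U[i], T[i], T[i + 1], fine_steps) for i in range(n_sub)]     # parallel in practice
        G_old = [euler(U[i], T[i], T[i + 1], coarse_steps) for i in range(n_sub)]
        U_new = [U[0]]
        for i in range(n_sub):                               # sequential coarse correction
            G_new = euler(U_new[i], T[i], T[i + 1], coarse_steps)
            U_new.append(G_new + F[i] - G_old[i])
        U = U_new
    return T, np.array(U)

T, U = parareal(lambda t, y: -y, [1.0], 0.0, 5.0, n_sub=10,
                coarse_steps=1, fine_steps=100, iters=3)
print(np.max(np.abs(U[:, 0] - np.exp(-T))))                  # error vs. the exact solution exp(-t)
```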
7

Chao, Daphne Yu Fen. "MDRIP: a hybrid approach to parallelisation of discrete event simulation: a thesis submitted in partial fulfilment of the requirements for the degree of Master of Science in the University of Canterbury." 2006. http://library.canterbury.ac.nz/etd/adt-NZCU20060331.170722.


Book chapters on the topic "Parallelisation in time"

1

Donaldson, Alastair F., Paul Keir, and Anton Lokhmotov. "Compile-Time and Run-Time Issues in an Auto-Parallelisation System for the Cell BE Processor." In Euro-Par 2008 Workshops - Parallel Processing, 163–73. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-00955-6_21.

2

Perrin, Dimitri, Heather J. Ruskin, and Martin Crane. "In Silico Biology." In Biocomputation and Biomedical Informatics, 55–74. IGI Global, 2010. http://dx.doi.org/10.4018/978-1-60566-768-3.ch003.

Annotation:
Biological systems are typically complex and adaptive, involving large numbers of entities, or organisms, and many-layered interactions between these. System behaviour evolves over time, and typically benefits from previous experience by retaining memory of previous events. Given the dynamic nature of these phenomena, it is non-trivial to provide a comprehensive description of complex adaptive systems and, in particular, to define the importance and contribution of low-level unsupervised interactions to the overall evolution process. In this chapter, the authors focus on the application of the agent-based paradigm in the context of the immune response to HIV. Explicit implementation of lymph nodes and the associated lymph network, including lymphatic chain structure, is a key objective, and requires parallelisation of the model. Steps taken towards an optimal communication strategy are detailed.

Conference papers on the topic "Parallelisation in time"

1

Medeiros, Bruno, and Joao L. Sobral. "Checkpoint and Run-Time Adaptation with Pluggable Parallelisation." In 2011 International Conference on Parallel Processing (ICPP). IEEE, 2011. http://dx.doi.org/10.1109/icpp.2011.83.

2

Eele, Alison, Jan Maciejowski, Thomas Chau, and Wayne Luk. "Parallelisation of Sequential Monte Carlo for real-time control in air traffic management." In 2013 IEEE 52nd Annual Conference on Decision and Control (CDC). IEEE, 2013. http://dx.doi.org/10.1109/cdc.2013.6760651.

3

Blokzyl, S., M. Nagler, R. Schmidt, and W. Hardt. "10.2 - PARIS - Parallelisation Architecture for Real-time Image Data Exploitation and Sensor Data Fusion." In ettc2018 - European Test and Telemetry Conference. AMA Service GmbH, Von-Münchhausen-Str. 49, 31515 Wunstorf, Germany, 2018. http://dx.doi.org/10.5162/ettc2018/10.2.

4

Wang, Leran, David J. J. Toal, Andy J. Keane, and Felix Stanley. "An Accelerated Medial Object Transformation for Whole Engine Optimisation." In ASME Turbo Expo 2014: Turbine Technical Conference and Exposition. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/gt2014-26014.

Annotation:
The following paper proposes an accelerated medial object transformation for the tip clearance optimisation of whole engine assemblies. A considerable reduction in medial object generation time has been achieved through two different mechanisms. Faces leading to unnecessary branches in the medial mesh are removed from the model and parallelisation of the medial object generation is improved through the subdivision of the original 3D CAD model. The time savings offered by these schemes are presented with respect to the generation of the medial objects of two complex gas turbine engine components. It is also demonstrated that the utilization of these techniques within a design optimisation may result in a considerable reduction in wall time.
5

Kipouros, Timoleon, Massimiliano Molinari, William N. Dawes, Geoffrey T. Parks, Mark Savill, and Karl W. Jenkins. "An Investigation of the Potential for Enhancing the Computational Turbomachinery Design Cycle Using Surrogate Models and High Performance Parallelisation." In ASME Turbo Expo 2007: Power for Land, Sea, and Air. ASMEDC, 2007. http://dx.doi.org/10.1115/gt2007-28106.

Annotation:
This paper describes the deployment of supplementary tools to the modern industrial aerodynamic design cycle in the context of multi-objective optimisation. The benefits arising through the use of these tools are demonstrated through a single-row stator compressor test case, minimising two of the flow characteristics most critical to the efficiency of the turbomachine: blockage and entropy generation rate. The automatic integrated design system used, MOBOS3D, is equipped with an application-specific optimiser and a RBF surrogate model, in order to decrease the computational cost of the design evaluation process. The metamodel is utilised inside the system and is constantly trained in real time from the current database of the simulated test cases. In addition, a new parallelisation strategy is implemented, which exploits the possibility of executing simulations on a cluster of clusters, reducing significantly the wall-clock time required for the design optimisation process of real-world applications.
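To make the surrogate-model idea concrete, here is a minimal Gaussian-RBF interpolator applied to a toy objective; the actual kernel, training loop, and optimiser coupling inside MOBOS3D are not described in the abstract, so everything below is an illustrative assumption.

```python
# Minimal Gaussian-RBF surrogate: fit to a handful of evaluated designs, then use
# cheap predictions to screen candidates before running the expensive CFD
# evaluation (illustrative sketch only).
import numpy as np

class RBFSurrogate:
    def __init__(self, eps=1.0):
        self.eps = eps

    def fit(self, X, y):
        self.X = X
        r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)   # pairwise distances
        self.w = np.linalg.solve(np.exp(-(self.eps * r) ** 2), y)    # interpolation weights
        return self

    def predict(self, Xq):
        r = np.linalg.norm(Xq[:, None, :] - self.X[None, :, :], axis=-1)
        return np.exp(-(self.eps * r) ** 2) @ self.w

X = np.random.rand(30, 2)                        # hypothetical 2-D design variables
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2           # stand-in for an expensive objective
model = RBFSurrogate(eps=2.0).fit(X, y)
print(model.predict(np.array([[0.5, 0.5]])))     # cheap prediction at a new design point
```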
6

Masat, Alessandro, Camilla Colombo, and Arnaud Boutonnet. "GPU-based Augmented Trajectory Propagation: orbital regularization interface and NVIDIA CUDA Tensor Core performance." In ESA 12th International Conference on Guidance Navigation and Control and 9th International Conference on Astrodynamics Tools and Techniques. ESA, 2023. http://dx.doi.org/10.5270/esa-gnc-icatt-2023-199.

Annotation:
Several analysis tasks feature the propagation of large sets of trajectories, ranging from monitoring the space debris environment to assessing the compliance of space missions with planetary protection policies. The increasingly stringent accuracy requirements inevitably make high-fidelity analyses become more computationally intensive, easily reaching the need of propagating hundreds of thousands of trajectories for one single task. For this reason, high-performance computing (HPC) and GPU (Graphics Processing Unit) computing techniques become one of the enabling technologies that allow the execution of this kind of analyses. The latter, given its accessible cost and hardware implementation, has increasingly been adopted in the past decade: more modern and powerful graphic cards are launched in the market every year, and new GPU-dedicated algorithms are continuously built and adopted: for reference, GPUs are the technology which the training of the most known artificial intelligence models is made upon. Reference tools for the astrodynamics community are represented by SNAPPshot [1] and CUDAjectory [2]: they both aim at achieving efficient propagations, the former being a CPU-based software suited for planetary protection analyses, the latter being a high-fidelity and efficiency GPU ballistic propagator. Both software work on a traditional step-based logic, that takes initial states and studies their step-by-step evolution in time. The proposed work builds on previously obtained results [3], proposing an alternative algorithm logic specifically designed for HPC and GPU computing, in order to extract all the possible performance from these computational architectures. In contrast to traditional, step-based, numerical schemes, the Picard-Chebyshev (PC) method starts the integration process from samples of a trajectory guess, which are iteratively updated until the supplied dynamical model is matched. The core of this numerical scheme is, other than the evaluation of the dynamics function, a sequence of matrix multiplications: this feature makes the method, in principle, highly suitable for parallel and GPU computing. However, the limited number of trajectory nodes required to reach high accuracy levels (100-200) hinders the parallel efficiency of the algorithm. In other words, the parallel overhead outweighs the possible acceleration, for systems this small. In [3], an augmented version of the basic Picard-Chebyshev simulation scheme for the propagation of large sets of trajectories is proposed. Instead of integrating, either sequentially or in parallel, each trajectory individually, an augmented dynamical system collecting all the samples is built and fed to the PC scheme. This approach outperforms the individual simulations in any parallelisation case, and its GPU implementation is observed to run faster already on low-end graphics cards, compared to a 40-core CPU cluster. This work introduces and implements the latest updated version of the PC scheme, which features iteration error feedback and second-order dynamics adaptability for improved iteration efficiency [4]. These adaptations contribute to reduce the computational time by a factor four, because of the reduced number of iterations required to converge. In addition, the algorithm is adapted to the newest generation NVIDIA graphics card, also exploiting the novel Tensor Core architecture for double precision computation, building an updated GPU software that overall is 50-100 times faster than its original version. 
Finally, an interface for the proposed scheme for regularised formulations (e.g., [5]) is proposed, aiming at improving the software robustness in tackling near-singular and sets of divergent trajectories. Performance and accuracy comparisons, in terms of number of trajectory samples required by the PC scheme, against the standard Cartesian propagation case are presented. Regularized formulations require a lower amount of trajectory samples to reach a given relative error threshold, compared to the Cartesian case, resulting in turn to a notable decrease in computational runtime. These improved software capabilities are tested in several critical case scenarios, proposing a complete analysis of close encounters, encompassing deep, shallow, and impacting flybys, in the Circular Restricted Three Body problem. Here, a further advantage of regularized formulations comes into play: the impact singularity featuring the gravitational model is removed by construction, making it feasible to treat impacting trajectories and shallow encounters in a single common augmented propagation. [1]Colombo C., Letizia F., Van Der Eynde J., “SNAPPshot ESA planetary protection compliance verification software Final report V1.0, Technical Report ESA-IPL-POM-MB-LE-2015- 315,” University of Southampton, Tech. Rep., 2016 [2]Geda M., Noomen R., Renk F., “Massive Parallelization of Trajectory Propagations using GPUs”, 2019, Master’s thesis, Delft University of Technology, http://resolver.tudelft.nl/uuid:1db3f2d1-c2bb-4188-bd1e-dac67bfd9dab [3]Masat A., Colombo C., Boutonnet A., “GPU-based high-precision orbital propagation of large sets of initial conditions through Picard-Chebyshev augmentation”, 2023, Acta Astronautica, https://doi.org/10.1016/j.actaastro.2022.12.037 [4]Woollands R., Junkins J. L., “Nonlinear differential equation solvers via adaptive Picard-Chebyhsev iteration: application in astrodynamics”, 2019, Journal of Guidance, Control, and Dynamics, https://doi.org/10.2514/1.G003318 [5]Masat A., Colombo C., “Kustaanheimo-Stiefel variables for planetary protection compliance analysis”, 2022, Journal of Guidance, Control, and Dynamics, https://doi.org/10.2514/1.G006255
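A toy fixed-grid Picard iteration may clarify the "iterate on the whole trajectory" idea that distinguishes this family of schemes from step-based integrators; the sketch below uses a cumulative trapezoid rule instead of the actual Chebyshev integration matrices and ignores the augmentation, regularisation, and GPU batching discussed in the abstract.

```python
# Toy Picard iteration on a fixed time grid: the whole trajectory is updated at
# once, and the pointwise dynamics evaluations are the naturally parallel part.
# (Trapezoid quadrature stands in for the Chebyshev machinery; sketch only.)
import numpy as np

def picard(f, y0, t, iters):
    y0 = np.asarray(y0, dtype=float)
    y = np.tile(y0, (len(t), 1))                                   # initial guess: constant trajectory
    for _ in range(iters):
        g = np.array([f(tk, yk) for tk, yk in zip(t, y)])          # pointwise, batchable on a GPU
        steps = 0.5 * (g[1:] + g[:-1]) * np.diff(t)[:, None]       # trapezoid increments
        integral = np.vstack([np.zeros_like(y0), np.cumsum(steps, axis=0)])
        y = y0 + integral                                          # y_{m+1}(t) = y0 + int_0^t f(s, y_m(s)) ds
    return y

t = np.linspace(0.0, 2.0, 201)
y = picard(lambda tk, yk: -yk, [1.0], t, iters=25)
print(np.max(np.abs(y[:, 0] - np.exp(-t))))                        # small: iterates approach exp(-t)
```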