To see other types of publications on this topic, follow the link: HTC computation.

Dissertations on the topic "HTC computation"



Consult the top 50 dissertations for your research on the topic "HTC computation".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in PDF format and read its online abstract whenever the corresponding details are available in the metadata.

Browse dissertations on a wide variety of disciplines and organise your bibliography correctly.

1

Farhat, Ahmad. "Trust computation in ad-hoc networks." FIU Digital Commons, 2005. http://digitalcommons.fiu.edu/etd/3251.

Full text of the source
Abstract:
With the present need for on-the-move networking, innovative technologies strive to establish a technological basis for managing secure and reliable systems in a highly interconnected, information-enabled world, and to avoid reliance on a fixed networking infrastructure; hence the implementation of ad-hoc networks. There are numerous applications where ad-hoc networks are deployed, including military, tele-health, and mobile education. As such, the need for security is imperative. Not much research work has been invested in the area of trust in ad hoc networks, which proves to be a challenging subject given the characteristics of these types of networks. The objective of this thesis was to develop a model for trust computation between the nodes of the network. Eventually, the confidence level for each node was quantified, which led to better consistency among the nodes. Therefore, communication was trustworthy, and malicious nodes were punished and secluded from the network.
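To make the per-node trust idea concrete, here is a minimal sketch assuming a simple beta-reputation update; the class, prior counts, and exclusion threshold are illustrative and are not the thesis's model.

```python
# Minimal sketch (assumed beta-reputation scheme, not the thesis's model):
# each node accumulates evidence about a neighbour and derives a
# confidence level; low-confidence nodes are secluded from the network.

class NodeTrust:
    def __init__(self):
        self.good = 1.0   # prior pseudo-count of cooperative interactions
        self.bad = 1.0    # prior pseudo-count of malicious interactions

    def record(self, cooperative: bool):
        if cooperative:
            self.good += 1.0
        else:
            self.bad += 1.0

    def confidence(self) -> float:
        # Expected probability that the node behaves cooperatively.
        return self.good / (self.good + self.bad)

def seclude_malicious(nodes: dict, threshold: float = 0.3) -> set:
    # Mirrors the "punish and seclude" step: nodes whose confidence
    # falls below the threshold are excluded from routing.
    return {nid for nid, t in nodes.items() if t.confidence() < threshold}
```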
APA, Harvard, Vancouver, ISO, and other styles
2

McKenzie, Simon Clayton. "Efficient computation of integrals in modern correlated methods." Thesis, University of Sydney, 2020. https://hdl.handle.net/2123/23993.

Full text of the source
Abstract:
This thesis improves computational efficiency in two domains of quantum chemistry. Firstly, we improve the efficiency of computing atomic orbital (AO) integrals. We efficiently compute effective core potential integrals, relying on novel recursion relations and rigorous screening strategies. Inspired by PRISM, we create an adaptive algorithm to compute two-electron Gaussian geminal integrals, efficiently handling the contracted nature of both the contracted Gaussian-type orbital and the geminal. We implement an efficient non-robust density fitting (DF) algorithm for computing the three-electron energy term in the Unsöld-W12 functional using new integral and screening routines. Secondly, we develop low-computational-scaling and highly parallel algorithms for MP2 energies. These algorithms rely on spatial quadratures of the electronic coordinates. We begin with a Localised Molecular Orbital formalism. This algorithm computes the opposite-spin (OS) MP2 energy and scales formally as O(N^6) but, with screening strategies, asymptotically as O(N^2). Unfortunately, the screened quantities reach their asymptotic scaling too slowly. Instead, we adopt a more local AO formalism. This algorithm demonstrates an almost ideal parallel speedup on more than 800 cores and competitive timings against DF-MP2-OS. In our improved AO algorithm, we develop rigorous screening strategies for eliminating insignificant AOs, extend the method to computing the same-spin MP2 energy, remove the prior sparse-memory-access bottleneck, and implement a hybrid parallelisation strategy. We demonstrate a 51% parallel efficiency on 4644 cores, and competitive timings and accuracy compared to DF-MP2. Finally, we extend this methodology to compute the MP2-F12(3*A) correction. We present a novel scaled Coulomb-like term approximation and develop efficient quadrature methods and screening strategies. Our scaled approximation and algorithm achieve chemical accuracy across a range of test sets.
APA, Harvard, Vancouver, ISO, and other styles
3

Sen, Sevil. "Evolutionary computation techniques for intrusion detection in mobile ad hoc networks." Thesis, University of York, 2010. http://etheses.whiterose.ac.uk/998/.

Full text of the source
Abstract:
Mobile ad hoc networks (MANETs) are one of the fastest growing areas of research. By providing communications in the absence of a fixed infrastructure, MANETs are an attractive technology for many applications. However, this flexibility introduces new security threats. Furthermore, the traditional way of protecting networks is not directly applicable to MANETs. Many conventional security solutions are ineffective and inefficient for the highly dynamic and resource-constrained environments where MANET use might be expected. Since prevention techniques are never enough, intrusion detection systems (IDSs), which monitor system activities and detect intrusions, are generally used to complement other security mechanisms. How to detect intrusions effectively and efficiently in this highly dynamic, distributed, and resource-constrained environment is a challenging research problem. In the presence of these complicating factors, humans are not particularly adept at making good design choices. That is why we propose to use techniques from artificial intelligence to help with this task. We investigate the use of evolutionary computation techniques for synthesising intrusion detection programs on MANETs. We evolve programs to detect the following attacks against MANETs: ad hoc flooding, route disruption, and dropping attacks. The performance of the evolved programs is evaluated on simulated networks. The results are also compared with hand-coded programs. A good IDS on MANETs should also consider the resource constraints of the MANET environment. Power is one of the critical resources. Therefore, we apply multi-objective optimization (MOO) techniques to discover trade-offs between the intrusion detection ability and energy consumption of programs, and to optimise these objectives simultaneously. We also investigate a suitable IDS architecture for MANETs in this thesis. Different programs are evolved for two architectures: local detection and cooperative detection in a neighbourhood. Optimal trade-offs between the intrusion detection ability and resource usage (energy, bandwidth) of evolved programs are also discovered using MOO techniques.
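As an illustration of the multi-objective step, the sketch below keeps only the non-dominated trade-offs between detection ability and energy use; the numbers and helper names are hypothetical, not results from the thesis.

```python
# Illustrative sketch (not the thesis's code): comparing evolved intrusion
# detection programs on the two objectives mentioned in the abstract,
# detection ability (maximise) and energy consumption (minimise).

def dominates(a, b):
    # a, b are (detection_rate, energy_cost) tuples.
    return (a[0] >= b[0] and a[1] <= b[1]) and (a[0] > b[0] or a[1] < b[1])

def pareto_front(programs):
    # Keep the non-dominated trade-offs, as a multi-objective
    # optimiser would.
    return [p for p in programs
            if not any(dominates(q, p) for q in programs if q is not p)]

candidates = [(0.95, 8.0), (0.90, 3.0), (0.80, 2.5), (0.70, 4.0)]
print(pareto_front(candidates))  # (0.70, 4.0) is dominated and dropped
```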
APA, Harvard, Vancouver, ISO, and other styles
4

Natale, Irene. "A study on Friction Boundary Conditions with Unicorn/FEniCS-HPC." Thesis, KTH, Numerisk analys, NA, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-241920.

Full text of the source
Abstract:
The aim of this thesis is to present and validate a boundary condition formulation for CFD problems that includes a friction parameter. In the first part of the thesis, the incompressible Navier-Stokes system of equations and the friction boundary conditions are presented. Then the Finite Element methodology that is used to discretize the problem is given, with particular emphasis on the a-posteriori error estimate, the adaptive algorithm, and the numerical tripping included in the flow. Moreover, since this thesis builds on the FEniCS-HPC software, its framework is explained, together with its powerful parallelization strategy. Then the weak formulation of the Navier-Stokes system of equations coupled with friction boundary conditions is presented, together with an initial theoretical derivation of the optimal values of the friction coefficient. Furthermore, in the last chapter, the preliminary results of a validation study for the lift coefficient of the NACA0012 airfoil benchmark model are included and commented on in detail. Although some aspects remain to be elucidated, we believe that our preliminary results are very promising and that they open a new pathway for simulation development in aerodynamics-related models.
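For reference, a friction-type (Navier slip) wall condition of the kind the abstract describes is commonly written as below, with friction parameter β; this is the standard generic form, not necessarily the thesis's exact formulation.

```latex
% Friction (Navier slip) boundary condition on the wall boundary \Gamma:
% no penetration, plus tangential stress proportional to the slip velocity
% (n = outward normal, u_\tau = tangential velocity, \nu = viscosity,
%  \beta = friction parameter).
u \cdot n = 0 ,
\qquad
\nu \, \big( (\nabla u) \, n \big)_{\tau} = - \beta \, u_{\tau}
\quad \text{on } \Gamma .
% In the weak formulation this adds the boundary term
%   \int_{\Gamma} \beta \, u_{\tau} \cdot v_{\tau} \, \mathrm{d}s
% to the momentum equation, where v is the test function.
```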
APA, Harvard, Vancouver, ISO, and other styles
5

Krishnamurthy, Siddhartha. "Peak Sidelobe Level Distribution Computation for Ad Hoc Arrays using Extreme Value Theory." Thesis, Harvard University, 2014. http://dissertations.umi.com/gsas.harvard:11300.

Full text of the source
Abstract:
Extreme Value Theory (EVT) is used to analyze the peak sidelobe level distribution for array element positions with arbitrary probability distributions. Computations are discussed in the context of linear antenna arrays using electromagnetic energy. The results also apply to planar arrays of random elements that can be transformed into linear arrays.
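An illustrative Monte Carlo sketch of this kind of analysis (assumed approach, not the dissertation's code): draw random element positions, compute the array factor, record the peak sidelobe level, and fit a Gumbel distribution, the EVT limit law for maxima.

```python
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(0)
n_elements, n_trials = 64, 500
u = np.linspace(-1.0, 1.0, 2048)          # sine-space angle grid
psl_db = np.empty(n_trials)

for t in range(n_trials):
    x = rng.uniform(0.0, n_elements / 2.0, n_elements)  # positions in wavelengths
    af = np.abs(np.exp(2j * np.pi * np.outer(u, x)).sum(axis=1)) / n_elements
    sidelobes = af[np.abs(u) > 4.0 / n_elements]        # crude mainlobe exclusion
    psl_db[t] = 20.0 * np.log10(sidelobes.max())        # peak sidelobe level, dB

loc, scale = gumbel_r.fit(psl_db)  # EVT: sample maxima are approximately Gumbel
print(f"Gumbel fit of PSL: loc = {loc:.2f} dB, scale = {scale:.2f} dB")
```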
Engineering and Applied Sciences
APA, Harvard, Vancouver, ISO, and other styles
6

Lehmann, Rüdiger. "A universal and robust computation procedure for geometric observations." Hochschule für Technik und Wirtschaft, 2017. https://htw-dresden.qucosa.de/id/qucosa%3A31843.

Full text of the source
Abstract:
This contribution describes an automatic and robust method which can be applied to all classical geodetic computation problems. Starting from given input quantities (e.g. coordinates of known points, observations), computation opportunities for all other relevant quantities are found. For redundant input quantities there exists a multitude of different computation opportunities from different minimal subsets of input quantities; these are all found automatically, and their results are computed and compared. If the computation is non-unique, but only a finite number of solutions exist, then all solutions are found and computed. By comparing the different computation results we may detect outliers in the input quantities and produce a robust final result. The method is not stochastic, so no stochastic model of the observations is required. The description of the algorithm is illustrated with a practical case. It is implemented on a webserver and is freely available via the internet.
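A minimal sketch of the subset-comparison idea, under the assumption that the robust final value is taken as the median of all minimal-subset results; the helper names and tolerance rule are illustrative.

```python
# Illustrative sketch of the paper's idea (assumed, simplified): compute the
# target quantity from every minimal subset of redundant inputs, compare the
# results, and take a robust final value; outliers show up as deviating results.
from itertools import combinations
from statistics import median

def robust_result(inputs, k, compute):
    # inputs: redundant input quantities; k: size of a minimal subset;
    # compute: function mapping one minimal subset to the target quantity.
    results = [compute(subset) for subset in combinations(inputs, k)]
    final = median(results)                        # robust combination
    tol = 3 * median(abs(r - final) for r in results) or 1e-9
    suspects = [r for r in results if abs(r - final) > tol]
    return final, suspects

# Toy example: a quantity derived from any 2 of 5 measurements (one outlier).
measurements = [10.01, 10.02, 9.99, 10.00, 12.5]
print(robust_result(measurements, 2, lambda s: sum(s) / 2))
```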
APA, Harvard, Vancouver, ISO, and other styles
7

Soundarapandian, Manikandan. "Relational Computing Using HPC Resources: Services and Optimizations." Thesis, Virginia Tech, 2015. http://hdl.handle.net/10919/56586.

Full text of the source
Abstract:
Computational epidemiology involves processing, analysing, and managing large volumes of data. Such massive datasets cannot be handled efficiently by traditional standalone database management systems, owing to their limited computational efficiency and bandwidth when scaling to large volumes of data. In this thesis, we address the management and processing of large volumes of data for modeling, simulation, and analysis in epidemiological studies. Traditionally, compute-intensive tasks are processed using high performance computing resources and supercomputers, whereas data-intensive tasks are delegated to standalone databases and some custom programs. The DiceX framework is a one-stop solution for distributed database management and processing, and its main mission is to leverage supercomputing resources for data-intensive computing, in particular relational data processing. While standalone databases are always on and a user can submit queries at any time, supercomputing resources must be acquired and are available only for a limited time period. These resources are relinquished either upon completion of execution or at the expiration of the allocated time period. This reservation-based usage style poses critical challenges, including building and launching a distributed data engine onto the supercomputer, saving the engine and resuming from the saved image, devising efficient optimization upgrades to the data engine, and enabling other applications to seamlessly access the engine. These challenges and requirements cause us to align our approach more closely with the cloud computing paradigms of Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). In this thesis, we propose cloud-computing-like workflows that use supercomputing resources to manage and process relational data-intensive tasks. We propose and implement several services, including database freeze, migrate, and resume; ad-hoc resource addition; and table redistribution. These services assist in carrying out the workflows defined. We also propose an optimization upgrade to the query planning module of postgres-XC, the core relational data processing engine of the DiceX framework. With a knowledge of domain semantics, we have devised a more robust data distribution strategy that forcefully pushes down the most time-consuming SQL operations to the postgres-XC data nodes, bypassing the query planner's default shippability criteria without compromising correctness. Forcing query push-down reduces the query processing time by almost 40%-60% for certain complex spatio-temporal queries on our epidemiology datasets. As part of this work, a generic broker service has also been implemented, which acts as an interface to the DiceX framework by exposing RESTful APIs, which applications can use to query and retrieve results irrespective of the programming language or environment.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
8

Bramsäter, Jenny, and Kajsa Lundgren. "Study on the Dynamic Control of Dam Operating Water Levels of Yayangshan Dam in Flood Season." Thesis, KTH, Industriell ekologi, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-174877.

Full text of the source
Abstract:
Water levels up- and downstream of a dam are strongly affected by the water level in the reservoir as well as by the discharge of the dam. To ensure that no harm comes to buildings, bridges, or agricultural land, it is important to ensure that the water level in the reservoir is adjusted to handle large floods. This report studies within what range the water level in the reservoir of the Yayangshan dam, located on the Lixian River, can vary without causing any flooding downstream of the dam or at the Old and New Babian Bridges located upstream of the dam. By calculating the design flood and performing flood routing and backwater computations, initial water-level ranges in the reservoir have been set for the pre-flood, main flood, and latter flood seasons so that damages are avoided. Because of the long distance between the dam site and the bridges, backwater effects had no influence on the limitations of the initial water level in the reservoir.
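A toy sketch of the flood-routing step, assuming simple level-pool routing with the storage equation dS/dt = I - O; the rating curve and hydrograph below are made up for illustration and are not the thesis's data.

```python
# Illustrative level-pool reservoir routing (assumed method, not the
# thesis's code): step the mass balance dS/dt = I(t) - O(S) explicitly
# to track reservoir storage and dam discharge during a flood.
import numpy as np

def route_flood(inflow, dt, storage0, outflow_of_storage):
    # inflow: inflow rates [m^3/s]; dt: time step [s];
    # outflow_of_storage: rating function O(S) for the dam's outlets.
    storage = storage0
    out = np.empty_like(inflow)
    for i, q_in in enumerate(inflow):
        q_out = outflow_of_storage(storage)
        storage += (q_in - q_out) * dt        # mass balance
        out[i] = q_out
    return out

# Hypothetical rating curve and triangular design-flood hydrograph.
rating = lambda s: 0.0002 * max(s - 1e6, 0.0)
t = np.arange(0, 48 * 3600, 3600.0)
inflow = np.interp(t, [0, 12 * 3600, 48 * 3600], [100, 2000, 100])
print(route_flood(inflow, 3600.0, 1e6, rating).max())  # peak discharge
```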
APA, Harvard, Vancouver, ISO, and other styles
9

Jamasebi, Reza. "COMPUTATIONAL PHENOTYPE DERIVED FROM PHYSIOLOGICAL TIME SERIES: APPLICATION TO SLEEP DATA ANALYSIS." Case Western Reserve University School of Graduate Studies / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=case1220467153.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
10

San, Juan Sebastián Pablo. "HPC algorithms for nonnegative decompositions." Doctoral thesis, Universitat Politècnica de València, 2018. http://hdl.handle.net/10251/113069.

Full text of the source
Abstract:
Many real-world problems can be modelled as mathematical problems with nonnegative magnitudes, and, therefore, the solutions of these problems are meaningful only if their values are nonnegative. Examples of these nonnegative magnitudes are the concentration of components in a chemical compound, frequencies in an audio signal, pixel intensities in an image, etc. Some of these problems can be modelled as an overdetermined system of linear equations. When the solution of this system of equations is constrained to nonnegative values, a new problem arises. This problem is called the Nonnegative Least Squares (NNLS) problem, and its solution has multiple applications in science and engineering, especially for solving optimization problems with nonnegative restrictions. Another important nonnegativity-constrained decomposition is the Nonnegative Matrix Factorization (NMF). The NMF is a very popular tool in many fields such as document clustering, data mining, machine learning, image analysis, chemical analysis, and audio source separation. This factorization tries to approximate a nonnegative data matrix with the product of two smaller nonnegative matrices, usually creating parts-based representations of the original data. The algorithms designed to compute the solution of these two nonnegative problems have a high computational cost. Due to this high cost, these decompositions can benefit from the extra performance obtained using High Performance Computing (HPC) techniques. Nowadays, there are very powerful computational systems that offer high performance and can be used to solve extremely complex problems in science and engineering. From modern multicore CPUs to the newest computational accelerators (Graphics Processing Units (GPUs), Intel Many Integrated Core (MIC), etc.), the performance of these systems keeps increasing continuously. To make the most of the hardware capabilities of these HPC systems, developers should use software technologies such as parallel programming, vectorization, or high performance computing libraries. While there are several algorithms for computing the NMF and for solving the NNLS problem, not all of them have an efficient parallel implementation available. Furthermore, it is very interesting to group several algorithms with different properties into a single computational library. This thesis presents a high-performance computational library with efficient parallel implementations of the best algorithms to compute the NMF in the current state of the art. In addition, an experimental comparison between the different implementations is presented. This library is focused on the computation of the NMF, supporting multiple architectures such as multicore CPUs, GPUs, and Intel MIC. The goal of the library is to offer a full suite of algorithms to help researchers, engineers, or professionals that need to use the NMF. Another problem dealt with in this thesis is the updating of nonnegative decompositions. The updating problem has been studied for both the solution of the NNLS problem and the NMF. Sometimes there are nonnegative problems that are close to other nonnegative problems that have already been solved. The updating problem tries to take advantage of the solution of a problem A, which has already been solved, in order to obtain the solution of a new problem B, which is closely related to problem A. With this approach, problem B can be solved faster than solving it from scratch without taking advantage of the already-known solution of problem A.
In this thesis, an algorithmic scheme is proposed for both the updating of the solution of NNLS problems and the updating of the NMF. Empirical evaluations of both updating problems are also presented. The results show that the proposed algorithms are faster than solving the problems from scratch in all of the tested cases.
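As a point of reference for the factorization being accelerated, here is a serial sketch of the classical Lee-Seung multiplicative updates for NMF; the thesis's library provides optimized parallel algorithms, which this sketch does not attempt to reproduce.

```python
# Minimal NMF sketch: approximate V ≈ W H with W, H ≥ 0 using
# Lee-Seung multiplicative updates (Frobenius-norm objective).
import numpy as np

def nmf(V, r, iters=200, eps=1e-9):
    m, n = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # multiplicative form keeps
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # both factors nonnegative
    return W, H

V = np.abs(np.random.default_rng(1).random((50, 40)))
W, H = nmf(V, r=5)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))  # relative error
```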
San Juan Sebastián, P. (2018). HPC algorithms for nonnegative decompositions [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/113069
APA, Harvard, Vancouver, ISO, and other styles
11

Karlsson, Lars. "Scheduling of parallel matrix computations and data layout conversion for HPC and Multi-Core Architectures." Doctoral thesis, Umeå universitet, Institutionen för datavetenskap, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-41224.

Full text of the source
Abstract:
Dense linear algebra represents fundamental building blocks in many computational science and engineering applications. Dense linear algebra algorithms must be numerically stable, robust, and reliable in order to be usable as black-box solvers by expert as well as non-expert users. The algorithms also need to scale and run efficiently on massively parallel computers with multi-core nodes. Developing high-performance algorithms for dense matrix computations is a challenging task, especially since the widespread adoption of multi-core architectures. Cache reuse is an even more critical issue on multi-core processors than on uni-core processors due to their larger computational power and more complex memory hierarchies. Blocked matrix storage formats, in which blocks of the matrix are stored contiguously, and blocked algorithms, in which the algorithms exhibit large amounts of cache reuse, remain key techniques in the effort to approach the theoretical peak performance. In Paper I, we present a packed and distributed Cholesky factorization algorithm based on a new blocked and packed matrix storage format. High-performance node computations are obtained as a result of the blocked storage format, and the use of look-ahead leads to improved parallel efficiency. In Paper II and Paper III, we study the problem of in-place matrix transposition in general and in-place matrix storage format conversion in particular. We present and evaluate new high-performance parallel algorithms for in-place conversion between the standard column-major and row-major formats and the four standard blocked matrix storage formats. Another critical issue, besides cache reuse, is that of efficient scheduling of computational tasks. Many weakly scalable parallel algorithms are efficient only when the problem size per processor is relatively large. A current research trend focuses on developing parallel algorithms which are more strongly scalable and hence more efficient also for smaller problems. In Paper IV, we present a framework for dynamic node-scheduling of two-sided matrix computations and demonstrate that by using priority-based scheduling one can obtain an efficient scheduling of a QR sweep. In Paper V and Paper VI, we present a blocked implementation of two-stage Hessenberg reduction targeting multi-core architectures. The main contributions of Paper V are in the blocking and scheduling of the second stage. Specifically, we show that the concept of look-ahead can be applied also to this two-sided factorization, and we propose an adaptive load-balancing technique that allows us to schedule the operations effectively.
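To illustrate what a blocked storage format means in practice, here is an out-of-place sketch of converting a column-major matrix to contiguous blocks; the papers' actual contribution is performing such conversions in place and in parallel, which this sketch does not attempt.

```python
# Illustrative sketch (not the papers' algorithms): convert a column-major
# matrix to a blocked format in which each nb-by-nb block is stored
# contiguously, the layout that the cache-reuse argument above relies on.
import numpy as np

def to_blocked(A, nb):
    m, n = A.shape
    assert m % nb == 0 and n % nb == 0
    # One contiguous nb*nb chunk per block, blocks in column-major order.
    blocks = [A[i:i+nb, j:j+nb].copy(order="F")
              for j in range(0, n, nb)
              for i in range(0, m, nb)]
    return np.concatenate([b.ravel(order="F") for b in blocks])

A = np.asfortranarray(np.arange(16.0).reshape(4, 4))
print(to_blocked(A, 2))  # each block's 4 elements are now adjacent in memory
```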
APA, Harvard, Vancouver, ISO, and other styles
12

Abaunza, Víctor Eduardo Martínez. "Performance optimization of geophysics stencils on HPC architectures." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2018. http://hdl.handle.net/10183/183165.

Full text of the source
Abstract:
Wave modeling is a crucial tool in geophysics for efficient strong-motion analysis, risk mitigation, and oil & gas exploration. Due to its simplicity and numerical efficiency, the finite-difference method is one of the standard techniques implemented to solve the wave propagation equations. This kind of application is known as a stencil because it consists of a pattern that replicates the same computation on a multi-dimensional domain. High Performance Computing is required to solve this class of problems, as a consequence of the large number of grid points involved in three-dimensional simulations of the underground. The performance optimization of stencil computations is a challenge and strongly depends on the underlying architecture. In this context, this work was directed toward a twofold aim. Firstly, we focused our research on multicore architectures and analyzed the standard OpenMP implementation of numerical kernels from the 3D heat transfer model (a 7-point Jacobi stencil) and the Ondes3D code (a full-fledged application developed by the French Geological Survey). We considered two well-known implementations (naïve, and space blocking) to find correlations between parameters from the input configuration at runtime and the computing performance; thus, we proposed a Machine Learning-based approach to evaluate, predict, and improve the performance of these stencil models on the underlying architecture. We also used an acoustic wave propagation model provided by the Petrobras company and predicted the performance with high accuracy on multicore architectures. Secondly, we oriented our research toward heterogeneous architectures and analyzed the standard CUDA implementation of the seismic wave propagation model to find which factors affect the performance; we then proposed a task-based implementation to improve the performance according to the runtime configuration set (scheduling algorithm, size, and number of tasks), and we compared the performance obtained with the classical CPU-only or GPU-only versions with the results obtained on heterogeneous architectures.
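For concreteness, here is a serial NumPy sketch of the 7-point Jacobi stencil named above; the coefficients are illustrative, and real HPC versions add blocking, OpenMP/CUDA parallelism, and proper boundary handling.

```python
# Minimal 7-point Jacobi stencil (3D heat transfer): each interior point
# is updated from itself and its six face neighbours.
import numpy as np

def jacobi7(u, c0=0.4, c1=0.1):
    # One sweep; c0 + 6*c1 = 1 keeps the update a weighted average.
    v = u.copy()
    v[1:-1, 1:-1, 1:-1] = (c0 * u[1:-1, 1:-1, 1:-1]
                           + c1 * (u[2:, 1:-1, 1:-1] + u[:-2, 1:-1, 1:-1]
                                   + u[1:-1, 2:, 1:-1] + u[1:-1, :-2, 1:-1]
                                   + u[1:-1, 1:-1, 2:] + u[1:-1, 1:-1, :-2]))
    return v

u = np.zeros((66, 66, 66))
u[32, 32, 32] = 1.0          # point heat source
for _ in range(10):
    u = jacobi7(u)           # heat diffuses outward sweep by sweep
```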
APA, Harvard, Vancouver, ISO, and other styles
13

López-Yunta, Mariña. "Multimodal ventricular tachycardia analysis : towards the accurate parametrization of predictive HPC electrophysiological computational models." Doctoral thesis, Universitat Politècnica de Catalunya, 2018. http://hdl.handle.net/10803/663730.

Full text of the source
Abstract:
After a myocardial infarction, the affected areas of the cardiac tissue suffer changes in their electrical and mechanical properties. This post-infarction scar tissue has been related to a particular type of arrhythmia: ventricular tachycardia (VT). A thorough study of the experimental data acquired with clinical tools is presented in this thesis, with the objective of defining the limitations of clinical data for predictive computational models. Computational models have large potential as predictive tools for VT, but verification, validation, and uncertainty quantification of the numerical results are required before they can be employed as a clinical tool. Swine experimental data from an invasive electrophysiological study and cardiac magnetic resonance imaging are processed to obtain accurate characterizations of the post-infarction scar. Based on the results, the limitations of each technique are described. Furthermore, the volume of the scar is evaluated as a marker for post-infarction VT induction mechanisms. A control case from the animal experimental protocol is employed to build a simulation scenario in which biventricular simulations are done using a detailed cell model adapted to the ionic currents present in swine myocytes. The uncertainty of the model derived from diffusion and fibre orientation is quantified. Finally, the recovery of the model after an extrastimulus is compared to experimental data by computationally reproducing an S1-S2 protocol. Results from the cardiac computational model show that the propagation wave patterns from the numerical results match those described by the experimental activation maps if the DTI fibre orientations are used. The electrophysiological activation is sensitive to fibre orientation; therefore, simulations including the fibre orientations from DTI are able to reproduce a physiological wave propagation pattern. The diffusion coefficients largely determine the conduction velocity. The S1-S2 protocol produced restitution curves with slopes similar to the experimental curves. This work is a first step towards the validation of cardiac electrophysiology simulations. Future work will address the limitations regarding optimal parametrization of the O'Hara-Rudy cell model to fully validate the cardiac computational model for the prediction of VT inducibility.
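A hedged sketch of the S1-S2 pacing idea follows: the exponential restitution function is a generic stand-in (not the O'Hara-Rudy model the thesis adapts), so only the protocol structure is meaningful here.

```python
# Hypothetical S1-S2 restitution protocol: pace with a train of S1 stimuli
# at a fixed cycle length, add a premature S2, and record action potential
# duration (APD) against diastolic interval (DI).
import numpy as np

def apd_restitution(di, apd_max=250.0, a=120.0, tau=80.0):
    return apd_max - a * np.exp(-di / tau)   # assumed restitution curve

def s1s2_protocol(s1_cl=600.0, n_s1=8, s2_intervals=range(250, 601, 50)):
    curve = []
    for s2 in s2_intervals:
        apd = 200.0
        for _ in range(n_s1):                # settle at the S1 cycle length
            apd = apd_restitution(s1_cl - apd)
        di = s2 - apd                        # diastolic interval of the S2 beat
        if di > 0:
            curve.append((di, apd_restitution(di)))
    return curve

print(s1s2_protocol())   # (DI, APD) pairs tracing the restitution curve
```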
APA, Harvard, Vancouver, ISO, and other styles
14

Amin, Kaizar Abdul Husain. "An Integrated Architecture for Ad Hoc Grids." Thesis, University of North Texas, 2006. https://digital.library.unt.edu/ark:/67531/metadc5300/.

Full text of the source
Abstract:
Extensive research has been conducted by the grid community to enable large-scale collaborations in pre-configured environments. Grid collaborations can vary in scale and motivation, resulting in a coarse classification of grids: national grid, project grid, enterprise grid, and volunteer grid. Despite the differences in scope and scale, all the traditional grids in practice share some common assumptions. They support mutually collaborative communities, adopt centralized control for membership, and assume a well-defined, non-changing collaboration. To support grid applications that do not conform to these assumptions, we propose the concept of ad hoc grids. In the context of this research, we propose a novel architecture for ad hoc grids that integrates a suite of component frameworks. Specifically, our architecture combines the community management framework, security framework, abstraction framework, quality of service framework, and reputation framework. The overarching objective of our integrated architecture is to support a variety of grid applications in a self-controlled fashion with the help of a self-organizing ad hoc community. We introduce mechanisms in our architecture that successfully isolate malicious elements from the community, inherently improving the quality of grid services and extracting deterministic quality assurances from the underlying infrastructure. We also emphasize the technology independence of our architecture, thereby offering the requisite platform for technology interoperability. The feasibility of the proposed architecture is verified with a high-quality ad hoc grid implementation. Additionally, we have analyzed the performance and behavior of ad hoc grids with respect to several control parameters.
APA, Harvard, Vancouver, ISO, and other styles
15

Cheema, Fahad Islam. "High-Level Parallel Programming of Computation-Intensive Algorithms on Fine-Grained Architecture." Thesis, Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-2620.

Full text of the source
Abstract:
Computation-intensive algorithms require a high level of parallelism and programmability, which makes them good candidates for hardware acceleration using fine-grained processor arrays. Using a Hardware Description Language (HDL), it is very difficult to design and manage fine-grained processing units, and therefore a High-Level Language (HLL) is the preferred alternative.

This thesis analyzes HLL programming of fine-grained architecture in terms of achieved performance and resource consumption. In a case study, highly computation-intensive algorithms (interpolation kernels) are implemented on a fine-grained architecture (FPGA) using a high-level language (Mitrion-C). The Mitrion Virtual Processor (MVP) is extracted as an application-specific fine-grained processor array, and the Mitrion development environment translates the high-level design to a hardware description (HDL).

Performance requirements, parallelism possibilities/limitations, and the resource requirements for parallelism vary from algorithm to algorithm as well as by hardware platform. By considering parallelism at different levels, we can adjust the parallelism according to the available hardware resources and achieve a better balance of tradeoffs such as gates versus performance and memory versus performance. This thesis proposes different design approaches to adjust parallelism at different design levels. For interpolation kernels, different parallelism levels and design variants are proposed, which can be mixed to obtain a well-tuned, application- and resource-specific design.
APA, Harvard, Vancouver, ISO, and other styles
16

Streit, Achim. "Self-tuning job scheduling strategies for the resource management of HPC systems and computational grids." [S.l. : s.n.], 2003. http://deposit.ddb.de/cgi-bin/dokserv?idn=971579393.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
17

Schirra, Jörg R. J. "Foundation of computational visualistics." Wiesbaden Dt. Univ.-Verl, 2005. http://deposit.ddb.de/cgi-bin/dokserv?id=2686222&prov=M&dok_var=1&dok_ext=htm.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
18

Sandholm, Thomas. "Managing Service Levels in Grid Computing Systems : Quota Policy and Computational Market Approaches." Licentiate thesis, KTH, Numerical Analysis and Computer Science, NADA, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4346.

Full text of the source
Abstract:
We study techniques to enforce and provision differentiated service levels in Computational Grid systems. The Grid offers simplified provisioning of peak capacity for applications with computational requirements beyond local machines and clusters, by sharing resources across organizational boundaries. Current systems have focussed on access control, i.e., managing who is allowed to run applications on remote sites. Very little work has been done on providing differentiated service levels for those applications that are admitted. This leads to a number of problems when scheduling jobs in a fair and efficient way. For example, users with a large number of long-running jobs could starve out others, both intentionally and unintentionally. We investigate the requirements of High Performance Computing (HPC) applications that run in academic Grid systems, and propose two models of service-level management. Our first model is based on global real-time quota enforcement, where projects are granted resource quotas, such as CPU hours, across the Grid by a centralized allocation authority. We implement the SweGrid Accounting System to enforce quota allocated by the Swedish National Allocations Committee in the SweGrid production Grid, which connects six Swedish HPC centers. A flexible authorization policy framework allows provisioning and enforcement of two different service levels across the SweGrid clusters: high-priority and low-priority jobs. As a solution for more fine-grained control over service levels, we propose and implement a Grid Market system, using a market-based resource allocator called Tycoon. The conclusion of our research is that although the Grid accounting solution offers better service-level enforcement support than state-of-the-art production Grid systems, it turned out to be complex to set the resource price and other policies manually while ensuring fairness and efficiency of the system. Our Grid Market, on the other hand, sets the price according to the dynamic demand, and it is further incentive compatible, in that the overall system state remains healthy even in the presence of strategic users.
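A hedged sketch of the proportional-share pricing idea behind a Tycoon-style market (a simplified assumed mechanism, not Tycoon's actual implementation): each user bids, shares are proportional to bids, and the effective unit price rises with aggregate demand.

```python
# Proportional-share allocation sketch: demand sets the price, so no
# manual price policy is needed, unlike the quota-based model above.
def proportional_share(bids, capacity):
    # bids: {user: money per unit time}; capacity: e.g. CPU-seconds per second.
    total = sum(bids.values())
    if total == 0:
        return {u: 0.0 for u in bids}, 0.0
    price_per_unit = total / capacity          # rises with aggregate demand
    shares = {u: capacity * b / total for u, b in bids.items()}
    return shares, price_per_unit

shares, price = proportional_share({"alice": 6.0, "bob": 2.0}, capacity=8.0)
print(shares, price)   # alice gets 6.0 units, bob 2.0; price 1.0 per unit
```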

APA, Harvard, Vancouver, ISO, and other styles
19

Ling, Cheng. "High performance bioinformatics and computational biology on general-purpose graphics processing units." Thesis, University of Edinburgh, 2012. http://hdl.handle.net/1842/6260.

Full text of the source
Abstract:
Bioinformatics and Computational Biology (BCB) is a relatively new multidisciplinary field which brings together many aspects of the fields of biology, computer science, statistics, and engineering. Bioinformatics extracts useful information from biological data and makes it more intuitive and understandable by applying principles of information sciences, while computational biology harnesses computational approaches and technologies to answer biological questions conveniently. Recent years have seen an explosion in the size of biological data at a rate which outpaces the rate of increase in the computational power of mainstream computer technologies, namely general purpose processors (GPPs). The aim of this thesis is to explore the use of off-the-shelf Graphics Processing Unit (GPU) technology in the high-performance and efficient implementation of BCB applications, in order to meet the demands of biological data increases at affordable cost. The thesis presents detailed designs and implementations of GPU solutions for a number of BCB algorithms in two widely used BCB applications, namely biological sequence alignment and phylogenetic analysis. Biological sequence alignment can be used to determine the potential information about a newly discovered biological sequence from other well-known sequences through similarity comparison. On the other hand, phylogenetic analysis is concerned with the investigation of the evolution and relationships among organisms, and has many uses in the fields of systems biology and comparative genomics. In molecular-based phylogenetic analysis, the relationship between species is estimated by inferring the common history of their genes, and phylogenetic trees are then constructed to illustrate evolutionary relationships among genes and organisms. However, both biological sequence alignment and phylogenetic analysis are computationally expensive applications, as their computing and memory requirements grow polynomially or even worse with the size of sequence databases. The thesis firstly presents a multi-threaded parallel design of the Smith-Waterman (SW) algorithm alongside an implementation on NVIDIA GPUs. A novel technique is put forward to solve the restriction on the length of the query sequence in previous GPU-based implementations of the SW algorithm. Based on this implementation, the difference between two main task parallelization approaches (inter-task and intra-task parallelization) is presented. The resulting GPU implementation matches the speed of existing GPU implementations while providing more flexibility, i.e. flexible lengths of sequences in real-world applications. It also outperforms an equivalent GPP-based implementation by 15x-20x. After this, the thesis presents the first reported multi-threaded design and GPU implementation of the Gapped BLAST with Two-Hit method algorithm, which is widely used for aligning biological sequences heuristically. This achieved up to 3x speed-up compared to the most optimised GPP implementations. The thesis then presents a multi-threaded design and GPU implementation of a Neighbor-Joining (NJ)-based method for phylogenetic tree construction and multiple sequence alignment (MSA). This achieves an 8x-20x speed-up compared to an equivalent GPP implementation based on the widely used ClustalW software. The NJ method, however, only gives one possible tree, which strongly depends on the evolutionary model used.
A more advanced method uses maximum likelihood (ML) for scoring phylogenies with Markov Chain Monte Carlo (MCMC)-based Bayesian inference. The latter was the subject of another multi-threaded design and GPU implementation presented in this thesis, which achieved a 4x-8x speed-up compared to an equivalent GPP implementation based on the widely used MrBayes software. Finally, the thesis presents a general evaluation of the designs and implementations achieved in this work as a step towards the evaluation of GPU technology in BCB computing, in the context of other computer technologies including GPPs and Field Programmable Gate Array (FPGA) technology.
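For orientation, here is the serial Smith-Waterman recurrence that the GPU designs above parallelise, with a linear gap penalty for brevity; the thesis targets large sequence databases, not this toy scale.

```python
# Smith-Waterman local alignment: H[i][j] is the best local alignment
# score ending at a[i-1], b[j-1]; the max(0, ...) floor restarts
# alignments, which is what makes the method local.
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,
                          H[i - 1][j - 1] + s,    # match/mismatch
                          H[i - 1][j] + gap,      # gap in b
                          H[i][j - 1] + gap)      # gap in a
            best = max(best, H[i][j])
    return best

print(smith_waterman("GATTACA", "GCATGCU"))
```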
APA, Harvard, Vancouver, ISO, and other styles
20

Shabut, Antesar Ramadan M. "Trust computational models for mobile ad hoc networks : recommendation based trustworthiness evaluation using multidimensional metrics to secure routing protocol in mobile ad hoc networks." Thesis, University of Bradford, 2015. http://hdl.handle.net/10454/7501.

Full text of the source
Abstract:
Distributed systems like e-commerce and e-market places, peer-to-peer networks, social networks, and mobile ad hoc networks require cooperation among the participating entities to guarantee the formation and sustained existence of network services. The reliability of interactions among anonymous entities is a significant issue in such environments. The distributed entities establish connections to interact with others, which may include selfish and misbehaving entities and result in bad experiences. Therefore, trustworthiness evaluation using trust management techniques has become a significant issue in securing these environments: it allows entities to decide on the reliability and trustworthiness of other entities, helps cope with defection problems, and stimulates entities to cooperate. Recent models for evaluating trustworthiness in distributed systems have heavily focused on assessing the trustworthiness of entities and isolating misbehaviours based on single trust metrics. Less effort has been put into investigating the subjective nature of, and differences in, the way trustworthiness is perceived, so as to produce composite multidimensional trust metrics that overcome the limitation of considering a single trust metric. In light of this context, this thesis concerns the evaluation of entities' trustworthiness through the design and investigation of trust metrics that are computed using multiple properties of trust and that take the environment into account. Based on the probabilistic theory of trust management, this thesis models trust systems and designs cooperation techniques to evaluate trustworthiness in mobile ad hoc networks (MANETs). A recommendation-based trust model with a multi-parameter filtering algorithm, and a multidimensional metric based on a social and QoS trust model, are proposed to secure MANETs. The effectiveness of each of these models in evaluating trustworthiness and discovering misbehaving nodes prior to interactions, as well as their influence on network performance, has been investigated. The results of investigating both the trustworthiness evaluation and the network performance are promising.
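An assumed-form sketch of a composite multidimensional trust metric combining social and QoS properties, with low-confidence recommendations filtered out before aggregation; the metric names, weights, and 0.7/0.3 split are illustrative, not the thesis's parameters.

```python
# Composite trust sketch: blend direct observations with filtered
# recommendations per metric, then weight the metrics into one score.
def composite_trust(direct, recommendations, weights, min_conf=0.5):
    # direct: {metric: value in [0,1]} from own observations, e.g.
    #         {"honesty": .9, "packet_delivery": .8, "latency": .7}
    # recommendations: list of (recommender_confidence, {metric: value})
    filtered = [r for conf, r in recommendations if conf >= min_conf]
    combined = {}
    for m in weights:
        rec_vals = [r[m] for r in filtered if m in r]
        rec = sum(rec_vals) / len(rec_vals) if rec_vals else direct[m]
        combined[m] = 0.7 * direct[m] + 0.3 * rec   # direct evidence dominates
    return sum(weights[m] * combined[m] for m in weights)

t = composite_trust({"honesty": .9, "packet_delivery": .8, "latency": .7},
                    [(0.9, {"honesty": .6}), (0.2, {"honesty": .1})],
                    {"honesty": .5, "packet_delivery": .3, "latency": .2})
print(t)   # the low-confidence (0.2) recommendation is ignored
```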
Ministry of Higher Education in Libya and the Libyan Cultural Attaché bureau in London
Стилі APA, Harvard, Vancouver, ISO та ін.
22

Agarwal, Dinesh. "Scientific High Performance Computing (HPC) Applications On The Azure Cloud Platform." Digital Archive @ GSU, 2013. http://digitalarchive.gsu.edu/cs_diss/75.

Повний текст джерела
Анотація:
Cloud computing is emerging as a promising platform for compute- and data-intensive scientific applications. Thanks to its on-demand elastic provisioning capabilities, cloud computing has instigated curiosity among researchers from a wide range of disciplines. However, even though many vendors have rolled out their commercial cloud infrastructures, the service offerings are usually only best-effort based, without any performance guarantees. Utilization of these resources will be questionable if they cannot meet the performance expectations of deployed applications. Additionally, the lack of familiar development tools hampers the productivity of eScience developers in writing robust scientific high performance computing (HPC) applications. There are no standard frameworks that are currently supported by any large set of vendors offering cloud computing services. Consequently, application portability among different cloud platforms for scientific applications is hard. Among all clouds, the emerging Azure cloud from Microsoft in particular remains a challenge for HPC program development, both due to its lack of support for traditional parallel programming models such as the Message Passing Interface (MPI) and map-reduce, and due to its evolving application programming interfaces (APIs). We have designed new frameworks and runtime environments to help HPC application developers by providing them with easy-to-use tools similar to those known from the traditional parallel and distributed computing environment setting, such as MPI, for scientific application development on the Azure cloud platform. It is challenging to create an efficient framework for any cloud platform, including the Windows Azure platform, as they are mostly offered to users as a black box with a set of application programming interfaces (APIs) to access various service components. The primary contributions of this Ph.D. thesis are (i) creating a generic framework for bag-of-tasks HPC applications to serve as the basic building block for application development on the Azure cloud platform, (ii) creating a set of APIs for HPC application development over the Azure cloud platform, similar to the message passing interface (MPI) from the traditional parallel and distributed setting, and (iii) implementing Crayons using the proposed APIs as the first end-to-end parallel scientific application to parallelize the fundamental GIS operations.
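As a hedged illustration of the bag-of-tasks pattern that such a framework builds on, the sketch below mocks the cloud-hosted task queue with Python's queue module; every name here is illustrative and none of it is the thesis's actual API.

```python
import queue
import threading

tasks = queue.Queue()                  # stands in for a cloud task queue
results = queue.Queue()

def worker():
    # Each worker repeatedly pulls an independent task until told to stop.
    while True:
        task = tasks.get()
        if task is None:               # poison pill: no more work
            break
        results.put((task, task ** 2)) # placeholder for the real computation

for n in range(100):
    tasks.put(n)
pool = [threading.Thread(target=worker) for _ in range(4)]
for t in pool:
    t.start()
for _ in pool:
    tasks.put(None)
for t in pool:
    t.join()
print(results.qsize(), "tasks completed")
```

On a real cloud deployment the queue and result store would be durable services and the workers separate role instances, but the control flow is the same.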
Стилі APA, Harvard, Vancouver, ISO та ін.
23

Browne, Daniel R. "Application of multi-core and cluster computing to the Transmission Line Matrix method." Thesis, Loughborough University, 2014. https://dspace.lboro.ac.uk/2134/14984.

Повний текст джерела
Анотація:
The Transmission Line Matrix (TLM) method is an existing and established mathematical method for conducting computational electromagnetic (CEM) simulations. TLM models Maxwell's equations by discretising the contiguous nature of an environment and its contents into individual small-scale elements, and it is a computationally intensive process. This thesis focusses on parallel processing optimisations to the TLM method when considering the opposing ends of the contemporary computing hardware spectrum, namely large-scale computing systems versus small-scale mobile computing devices. Theoretical aspects covered in this thesis are: the historical development and derivation of the TLM method; a discrete random variable (DRV) for rain-drop diameter, allowing generation of a rain-field with raindrops adhering to a Gaussian size distribution, as a case study for a 3-D TLM implementation; and investigations into parallel computing strategies for accelerating TLM on large and small-scale computing platforms. Implementation aspects covered in this thesis are: a script for modelling rain-fields using free-to-use modelling software; the first known implementation of 2-D TLM on mobile computing devices; and a 3-D TLM implementation designed for simulating the effects of rain-fields on extremely high frequency (EHF) band signals. By optimising both TLM solver implementations for their respective platforms, new opportunities present themselves. Rain-field simulations containing individual rain-drop geometry can be simulated, which was previously impractical due to the lengthy computation times required. Also, computationally time-intensive methods such as TLM were previously impractical on mobile computing devices. Contemporary hardware features on these devices now provide the opportunity for CEM simulations at speeds that are acceptable to end users, as well as providing a new avenue for educating relevant user cohorts via dynamic presentations of EM phenomena.
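For orientation, a single TLM time step on a 2-D shunt-node mesh consists of a scatter at every node followed by a connect with its neighbours; a minimal Python sketch follows (periodic boundaries are a simplifying assumption — real solvers apply per-node boundary and material descriptions).

```python
import numpy as np

# 2-D TLM shunt-node mesh: V[d, y, x] holds the incident pulse on port
# d = 0..3 (north, east, south, west) of the node at (y, x).
N = 64
V = np.zeros((4, N, N))
V[:, N // 2, N // 2] = 1.0              # impulse excitation at the centre

def step(V):
    node = 0.5 * V.sum(axis=0)          # node voltage
    R = node - V                        # scattered (reflected) pulses
    out = np.empty_like(V)
    # Connect: a pulse leaving a node's north port arrives at the south
    # port of the node above, and so on (periodic wrap-around here).
    out[2] = np.roll(R[0], +1, axis=0)
    out[0] = np.roll(R[2], -1, axis=0)
    out[3] = np.roll(R[1], +1, axis=1)
    out[1] = np.roll(R[3], -1, axis=1)
    return out

for _ in range(50):
    V = step(V)
```

Because every node scatters independently, the scatter phase is trivially data-parallel, which is what both the cluster and the mobile-device implementations exploit.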
Стилі APA, Harvard, Vancouver, ISO та ін.
24

Roos, Daniel. "Evaluation of BERT-like models for small scale ad-hoc information retrieval." Thesis, Linköpings universitet, Artificiell intelligens och integrerade datorsystem, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-177675.

Повний текст джерела
Анотація:
Measuring semantic similarity between two sentences is an ongoing research field with big leaps being taken every year. This thesis looks at using modern methods of semantic similarity measurement for an ad-hoc information retrieval (IR) system. The main challenge tackled was answering the question "What happens when you don't have situation-specific data?". Using encoder-based transformer architectures pioneered by Devlin et al., which excel at fine-tuning to situation-specific domains, this thesis shows just how well the presented methodology can work and makes recommendations for future attempts at similar domain-specific tasks. It also shows an example of how a web application can be created to make use of these fast-learning architectures.
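A minimal sketch of such a retrieval pipeline using the sentence-transformers library is shown below; the model name is an illustrative choice, not necessarily the one evaluated in the thesis.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Encode the document collection once, then rank by cosine similarity.
model = SentenceTransformer("all-MiniLM-L6-v2")     # illustrative model
docs = ["GPU acceleration of sequence alignment.",
        "Trust computation in ad hoc networks.",
        "Heat transfer on turbine vanes."]
doc_emb = model.encode(docs, normalize_embeddings=True)
q_emb = model.encode(["fast DNA sequence alignment"],
                     normalize_embeddings=True)
scores = doc_emb @ q_emb[0]                         # cosine similarities
for i in np.argsort(-scores):
    print(f"{scores[i]:.3f}  {docs[i]}")
```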
Стилі APA, Harvard, Vancouver, ISO та ін.
25

Reuter, Balthasar. "Coarsening of Simplicial Meshes for Large Scale Parallel FEM Computations with DOLFIN HPC : A parallel implementation of the edge collapse algorithm." Thesis, KTH, Numerisk analys, NA, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-124240.

Повний текст джерела
Анотація:
Adaptive mesh refinement and coarsening methods are effective techniques to reduce the computation time of finite element based solvers. Parallel implementations of such adaption routines, suitable for large scale computations on distributed memory machines, need additional care. In this thesis, a coarsening technique based on edge collapses is presented, its implementation and optimization for parallel computations explained, and it is analyzed with respect to coarsening efficiency and performance. As a possible application, the use of mesh coarsening in adaptive flow simulations is demonstrated.
Adaptiv förfining och utglesning av element-nät är effektiva tekniker för att minska beräkningstiden för finita-element-lösare. Implementering av sådana adaptions-rutiner, passande för stora beräkningar på maskiner med distribuerat minne, kräver stor omsorg. I detta arbete presenteras en utglesnings-metod baserad på kant-sammanslagningar. Dess implementering och optimering för parallell-beräkningar förklaras och analyseras med avseende på glesnings-effektivitet och tidsåtgång. Som tillämpning visas nätutglesning i adaptiv strömningssimulering.
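A toy illustration of the single edge-collapse operation underlying the coarsening method follows; a real parallel implementation must additionally check geometric and topological validity and coordinate collapses across process boundaries.

```python
import numpy as np

# Collapse the shortest edge of a triangle mesh into its midpoint and
# drop the triangles that become degenerate.
def collapse_shortest_edge(verts, tris):
    edges = {tuple(sorted((t[i], t[(i + 1) % 3])))
             for t in tris for i in range(3)}
    u, v = min(edges, key=lambda e: np.linalg.norm(verts[e[0]] - verts[e[1]]))
    verts[u] = 0.5 * (verts[u] + verts[v])   # move u to the edge midpoint
    new_tris = []
    for t in tris:
        t = [u if w == v else w for w in t]  # redirect references v -> u
        if len(set(t)) == 3:                 # keep only non-degenerate faces
            new_tris.append(t)
    return verts, new_tris                   # vertex v simply becomes unused

verts = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [0.5, 0.45]])
tris = [[0, 1, 4], [1, 3, 4], [3, 2, 4], [2, 0, 4]]
verts, tris = collapse_shortest_edge(verts, tris)
print(len(tris), "triangles remain")
```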
Стилі APA, Harvard, Vancouver, ISO та ін.
26

Enico, Daniel. "External Heat Transfer Coefficient Predictions on a Transonic Turbine Nozzle Guide Vane Using Computational Fluid Dynamics." Thesis, Linköpings universitet, Mekanisk värmeteori och strömningslära, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-178173.

Повний текст джерела
Анотація:
The high turbine inlet temperature of modern gas turbines poses a challenge to the material used in the turbine blading of the primary stages. Mechanical failure mechanisms are more pronounced at these high temperatures, setting the lifetime of the blade. It is therefore crucial to obtain accurate local metal temperature predictions for the turbine blade, and accurately predicting the external heat transfer coefficient (HTC) distribution of the blade is of utmost importance. At present, Siemens Energy uses the boundary layer code TEXSTAN for this purpose. The limitations of such codes, however, make them less applicable to the complex flow physics involved in the hot gas path of turbine blading. The thesis therefore aims at introducing CFD for calculating the external HTC. This includes conducting an extensive literature study to find and validate a suitable methodology. The literature study was centered around RANS modeling, reviewing how the calculation of the HTC has evolved and the performance of some common turbulence and transition models. From the literature study, the SST k-ω model in conjunction with the γ-Reθ transition model, the v²-f model and the Lag EB k-ε model were chosen for the investigation of a suitable methodology. The validation of the methodology was based on the extensively studied LS89 vane linear cascade of the von Karman Institute. In total, 13 test cases of the cascade were chosen to represent a wide range of flow conditions. Both a periodic model and a model of the entire LS89 cascade were tested, but there was great uncertainty as to whether the correct flow conditions were achieved with the model of the entire cascade. It was therefore abandoned and the periodic model was used instead. The decay of turbulence intensity is not known in the LS89 cascade. This made the case difficult to model, since the turbulence boundary conditions were then incomplete. Two approaches were attempted to handle this deficiency, one of which was ultimately found invalid. It was recognized that the Steelant-Dick postulation could be used to find a turbulent length scale which, when specified at the inlet, led to fairly good agreement with HTC data. The validation showed that the SST γ-Reθ model performs relatively well on the suction side and in transition onset predictions, but worse on the pressure side for certain flow conditions. The v²-f model performed better on the pressure side and on a small portion of the suction side. The literature emphasized the importance of obtaining proper turbulence characteristics around the vane for accurate HTC predictions. It was found that the results of the validation step could be closely coupled to this statement and that further work is needed in this regard. Further research must also be done on the Steelant-Dick postulation to validate it as a reliable method for prescribing the inlet length scale.
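For reference, the external HTC in such cascade studies is typically defined from the local wall heat flux and a driving temperature difference; a common convention (the exact reference temperature may differ from the one used in the thesis) is

```latex
h = \frac{q_w}{T_0 - T_w}
```

where q_w is the local wall heat flux, T_0 the inlet total temperature and T_w the local wall temperature.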
Стилі APA, Harvard, Vancouver, ISO та ін.
27

Ansari, Sam. "Analysis of protein-protein interactions : a computational approach /." Saarbrücken : VDM Verl. Dr. Müller, 2007. http://deposit.d-nb.de/cgi-bin/dokserv?id=2992987&prov=M&dok_var=1&dok_ext=htm.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
28

Kof, Leonid. "Text analysis for requirements engineering : application of computational linguistics /." Saarbrücken : VDM Verl. Dr. Müller, 2007. http://deposit.d-nb.de/cgi-bin/dokserv?id=3021639&prov=M&dok_var=1&dok_ext=htm.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
29

Fundinger, Danny Georg. "Investigating dynamics by multilevel phase space discretization : approaches towards the efficient computation of nonlinear dynamical systems /." Saarbrücken : VDM Verlag Dr. Müller, 2007. http://deposit.d-nb.de/cgi-bin/dokserv?id=3042152&prov=M&dok_var=1&dok_ext=htm.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
30

Baral, Darshan. "Computational Study of Fish Passage through Circular Culverts in Northeast Ohio." Youngstown State University / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=ysu1369409121.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
31

Gajurel, Sanjaya. "Multi-Criteria Direction Antenna Multi-Path Location Aware Routing Protocol for Mobile Ad Hoc Networks." Case Western Reserve University School of Graduate Studies / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=case1197301773.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
32

Fröhlich, Holger. "Kernel methods in chemo- and bioinformatics." Berlin Logos-Verl, 2006. http://deposit.d-nb.de/cgi-bin/dokserv?id=2888426&prov=M&dok_var=1&dok_ext=htm.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
33

Wegner, Jörg Kurt. "Data mining und graph mining auf molekularen Graphen - Cheminformatik und molekulare Kodierungen für ADME/Tox-QSAR-Analysen." Berlin Logos, 2006. http://deposit.d-nb.de/cgi-bin/dokserv?id=2865580&prov=M&dok_var=1&dok_ext=htm.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
34

Bartsch, Adam Jesse. "Biomechanical Engineering Analyses of Head and Spine Impact Injury Risk via Experimentation and Computational Simulation." Case Western Reserve University School of Graduate Studies / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=case1291318455.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
35

Bernabeu, Llinares Miguel Oscar. "An open source HPC-enabled model of cardiac defibrillation of the human heart." Thesis, University of Oxford, 2011. http://ora.ox.ac.uk/objects/uuid:9ca44896-8873-4c91-9358-96744e28d187.

Повний текст джерела
Анотація:
Sudden cardiac death following cardiac arrest is a major killer in the industrialised world. The leading cause of sudden cardiac death is disturbances in the normal electrical activation of cardiac tissue, known as cardiac arrhythmia, which severely compromise the ability of the heart to fulfill the body's demand for oxygen. Ventricular fibrillation (VF) is the most deadly form of cardiac arrhythmia, and electrical defibrillation through the application of strong electric shocks to the heart is the only effective therapy against VF. Over the past decades, a large body of research has dealt with the study of the mechanisms underpinning the success or failure of defibrillation shocks. The main mechanism of shock failure involves shocks terminating VF but leaving the appropriate electrical substrate for new VF episodes to rapidly follow (i.e. shock-induced arrhythmogenesis). A large number of models have been developed for the in silico study of shock-induced arrhythmogenesis, ranging from single cell models to three-dimensional ventricular models of small mammalian species. However, no extrapolation of the results obtained in the aforementioned studies has been done to human models of ventricular electrophysiology. The main reason is the large computational requirements associated with the solution of the bidomain equations of cardiac electrophysiology over large anatomically accurate geometrical models including representation of fibre orientation and transmembrane kinetics. In this thesis we develop simulation technology for the study of cardiac defibrillation in the human heart in the framework of the open source simulation environment Chaste. The advances include the development of novel computational and numerical techniques for the solution of the bidomain equations on large-scale high performance computing resources. More specifically, we have considered the implementation of effective domain decomposition, the development of new numerical techniques for the reduction of communication in Chaste's finite element method (FEM) solver, and the development of mesh-independent preconditioners for the solution of the linear system arising from the FEM discretisation of the bidomain equations. The developments presented in this thesis have brought Chaste to the level of performance and functionality required to perform bidomain simulations with large three-dimensional cardiac geometries made of tens of millions of nodes, including accurate representation of fibre orientation and membrane kinetics. These advances have enabled the in silico study of shock-induced arrhythmogenesis for the first time in the human heart, therefore bridging an important gap in the field of cardiac defibrillation research.
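For reference, the bidomain equations referred to throughout couple the transmembrane potential V_m and the extracellular potential φ_e; in one standard form:

```latex
\chi \left( C_m \frac{\partial V_m}{\partial t} + I_{\mathrm{ion}}(V_m, \mathbf{w}) \right)
  = \nabla \cdot \left( \sigma_i \nabla (V_m + \phi_e) \right),
\qquad
\nabla \cdot \left( \sigma_i \nabla V_m + (\sigma_i + \sigma_e) \nabla \phi_e \right) = 0,
```

where χ is the surface-to-volume ratio, C_m the membrane capacitance, I_ion the ionic current of the cell model with gating variables w, and σ_i, σ_e the intra- and extracellular conductivity tensors.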
Стилі APA, Harvard, Vancouver, ISO та ін.
36

Sacco, Federica. "Quantification of the influence of detailed endocardial structures on human cardiac haemodynamics and electrophysiology using HPC." Doctoral thesis, Universitat Pompeu Fabra, 2019. http://hdl.handle.net/10803/667670.

Повний текст джерела
Анотація:
In the last decade, computational modelling has been playing an important role as a non-invasive technology that may significantly impact the diagnosis as well as the treatment of cardiac diseases. As an example, the Food and Drug Administration (FDA) has created new guidelines for in-silico trials for advancing new device pre-clinical testing and drug cardio-toxicity testing applications, since simulation studies have the potential to accelerate discovery while reducing the need for expensive lab work and clinical trials. On the European side, the Avicenna Alliance aims to develop a road-map for in-silico clinical trials and establish the bases for the technology, methods and protocols required to make the use of computer simulations before clinical trials possible. A common characteristic of existing human cardiac models is that personalised geometries usually come from in-vivo imaging, and the majority of computational meshes consider simplified ventricular geometries with smoothed endocardial (internal) surfaces, due to a lack of high-resolution, fast and safe in-vivo imaging techniques. Acquiring human high-resolution images would mean for the patient to undergo long, expensive and impractical scans, in the case of magnetic resonance imaging (MRI), or could present a risk for the patient's health, in the case of computed tomography (CT), since this process involves a considerable amount of radiation. Smoothed ventricular surfaces are indeed considered by the majority of existing human heart computational models, both when modelling blood flow dynamics and electrophysiology. However, the endocardial wall of human (and other mammalian species') cardiac chambers is not smooth at all; it is instead characterised by endocardial sub-structures such as papillary muscles (PMs), trabeculations and false tendons (FTs). Additionally, fundamental anatomical gender differences can be found in cardiac sub-structural configuration, as female hearts present fewer FTs. Since there is little information about the role of endocardial sub-structures in human cardiac function, considering them in human in-silico cardiac simulations would be a first step towards understanding their function. Additionally, comparing simulation results including sub-structural anatomical information with those obtained when considering simplified human cardiac geometries (representing common existing models) would shed light on the errors introduced when neglecting human endocardial sub-structures. Another important aspect which is often ignored in in-silico simulations and could influence their outcome is gender phenotype. Female hearts have reduced resources for repolarization due to differences in K+ channels compared to male phenotypes, leading to longer action potential durations (APDs). Longer APDs are consistent with the clinical observation that females have longer QT intervals (the time the heart takes to depolarize and repolarize) than males. Gender specificity can then lead to arrhythmogenesis differences, so it may be important to consider different gender phenotypes when running in-silico electrophysiological simulations, in order to obtain results which are of clinical relevance and can be compared to subject-specific clinical data.
In this thesis, therefore, we have created highly detailed human heart models from ex-vivo high-resolution MRI data, to study the role of cardiac sub-structures and gender phenotype in human cardiac physiology, through computational fluid dynamics (CFD) and electrophysiological high-performance computing (HPC) simulations.
Стилі APA, Harvard, Vancouver, ISO та ін.
37

Lehmann, Rüdiger. "Ebene Geodätische Berechnungen: Internes Manuskript." Hochschule für Technik und Wirtschaft, 2018. https://htw-dresden.qucosa.de/id/qucosa%3A31824.

Повний текст джерела
Анотація:
Dieses Manuskript entstand aus Vorlesungen über Geodätische Berechnungen an der Hochschule für Technik und Wirtschaft Dresden. Da diese Lehrveranstaltung im ersten oder zweiten Semester stattfindet, werden noch keine Methoden der höheren Mathematik benutzt. Das Themenspektrum beschränkt sich deshalb weitgehend auf elementare Berechnungen in der Ebene.:0 Vorwort 1 Ebene Trigonometrie 1.1 Winkelfunktionen 1.2 Berechnung schiefwinkliger ebener Dreiecke 1.3 Berechnung schiefwinkliger ebener Vierecke 2 Ebene Koordinatenrechnung 2.1 Kartesische und Polarkoordinaten 2.2 Erste Geodätische Grundaufgabe 2.3 Zweite Geodätische Grundaufgabe 3 Flächenberechnung und Flächenteilung 3.1 Flächenberechnung aus Maßzahlen. 3.2 Flächenberechnung aus Koordinaten 3.3 Absteckung und Teilung gegebener Dreiecksflächen 3.4 Absteckung und Teilung gegebener Vierecksflächen 4 Kreis und Ellipse 4.1 Kreisbogen und Kreissegment 4.2 Näherungsformeln für flache Kreisbögen 4.3 Sehnen-Tangenten-Verfahren 4.4 Grundlegendes über Ellipsen 4.5 Abplattung und Exzentrizitäten 4.6 Die Meridianellipse der Erde 4.7 Flächeninhalt und Bogenlängen 5 Ebene Einschneideverfahren 5.1 Bogenschnitt 5.2 Vorwärtsschnitt 5.3 Anwendung: Geradenschnitt 5.4 Anwendung: Kreis durch drei Punkte 5.5 Schnitt Gerade ⎼ Kreis oder Strahl ⎼ Kreis 5.6 Rückwärtsschnitt 5.7 Anwendung: Rechteck durch fünf Punkte 6 Ebene Koordinatentransformationen 6.1 Elementare Transformationsschritte 6.2 Rotation und Translation. 6.3 Rotation, Skalierung und Translation 6.4 Ähnlichkeitstransformation mit zwei identischen Punkten 6.5 Anwendung: Hansensche Aufgabe 6.6 Anwendung: Kleinpunktberechnung 6.7 Anwendung: Rechteck durch fünf Punkte 6.8 Ebene Helmert-Transformation 6.9 Bestimmung der Parameter bei Rotation und Translation 6.10 Ebene Affintransformation 7 Lösungen
This manuscript evolved from lectures on Geodetic Computations at the University of Applied Sciences Dresden (Germany). Since this lecture is given in the first or second semester, no advanced mathematical methods are used. The range of topics is limited to elementary computations in the plane.
Стилі APA, Harvard, Vancouver, ISO та ін.
38

Carron, Léopold. "Analyse à haute résolution de la structure spatiale des chromosomes eucaryotes Boost-HiC : Computational enhancement of long-range contacts in chromosomal contact maps Genome supranucleosomal organization and genetic susceptibility to disease." Thesis, Sorbonne université, 2019. http://www.theses.fr/2019SORUS593.

Повний текст джерела
Анотація:
L’information génétique est portée par la molécule d’ADN, un polymère de nucléotides de très grande taille. Afin de mieux comprendre les mécanismes impactant le repliement de l’ADN, on peut exploiter une technique de génomique qui permet de quantifier les contacts entre régions distales du génome. Cette technique expérimentale appelée ’capture de conformation de chromosome’ (Hi-C) donne des informations quantitatives sur l’architecture et le repliement tridimensionnel des chromosomes dans le noyau. Largement utilisée chez l’Homme, la souris et la drosophile, cette technique a grandement évolué durant ces dernières années, produisant ainsi des données de qualité variable. Jusque-là étudiées à des résolutions assez grossières, notre objectif est d’étudier les données Hi-C déjà publiées à des résolutions plus fines. Pour cela, j’ai développé un outil bioinformatique, Boost-HiC, pour améliorer l’analyse des contacts chromosomiques. Fort de cette expertise, je proposerai alors une analyse comparative des structures spatiales des génomes eucaryotes, permettant de clarifier comment extraire les compartiments génomiques de manière optimale. Cette expertise sera utilisée également pour décrire le lien entre les bordures des domaines topologiques de la chromatine et la position dans le génome humain des mutations ponctuelles prédisposant au cancer
Genetic information is encoded in DNA, a nucleotide polymer of very large size. In order to understand DNA folding mechanisms, an experimental technique is today available that quantifies distal genomic contacts. This high-throughput chromosome conformation capture technique, called Hi-C, reveals 3D chromosome folding in the nucleus. In recent years, the Hi-C experimental protocol has received many improvements through numerous studies of human, mouse and drosophila genomes. Because most of these studies are performed at poor resolution, I propose bioinformatic methods to analyze these datasets at fine resolution. To do this, I present Boost-HiC, a tool that enhances long-range contacts in Hi-C data. I then use this extended knowledge to compare 3D folding in different species. This result provides the basis to determine the best method for obtaining genomic compartments from a chromosomal contact map. Finally, I present some other applications of our methodology to study the link between the borders of topologically associating domains and the genomic location of single-nucleotide mutations associated with cancer.
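In the spirit of the Boost-HiC idea — sharpening long-range contacts by exploiting chains of strong short-range contacts — a hedged sketch using an all-pairs shortest-path pass is given below; the transform and exponent are illustrative assumptions, not the published parameters.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

def boost_contacts(C, alpha=0.25, eps=1e-9):
    # Contact frequency -> pseudo-distance (strong contact = short edge).
    D = 1.0 / np.power(C + eps, alpha)
    np.fill_diagonal(D, 0.0)
    # Shortest paths tighten long-range distances through chains of
    # strong short-range contacts, then map back to contact space.
    D_sp = shortest_path(D, method="FW", directed=False)
    return np.power(1.0 / (D_sp + eps), 1.0 / alpha)

C = np.random.rand(50, 50)
C = (C + C.T) / 2                      # toy symmetric contact map
print(boost_contacts(C).shape)
```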
Стилі APA, Harvard, Vancouver, ISO та ін.
39

Ho, Minh Quan. "Optimisation de transfert de données pour les processeurs pluri-coeurs, appliqué à l'algèbre linéaire et aux calculs sur stencils." Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM042/document.

Повний текст джерела
Анотація:
La prochaine cible de Exascale en calcul haute performance (High Performance Computing - HPC) et des récent accomplissements dans l'intelligence artificielle donnent l'émergence des architectures alternatives non conventionnelles, dont l'efficacité énergétique est typique des systèmes embarqués, tout en fournissant un écosystème de logiciel équivalent aux plateformes HPC classiques. Un facteur clé de performance de ces architectures à plusieurs cœurs est l'exploitation de la localité de données, en particulier l'utilisation de mémoire locale (scratchpad) en combinaison avec des moteurs d'accès direct à la mémoire (Direct Memory Access - DMA) afin de chevaucher le calcul et la communication. Un tel paradigme soulève des défis de programmation considérables à la fois au fabricant et au développeur d'application. Dans cette thèse, nous abordons les problèmes de transfert et d'accès aux mémoires hiérarchiques, de performance de calcul, ainsi que les défis de programmation des applications HPC, sur l'architecture pluri-cœurs MPPA de Kalray. Pour le premier cas d'application lié à la méthode de Boltzmann sur réseau (Lattice Boltzmann method - LBM), nous fournissons des techniques génériques et réponses fondamentales à la question de décomposition d'un domaine stencil itérative tridimensionnelle sur les processeurs clusterisés équipés de mémoires locales et de moteurs DMA. Nous proposons un algorithme de streaming et de recouvrement basé sur DMA, délivrant 33% de gain de performance par rapport à l'implémentation basée sur la mémoire cache par défaut. Le calcul de stencil multi-dimensionnel souffre d'un goulot d'étranglement important sur les entrées/sorties de données et d'espace mémoire sur puce limitée. Nous avons développé un nouvel algorithme de propagation LBM sur-place (in-place). Il consiste à travailler sur une seule instance de données, au lieu de deux, réduisant de moitié l'empreinte mémoire et cède une efficacité de performance-par-octet 1.5 fois meilleur par rapport à l'algorithme traditionnel dans l'état de l'art. Du côté du calcul intensif avec l'algèbre linéaire dense, nous construisons un benchmark de multiplication matricielle optimale, basé sur exploitation de la mémoire locale et la communication DMA asynchrone. Ces techniques sont ensuite étendues à un module DMA générique du framework BLIS, ce qui nous permet d'instancier une bibliothèque BLAS3 (Basic Linear Algebra Subprograms) portable et optimisée sur n'importe quelle architecture basée sur DMA, en moins de 100 lignes de code. Nous atteignons une performance maximale de 75% du théorique sur le processeur MPPA avec l'opération de multiplication de matrices (GEMM) de BLAS, sans avoir à écrire des milliers de lignes de code laborieusement optimisé pour le même résultat
The upcoming Exascale target in High Performance Computing (HPC) and disruptive achievements in artificial intelligence are giving rise to alternative non-conventional many-core architectures, with energy efficiency typical of embedded systems, and providing the same software ecosystem as classic HPC platforms. A key enabler of energy-efficient computing on many-core architectures is the exploitation of data locality, specifically the use of scratchpad memories in combination with DMA engines in order to overlap computation and communication. Such a software paradigm raises considerable programming challenges for both the vendor and the application developer. In this thesis, we tackle the memory transfer and performance issues, as well as the programming challenges of memory- and compute-intensive HPC applications, on the Kalray MPPA many-core architecture. With the first, memory-bound, use-case of the lattice Boltzmann method (LBM), we provide generic and fundamental techniques for decomposing three-dimensional iterative stencil problems onto clustered many-core processors fitted with scratchpad memories and DMA engines. The developed DMA-based streaming and overlapping algorithm delivers a 33% performance gain over the default cache-based implementation. High-dimensional stencil computation suffers from a serious I/O bottleneck and limited on-chip memory space. We developed a new in-place LBM propagation algorithm, which reduces the memory footprint by half and yields 1.5 times higher performance-per-byte efficiency than the state-of-the-art out-of-place algorithm. On the compute-intensive side, with dense linear algebra computations, we build an optimized matrix multiplication benchmark based on exploitation of scratchpad memory and efficient asynchronous DMA communication. These techniques are then extended to a DMA module of the BLIS framework, which allows us to instantiate an optimized and portable level-3 BLAS numerical library on any DMA-based architecture in less than 100 lines of code. We achieve 75% of peak performance on the MPPA processor with the matrix multiplication operation (GEMM) from the standard BLAS library, without having to write thousands of lines of laboriously optimized code for the same result.
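The overlap of computation and communication that drives these results follows the classic double-buffering pattern, sketched here in Python with a background thread standing in for the DMA engine (illustrative only, not the MPPA API):

```python
import threading
import numpy as np

# While tile k is processed in one scratchpad buffer, the "DMA engine"
# (a thread here) fills the other buffer with tile k+1.
def process(tile):
    return tile.sum()                        # placeholder for the kernel

tiles = [np.full(1024, k, dtype=float) for k in range(8)]   # data in "DDR"
buffers = [np.empty(1024), np.empty(1024)]   # two scratchpad buffers

def dma_get(dst, src):                       # asynchronous copy
    t = threading.Thread(target=lambda: np.copyto(dst, src))
    t.start()
    return t

total = 0.0
pending = dma_get(buffers[0], tiles[0])      # prefetch the first tile
for k in range(len(tiles)):
    pending.join()                           # wait for tile k to land
    if k + 1 < len(tiles):                   # start fetching tile k+1
        pending = dma_get(buffers[(k + 1) % 2], tiles[k + 1])
    total += process(buffers[k % 2])         # compute on the landed tile
print(total)
```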
Стилі APA, Harvard, Vancouver, ISO та ін.
40

Bonnier, Florent. "Algorithmes parallèles pour le suivi de particules." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLV080/document.

Повний текст джерела
Анотація:
Les méthodes de suivi de particules sont couramment utilisées en mécanique des fluides de par leur propriété unique de reconstruire de longues trajectoires avec une haute résolution spatiale et temporelle. De fait, de nombreuses applications industrielles mettant en jeu des écoulements gaz-particules, comme les turbines aéronautiques, utilisent un formalisme Euler-Lagrange. L’augmentation rapide de la puissance de calcul des machines massivement parallèles et l’arrivée des machines atteignant le petaflops ouvrent une nouvelle voie pour des simulations qui étaient prohibitives il y a encore une décennie. La mise en oeuvre d’un code parallèle efficace pour maintenir une bonne performance sur un grand nombre de processeurs devra être étudiée. On s’attachera en particulier à conserver un bon équilibre des charges sur les processeurs. De plus, une attention particulière aux structures de données devra être portée afin de conserver une certaine simplicité et la portabilité et l’adaptabilité du code pour différentes architectures et différents problèmes utilisant une approche Lagrangienne. Ainsi, certains algorithmes sont à repenser pour tenir compte de ces contraintes. La puissance de calcul permettant de résoudre ces problèmes est offerte par de nouvelles architectures distribuées avec un nombre important de coeurs. Cependant, l’exploitation efficace de ces architectures est une tâche très délicate nécessitant une maîtrise des architectures ciblées, des modèles de programmation associés et des applications visées. La complexité de ces nouvelles générations d’architectures distribuées est essentiellement due à un très grand nombre de noeuds multi-coeurs. Ces noeuds ou une partie d’entre eux peuvent être hétérogènes et parfois distants. L’approche de la plupart des bibliothèques parallèles (PBLAS, ScalAPACK, P_ARPACK) consiste à mettre en oeuvre la version distribuée de leurs opérations de base, ce qui signifie que les sous-programmes de ces bibliothèques ne peuvent pas adapter leurs comportements aux types de données. Ces sous-programmes doivent être définis une fois pour l’utilisation dans le cas séquentiel et une autre fois pour le cas parallèle. L’approche par composants permet la modularité et l’extensibilité de certaines bibliothèques numériques (comme par exemple PETSc) tout en offrant la réutilisation de code séquentiel et parallèle. Cette approche récente pour modéliser des bibliothèques numériques séquentielles/parallèles est très prometteuse grâce à ses possibilités de réutilisation et son moindre coût de maintenance. Dans les applications industrielles, le besoin de l’emploi des techniques du génie logiciel pour le calcul scientifique, dont la réutilisabilité est un des éléments les plus importants, est de plus en plus mis en évidence. Cependant, ces techniques ne sont pas encore maîtrisées et les modèles ne sont pas encore bien définis. La recherche de méthodologies afin de concevoir et réaliser des bibliothèques réutilisables est motivée, entre autres, par les besoins du monde industriel dans ce domaine. L’objectif principal de ce projet de thèse est de définir des stratégies de conception d’une bibliothèque numérique parallèle pour le suivi lagrangien en utilisant une approche par composants. Ces stratégies devront permettre la réutilisation du code séquentiel dans les versions parallèles tout en permettant l’optimisation des performances. L’étude devra être basée sur une séparation entre le flux de contrôle et la gestion des flux de données.
Elle devra s’étendre aux modèles de parallélisme permettant l’exploitation d’un grand nombre de coeurs en mémoire partagée et distribuée
The complexity of these new generations of distributed architectures is essentially due to a high number of multi-core nodes. Most of the nodes can be heterogeneous and sometimes remote. Today, neither the high number of nodes nor the processes that compose the nodes are exploited by most applications and numerical libraries. The approach of most parallel libraries (PBLAS, ScalAPACK, P_ARPACK) consists in implementing the distributed version of their base operations, which means that the subroutines of these libraries cannot adapt their behaviour to the data types. These subroutines must be defined once for use in the sequential case and again for the parallel case. The object-oriented approach allows the modularity and scalability of some numerical libraries (such as PETSc) and the reusability of sequential and parallel code. This modern approach to modelling sequential/parallel libraries is very promising because of its reusability and low maintenance cost. In industrial applications, the need for the use of software engineering techniques for scientific computation, of which reusability is one of the most important elements, is increasingly highlighted. However, these techniques are not yet well defined. The search for methodologies for designing and producing reusable libraries is motivated by the needs of the industries in this field. The main objective of this thesis is to define strategies for designing a parallel library for Lagrangian particle tracking using a component approach. These strategies should allow the reuse of the sequential code in the parallel versions while allowing the optimization of performance. The study should be based on a separation between control flow and data flow management. It should extend to models of parallelism allowing the exploitation of a large number of cores in shared and distributed memory.
Стилі APA, Harvard, Vancouver, ISO та ін.
41

Yieh, Pierson. "Vehicle Pseudonym Association Attack Model." DigitalCommons@CalPoly, 2018. https://digitalcommons.calpoly.edu/theses/1840.

Повний текст джерела
Анотація:
With recent advances in technology, Vehicular Ad-hoc Networks (VANETs) have grown in application. One of these areas of application is Vehicle Safety Communication (VSC) technology. VSC technology allows for vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications that enhance vehicle safety and the driving experience. However, these newly developing technologies bring with them a concern for the vehicular privacy of drivers. Vehicles already employ pseudonyms, unique identifiers used with signal messages for a limited period of time, to prevent long-term tracking. But can attackers still compromise vehicular privacy even when vehicles employ a pseudonym change strategy? The major contribution of this paper is a new attack model that uses long-distance pseudonym-changing and short-distance non-changing protocols to associate vehicles with their respective pseudonyms.
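The core of such an association attack can be pictured as spatio-temporal linking by dead reckoning, as in the toy sketch below; the beacon fields and the distance threshold are illustrative assumptions, not the thesis's exact model.

```python
import math

# An overheard beacon: (pseudonym, time, x, y, vx, vy).
def link(last_obs, new_obs, max_err=5.0):
    """Associate a new pseudonym with an old one by dead reckoning."""
    pid, t0, x, y, vx, vy = last_obs
    nid, t1, nx, ny, _, _ = new_obs
    dt = t1 - t0
    px, py = x + vx * dt, y + vy * dt        # predicted position
    err = math.hypot(px - nx, py - ny)
    return (pid, nid) if err <= max_err else None

old = ("PSN-17", 10.0, 100.0, 50.0, 20.0, 0.0)   # last beacon before change
new = ("PSN-91", 11.0, 119.5, 50.2, 20.0, 0.0)   # first beacon after change
print(link(old, new))                            # -> ('PSN-17', 'PSN-91')
```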
Стилі APA, Harvard, Vancouver, ISO та ін.
42

Evans, Llion Marc. "Thermal finite element analysis of ceramic/metal joining for fusion using X-ray tomography data." Thesis, University of Manchester, 2013. https://www.research.manchester.ac.uk/portal/en/theses/thermal-finite-element-analysis-of-ceramicmetal-joining-for-fusion-using-xray-tomography-data(5f06bb67-1c6c-4723-ae14-f03b84628610).html.

Повний текст джерела
Анотація:
A key challenge facing the nuclear fusion community is how to design a reactor that will operate in environmental conditions not easily reproducible in the laboratory for materials testing. Finite element analysis (FEA), commonly used to predict components' performance, typically uses idealised geometries. An emerging technique shown to have improved accuracy is image based finite element modelling (IBFEM). This involves converting a three-dimensional image (such as from X-ray tomography) into an FEA mesh. A main advantage of IBFEM is that models include micro-structural and non-idealised manufacturing features. The aim of this work was to investigate the thermal performance of a CFC-Cu divertor monoblock, a carbon fibre composite (CFC) tile joined through its centre to a CuCrZr pipe with a Cu interlayer. As a plasma facing component located where thermal flux in the reactor is at its highest, one of its primary functions is to extract heat by active cooling. Therefore, characterisation of its thermal performance is vital. Investigation of the thermal performance of CFC-Cu joining methods by laser flash analysis and X-ray tomography showed a strong correlation between micro-structures at the material interface and a reduction in thermal conductivity. Therefore, this problem lent itself well to further investigation by IBFEM. However, because these high-resolution models require such large numbers of elements, commercial FEA software could not be used. This served as motivation to develop parallel software capable of performing the necessary transient thermal simulations. The resultant code was shown to scale well with increasing problem sizes, and a simulation with 137 million elements was successfully completed using 4096 cores. In comparison with low-resolution IBFEM and traditional FEA simulations, it was demonstrated to provide additional accuracy. IBFEM was used to simulate a divertor monoblock mock-up, where it was found that a region of delamination existed on the CFC-Cu interface. Predictions showed that if this was aligned unfavourably it would increase thermal gradients across the component, thus reducing lifespan. As this was a feature introduced in manufacturing, it would not have been accounted for without IBFEM. The technique developed in this work has broad engineering applications. It could be used similarly to accurately model components in conditions unfeasible to produce in the laboratory, to assist in research and development of component manufacturing, or to verify commercial components against manufacturers' claims.
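The essential IBFEM step of turning a tomography volume into a mesh can be sketched as emitting one hexahedral element per sufficiently dense voxel; the simple global threshold below is an illustrative assumption.

```python
import numpy as np

def voxels_to_hex_elements(volume, threshold):
    """Emit one 8-node hexahedral element per voxel above threshold."""
    elements, node_ids = [], {}
    def nid(i, j, k):                        # unique id per grid corner
        return node_ids.setdefault((i, j, k), len(node_ids))
    for i, j, k in zip(*np.nonzero(volume > threshold)):
        elements.append([nid(i,   j,   k), nid(i+1, j,   k),
                         nid(i+1, j+1, k), nid(i,   j+1, k),
                         nid(i,   j,   k+1), nid(i+1, j,   k+1),
                         nid(i+1, j+1, k+1), nid(i,   j+1, k+1)])
    return node_ids, elements

vol = np.random.rand(20, 20, 20)             # stand-in for a CT volume
nodes, elems = voxels_to_hex_elements(vol, 0.5)
print(len(nodes), "nodes,", len(elems), "elements")
```

The element count grows with the cube of the image resolution, which is why such models quickly reach the tens of millions of elements mentioned above.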
Стилі APA, Harvard, Vancouver, ISO та ін.
43

Nikfarjam, Farhad. "Extension de la méthode LS-STAG de type frontière immergée/cut-cell aux géométries 3D extrudées : applications aux écoulements newtoniens et non newtoniens." Thesis, Université de Lorraine, 2018. http://www.theses.fr/2018LORR0023/document.

Повний текст джерела
Анотація:
La méthode LS-STAG est une méthode de type frontière immergée/cut-cell pour le calcul d’écoulements visqueux incompressibles qui est basée sur la méthode MAC pour grilles cartésiennes décalées, où la frontière irrégulière est nettement représentée par sa fonction level-set, résultant en un gain significatif en ressources informatiques par rapport aux codes MFN commerciaux utilisant des maillages qui épousent la géométrie. La version 2D est maintenant bien établie et ce manuscrit présente son extension aux géométries 3D avec une symétrie translationnelle dans la direction z (configurations extrudées 3D). Cette étape intermédiaire sera considérée comme la clé de voûte du solveur 3D complet, puisque les problèmes de discrétisation et d’implémentation sur les machines à mémoire distribuée sont abordés à ce stade de développement. La méthode LS-STAG est ensuite appliquée à divers écoulements newtoniens et non-newtoniens dans des géométries extrudées 3D (conduite axisymétrique, cylindre circulaire, conduite cylindrique avec élargissement brusque, etc.) pour lesquels des résultats de références et des données expérimentales sont disponibles. Le but de ces investigations est d’évaluer la précision de la méthode LS-STAG, d’évaluer la polyvalence de la méthode pour les applications d’écoulement dans différents régimes (fluides newtoniens et rhéofluidifiants, écoulement laminaires stationnaires et instationnaires, écoulements granulaires) et de comparer ses performances avec de méthodes numériques bien établies (méthodes non structurées et de frontières immergées)
The LS-STAG method is an immersed boundary/cut-cell method for viscous incompressible flows based on the staggered MAC arrangement for Cartesian grids, where the irregular boundary is sharply represented by its level-set function. This approach results in a significant gain in computer resources compared to commercial body-fitted CFD codes. The 2D version of the LS-STAG method is now well-established, and this manuscript presents its extension to 3D geometries with translational symmetry in the z direction (3D extruded configurations). This intermediate step will be regarded as the milestone for the full 3D solver, since both discretization and implementation issues on distributed memory machines are tackled at this stage of development. The LS-STAG method is then applied to Newtonian and non-Newtonian flows in 3D extruded geometries (axisymmetric pipe, circular cylinder, duct with an abrupt expansion, etc.) for which benchmark results and experimental data are available. The purpose of these investigations is to evaluate the accuracy of the LS-STAG method, to assess the versatility of the method for flow applications at various regimes (Newtonian and shear-thinning fluids, steady and unsteady laminar to turbulent flows, granular flows) and to compare its performance with well-established numerical methods (body-fitted and immersed boundary methods).
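The level-set representation at the heart of the method classifies each Cartesian cell by the sign of the level-set at its corners; a minimal sketch for a circular obstacle follows (the geometry is illustrative).

```python
import numpy as np

# Signed distance to a circular obstacle: phi < 0 inside the solid.
def phi(x, y, cx=0.5, cy=0.5, r=0.2):
    return np.hypot(x - cx, y - cy) - r

N = 8
xs = np.linspace(0.0, 1.0, N + 1)            # cell-corner coordinates
corner = phi(*np.meshgrid(xs, xs, indexing="ij"))
inside = (corner < 0).astype(int)
# Count solid corners per cell to classify it:
count = inside[:-1, :-1] + inside[1:, :-1] + inside[:-1, 1:] + inside[1:, 1:]
fluid = count == 0                           # regular fluid cells
solid = count == 4                           # fully solid cells
cut = (count > 0) & (count < 4)              # cut cells straddling the boundary
print(int(cut.sum()), "cut cells,", int(solid.sum()), "solid cells")
```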
Стилі APA, Harvard, Vancouver, ISO та ін.
44

Wahl, Jean-Baptiste. "The Reduced basis method applied to aerothermal simulations." Thesis, Strasbourg, 2018. http://www.theses.fr/2018STRAD024/document.

Повний текст джерела
Анотація:
Nous présentons dans cette thèse nos travaux sur la réduction d'ordre appliquée à des simulations d'aérothermie. Nous considérons le couplage entre les équations de Navier-Stokes et une équations d'énergie de type advection-diffusion. Les paramètres physiques considérés nous obligent à considéré l'introduction d'opérateurs de stabilisation de type SUPG ou GLS. Le but étant d'ajouter une diffusion numérique dans la direction du champs de convection, afin de supprimer les oscillations non-phyisques. Nous présentons également notre stratégie de résolution basée sur la méthode des bases réduite (RBM). Afin de retrouver une décomposition affine, essentielle pour l'application de la RBM, nous avons implémenté une version discrète de la méthode d'interpolation empirique (EIM). Cette variante permet de la construction d'approximation affine pour des opérateurs complexes. Nous utilisons notamment cette méthode pour la réduction des opérateurs de stabilisations. Cependant, la construction des bases EIM pour des problèmes non-linéaires implique un grand nombre de résolution éléments finis. Pour pallier à ce problème, nous mettons en oeuvre les récents développement de l'algorithme de coconstruction entre EIM et RBM (SER)
We present in this thesis our work on model order reduction for aerothermal simulations. We consider the coupling between the incompressible Navier-Stokes equations and an advection-diffusion equation for the temperature. Since the physical parameters induce high Reynolds and Peclet numbers, we have to introduce stabilization operators in the formulation to deal with the well-known numerical stability issue. The chosen stabilization, applied to both fluid and heat equations, is the usual Streamline-Upwind/Petrov-Galerkin (SUPG) method, which adds artificial diffusivity in the direction of the convection field. We also introduce our order reduction strategy for this model, based on the Reduced Basis Method (RBM). To recover an affine decomposition for this complex model, we implemented a discrete variant of the original Empirical Interpolation Method (EIM). This variant allows building an approximate affine decomposition for complex operators such as those arising from SUPG. We also use this method for the non-linear operators induced by the shock-capturing method. The construction of an EIM basis for non-linear operators involves a potentially huge number of non-linear FEM resolutions, depending on the size of the sampling. Even if this basis is built during an offline phase, we usually cannot afford such an expensive computational cost. We took advantage of the recent development of the Simultaneous EIM Reduced basis algorithm (SER) to tackle this issue.
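The affine decomposition that EIM recovers is what enables the offline/online split of the RBM: every parameter-dependent operator is approximated as

```latex
A(\mu) \;\approx\; \sum_{q=1}^{Q} \theta_q(\mu)\, A_q ,
```

so that, for a new parameter μ, assembling the reduced system only requires evaluating the scalar functions θ_q(μ) and combining Q precomputed reduced matrices, at a cost independent of the finite element dimension.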
Стилі APA, Harvard, Vancouver, ISO та ін.
45

Kraszewski, Sebastian. "Compréhension des mécanismes d'interaction entre des nanotubes de carbone et une membrane biologique : effets toxiques et vecteurs de médicaments potentiels." Phd thesis, Université de Franche-Comté, 2010. http://tel.archives-ouvertes.fr/tel-00642770.

Повний текст джерела
Анотація:
This thesis concerns the theoretical study of the interaction mechanisms of carbon-based nanostructures with cell membranes, which form the essential fabric of living cells. This subject, highly complex given its multidisciplinary nature, was addressed essentially by means of numerical simulations. We deliberately split this work into two distinct parts. We first studied the functioning of ion channels using molecular dynamics and ab-initio studies. These channels are, on the one hand, membrane proteins essential to cellular function and, on the other hand, frequent therapeutic targets in the search for new drugs. In a second part, we studied the behaviour of bare and functionalized carbon species such as fullerenes (C60) and nanotubes (CNTs) in the presence of the cell membrane, finely analysing the uptake mechanism of these potential drug carriers by biological membranes. These molecular dynamics studies over very long time scales (sub-1 μs) and on very large systems were also a challenge from a computational point of view. To cope with this within the limited time frame of a thesis, high-performance parallel CPU/GPU computing had to be put in place. The results obtained seek to highlight the toxic role that certain nanostructures may exhibit towards the membrane proteins studied previously. This thesis naturally opens the way to the study of biocompatible nanocarriers for drug delivery.
Стилі APA, Harvard, Vancouver, ISO та ін.
46

Teng, Sin Yong. "Intelligent Energy-Savings and Process Improvement Strategies in Energy-Intensive Industries." Doctoral thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2020. http://www.nusl.cz/ntk/nusl-433427.

Повний текст джерела
Анотація:
As new technologies for energy-intensive industries continuously develop, existing plants gradually fall behind in efficiency and productivity. Fierce market competition and environmental legislation force these traditional plants to cease operation and shut down. Process improvement and retrofit projects are essential to maintaining the operational performance of these plants. Current approaches to process improvement are mainly: process integration, process optimization and process intensification. In general, these fields rely on mathematical optimization, practitioner experience and operational heuristics. These approaches serve as the foundation for process improvement. However, their performance can be further enhanced using modern computational intelligence. The purpose of this work is therefore to apply advanced artificial intelligence and machine learning techniques to process improvement in energy-intensive industrial processes. This work takes an approach that addresses the problem by simulating industrial systems, and contributes the following: (i) Application of machine learning techniques, including one-shot learning and neuro-evolution, for data-driven modelling and optimization of individual units. (ii) Application of dimensionality reduction (e.g. principal component analysis, autoencoders) for multi-objective optimization of multi-unit processes. (iii) Design of a new tool for analysing problematic parts of a system in order to eliminate them (bottleneck tree analysis — BOTA). An extension of the tool was also proposed that allows multi-dimensional problems to be solved using a data-driven approach. (iv) Demonstration of the effectiveness of Monte Carlo simulation, neural networks and decision trees for decision-making when integrating new process technology into existing processes. (v) Comparison of the HTM (Hierarchical Temporal Memory) technique and dual optimization with several predictive tools for supporting real-time operations management. (vi) Implementation of an artificial neural network within an interface for the conventional process graph (P-graph). (vii) Highlighting the future of artificial intelligence and process engineering in biosystems through a commercially based multi-omics paradigm.
Стилі APA, Harvard, Vancouver, ISO та ін.
47

PUCCI, EGIDIO. "Innovative design process for industrial gas turbine combustors." Doctoral thesis, 2018. http://hdl.handle.net/2158/1126566.

Повний текст джерела
Анотація:
This thesis tracks the design process footprints, from the wide initial scenario of a new combustor design for industrial gas turbines, down to detailed design aspects, passing through the sealing system design with the turbine nozzle, up to a specific liner cooling architecture and its optimization. The main effort of this work has been focused on the creation of a numerical tool able, since the early phase of development, to analyze the liner cooling with a one-dimensional conjugate aero-thermal-strain approach: liner cold side heat transfer coefficients in a turbulated forced convection region are iteratively computed, updating metal and air temperatures and the deformed geometry of the coolant passages from the results of a heat balance. The coolant passages, in between the deformed surfaces of liner and baffle, influence the air velocity, changing in turn the heat transfer coefficients and coolant pressure losses. The computation of liner and baffle strain has been validated by comparing the code results with those obtained by a detailed finite element model. Correlations embedded in the code have been calibrated through a comparison with experimental temperature and pressure measurements, acquired in a full annular rig test campaign. The code has been provided with two additional optimization routines, developed to automatically improve the baffle design for an enhancement of the liners' durability without penalizing engine performance. Maintaining the same coolant pressure losses and minimizing the axial gradients of metal temperature by means of a variable-gap baffle geometry, a reduction of thermally induced stresses can be achieved. The reader will follow problems and solutions, sizing criteria and uncertainty estimation of the combustor architecture adopted in the BHGE NovaLT industrial gas turbine class, up to the testing phase of the manufactured components and finally the optimization of the baffle design solution. The reliability of the liner cooling system also depends on the reliability of the leakage prediction across the interface between the liners and the turbine first stage nozzles. In parallel to the baffle design optimization, studies have been performed on this sealing system, aimed at increasing the reliability of the combustor flow split prediction and identifying areas of improvement of the sealing. The criteria for the selection and design of the most suitable sealing system and the related durability analyses will be presented, completing the picture of the combustor flow split and synergistically improving the reliability of the liner cooling design presented.
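The one-dimensional conjugate balance at the heart of such a tool can be pictured as a thermal-resistance network solved repeatedly as geometry and coefficients update; a toy sketch follows (the property values and the absence of correlation models are simplifying assumptions, not the tool itself).

```python
# 1-D steady heat balance through a cooled liner wall:
# hot gas -> convection -> wall conduction -> convection -> coolant.
def liner_wall_temps(T_gas, T_cool, h_gas, h_cool, t_wall, k_wall):
    R = 1.0 / h_gas + t_wall / k_wall + 1.0 / h_cool   # total resistance
    q = (T_gas - T_cool) / R                           # heat flux [W/m^2]
    T_hot = T_gas - q / h_gas                          # hot-side metal temp
    T_cold = T_cool + q / h_cool                       # cold-side metal temp
    return q, T_hot, T_cold

# In the coupled tool, h_cool would itself be recomputed from the coolant
# velocity in the (deformed) liner-baffle gap and iterated to convergence.
q, T_hot, T_cold = liner_wall_temps(T_gas=1900.0, T_cool=650.0,
                                    h_gas=900.0, h_cool=2500.0,
                                    t_wall=0.003, k_wall=20.0)
print(round(q), round(T_hot), round(T_cold))
```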
48

Yi, Huang Chan, and 黃展翊. "A Trust Evidence Establishment, Distribution and Value Computation Mechanism for Mobile Ad Hoc Networks." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/09485415841200298509.

Full text of the source
Abstract:
Master's thesis
National Chiao Tung University
Institute of Information Management
Academic year 95
Personal identity is the basic way to present a user's role in MANETs, so that surrounding nodes can verify a node with which they will communicate in the future. In order to evaluate a node's behavior, we can combine the identity with a trust value. However, nodes operate in an independent and self-configured architecture, so it is important to develop a complete form of trust evidence that contains personal identity, certificate information, and a trust operation mechanism. Moreover, the trust evidence can be established, distributed, evaluated, and verified on-line. This research proposes a distributed trust evidence operation mechanism for MANETs. Each node establishes its certificate by itself and has a corresponding trust identity without a central certificate authority. Trust evidence is managed so that a node can obtain other nodes' trust evidence via the transmission of packets, and the evidence cannot be modified by malicious nodes. Simulation shows that the model resolves the selfish-node and malicious-node problems, making it suitable for operating MANETs and providing a more accurate routing reference. We also prove via game theory that nodes in MANETs will cooperate. After each interaction, higher-trust nodes can reflect the outcome and re-evaluate the trust. If an intermediate node deviates, its trust value is decreased and it is regarded as a doubtful node. Therefore, when the doubtful node requests surrounding nodes to forward packets, it is rejected until it cooperates.
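As a rough illustration of the trust bookkeeping this abstract describes, the sketch below shows one plausible scheme; the update constants, the neutral starting trust, and the 0.4 doubtful-node threshold are assumptions, not the thesis's actual mechanism.

```python
# Hypothetical trust bookkeeping for a MANET node; the update rule,
# constants, and threshold below are illustrative assumptions.

TRUST_THRESHOLD = 0.4   # below this a neighbour is treated as doubtful
REWARD, PENALTY = 0.05, 0.2

class TrustTable:
    def __init__(self):
        self.trust = {}                     # neighbour id -> trust in [0, 1]

    def observe(self, node, cooperated):
        """Re-evaluate a neighbour after watching it handle a packet."""
        t = self.trust.get(node, 0.5)       # strangers start at neutral trust
        t = min(1.0, t + REWARD) if cooperated else max(0.0, t - PENALTY)
        self.trust[node] = t

    def will_forward_for(self, node):
        """Refuse service to doubtful nodes until they cooperate again."""
        return self.trust.get(node, 0.5) >= TRUST_THRESHOLD

table = TrustTable()
for behaved in [True, False, False, True]:  # node "n7" deviates twice
    table.observe("n7", behaved)
print(round(table.trust["n7"], 2), table.will_forward_for("n7"))
```

The asymmetry between REWARD and PENALTY encodes the abstract's punishment idea: trust is slow to earn and quick to lose, so a deviating node stays excluded until it has cooperated repeatedly.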
49

Schall, James David. "Computational modeling nanoindentation and an ad hoc molecular dynamics-finite difference thermostat." 2004. http://www.lib.ncsu.edu/theses/available/etd-06252004-130229/unrestricted/etd.pdf.

Full text of the source
50

Chandan, G. "Effective Automatic Computation Placement and Data Allocation for Parallelization of Regular Programs." Thesis, 2014. http://hdl.handle.net/2005/3111.

Full text of the source
Abstract:
Scientific applications that operate on large data sets require huge amounts of computational power and memory. These applications are typically run on High Performance Computing (HPC) systems that consist of multiple compute nodes connected over a network interconnect such as InfiniBand. Each compute node has its own memory and does not share an address space with other nodes. A significant amount of work has been done in the past two decades on parallelizing for distributed-memory architectures, much of it on compiler technologies such as High Performance Fortran (HPF) and partitioned global address space (PGAS) languages. However, several steps involved in achieving good performance remained manual. Hence, the approach currently used to obtain the best performance is to rely on highly tuned libraries such as ScaLAPACK. The objective of this work is to improve automatic compiler and runtime support for distributed-memory clusters for regular programs. Regular programs typically use arrays as their main data structure, and array accesses are affine functions of outer loop indices and program parameters. Many scientific applications, such as linear-algebra kernels, stencils, partial differential equation solvers, data-mining applications and dynamic programming codes, fall into this category. In this work, we propose techniques for finding computation mappings and data allocations when compiling regular programs for distributed-memory clusters. Techniques for transformation and detection of parallelism, relying on the polyhedral framework, already exist; we propose automatic techniques to determine computation placements for the identified parallelism and the allocation of data. We model the problem of finding a good computation placement as a graph partitioning problem with constraints to minimize both communication volume and load imbalance for the entire program. We show that our approach to computation mapping is more effective than those that can be developed using vendor-supplied libraries. Our approach to data allocation is driven by tiling of data spaces, along with a compiler-assisted runtime scheme to allocate and deallocate tiles on demand and reuse them. Experimental results on some sequences of BLAS calls demonstrate a mean speedup of 1.82× over versions written with ScaLAPACK. Besides enabling weak scaling for distributed memory, data tiling also improves locality for shared-memory parallelization. Experimental results on a 32-core shared-memory SMP system show a mean speedup of 2.67× over code that is not data tiled.
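The graph-partitioning formulation of computation placement can be made concrete with a toy example. The sketch below does not reproduce the thesis's polyhedral tooling; it builds a hypothetical task graph whose edge weights stand for communication volume and uses networkx's Kernighan-Lin bisection to split it into two equal halves while keeping the weighted cut small.

```python
import networkx as nx
from networkx.algorithms.community import kernighan_lin_bisection

# Hypothetical task graph: nodes are loop-nest tiles, edge weights are
# the communication volume (bytes) exchanged if the tiles are separated.
G = nx.Graph()
edges = [("t0", "t1", 800), ("t1", "t2", 900), ("t2", "t3", 850),
         ("t0", "t4", 50),  ("t4", "t5", 700), ("t5", "t6", 750),
         ("t3", "t6", 40),  ("t6", "t7", 650)]
G.add_weighted_edges_from(edges)

# Bisect into two "compute nodes", trying to minimise the weighted cut
# (inter-node communication) while keeping the halves the same size.
part_a, part_b = kernighan_lin_bisection(G, weight="weight", seed=1)

cut = sum(w for u, v, w in G.edges(data="weight")
          if (u in part_a) != (v in part_a))
print("node 0 gets:", sorted(part_a))
print("node 1 gets:", sorted(part_b))
print("communication volume across the cut:", cut, "bytes")
```

In the real setting the balance constraint would weigh tiles by their computation cost rather than by count, and the partitioner would run over the whole program's dependence structure rather than a hand-built graph.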