Dissertations / Theses on the topic 'HTC computation'
Consult the top 50 dissertations / theses for your research on the topic 'HTC computation.'
Farhat, Ahmad. "Trust computation in ad-hoc networks." FIU Digital Commons, 2005. http://digitalcommons.fiu.edu/etd/3251.
McKenzie, Simon Clayton. "Efficient computation of integrals in modern correlated methods." Thesis, University of Sydney, 2020. https://hdl.handle.net/2123/23993.
Sen, Sevil. "Evolutionary computation techniques for intrusion detection in mobile ad hoc networks." Thesis, University of York, 2010. http://etheses.whiterose.ac.uk/998/.
Natale, Irene. "A study on Friction Boundary Conditions with Unicorn/FEniCS-HPC." Thesis, KTH, Numerisk analys, NA, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-241920.
The aim of this thesis is to present and validate a boundary condition for CFD problems that includes a friction parameter. In the first part of the thesis we present the incompressible Navier-Stokes system of equations and its friction boundary conditions. We then use the Finite Element Method to discretize the presented problem, with particular emphasis on the a posteriori error estimation, the adaptive algorithm, and the numerical tripping introduced into the flow. Since this thesis relies entirely on the FEniCS-HPC software, its framework is explained, together with its powerful parallelization strategy. We then present the weak formulation of the Navier-Stokes system of equations coupled with the friction boundaries, together with an initial theoretical derivation of optimal values for the friction coefficient. Finally, in the last chapter, the preliminary results of a validation study of the lift coefficient for the model, used in benchmarking of the NACA0012 airfoil, are presented and discussed in detail. Although there are still aspects that need further clarification, we believe that our preliminary results are very promising and open a new path for simulation development in aerodynamics-related models.
Krishnamurthy, Siddhartha. "Peak Sidelobe Level Distribution Computation for Ad Hoc Arrays using Extreme Value Theory." Thesis, Harvard University, 2014. http://dissertations.umi.com/gsas.harvard:11300.
Engineering and Applied Sciences
Lehmann, Rüdiger. "A universal and robust computation procedure for geometric observations." Hochschule für Technik und Wirtschaft, 2017. https://htw-dresden.qucosa.de/id/qucosa%3A31843.
This contribution describes an automatic and robust procedure that can be applied to all classical geodetic computation problems. Starting from the given input quantities (e.g., coordinates of known points, observations), computation options for all other relevant quantities are found. With redundant input quantities there is a multitude of different computation options based on different minimal subsets of the input quantities; all of them are found automatically and their results are computed and compared. If the computation is not unique but only a finite number of solutions exists, all solutions are found and computed. By comparing the different computation results, outliers in the input quantities can be detected and a robust final result is obtained. The procedure does not operate stochastically, so no stochastic model of the observations is required. The description of the algorithm is illustrated with a practical case. It is installed on a web server and freely available via the Internet.
Soundarapandian, Manikandan. "Relational Computing Using HPC Resources: Services and Optimizations." Thesis, Virginia Tech, 2015. http://hdl.handle.net/10919/56586.
Master of Science
Bramsäter, Jenny, and Kajsa Lundgren. "Study on the Dynamic Control of Dam Operating Water Levels of Yayangshan Dam in Flood Season." Thesis, KTH, Industriell ekologi, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-174877.
Full textJamasebi, Reza. "COMPUTATIONAL PHENOTYPE DERIVED FROM PHYSIOLOGICAL TIME SERIES: APPLICATION TO SLEEP DATA ANALYSIS." Case Western Reserve University School of Graduate Studies / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=case1220467153.
Full textSan, Juan Sebastián Pablo. "HPC algorithms for nonnegative decompositions." Doctoral thesis, Universitat Politècnica de València, 2018. http://hdl.handle.net/10251/113069.
Many real-world problems can be modelled as mathematical problems with nonnegative magnitudes, and, therefore, the solutions of these problems are meaningful only if their values are nonnegative. Examples of these nonnegative magnitudes are the concentration of components in a chemical compound, frequencies in an audio signal, pixel intensities in an image, etc. Some of these problems can be modelled as an overdetermined system of linear equations. When the solution of this system of equations must be constrained to nonnegative values, a new problem arises. This problem is called the Nonnegative Least Squares (NNLS) problem, and its solution has multiple applications in science and engineering, especially for solving optimization problems with nonnegativity restrictions. Another important nonnegativity-constrained decomposition is the Nonnegative Matrix Factorization (NMF). The NMF is a very popular tool in many fields such as document clustering, data mining, machine learning, image analysis, chemical analysis, and audio source separation. This factorization tries to approximate a nonnegative data matrix with the product of two smaller nonnegative matrices, usually creating parts-based representations of the original data. The algorithms designed to compute the solution of these two nonnegative problems have a high computational cost. Due to this high cost, these decompositions can benefit from the extra performance obtained using High Performance Computing (HPC) techniques. Nowadays, there are very powerful computational systems that offer high performance and can be used to solve extremely complex problems in science and engineering. From modern multicore CPUs to the newest computational accelerators (Graphics Processing Units (GPUs), Intel Many Integrated Core (MIC), etc.), the performance of these systems keeps increasing continuously. To make the most of the hardware capabilities of these HPC systems, developers should use software technologies such as parallel programming, vectorization, or high performance computing libraries. While there are several algorithms for computing the NMF and for solving the NNLS problem, not all of them have an efficient parallel implementation available. Furthermore, it is very interesting to group several algorithms with different properties into a single computational library. This thesis presents a high-performance computational library with efficient parallel implementations of the best algorithms in the current state of the art for computing the NMF. In addition, an experimental comparison between the different implementations is presented. This library, focused on the computation of the NMF, supports multiple architectures such as multicore CPUs, GPUs and Intel MIC. The goal of the library is to offer a full suite of algorithms to help researchers, engineers or professionals that need to use the NMF. Another problem dealt with in this thesis is the updating of nonnegative decompositions. The updating problem has been studied for both the solution of the NNLS problem and the NMF. Sometimes there are nonnegative problems that are close to other nonnegative problems that have already been solved. The updating problem tries to take advantage of the solution of a problem A that has already been solved in order to obtain the solution of a new problem B, which is closely related to problem A. With this approach, problem B can be solved much faster than solving it from scratch without taking advantage of the already known solution of problem A.
In this thesis, an algorithmic scheme is proposed for both the updating of the solution of NNLS problems and the updating of the NMF. Empirical evaluations for both updating problems are also presented. The results show that the proposed algorithms are faster than solving the problems from scratch in all of the tested cases.
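As a rough illustration of the two decompositions this library targets, the sketch below solves a small NNLS problem with SciPy's active-set solver and computes an NMF with the classical Lee-Seung multiplicative updates. It is a minimal serial example on random data, not the thesis's parallel implementation; the rank, iteration count and tolerance are arbitrary assumptions.

```python
import numpy as np
from scipy.optimize import nnls  # Lawson-Hanson active-set NNLS solver

rng = np.random.default_rng(0)

# Nonnegative Least Squares: min ||A x - b||_2  subject to  x >= 0
A = rng.random((50, 10))
b = rng.random(50)
x, resid = nnls(A, b)            # every entry of x is >= 0

# NMF via Lee-Seung multiplicative updates: V ~ W H with W, H >= 0
def nmf(V, rank, iters=200, eps=1e-9):
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H; stays nonnegative
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W; stays nonnegative
    return W, H

V = rng.random((100, 40))
W, H = nmf(V, rank=5)
print("NNLS residual:", resid, "| NMF error:", np.linalg.norm(V - W @ H))
```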
San Juan Sebastián, P. (2018). HPC algorithms for nonnegative decompositions [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/113069
Karlsson, Lars. "Scheduling of parallel matrix computations and data layout conversion for HPC and Multi-Core Architectures." Doctoral thesis, Umeå universitet, Institutionen för datavetenskap, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-41224.
Matrix computations are fundamental building blocks in many computation-intensive scientific and engineering applications. The algorithms must be numerically stable and robust so that the user can rely on the computed results. The algorithms must also scale and run efficiently on massively parallel computers with nodes built from multicore processors. Developing high-performance algorithms for dense matrix computations is challenging, especially since the introduction of multicore processors. Reusing data in the cache memories of a multicore processor is even more important because of its high computational performance. Two central techniques in the pursuit of algorithms with optimal performance are blocked algorithms and blocked matrix storage formats. A blocked algorithm has a memory access pattern that matches the memory hierarchy well. A blocked matrix storage format places the matrix elements in memory so that the elements of a specific matrix block are stored contiguously. Paper I presents an algorithm for Cholesky factorization of a matrix stored compactly in distributed memory. The new storage format is blocked and thereby enables high performance. Papers II and III describe how a conventionally stored matrix can be converted to and from a blocked storage format using only a very small amount of extra storage. The solution builds on a new parallel algorithm for transposition of rectangular matrices. When creating a scalable parallel algorithm, one must also consider how the different computational tasks are scheduled efficiently. Many so-called weakly scalable algorithms are efficient only for relatively large problems. A current research trend is to develop so-called strongly scalable algorithms, which are more efficient even for smaller problems. Paper IV introduces a dynamic scheduling system for two-sided matrix computations. The computational tasks are distributed statically over the nodes and then scheduled dynamically within each node. The paper also shows how priority-based scheduling turns a previously inefficient algorithm for a so-called QR sweep into an efficient one. Papers V and VI present new parallel blocked algorithms, designed for multicore processors, for a two-stage Hessenberg reduction. The central contributions of Paper V are a blocked algorithm for the second stage of the reduction and an adaptive load-balancing method.
Abaunza, Víctor Eduardo Martínez. "Performance optimization of geophysics stencils on HPC architectures." Biblioteca Digital de Teses e Dissertações da UFRGS, 2018. http://hdl.handle.net/10183/183165.
Wave modeling is a crucial tool in geophysics for efficient strong motion analysis, risk mitigation and oil & gas exploration. Due to its simplicity and numerical efficiency, the finite-difference method is one of the standard techniques implemented to solve the wave propagation equations. These applications are known as stencils because they consist of a pattern that replicates the same computation over a multi-dimensional domain. High Performance Computing is required to solve this class of problems, as a consequence of the large number of grid points involved in three-dimensional simulations of the underground. The performance optimization of stencil computations is a challenge and strongly depends on the underlying architecture. In this context, this work was directed toward a twofold aim. Firstly, we focused our research on multicore architectures and analyzed the standard OpenMP implementation of numerical kernels from the 3D heat transfer model (a 7-point Jacobi stencil) and the Ondes3D code (a full-fledged application developed by the French Geological Survey). We considered two well-known implementations (naïve, and space blocking) to find correlations between parameters of the input configuration at runtime and the computing performance; we then proposed a Machine Learning-based approach to evaluate, predict, and improve the performance of these stencil models on the underlying architecture. We also used an acoustic wave propagation model provided by the Petrobras company and predicted its performance with high accuracy on multicore architectures. Secondly, we oriented our research towards heterogeneous architectures and analyzed the standard CUDA implementation of a seismic wave propagation model to find which factors affect the performance; we then proposed a task-based implementation to improve the performance according to the runtime configuration (scheduling algorithm, size, and number of tasks), and compared the performance obtained with the classical CPU-only or GPU-only versions against the results obtained on heterogeneous architectures.
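For reference, a naive (cache-unaware) version of the 7-point Jacobi heat stencil mentioned above can be sketched in a few lines of NumPy; the grid size and diffusion coefficient below are arbitrary assumptions, and the blocking/tiling and machine-learning tuning studied in the thesis are deliberately left out.

```python
import numpy as np

def jacobi7(u, alpha, iters):
    """Naive 7-point Jacobi sweep for the 3D heat equation (illustrative only).

    u     : 3D array of temperatures with a one-cell ghost layer on each face
    alpha : diffusion coefficient times dt/dx^2 (assumed stable, i.e. < 1/6)
    """
    for _ in range(iters):
        c = u[1:-1, 1:-1, 1:-1]
        u_new = u.copy()
        u_new[1:-1, 1:-1, 1:-1] = c + alpha * (
            u[2:, 1:-1, 1:-1] + u[:-2, 1:-1, 1:-1] +
            u[1:-1, 2:, 1:-1] + u[1:-1, :-2, 1:-1] +
            u[1:-1, 1:-1, 2:] + u[1:-1, 1:-1, :-2] - 6.0 * c)
        u = u_new   # out-of-place update; cache blocking would tile these loops
    return u

u = np.zeros((66, 66, 66))
u[33, 33, 33] = 1000.0      # hot spot in the middle of the domain
u = jacobi7(u, alpha=0.1, iters=50)
```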
López-Yunta, Mariña. "Multimodal ventricular tachycardia analysis : towards the accurate parametrization of predictive HPC electrophysiological computational models." Doctoral thesis, Universitat Politècnica de Catalunya, 2018. http://hdl.handle.net/10803/663730.
After a myocardial infarction, the affected regions of cardiac tissue undergo changes in their electrical and mechanical properties. This myocardial substrate has been related to ventricular tachycardia (VT), a type of arrhythmia. This thesis presents an exhaustive study of experimental data acquired with clinical protocols, with the aim of defining the limitations of clinical data before moving towards computational models. Computational models have great potential as tools for VT prediction, but their verification, validation, and the quantification of the uncertainty in the numerical results are necessary before they can be used as clinical tools. The precise characterization of the myocardial substrate (scar) is carried out by processing porcine experimental data obtained from invasive electrophysiological studies and cardiac magnetic resonance imaging, and the limitations of each technique are described. In addition, whether scar volume can act as an indicator of VT onset is studied. The simulation scenario for the biventricular computational models is built from the experimental data of a control case included in the experimental protocol. In it, electrophysiological simulations are performed using a detailed cellular model adapted to the ionic current properties of porcine myocytes. The model uncertainty generated by diffusion and fibre orientation is quantified. Finally, the recovery of the model after an extra stimulus is compared with experimental data by simulating an S1-S2 protocol. The numerical results show that the wave propagation patterns of the cardiac simulation match those described by the experimental activation maps when the fibres included in the model correspond to DTI data. The activation pattern is sensitive to the imposed fibre orientation. Simulations including the DTI fibre orientation are able to reproduce the physiological patterns of the electrical propagation wave in both ventricles. The obtained conduction velocity depends strongly on the imposed diffusion coefficient. The S1-S2 protocol generates restitution curves with slopes similar to the experimental curves. This thesis is a first step towards the validation of cardiac electrophysiological simulations. In the future, the limitations related to an optimal parametrization of the O'Hara-Rudy cellular model will be addressed in order to fully validate the cardiac computational model and move towards VT prediction.
Amin, Kaizar Abdul Husain. "An Integrated Architecture for Ad Hoc Grids." Thesis, University of North Texas, 2006. https://digital.library.unt.edu/ark:/67531/metadc5300/.
Full textCheema, Fahad Islam. "High-Level Parallel Programming of Computation-Intensive Algorithms on Fine-Grained Architecture." Thesis, Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-2620.
Computation-intensive algorithms require a high level of parallelism and programmability, which makes them good candidates for hardware acceleration using fine-grained processor arrays. Using a Hardware Description Language (HDL), it is very difficult to design and manage fine-grained processing units, so a High-Level Language (HLL) is a preferred alternative. This thesis analyzes HLL programming of fine-grained architectures in terms of achieved performance and resource consumption. In a case study, highly computation-intensive algorithms (interpolation kernels) are implemented on a fine-grained architecture (FPGA) using a high-level language (Mitrion-C). The Mitrion Virtual Processor (MVP) is extracted as an application-specific fine-grained processor array, and the Mitrion development environment translates the high-level design to a hardware description (HDL). Performance requirements, parallelism possibilities/limitations and the resource requirements for parallelism vary from algorithm to algorithm as well as by hardware platform. By considering parallelism at different levels, the parallelism can be adjusted to the available hardware resources, giving better control over tradeoffs such as gates versus performance and memory versus performance. This thesis proposes design approaches for adjusting parallelism at different design levels. For the interpolation kernels, different parallelism levels and design variants are proposed, which can be mixed to obtain a well-tuned, application- and resource-specific design.
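The interpolation kernels themselves are not spelled out in the abstract; as a loose illustration of the kind of computation involved, the sketch below evaluates a 1D cubic convolution (Keys) kernel in plain Python, with the four independent taps marking the fine-grained parallelism an FPGA implementation would exploit. The choice of the Keys kernel and all parameters are assumptions, and this is not Mitrion-C code.

```python
import numpy as np

def cubic_weight(t, a=-0.5):
    """Keys cubic convolution kernel, a common choice for interpolation."""
    t = abs(t)
    if t < 1:
        return (a + 2) * t**3 - (a + 3) * t**2 + 1
    if t < 2:
        return a * t**3 - 5 * a * t**2 + 8 * a * t - 4 * a
    return 0.0

def interp1d_cubic(samples, x):
    """Interpolate a uniformly sampled signal at fractional position x."""
    i = int(np.floor(x))
    acc = 0.0
    for k in range(-1, 3):                          # four taps; each tap is independent,
        idx = min(max(i + k, 0), len(samples) - 1)  # so they can be evaluated in parallel
        acc += samples[idx] * cubic_weight(x - (i + k))
    return acc

sig = np.sin(np.linspace(0, np.pi, 16))
print(interp1d_cubic(sig, 7.3))
```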
Streit, Achim. "Self-tuning job scheduling strategies for the resource management of HPC systems and computational grids." [S.l. : s.n.], 2003. http://deposit.ddb.de/cgi-bin/dokserv?idn=971579393.
Schirra, Jörg R. J. "Foundation of computational visualistics." Wiesbaden Dt. Univ.-Verl, 2005. http://deposit.ddb.de/cgi-bin/dokserv?id=2686222&prov=M&dok_var=1&dok_ext=htm.
Full textSandholm, Thomas. "Managing Service Levels in Grid Computing Systems : Quota Policy and Computational Market Approaches." Licentiate thesis, KTH, Numerical Analysis and Computer Science, NADA, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4346.
We study techniques to enforce and provision differentiated service levels in Computational Grid systems. The Grid offers simplified provisioning of peak capacity for applications with computational requirements beyond local machines and clusters, by sharing resources across organizational boundaries. Current systems have focussed on access control, i.e., managing who is allowed to run applications on remote sites. Very little work has been done on providing differentiated service levels for those applications that are admitted. This leads to a number of problems when scheduling jobs in a fair and efficient way. For example, users with a large number of long-running jobs could starve out others, both intentionally and unintentionally. We investigate the requirements of High Performance Computing (HPC) applications that run in academic Grid systems, and propose two models of service-level management. Our first model is based on global real-time quota enforcement, where projects are granted resource quota, such as CPU hours, across the Grid by a centralized allocation authority. We implement the SweGrid Accounting System to enforce quota allocated by the Swedish National Allocations Committee in the SweGrid production Grid, which connects six Swedish HPC centers. A flexible authorization policy framework allows provisioning and enforcement of two different service levels across the SweGrid clusters: high-priority and low-priority jobs. As a solution for more fine-grained control over service levels, we propose and implement a Grid Market system, using a market-based resource allocator called Tycoon. The conclusion of our research is that although the Grid accounting solution offers better service-level enforcement support than state-of-the-art production Grid systems, it turned out to be complex to set the resource price and other policies manually while ensuring fairness and efficiency of the system. Our Grid Market, on the other hand, sets the price according to dynamic demand, and it is furthermore incentive compatible, in that the overall system state remains healthy even in the presence of strategic users.
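As a toy illustration of the market-based idea (bids translate into resource shares), the following sketch splits a fixed capacity in proportion to user bids. It is only a simplification of how proportional-share allocators such as Tycoon behave; the user names and numbers are made up, and the real system operates continuously and per host.

```python
def proportional_share(bids, capacity):
    """Split a resource among users in proportion to their bids (illustrative sketch)."""
    total = sum(bids.values())
    if total == 0:
        return {user: 0.0 for user in bids}
    return {user: capacity * bid / total for user, bid in bids.items()}

# Example: three users bidding for 64 CPU cores
print(proportional_share({"alice": 10.0, "bob": 30.0, "carol": 20.0}, capacity=64))
# -> alice gets ~10.7 cores, bob 32, carol ~21.3
```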
Ling, Cheng. "High performance bioinformatics and computational biology on general-purpose graphics processing units." Thesis, University of Edinburgh, 2012. http://hdl.handle.net/1842/6260.
Shabut, Antesar Ramadan M. "Trust computational models for mobile ad hoc networks : recommendation based trustworthiness evaluation using multidimensional metrics to secure routing protocol in mobile ad hoc networks." Thesis, University of Bradford, 2015. http://hdl.handle.net/10454/7501.
Full textShabut, Antesar R. M. "Trust Computational Models for Mobile Ad Hoc Networks. Recommendation Based Trustworthiness Evaluation using Multidimensional Metrics to Secure Routing Protocol in Mobile Ad Hoc Networks." Thesis, University of Bradford, 2015. http://hdl.handle.net/10454/7501.
Ministry of Higher Education in Libya and the Libyan Cultural Attaché bureau in London
Agarwal, Dinesh. "Scientific High Performance Computing (HPC) Applications On The Azure Cloud Platform." Digital Archive @ GSU, 2013. http://digitalarchive.gsu.edu/cs_diss/75.
Browne, Daniel R. "Application of multi-core and cluster computing to the Transmission Line Matrix method." Thesis, Loughborough University, 2014. https://dspace.lboro.ac.uk/2134/14984.
Full textRoos, Daniel. "Evaluation of BERT-like models for small scale ad-hoc information retrieval." Thesis, Linköpings universitet, Artificiell intelligens och integrerade datorsystem, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-177675.
Reuter, Balthasar. "Coarsening of Simplicial Meshes for Large Scale Parallel FEM Computations with DOLFIN HPC : A parallel implementation of the edge collapse algorithm." Thesis, KTH, Numerisk analys, NA, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-124240.
Adaptive refinement and coarsening of finite element meshes are effective techniques for reducing the computation time of finite element solvers. Implementing such adaptation routines, suitable for large-scale computations on distributed-memory machines, requires great care. This work presents a coarsening method based on edge collapses. Its implementation and optimization for parallel computation are explained and analysed with respect to coarsening efficiency and run time. As an application, mesh coarsening in an adaptive flow simulation is demonstrated.
Enico, Daniel. "External Heat Transfer Coefficient Predictions on a Transonic Turbine Nozzle Guide Vane Using Computational Fluid Dynamics." Thesis, Linköpings universitet, Mekanisk värmeteori och strömningslära, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-178173.
Ansari, Sam. "Analysis of protein-protein interactions : a computational approach /." Saarbrücken : VDM Verl. Dr. Müller, 2007. http://deposit.d-nb.de/cgi-bin/dokserv?id=2992987&prov=M&dok_var=1&dok_ext=htm.
Kof, Leonid. "Text analysis for requirements engineering : application of computational linguistics /." Saarbrücken : VDM Verl. Dr. Müller, 2007. http://deposit.d-nb.de/cgi-bin/dokserv?id=3021639&prov=M&dok_var=1&dok_ext=htm.
Fundinger, Danny Georg. "Investigating dynamics by multilevel phase space discretization : approaches towards the efficient computation of nonlinear dynamical systems /." Saarbrücken : VDM Verlag Dr. Müller, 2007. http://deposit.d-nb.de/cgi-bin/dokserv?id=3042152&prov=M&dok_var=1&dok_ext=htm.
Baral, Darshan. "Computational Study of Fish Passage through Circular Culverts in Northeast Ohio." Youngstown State University / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=ysu1369409121.
Gajurel, Sanjaya. "Multi-Criteria Direction Antenna Multi-Path Location Aware Routing Protocol for Mobile Ad Hoc Networks." Case Western Reserve University School of Graduate Studies / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=case1197301773.
Fröhlich, Holger. "Kernel methods in chemo- and bioinformatics." Berlin Logos-Verl, 2006. http://deposit.d-nb.de/cgi-bin/dokserv?id=2888426&prov=M&dok_var=1&dok_ext=htm.
Wegner, Jörg Kurt. "Data mining und graph mining auf molekularen Graphen - Cheminformatik und molekulare Kodierungen für ADME/Tox-QSAR-Analysen." Berlin Logos, 2006. http://deposit.d-nb.de/cgi-bin/dokserv?id=2865580&prov=M&dok_var=1&dok_ext=htm.
Bartsch, Adam Jesse. "Biomechanical Engineering Analyses of Head and Spine Impact Injury Risk via Experimentation and Computational Simulation." Case Western Reserve University School of Graduate Studies / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=case1291318455.
Full textBernabeu, Llinares Miguel Oscar. "An open source HPC-enabled model of cardiac defibrillation of the human heart." Thesis, University of Oxford, 2011. http://ora.ox.ac.uk/objects/uuid:9ca44896-8873-4c91-9358-96744e28d187.
Sacco, Federica. "Quantification of the influence of detailed endocardial structures on human cardiac haemodynamics and electrophysiology using HPC." Doctoral thesis, Universitat Pompeu Fabra, 2019. http://hdl.handle.net/10803/667670.
Full textLehmann, Rüdiger. "Ebene Geodätische Berechnungen: Internes Manuskript." Hochschule für Technik und Wirtschaft, 2018. https://htw-dresden.qucosa.de/id/qucosa%3A31824.
This manuscript evolved from lectures on Geodetic Computations at the University of Applied Sciences Dresden (Germany). Since this lecture is given in the first or second semester, no advanced mathematical methods are used. The range of topics is limited to elementary computations in the plane. (The record also lists the German table of contents, covering plane trigonometry, plane coordinate computations, area computation and division, circle and ellipse, plane intersection methods, and plane coordinate transformations.)
Carron, Léopold. "Analyse à haute résolution de la structure spatiale des chromosomes eucaryotes Boost-HiC : Computational enhancement of long-range contacts in chromosomal contact maps Genome supranucleosomal organization and genetic susceptibility to disease." Thesis, Sorbonne université, 2019. http://www.theses.fr/2019SORUS593.
Genetic information is encoded in DNA, a very long nucleotide polymer. To understand DNA folding mechanisms, an experimental technique is now available that quantifies distal genomic contacts. This high-throughput chromosome conformation capture technique, called Hi-C, reveals 3D chromosome folding in the nucleus. In recent years, the Hi-C experimental protocol has been improved through numerous studies of the human, mouse and Drosophila genomes. Because most of these studies are performed at poor resolution, I propose bioinformatic methods to analyze these datasets at fine resolution. To this end, I present Boost-HiC, a tool that enhances long-range contacts in Hi-C data. This extended knowledge is then used to compare 3D folding across species. This result provides the basis for determining the best method for obtaining genomic compartments from a chromosomal contact map. Finally, I present further applications of our methodology to study the link between the borders of topologically associating domains and the genomic location of single-nucleotide mutations associated with cancer.
Ho, Minh Quan. "Optimisation de transfert de données pour les processeurs pluri-coeurs, appliqué à l'algèbre linéaire et aux calculs sur stencils." Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM042/document.
The upcoming Exascale target in High Performance Computing (HPC) and disruptive achievements in artificial intelligence have given rise to alternative, non-conventional many-core architectures, with the energy efficiency typical of embedded systems, that provide the same software ecosystem as classic HPC platforms. A key enabler of energy-efficient computing on many-core architectures is the exploitation of data locality, specifically the use of scratchpad memories in combination with DMA engines in order to overlap computation and communication. Such a software paradigm raises considerable programming challenges for both the vendor and the application developer. In this thesis, we tackle the memory transfer and performance issues, as well as the programming challenges, of memory- and compute-intensive HPC applications on the Kalray MPPA many-core architecture. With the first, memory-bound use case of the lattice Boltzmann method (LBM), we provide generic and fundamental techniques for decomposing three-dimensional iterative stencil problems onto clustered many-core processors fitted with scratchpad memories and DMA engines. The developed DMA-based streaming and overlapping algorithm delivers a 33% performance gain over the default cache-based implementation. High-dimensional stencil computation suffers from a serious I/O bottleneck and limited on-chip memory space. We developed a new in-place LBM propagation algorithm, which halves the memory footprint and yields 1.5 times higher performance-per-byte efficiency than the state-of-the-art out-of-place algorithm. On the compute-intensive side, with dense linear algebra computations, we built an optimized matrix multiplication benchmark based on the exploitation of scratchpad memory and efficient asynchronous DMA communication. These techniques are then extended to a DMA module of the BLIS framework, which allows us to instantiate an optimized and portable level-3 BLAS numerical library on any DMA-based architecture in less than 100 lines of code. We achieve 75% of peak performance on the MPPA processor with the matrix multiplication operation (GEMM) from the standard BLAS library, without having to write thousands of lines of laboriously optimized code for the same result.
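A minimal sketch of the tiling idea behind the optimized GEMM described above is given below in NumPy: the computation is split into tiles small enough to fit in a fast local memory, and the comments indicate where a DMA engine would prefetch the next tiles to overlap communication with computation. Tile size and matrix shapes are arbitrary assumptions; this is not the MPPA/BLIS code from the thesis.

```python
import numpy as np

def tiled_gemm(A, B, tile=64):
    """Blocked matrix multiplication C = A @ B (illustrative sketch).

    Each (i, j, k) tile is small enough to live in a fast local memory; on a
    DMA-based architecture, the next A/B tiles would be fetched into a second
    buffer while the current tiles are being multiplied (double buffering),
    which is the compute/communication overlap described above.
    """
    n, k = A.shape
    k2, m = B.shape
    assert k == k2
    C = np.zeros((n, m), dtype=A.dtype)
    for i0 in range(0, n, tile):
        for j0 in range(0, m, tile):
            for k0 in range(0, k, tile):
                C[i0:i0+tile, j0:j0+tile] += (
                    A[i0:i0+tile, k0:k0+tile] @ B[k0:k0+tile, j0:j0+tile])
    return C

A = np.random.rand(200, 300)
B = np.random.rand(300, 150)
assert np.allclose(tiled_gemm(A, B), A @ B)
```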
Bonnier, Florent. "Algorithmes parallèles pour le suivi de particules." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLV080/document.
The complexity of the new generations of distributed architectures is essentially due to a high number of multi-core nodes. Most of the nodes can be heterogeneous and sometimes remote. Today, neither the high number of nodes nor the processors that compose the nodes are exploited by most applications and numerical libraries. The approach of most parallel libraries (PBLAS, ScaLAPACK, P_ARPACK) consists in implementing distributed versions of their base operations, which means that the subroutines of these libraries cannot adapt their behavior to the data types. These subroutines must be defined once for use in the sequential case and again for the parallel case. The object-oriented approach allows the modularity and scalability of some numerical libraries (such as PETSc) and the reusability of sequential and parallel code. This modern approach to modelling sequential/parallel libraries is very promising because of its reusability and low maintenance cost. In industrial applications, the need for software engineering techniques for scientific computation, of which reusability is one of the most important elements, is increasingly highlighted. However, these techniques are not yet well defined. The search for methodologies for designing and producing reusable libraries is motivated by the needs of the industries in this field. The main objective of this thesis is to define strategies for designing a parallel library for Lagrangian particle tracking using a component approach. These strategies should allow the reuse of the sequential code in the parallel versions while allowing performance optimization. The study is based on a separation between control flow and data flow management, and extends to models of parallelism allowing the exploitation of a large number of cores in shared and distributed memory.
Yieh, Pierson. "Vehicle Pseudonym Association Attack Model." DigitalCommons@CalPoly, 2018. https://digitalcommons.calpoly.edu/theses/1840.
Evans, Llion Marc. "Thermal finite element analysis of ceramic/metal joining for fusion using X-ray tomography data." Thesis, University of Manchester, 2013. https://www.research.manchester.ac.uk/portal/en/theses/thermal-finite-element-analysis-of-ceramicmetal-joining-for-fusion-using-xray-tomography-data(5f06bb67-1c6c-4723-ae14-f03b84628610).html.
Full textNikfarjam, Farhad. "Extension de la méthode LS-STAG de type frontière immergée/cut-cell aux géométries 3D extrudées : applications aux écoulements newtoniens et non newtoniens." Thesis, Université de Lorraine, 2018. http://www.theses.fr/2018LORR0023/document.
The LS-STAG method is an immersed boundary/cut-cell method for viscous incompressible flows based on the staggered MAC arrangement for Cartesian grids, where the irregular boundary is sharply represented by its level-set function. This approach results in a significant gain in computer resources compared to commercial body-fitted CFD codes. The 2D version of the LS-STAG method is now well established, and this manuscript presents its extension to 3D geometries with translational symmetry in the z direction (3D extruded configurations). This intermediate step is regarded as the milestone for the full 3D solver, since both discretization and implementation issues on distributed-memory machines are tackled at this stage of development. The LS-STAG method is then applied to Newtonian and non-Newtonian flows in 3D extruded geometries (axisymmetric pipe, circular cylinder, duct with an abrupt expansion, etc.) for which benchmark results and experimental data are available. The purpose of these investigations is to evaluate the accuracy of the LS-STAG method, to assess its versatility for flow applications in various regimes (Newtonian and shear-thinning fluids, steady and unsteady laminar to turbulent flows, granular flows), and to compare its performance with well-established numerical methods (body-fitted and immersed boundary methods).
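As a small illustration of the level-set representation mentioned above (not the LS-STAG discretization itself), the sketch below classifies the cells of a Cartesian grid as fluid, solid or cut from the sign of a level-set function sampled at the cell corners; the cylinder radius, grid size and sign convention are assumptions.

```python
import numpy as np

def classify_cells(phi):
    """Classify Cartesian cells from a level set sampled at cell corners.

    phi < 0 : inside the solid, phi > 0 : in the fluid (sign convention assumed).
    A cell whose corner values change sign is a 'cut cell' crossed by the boundary.
    """
    corners = np.stack([phi[:-1, :-1], phi[1:, :-1], phi[:-1, 1:], phi[1:, 1:]])
    fluid = np.all(corners > 0, axis=0)
    solid = np.all(corners < 0, axis=0)
    cut = ~(fluid | solid)
    return fluid, solid, cut

# Level set of a circular cylinder of radius 0.3 centred in the unit square
x, y = np.meshgrid(np.linspace(0, 1, 65), np.linspace(0, 1, 65), indexing="ij")
phi = np.hypot(x - 0.5, y - 0.5) - 0.3
fluid, solid, cut = classify_cells(phi)
print(cut.sum(), "cut cells out of", cut.size)
```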
Wahl, Jean-Baptiste. "The Reduced basis method applied to aerothermal simulations." Thesis, Strasbourg, 2018. http://www.theses.fr/2018STRAD024/document.
We present in this thesis our work on model order reduction for aerothermal simulations. We consider the coupling between the incompressible Navier-Stokes equations and an advection-diffusion equation for the temperature. Since the physical parameters induce high Reynolds and Peclet numbers, we have to introduce stabilization operators in the formulation to deal with the well-known numerical stability issues. The chosen stabilization, applied to both the fluid and heat equations, is the usual Streamline-Upwind/Petrov-Galerkin (SUPG) method, which adds artificial diffusivity in the direction of the convection field. We also introduce our order reduction strategy for this model, based on the Reduced Basis Method (RBM). To recover an affine decomposition for this complex model, we implemented a discrete variant of the Empirical Interpolation Method (EIM). This variant allows building an approximate affine decomposition for complex operators such as those arising from SUPG. We also use this method for the non-linear operators induced by the shock-capturing method. The construction of an EIM basis for non-linear operators involves a potentially huge number of non-linear FEM solves, depending on the size of the sampling. Even though this basis is built during an offline phase, we usually cannot afford such an expensive computational cost. We took advantage of the recent development of the Simultaneous EIM Reduced basis algorithm (SER) to tackle this issue.
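For reference, the standard SUPG-stabilized weak form of a scalar advection-diffusion problem (the generic textbook form, not necessarily the exact formulation used in the thesis) reads:

```latex
% SUPG-stabilized weak form for  \beta \cdot \nabla u - \kappa \Delta u = f
\[
\underbrace{(\beta\cdot\nabla u,\, v) + (\kappa\,\nabla u,\, \nabla v)}_{\text{Galerkin terms}}
\;+\;
\underbrace{\sum_{K\in\mathcal{T}_h} \tau_K\,
\big(\beta\cdot\nabla u - \kappa\,\Delta u - f,\;\beta\cdot\nabla v\big)_K}_{\text{SUPG streamline diffusion}}
\;=\;(f,\,v)
\qquad \forall\, v \in V_h
\]
```

Here $\tau_K$ is an element-wise stabilization parameter and $\mathcal{T}_h$ the mesh; since the extra term is weighted by the residual, it vanishes on the exact solution and consistency is preserved.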
Kraszewski, Sebastian. "Compréhension des mécanismes d'interaction entre des nanotubes de carbone et une membrane biologique : effets toxiques et vecteurs de médicaments potentiels." Phd thesis, Université de Franche-Comté, 2010. http://tel.archives-ouvertes.fr/tel-00642770.
Teng, Sin Yong. "Intelligent Energy-Savings and Process Improvement Strategies in Energy-Intensive Industries." Doctoral thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2020. http://www.nusl.cz/ntk/nusl-433427.
Full textPUCCI, EGIDIO. "Innovative design process for industrial gas turbine combustors." Doctoral thesis, 2018. http://hdl.handle.net/2158/1126566.
Yi, Huang Chan, and 黃展翊. "A Trust Evidence Establishment, Distribution and Value Computation Mechanism for Mobile Ad Hoc Networks." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/09485415841200298509.
Full text國立交通大學
資訊管理研究所
95
In a MANET, personal identity is the basic way to present a user's role, so surrounding nodes can verify the node they will communicate with. To evaluate a node's behaviour, the identity can be combined with a trust value. Because nodes operate in an independent, self-configured architecture, it is important to develop a complete trust evidence that contains the personal identity, certificate information, and a trust operation mechanism, and that can be established, distributed, evaluated, and verified online. This research proposes a distributed trust evidence operation mechanism for MANETs. Each node establishes its own certificate and has a corresponding trust identity without a central certificate authority. The trust evidence management scheme lets nodes obtain others' trust evidence through the transmission of packets without it being modified by malicious nodes. Simulations show that the model mitigates the selfish-node and malicious-node problems, making it suitable for operating MANETs and providing a more accurate routing reference. Using game theory, we also show that nodes in a MANET will cooperate: after each interaction, higher-trust nodes reflect on the outcome and re-evaluate trust. If an intermediate node deviates, its trust value is decreased and it is regarded as a doubtful node; when a doubtful node then requests surrounding nodes to forward packets, it is rejected until it cooperates.
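As a small, generic illustration of turning observed behaviour into a trust value (a common beta-reputation heuristic from the MANET trust literature, not the specific evidence mechanism proposed in this thesis), consider:

```python
def beta_trust(successes, failures):
    """Beta-reputation trust estimate from observed forwarding behaviour.

    Illustrative only; the thesis defines its own evidence format and update rules.
    """
    return (successes + 1) / (successes + failures + 2)

# A neighbour that forwarded 18 of 20 packets
print(beta_trust(18, 2))   # ~0.86 -> likely cooperative

# A neighbour that dropped most packets
print(beta_trust(3, 17))   # ~0.18 -> treated as doubtful, routing avoids it
```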
Schall, James David. "Computational modeling nanoindentation and an ad hoc molecular dynamics-finite difference thermostat /." 2004. http://www.lib.ncsu.edu/theses/available/etd-06252004-130229/unrestricted/etd.pdf.
Chandan, G. "Effective Automatic Computation Placement and Data Allocation for Parallelization of Regular Programs." Thesis, 2014. http://hdl.handle.net/2005/3111.