Academic literature on the topic 'High performance computing'

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'High performance computing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "High performance computing"

1. Bungartz, Hans-Joachim. "High-Performance Computing." it - Information Technology 55, no. 3 (June 2013): 83–85. http://dx.doi.org/10.1524/itit.2013.9003.

2. Zier, Ulrich, and J. P. Morgan. "High-Performance Computing." Computers & Geosciences 27, no. 3 (April 2001): 369–70. http://dx.doi.org/10.1016/s0098-3004(00)00125-4.

3. Marsh, P. "High Performance Horizons [High Performance Computing]." Computing and Control Engineering 15, no. 6 (December 1, 2004): 42–48. http://dx.doi.org/10.1049/cce:20040613.

4. Benkrid, Khaled, Esam El-Araby, Miaoqing Huang, Kentaro Sano, and Thomas Steinke. "High-Performance Reconfigurable Computing." International Journal of Reconfigurable Computing 2012 (2012): 1–2. http://dx.doi.org/10.1155/2012/104963.

5. Simons, Joshua E., and Jeffrey Buell. "Virtualizing High Performance Computing." ACM SIGOPS Operating Systems Review 44, no. 4 (December 13, 2010): 136–45. http://dx.doi.org/10.1145/1899928.1899946.

6. Blaheta, Radim, Ivan Georgiev, Krassimir Georgiev, Ondrej Jakl, Roman Kohut, Svetozar Margenov, and Jiry Starý. "High Performance Computing Applications." Cybernetics and Information Technologies 17, no. 5 (December 20, 2017): 5–16. http://dx.doi.org/10.1515/cait-2017-0050.
Abstract: High Performance Computing (HPC) is required for many important applications in chemistry, computational fluid dynamics, etc.; see, e.g., the overview in [1]. In this paper we briefly describe an application (a multiscale material design problem) that requires HPC for several reasons. The problem of interest is the analysis of fiber-reinforced concrete, and we focus on modelling stiffness through numerical homogenization and computing local material properties by inverse analysis. Both problems require the repeated solution of large-scale finite element problems with up to 200 million degrees of freedom, so the importance of HPC is evident.
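
To make concrete why such problems demand HPC, here is a minimal Python sketch of the kind of finite element system the abstract refers to: a 1-D Poisson problem assembled and solved at toy size. The thesis-scale problems are 3-D with up to 200 million unknowns, solved repeatedly with sparse storage and parallel iterative solvers; everything below is an illustrative placeholder.

import numpy as np

# -u'' = 1 on [0,1] with u(0) = u(1) = 0, linear finite elements
n = 1000                       # interior degrees of freedom (thesis scale: ~2e8)
h = 1.0 / (n + 1)

# Tridiagonal stiffness matrix K (dense here for brevity; real codes
# use sparse storage and iterative solvers at this scale and beyond).
K = (np.diag(2.0 * np.ones(n)) +
     np.diag(-1.0 * np.ones(n - 1), 1) +
     np.diag(-1.0 * np.ones(n - 1), -1)) / h

f = np.full(n, h)              # load vector for f(x) = 1
u = np.linalg.solve(K, f)
print(f"max displacement ~ {u.max():.4f}")   # analytic maximum is 1/8 = 0.125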

7. Lathrop, Scott, and Thomas Murphy. "High-Performance Computing Education." Computing in Science & Engineering 10, no. 5 (September 2008): 9–11. http://dx.doi.org/10.1109/mcse.2008.132.

8. Yang, Mei, Yingtao Jiang, Ling Wang, and Yulu Yang. "High Performance Computing Architectures." Computers & Electrical Engineering 35, no. 6 (November 2009): 815–16. http://dx.doi.org/10.1016/j.compeleceng.2009.02.009.

9. Mauch, Viktor, Marcel Kunze, and Marius Hillenbrand. "High Performance Cloud Computing." Future Generation Computer Systems 29, no. 6 (August 2013): 1408–16. http://dx.doi.org/10.1016/j.future.2012.03.011.

10. Devitt, Simon J., William J. Munro, and Kae Nemoto. "High Performance Quantum Computing." Progress in Informatics, no. 8 (March 2011): 49. http://dx.doi.org/10.2201/niipi.2011.8.6.

Dissertations / Theses on the topic "High performance computing"

1. Khan, Omar Usman. "High Performance Computing using GPGPU's." Doctoral thesis, Politecnico di Torino, 2013. http://hdl.handle.net/11583/2506369.
Abstract: Computer-based simulation software built on numerical methods plays a major role in research in the natural and physical sciences. These tools allow scientists to attempt problems that are too large to solve using analytical methods, but even they can fail to give solutions due to computational or storage limits. As the performance of computer hardware improves, however, the computational limits can be pushed back. One such area of work is magnetic field modelling, which plays a crucial role in various fields of research, especially those related to nanotechnology. Thanks to remarkable advances in this field, magnetic modelling has attracted renewed interest, and rightly so. The most significant impact of this interest is perhaps felt in the increasing areal densities of data storage devices, which are projected to reach almost atomic scales. Computational limits, and solutions based on hardware delivering high performance, are therefore a key component of research in this field. The scales of length and time play a crucial role in observing magnetic phenomena, and as these scales are reduced, new behaviours can be observed. Coarser scales may be beneficial when modelling larger systems, but when working at sub-μm scales, a finer scale has to be selected. Doing so reproduces the proper magnetic behaviour of the materials, but brings its share of problems, which are addressed in this thesis. Simulations are usually configured before being started. The configuration is performed using scripting-based methods which need to reflect the proper environmental conditions: for example, simulating multiple bodies with varying orientations, non-uniform geometries, or bodies consisting of multiple layers with different properties per layer all require different configuration methods. A performance-oriented solution needs to be optimized for each type of simulation, which may require restructuring different components of the simulator. This thesis is devoted to addressing such problems with a focus on performance-oriented solutions. The scope of the work is limited to magnetostatic field calculations in particular, because they consume the most time in the overall simulation, and is confined to regular structured rectangular meshes, which are popular in major micromagnetic simulation software. On regular meshes, magnetostatic field calculations can gain a performance boost from Fast Fourier Transforms, so fast FFT libraries using open standards are also addressed in this thesis. In particular, this thesis is based on the development of open standards for magnetic field modelling. The major contributions in this regard include an OpenCL-specific FFT library for GPUs and a GPU-based magnetostatic field solver which is used as an extension to the OOMMF simulator. The thesis covers some novel numerical techniques that were developed to target particular simulation configurations and obtain maximum performance.
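
As a toy illustration of the FFT technique the abstract relies on, the sketch below (plain numpy, no GPU) evaluates a periodic convolution of a 2-D "magnetization" pattern with an interaction kernel in Fourier space; on a regular mesh this replaces an O(n^4) double sum over cell pairs with an O(n^2 log n) transform. The Gaussian kernel is an invented placeholder for the real demagnetization tensor, and the thesis's OpenCL FFT library and OOMMF extension are not represented here.

import numpy as np

n = 256
m = np.zeros((n, n))
m[96:160, 96:160] = 1.0                          # toy magnetization pattern

x = np.fft.fftfreq(n) * n                        # periodic cell coordinates
kx, ky = np.meshgrid(x, x)
kernel = np.exp(-(kx**2 + ky**2) / 50.0)         # placeholder interaction kernel

# Convolution theorem: pointwise multiply in Fourier space instead of
# summing pairwise interactions over every pair of mesh cells.
field = np.real(np.fft.ifft2(np.fft.fft2(m) * np.fft.fft2(kernel)))
print(field.shape, f"{field.max():.2f}")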

2. Roozmeh, Mehdi. "High Performance Computing via High Level Synthesis." Doctoral thesis, Politecnico di Torino, 2018. http://hdl.handle.net/11583/2710706.
Abstract: As more and more powerful integrated circuits appear on the market, more and more applications, with very different requirements and workloads, are making use of the available computing power. This thesis is devoted to High Performance Computing applications, where those trends are carried to the extreme. In this domain, the primary aspects to be taken into consideration are (1) performance (by definition) and (2) energy consumption (since operational costs dominate over procurement costs). These requirements can be satisfied more easily by deploying heterogeneous platforms, which include CPUs, GPUs and FPGAs and provide a broad range of performance and energy-per-operation choices. In particular, as we will see, FPGAs clearly dominate both CPUs and GPUs in terms of energy and can provide comparable performance. An important aspect of this trend is of course design technology, because these applications were traditionally programmed in high-level languages, while FPGAs required low-level RTL design. OpenCL (Open Computing Language), developed by the Khronos Group, enables developers to program CPUs, GPUs and, recently, FPGAs using functionally portable (but sadly not performance-portable) source code, which creates new possibilities and challenges for both research and industry. FPGAs have always been used for mid-size designs and ASIC prototyping thanks to their energy-efficient and flexible hardware architecture, but their use requires hardware design knowledge and laborious design cycles. Several approaches have been developed and deployed to address this issue and shorten the gap between software and hardware in the FPGA design flow, in order to enable FPGAs to capture a larger portion of the hardware acceleration market in data centers. Moreover, FPGA usage in data centers is already growing, regardless of and in addition to their use as computational accelerators, because they can be used as high-performance, low-power and secure switches inside data centers. High-Level Synthesis (HLS) is the methodology that enables designers to map their applications onto FPGAs (and ASICs). It synthesizes parallel hardware from a model originally written in C-based programming languages, e.g., C/C++, SystemC and OpenCL. Design space exploration of the variety of implementations that can be obtained from this C model is possible through a wide range of optimization techniques and directives, e.g., to pipeline loops and partition memories into multiple banks, which guide RTL generation toward application-dependent hardware and let designers benefit from the flexible parallel architecture of FPGAs. Model Based Design (MBD) is a high-level, visual process used to generate implementations that solve mathematical problems through a varied set of IP blocks. MBD enables developers with different expertise, e.g., control theory, embedded software development and hardware design, to share a common design framework and contribute to a shared design using the same tool. Simulink, developed by MathWorks, is a model-based design tool for the simulation and development of complex dynamical systems. Moreover, Simulink's embedded code generators can produce verified C/C++ and HDL code from the graphical model, and this code can be used to program microcontrollers and FPGAs. This PhD thesis presents a study using the automatic code generator of Simulink to target Xilinx FPGAs with both HDL and C/C++ code, to demonstrate the capabilities and challenges of the high-level synthesis process. To do so, the digital signal processing unit of a real-time radar application is first developed using Simulink blocks; the generated C-based model is then used for high-level synthesis; and finally the implementation cost of HLS is compared to traditional HDL synthesis using the Xilinx tool chain. As an alternative to the model-based design approach, this work also presents an analysis of FPGA programming via high-level synthesis techniques for computationally intensive algorithms, and demonstrates the importance of HLS by comparing the performance-per-watt of GPUs (NVIDIA) and FPGAs (Xilinx) manufactured in the same node running standard OpenCL benchmarks. We conclude that the generation of high-quality RTL from an OpenCL model requires a stronger hardware background than the MBD approach; however, the availability of fast and broad design space exploration and the portability of OpenCL code, e.g., to CPUs and GPUs, motivates FPGA industry leaders to provide users with an OpenCL software development environment that promises FPGA programming in a CPU/GPU-like fashion. Our experiments, through extensive design space exploration (DSE), suggest that FPGAs have higher performance-per-watt than two high-end GPUs manufactured in the same technology (28 nm). Moreover, FPGAs with more available resources and a more modern process (20 nm) can outperform the tested GPUs while consuming much less power, at the cost of more expensive devices.
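
The functional portability of OpenCL that the abstract describes can be illustrated with a minimal vector-add kernel: the same source runs unmodified on any device with an OpenCL runtime, whether CPU, GPU, or (with a vendor SDK) FPGA, though performance differs greatly between them. This sketch assumes the third-party pyopencl package and at least one installed OpenCL platform.

import numpy as np
import pyopencl as cl

KERNEL = """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *c) {
    int i = get_global_id(0);
    c[i] = a[i] + b[i];
}
"""

a = np.random.rand(1 << 20).astype(np.float32)
b = np.random.rand(1 << 20).astype(np.float32)

ctx = cl.create_some_context()           # picks whatever device is available
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
c_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

prog = cl.Program(ctx, KERNEL).build()   # compiled for the chosen device
prog.vadd(queue, a.shape, None, a_buf, b_buf, c_buf)

c = np.empty_like(a)
cl.enqueue_copy(queue, c, c_buf)
assert np.allclose(c, a + b)             # same source, any OpenCL device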

3. Balakrishnan, Suresh Reuben A/L. "Hybrid High Performance Computing (HPC) + Cloud for Scientific Computing." Thesis, Curtin University, 2022. http://hdl.handle.net/20.500.11937/89123.
Abstract: The HPC+Cloud framework has been built to enable on-premise HPC jobs to use resources from cloud computing nodes. As part of designing the software framework, public cloud providers, namely Amazon AWS, Microsoft Azure and NeCTAR, were benchmarked against one another, and Microsoft Azure was determined to be the most suitable cloud component in the proposed HPC+Cloud software framework. Finally, an HPC+Cloud cluster was built using the HPC+Cloud software framework and validated by conducting HPC processing benchmarks.

4. Roberts, Stephen I. "Energy-Aware Performance Engineering in High Performance Computing." Thesis, University of Warwick, 2017. http://wrap.warwick.ac.uk/107784/.
Abstract: Advances in processor design have delivered performance improvements for decades. As physical limits are reached, however, refinements to the same basic technologies are beginning to yield diminishing returns. Unsustainable increases in energy consumption are forcing hardware manufacturers to prioritise energy efficiency in their designs. Research suggests that software modifications will be needed to exploit the resulting improvements in current and future hardware. New tools are required to capitalise on this new class of optimisation. This thesis investigates the field of energy-aware performance engineering. It begins by examining the current state of the art, which is characterised by ad-hoc techniques and a lack of standardised metrics. Work in this thesis addresses these deficiencies and lays stable foundations for others to build on. The first contribution is a set of criteria defining the properties that energy-aware optimisation metrics should exhibit. These criteria show that current metrics cannot meaningfully assess the utility of code or correctly guide its optimisation. New metrics are proposed to address these issues, and theoretical and empirical proofs of their advantages are given. The thesis then presents the Power Optimised Software Envelope (POSE) model, which allows developers to assess whether power optimisation is worth pursuing for their applications. POSE is used to study the optimisation characteristics of codes from the Mantevo mini-application suite running on a Haswell-based cluster. The results show that of these codes TeaLeaf has the most scope for power optimisation while PathFinder has the least. Finally, POSE modelling techniques are extended to evaluate the system-wide scope for energy-aware performance optimisation. System Summary POSE allows developers to assess the scope a system has for energy-aware software optimisation, independent of the code being run.
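
A tiny worked example of why metric choice matters, in the spirit of the abstract (the numbers are invented, and this is not the POSE model itself): a variant that draws less power can still lose under a metric that also penalizes delay.

# Two hypothetical code variants: runtime and average power draw (made up).
variants = {
    "baseline":  {"time_s": 10.0, "power_w": 95.0},
    "low-power": {"time_s": 14.0, "power_w": 60.0},
}

for name, v in variants.items():
    energy = v["time_s"] * v["power_w"]   # E = P * t, in joules
    edp = energy * v["time_s"]            # energy-delay product, in J*s
    print(f"{name:10s} E = {energy:6.0f} J   EDP = {edp:7.0f} J*s")

# The low-power variant wins on energy (840 J vs 950 J) but loses on
# energy-delay product (11760 J*s vs 9500 J*s): the two metrics would
# guide an optimiser in opposite directions.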

5. Palamadai Natarajan, Ekanathan. "Portable and Productive High-Performance Computing." Ph.D. thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/108988.
Abstract: Performance portability of computer programs, and programmer productivity in writing them, are key expectations in software engineering. These expectations lead to the following questions: Can programmers write code once and execute it at optimal speed on any machine configuration? Can programmers write parallel code to simple models that hide the complex details of parallel programming? This thesis addresses these questions for certain "classes" of computer programs. It describes "autotuning" techniques that achieve performance portability for serial divide-and-conquer programs, and an abstraction that improves programmer productivity in writing parallel code for a class of programs called "Star". We present a "pruned-exhaustive" autotuner called Ztune that optimizes the performance of serial divide-and-conquer programs for a given machine configuration. Whereas the traditional way of autotuning divide-and-conquer programs involves simply coarsening the base case of recursion optimally, Ztune searches for optimal divide-and-conquer trees. Although Ztune, in principle, exhaustively enumerates the search domain, it uses pruning properties that greatly reduce the size of the search domain without significantly sacrificing the quality of the autotuned code. We illustrate how to autotune divide-and-conquer stencil computations using Ztune, and present performance comparisons with state-of-the-art "heuristic" autotuning. Not only does Ztune autotune significantly faster than a heuristic autotuner, the Ztuned programs also run faster on average than their heuristically tuned counterparts. Surprisingly, for some stencil benchmarks, Ztune actually autotuned faster than the time it takes to execute the stencil computation once. We introduce the Star class, which includes many seemingly different programs such as solving symmetric, diagonally dominant tridiagonal systems, executing "watershed" cuts on graphs, sample sort, fast multipole computations, and all-prefix-sums and its various applications. We present a programming model, also called Star, to generate and execute parallel code for the Star class of programs. The Star model abstracts the pattern of computation and interprocessor communication in the Star class of programs, hides low-level parallel programming details, and offers ease of expression, thereby improving programmer productivity in writing parallel code. We also present parallel algorithms, which offer asymptotic improvements over prior art, for two programs in the Star class: a Trip algorithm for solving symmetric, diagonally dominant tridiagonal systems, and a Wasp algorithm for executing watershed cuts on graphs. The Star model is implemented in the Julia programming language, and leverages Julia's capabilities in expressing parallelism in code concisely and in supporting both shared-memory and distributed-memory parallel programming alike.
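
The "traditional" autotuning that the abstract contrasts Ztune with, coarsening the recursion base case by timing candidate cutoffs, fits in a few lines of Python. This is a sketch of that baseline idea only, not of Ztune's pruned search over whole divide-and-conquer trees.

import time

def rsum(a, lo, hi, cutoff):
    """Divide-and-conquer sum; at or below `cutoff` use a serial base case."""
    if hi - lo <= cutoff:
        return sum(a[lo:hi])
    mid = (lo + hi) // 2
    return rsum(a, lo, mid, cutoff) + rsum(a, mid, hi, cutoff)

data = [1.0] * (1 << 18)
timings = {}
for k in range(4, 15):                 # candidate base-case sizes 16..16384
    cutoff = 2 ** k
    t0 = time.perf_counter()
    rsum(data, 0, len(data), cutoff)
    timings[cutoff] = time.perf_counter() - t0

best = min(timings, key=timings.get)   # machine-specific answer: that is the point
print(f"best base-case size: {best} ({timings[best] * 1e3:.2f} ms)")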

6. Zhou, He. "High Performance Computing Architecture with Security." Diss., The University of Arizona, 2015. http://hdl.handle.net/10150/578611.
Abstract: Multi-processor embedded systems are the future promise of high performance computing architecture, yet they still suffer from low network efficiency and security threats. Simply upgrading to multi-core systems has been proven to provide only minor speedups compared with single-core systems. The router architecture of a network-on-chip (NoC) uses shared input buffers, such as virtual channels and crossbar switches, that only allow sequential data access, limiting the speed and efficiency of on-chip communication. In addition, the performance of conventional NoC topologies is limited by routing latency and energy consumption, because the network diameter increases with the number of nodes. Security has also become a serious concern for embedded systems: even with cryptographic algorithms, they remain vulnerable to side channel attacks (SCAs), among which power analysis is an efficient and powerful attack. Once the encryption location in an instruction sequence is identified, power analysis can be applied to exploit the embedded system. To improve on-chip network parallelism, this dissertation proposes a new router microarchitecture based on a new data structure called the virtual collision array. Sequential data requests are partially eliminated in the virtual collision array before entering the router pipeline. To facilitate the new router architecture, a new workload assignment is applied to increase data request elimination. Through a task flow partitioning algorithm, we minimize sequential data access and then schedule tasks while minimizing the total router delay. For NoC topology, this dissertation presents a new hybrid NoC (HyNoC) architecture, introducing an adaptive routing scheme that provides reconfigurable on-chip communication with both wired and wireless links. In addition, based on a mathematical model built on cross-correlation, this dissertation proposes two obfuscation methodologies, Real Instruction Insertion and AES Mimic, to prevent SCA power analysis attacks.

7. Mani, Sindhu. "Empirical Performance Analysis of High Performance Computing Benchmarks Across Variations in Cloud Computing." Thesis, University of North Florida, 2012. http://digitalcommons.unf.edu/etd/418.
Abstract: High Performance Computing (HPC) applications are data-intensive scientific software requiring significant CPU and data storage capabilities. Researchers have examined the performance of the Amazon Elastic Compute Cloud (EC2) environment across several HPC benchmarks; however, an extensive HPC benchmark study and a comparison between Amazon EC2 and Windows Azure (Microsoft's cloud computing platform), with metrics such as memory bandwidth, Input/Output (I/O) performance, and communication and computational performance, are largely absent. The purpose of this study is to perform an exhaustive HPC benchmark comparison on the EC2 and Windows Azure platforms. We implement existing benchmarks to evaluate and analyze the performance of two public clouds spanning both IaaS and PaaS types, using Amazon EC2 and Windows Azure as platforms for hosting HPC benchmarks with variations such as instance types, number of nodes, hardware and software. This is accomplished by running the STREAM, IOR and NPB benchmarks on these platforms on a varied number of nodes for small and medium instance types. These benchmarks measure memory bandwidth, I/O performance, and communication and computational performance. Benchmarking cloud platforms provides useful objective measures of their worthiness for HPC applications, in addition to assessing their consistency and predictability in supporting them.
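
Of the benchmarks named in the abstract, STREAM is the simplest to approximate: it measures sustainable memory bandwidth with four simple kernels. The sketch below times a numpy version of the "triad" kernel; the official STREAM is C/OpenMP with strict array-size and repetition rules, so treat this only as a rough single-node illustration.

import time
import numpy as np

n = 10_000_000
a = np.zeros(n)
b = np.random.rand(n)
c = np.random.rand(n)
scalar = 3.0

t0 = time.perf_counter()
a[:] = b + scalar * c          # STREAM "triad": a(i) = b(i) + q * c(i)
dt = time.perf_counter() - t0

bytes_moved = 3 * n * 8        # read b, read c, write a (8-byte doubles)
print(f"triad bandwidth: {bytes_moved / dt / 1e9:.1f} GB/s")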

8. Choi, Jee Whan. "Power and Performance Modeling for High-Performance Computing Algorithms." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/53561.
Abstract: The overarching goal of this thesis is to provide an algorithm-centric approach to analyzing the relationship between time, energy, and power. This research is aimed at algorithm designers and performance tuners so that they may be able to make decisions on how algorithms should be designed and tuned depending on whether the goal is to minimize time or to minimize energy on current and future systems. First, we present a simple analytical cost model for energy and power. Assuming a simple von Neumann architecture with a two-level memory hierarchy, this model predicts energy and power for algorithms using just a few simple parameters, such as the number of floating point operations (FLOPs or flops) and the amount of data moved (bytes or words). Using highly optimized microbenchmarks and a small number of test platforms, we show that although this model uses only a few simple parameters, it is nevertheless accurate. We can also visualize this model using energy "arch lines," analogous to the "rooflines" in time. These "rooflines in energy" allow users to easily assess and compare different algorithms' intensities in energy and time against various target systems' balances in energy and time. This visualization of our model gives us many interesting insights, and as such, we refer to our analytical model as the energy roofline model. Second, we present the results of our microbenchmarking study of the time, energy, and power costs of computation and memory access on several candidate compute-node building blocks of future high-performance computing (HPC) systems. Over a dozen server-, desktop-, and mobile-class platforms spanning a range of compute and power characteristics were evaluated, including x86 (both conventional and Xeon Phi accelerator), ARM, graphics processing units (GPU), and hybrid (AMD accelerated processing units (APU) and other system-on-chip (SoC)) processors. The purpose of this study was twofold: first, to extend the validation of the energy roofline model to a more comprehensive set of target systems, showing that the model works well independent of system hardware and microarchitecture; second, to improve the model by uncovering and remedying potential shortcomings, such as incorporating the effects of power "capping," multi-level memory hierarchy, and different implementation strategies on power and performance. Third, we incorporate dynamic voltage and frequency scaling (DVFS) into the energy roofline model to explore its potential for saving energy. Rather than the more traditional approach of using DVFS to reduce energy, whereby a "slack" in computation is used as an opportunity to dynamically cycle down the processor clock, the energy roofline model can be used to determine precisely how the time and energy costs of different operations, both compute and memory, change with respect to frequency and voltage settings. This information can be used to target a specific optimization goal, whether that be time, energy, or a combination of both. In the final chapter of this thesis, we use our model to predict the energy dissipation of a real application running on a real system. The fast multipole method (FMM) kernel was executed on the GPU component of the Tegra K1 SoC under various frequency and voltage settings, and a breakdown of instructions and data access patterns was collected via performance counters. The total energy dissipation of FMM was then calculated as a weighted sum of these instructions and the associated costs in energy. On eight different voltage and frequency settings and eight different algorithm-specific input parameters per setting, for a total of 64 test cases, the accuracy of the energy roofline model for predicting total energy dissipation was within 6.2%, with a standard deviation of 4.7%, when compared to actual energy measurements. Despite its simplicity and its foundation on the first principles of algorithm analysis, the energy roofline model has proven to be both practical and accurate for real applications running on a real system. As such, it can be an invaluable tool for algorithm designers and performance tuners with which they can more precisely analyze the impact of their design decisions on both performance and energy efficiency.
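
The core of the energy-roofline idea, stripped of all detail, is that total energy is a weighted sum of flops and bytes moved, which yields an energy "balance point" in flops per byte. The per-operation costs below are invented placeholders, not values measured in the thesis.

# Energy as a weighted sum of work and data movement: E = F*e_flop + Q*e_byte.
eps_flop = 0.5e-9   # joules per flop (assumed, for illustration)
eps_byte = 10e-9    # joules per byte moved (assumed, for illustration)

def energy(flops, bytes_moved):
    return flops * eps_flop + bytes_moved * eps_byte

# Balance point: the arithmetic intensity (flops/byte) at which compute
# energy equals data-movement energy; below it, memory traffic dominates.
balance = eps_byte / eps_flop
print(f"energy balance: {balance:.1f} flops/byte")

# Example: an n x n dense matrix multiply does 2*n^3 flops and, with ideal
# caching, moves roughly 3*n^2 8-byte words.
n = 1024
print(f"GEMM energy ~ {energy(2 * n**3, 3 * n * n * 8):.3f} J")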

9. Ge, Rong. "Theories and Techniques for Efficient High-End Computing." Diss., Virginia Tech, 2007. http://hdl.handle.net/10919/28863.
Abstract: Today, power consumption costs supercomputer centers millions of dollars annually, and the heat produced can reduce system reliability and availability. Achieving high performance while reducing power consumption is challenging, since power and performance are inextricably interwoven; reducing power often results in degraded performance. This thesis aims to address these challenges by providing theories, techniques, and tools to 1) accurately predict performance and improve it in systems with advanced hierarchical memories, 2) understand and evaluate power and its impacts on performance, and 3) control power and performance for maximum efficiency. Our theories, techniques, and tools have been applied to high-end computing systems. Our theoretical models can improve algorithm performance by up to 59% and accurately predict the impacts of power on performance. Our techniques can evaluate the power consumption of high-end computing systems and their applications with fine granularity, and save up to 36% of energy with little performance degradation.

10. Orobitg Cortada, Miquel. "High Performance Computing on Biological Sequence Alignment." Doctoral thesis, Universitat de Lleida, 2013. http://hdl.handle.net/10803/110930.
Abstract: Multiple Sequence Alignment (MSA) is a powerful tool for important biological applications. MSAs are computationally difficult to calculate, and most formulations of the problem lead to NP-Hard optimization problems. To perform large-scale alignments, with thousands of sequences, new challenges need to be resolved to adapt MSA algorithms to the High-Performance Computing era. In this thesis we propose three different approaches to solve some limitations of the main MSA methods. The first proposal consists of a new guide tree construction algorithm to improve the degree of parallelism, in order to resolve the bottleneck of the progressive alignment stage. The second proposal consists of optimizing the consistency library, improving the execution time and the scalability of MSA to enable the method to treat more sequences. Finally, we propose Multiple Trees Alignment (MTA), an MSA method to align multiple guide trees in parallel, evaluate the alignments obtained and select the best one as the result. The experimental results demonstrate that MTA considerably improves the quality of the alignments.
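
For context, the guide tree that drives progressive alignment can be built with a simple average-linkage (UPGMA-style) clustering over a pairwise distance matrix; the sketch below shows that baseline idea on toy data. The thesis's actual contributions, a parallelism-friendly tree construction and MTA's parallel evaluation of multiple trees, are not reproduced here.

def upgma(names, dist):
    """names: list of labels; dist: dict of pairwise distances dist[(a, b)]."""
    clusters = list(names)
    while len(clusters) > 1:
        # find the closest pair of clusters
        a, b = min(
            ((x, y) for i, x in enumerate(clusters) for y in clusters[i + 1:]),
            key=lambda p: dist[p],
        )
        merged = (a, b)
        clusters.remove(a)
        clusters.remove(b)
        # average-linkage distance from the merged cluster to the rest
        for c in clusters:
            dist[(merged, c)] = dist[(c, merged)] = (
                dist.get((a, c), dist.get((c, a))) +
                dist.get((b, c), dist.get((c, b)))
            ) / 2
        clusters.append(merged)
    return clusters[0]          # nested tuples encode the guide tree

d = {("s1", "s2"): 2, ("s1", "s3"): 6, ("s1", "s4"): 7,
     ("s2", "s3"): 6, ("s2", "s4"): 7, ("s3", "s4"): 3}
print(upgma(["s1", "s2", "s3", "s4"], d))   # (('s1', 's2'), ('s3', 's4'))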

Books on the topic "High performance computing"

1. Chamberlain, Bradford L., Ana-Lucia Varbanescu, Hatem Ltaief, and Piotr Luszczek, eds. High Performance Computing. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-78713-4.

2. Jagode, Heike, Hartwig Anzt, Hatem Ltaief, and Piotr Luszczek, eds. High Performance Computing. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-90539-2.

3. Gitler, Isidoro, Carlos Jaime Barrios Hernández, and Esteban Meneses, eds. High Performance Computing. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04209-6.

4. Varbanescu, Ana-Lucia, Abhinav Bhatele, Piotr Luszczek, and Marc Baboulin, eds. High Performance Computing. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-07312-0.

5. Kunkel, Julian M., Rio Yokota, Michela Taufer, and John Shalf, eds. High Performance Computing. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-67630-2.

6. Zima, Hans P., Kazuki Joe, Mitsuhisa Sato, Yoshiki Seo, and Masaaki Shimasaki, eds. High Performance Computing. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-47847-7.

7. Mocskos, Esteban, and Sergio Nesmachnow, eds. High Performance Computing. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-73353-1.

8. Weiland, Michèle, Guido Juckeland, Sadaf Alam, and Heike Jagode, eds. High Performance Computing. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-34356-9.

9. Kunkel, Julian M., Pavan Balaji, and Jack Dongarra, eds. High Performance Computing. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-41321-1.

10. Valero, Mateo, Kazuki Joe, Masaru Kitsuregawa, and Hidehiko Tanaka, eds. High Performance Computing. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/3-540-39999-2.

Book chapters on the topic "High performance computing"

1. Conlan, Chris. "High-Performance Computing." In Automated Trading with R, 65–81. Berkeley, CA: Apress, 2016. http://dx.doi.org/10.1007/978-1-4842-2178-5_6.

2. Wang, Sun-Chong. "High Performance Computing." In Interdisciplinary Computing in Java Programming, 39–55. Boston, MA: Springer US, 2003. http://dx.doi.org/10.1007/978-1-4615-0377-4_3.

3. Dhillon, Vikram, David Metcalf, and Max Hooper. "High-Performance Computing." In Blockchain Enabled Applications, 129–75. Berkeley, CA: Apress, 2021. http://dx.doi.org/10.1007/978-1-4842-6534-5_7.

4. Anderson, Dale A., John C. Tannehill, Richard H. Pletcher, Munipalli Ramakanth, and Vijaya Shankar. "High-Performance Computing." In Computational Fluid Mechanics and Heat Transfer, 4th ed., 855–63. Computational and Physical Processes in Mechanics and Thermal Sciences. Boca Raton, FL: CRC Press, 2020. http://dx.doi.org/10.1201/9781351124027-13.

5. Danial, Albert. "High Performance Computing." In Python for MATLAB Development, 575–654. Berkeley, CA: Apress, 2022. http://dx.doi.org/10.1007/978-1-4842-7223-7_14.

6. Brieda, Lubos, Joseph Wang, and Robert Martin. "High-Performance Computing." In Introduction to Modern Scientific Programming and Numerical Methods, 340–86. Boca Raton: CRC Press, 2024. http://dx.doi.org/10.1201/9781003132233-9.

7. Nicole, Denis, Kenji Takeda, Ivan Wolton, and Simon Cox. "Southampton High Performance Computing Centre." In High-Performance Computing, 33–41. Boston, MA: Springer US, 1999. http://dx.doi.org/10.1007/978-1-4615-4873-7_4.

8. Keane, J. A. "High Performance Computing in Banking." In High-Performance Computing, 479–86. Boston, MA: Springer US, 1999. http://dx.doi.org/10.1007/978-1-4615-4873-7_51.

9. Chan, Fan, Jiannong Cao, and Minyi Guo. "ClusterGOP: A High-Level Programming Environment for Clusters." In High-Performance Computing, 1–19. Hoboken, NJ: John Wiley & Sons, 2006. http://dx.doi.org/10.1002/0471732710.ch1.

10. Liao, W., A. Choudhary, K. Coloma, L. Ward, E. Russell, and N. Pundit. "MPI Atomicity and Concurrent Overlapping I/O." In High-Performance Computing, 203–18. Hoboken, NJ: John Wiley & Sons, 2006. http://dx.doi.org/10.1002/0471732710.ch10.

Conference papers on the topic "High performance computing"

1. Joó, Bálint, Aaron Walden, Dhiraj D. Kalamkar, Thorsten Kurth, and Karthikeyan Vaidyanathan. "Optimizing Dirac Wilson Operator and Linear Solvers for Intel KNL." In ISC High Performance 2016: High Performance Computing. US DOE, 2016. http://dx.doi.org/10.2172/1988224.

2. Stewart, Craig A., Christopher S. Peebles, Mary Papakhian, John Samuel, David Hart, and Stephen Simms. "High Performance Computing." In Proceedings of the 29th Annual ACM SIGUCCS Conference. New York: ACM Press, 2001. http://dx.doi.org/10.1145/500956.501026.

3. Shi, Xuan. "High Performance Computing." In Proceedings of the ACM SIGSPATIAL International Workshop. New York: ACM Press, 2010. http://dx.doi.org/10.1145/1869692.1869698.

4. Bauer, Michael A. "High Performance Computing." In Proceedings of the 2007 International Workshop. New York: ACM Press, 2007. http://dx.doi.org/10.1145/1278177.1278180.

5. Verma, Anurag, Jennifer Huffman, Ali Torkamani, and Ravi Madduri. "High-Performance Computing Meets High-Performance Medicine." In Pacific Symposium on Biocomputing 2023. World Scientific, 2022. http://dx.doi.org/10.1142/9789811270611_0050.

6. "High Performance Distributed Computing." In Proceedings of the 13th IEEE International Symposium on High Performance Distributed Computing, 2004. IEEE, 2004. http://dx.doi.org/10.1109/hpdc.2004.1323401.

7. Sunderam, V., S. Y. Cheung, M. Hirsch, S. Chodrow, M. Grigni, A. Krantz, I. Rhee, et al. "CCF: Collaborative Computing Frameworks." In SC98 - High Performance Networking and Computing Conference. IEEE, 1998. http://dx.doi.org/10.1109/sc.1998.10040.

8. "3.1 High Performance Computing." In 2013 International Conference on Field-Programmable Technology (FPT). IEEE, 2013. http://dx.doi.org/10.1109/fpt.2013.6718354.

9. Rooks, John W., and Richard Linderman. "High Performance Space Computing." In 2007 IEEE Aerospace Conference. IEEE, 2007. http://dx.doi.org/10.1109/aero.2007.352661.

10. "High-Performance Grid Computing." In 18th International Parallel and Distributed Processing Symposium, 2004, Proceedings. IEEE, 2004. http://dx.doi.org/10.1109/ipdps.2004.1303346.

Reports on the topic "High performance computing"

1. Aggour, Kareem S., Robert M. Mattheyses, Joseph Shultz, Brent H. Allen, and Michael Lapinski. Quantum Computing and High Performance Computing. Fort Belvoir, VA: Defense Technical Information Center, December 2006. http://dx.doi.org/10.21236/ada462065.

2. Townley, Judy, and Michael Karr. High Performance Computing Environments. Fort Belvoir, VA: Defense Technical Information Center, October 1997. http://dx.doi.org/10.21236/ada337780.

3. Birman, Kenneth, Daniel Freedman, Robert van Renesse, Hakim Weatherspoon, and Tudor Marian. High Performance Computing Multicast. Fort Belvoir, VA: Defense Technical Information Center, February 2012. http://dx.doi.org/10.21236/ada557017.

4. Browne, J. C., and G. J. Lipovski. High Performance Parallel Computing. Fort Belvoir, VA: Defense Technical Information Center, January 1986. http://dx.doi.org/10.21236/ada169981.

5. Guo, Yang. High-Performance Computing Security: Architecture, Threat Analysis, and Security Posture. Gaithersburg, MD: National Institute of Standards and Technology, 2024. http://dx.doi.org/10.6028/nist.sp.800-223.

6. Green, Ronald Wayne. Vectorization for High Performance Computing. Office of Scientific and Technical Information (OSTI), June 2017. http://dx.doi.org/10.2172/1364565.

7. Guo, Yang. High Performance Computing (HPC) Security: Architecture, Threat Analysis, and Security Posture (initial public draft). Gaithersburg, MD: National Institute of Standards and Technology, 2022. http://dx.doi.org/10.6028/nist.sp.800-223.ipd.

8. Martinez, Jesse. High Performance Computing Network Overview. Office of Scientific and Technical Information (OSTI), May 2023. http://dx.doi.org/10.2172/1974907.

9. Killian, Edward. Advanced Computing Architectures for High Performance Computing Engineering Integration. Fort Belvoir, VA: Defense Technical Information Center, May 2010. http://dx.doi.org/10.21236/ada522412.

10. Ross, Virginia W., and Scott E. Spetka. Grid Computing for High Performance Computing (HPC) Data Centers. Fort Belvoir, VA: Defense Technical Information Center, March 2007. http://dx.doi.org/10.21236/ada466685.