Journal articles on the topic 'High performance scientific computing'

Consult the top 50 journal articles for your research on the topic 'High performance scientific computing.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1. Camp, William J., and Philippe Thierry. "Trends for high-performance scientific computing." Leading Edge 29, no. 1 (January 2010): 44–47. http://dx.doi.org/10.1190/1.3284052.

2. Kisel, Ivan. "Scientific and high-performance computing at FAIR." EPJ Web of Conferences 95 (2015): 01007. http://dx.doi.org/10.1051/epjconf/20159501007.

3. Fosdick, Lloyd D., Elizabeth R. Jessup, Carolyn J. C. Schauble, Gitta Domik, and Ralph L. Place. "An Introduction to High‐Performance Scientific Computing." Physics Today 49, no. 12 (December 1996): 55–56. http://dx.doi.org/10.1063/1.881590.
4. Biryaltsev, Eugeniy Vasiljevich, Marat Razifovich Galimov, Denis Evgenievich Demidov, and Aleksandr Mikhailovich Elizarov. "The platform approach to research and development using high-performance computing." Program Systems: Theory and Applications 10, no. 2 (2019): 93–119. http://dx.doi.org/10.25209/2079-3316-2019-10-2-93-119.

Abstract: In this paper, we analyze the prerequisites for, and substantiate the relevance of, creating an open Internet platform that employs big data technologies, high-performance computing, and multilateral markets in a unified way. Conceived as an ecosystem for the development and use of applied software (including in the field of design and scientific research), the platform should reduce the time and cost of developing software for solving analytical problems arising in industrial enterprises, scientific research organizations, state bodies and private individuals, while improving its quality. The article presents a working prototype of the platform using supercomputer technologies and desktop virtualization systems.
5. Bernholdt, David E., Benjamin A. Allan, Robert Armstrong, Felipe Bertrand, Kenneth Chiu, Tamara L. Dahlgren, Kostadin Damevski, et al. "A Component Architecture for High-Performance Scientific Computing." International Journal of High Performance Computing Applications 20, no. 2 (May 2006): 163–202. http://dx.doi.org/10.1177/1094342006064488.

6. Kurzak, Jakub, Alfredo Buttari, Piotr Luszczek, and Jack Dongarra. "The PlayStation 3 for High-Performance Scientific Computing." Computing in Science & Engineering 10, no. 3 (May 2008): 84–87. http://dx.doi.org/10.1109/mcse.2008.85.

7. Alexeev, Yuri, Benjamin A. Allan, Robert C. Armstrong, David E. Bernholdt, Tamara L. Dahlgren, Dennis Gannon, Curtis L. Janssen, et al. "Component-based software for high-performance scientific computing." Journal of Physics: Conference Series 16 (January 1, 2005): 536–40. http://dx.doi.org/10.1088/1742-6596/16/1/073.
8. Davis, Kei, and Jörg Striegnitz. "Parallel/High Performance Object-Oriented Scientific Computing 2008." International Journal of Parallel, Emergent and Distributed Systems 24, no. 6 (December 2009): 463–65. http://dx.doi.org/10.1080/17445760902758529.
9. Ponce, Marcelo, Erik Spence, Ramses van Zon, and Daniel Gruner. "Scientific Computing, High-Performance Computing and Data Science in Higher Education." Journal of Computational Science Education 10, no. 1 (January 2019): 24–31. http://dx.doi.org/10.22369/issn.2153-4136/10/1/5.
10. Boulle, A., and J. Kieffer. "High-performance Python for crystallographic computing." Journal of Applied Crystallography 52, no. 4 (July 24, 2019): 882–97. http://dx.doi.org/10.1107/s1600576719008471.

Abstract: The Python programming language, combined with the numerical computing library NumPy and the scientific computing library SciPy, has become the de facto standard for scientific computing in a variety of fields. This popularity is mainly due to the ease with which a Python program can be written and executed (easy syntax, dynamical typing, no compilation etc.), coupled with the existence of a large number of specialized third-party libraries that aim to lift all the limitations of the raw Python language. NumPy introduces vector programming, improving execution speeds, whereas SciPy brings a wealth of highly optimized and reliable scientific functions. There are cases, however, where vector programming alone is not sufficient to reach optimal performance. This issue is addressed with dedicated compilers that aim to translate Python code into native and statically typed code with support for the multi-core architectures of modern processors. In the present article it is shown how these approaches can be efficiently used to tackle different problems, with increasing complexity, that are relevant to crystallography: the 2D Laue function, scattering from a strained 2D crystal, scattering from 3D nanocrystals and, finally, diffraction from films and multilayers. For each case, detailed implementations and explanations of the functioning of the algorithms are provided. Different Python compilers (namely NumExpr, Numba, Pythran and Cython) are used to improve performance and are benchmarked against state-of-the-art NumPy implementations. All examples are also provided as commented and didactic Python (Jupyter) notebooks that can be used as starting points for crystallographers curious to enter the Python ecosystem or wishing to accelerate their existing codes.
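The vector-programming idea at the heart of this abstract can be illustrated with a minimal sketch (plain NumPy only; the NumExpr/Numba/Pythran/Cython variants the paper benchmarks follow the same structure, and the function names here are illustrative, not from the paper): the one-dimensional analogue of the Laue interference function, computed first with explicit Python loops and then as a single broadcasted array expression.

```python
import numpy as np

def laue_loop(q_values, n_cells):
    """1D Laue interference function via explicit Python loops."""
    out = []
    for q in q_values:
        acc = 0.0 + 0.0j
        for n in range(n_cells):           # sum of unit-cell phase factors
            acc += np.exp(1j * q * n)
        out.append(abs(acc) ** 2)
    return np.array(out)

def laue_vectorized(q_values, n_cells):
    """Same quantity with NumPy broadcasting: one (n_q, n_cells) phase array."""
    n = np.arange(n_cells)
    amplitude = np.exp(1j * np.outer(q_values, n)).sum(axis=1)
    return np.abs(amplitude) ** 2

q = np.linspace(0.1, 3.0, 200)
assert np.allclose(laue_loop(q, 50), laue_vectorized(q, 50))
```

The vectorized form trades the interpreted double loop for a handful of large array operations, which is exactly the speedup mechanism the abstract attributes to NumPy.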
11. Coveney, Peter V. "Scientific Grid computing." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 363, no. 1833 (July 18, 2005): 1707–13. http://dx.doi.org/10.1098/rsta.2005.1632.

Abstract: We introduce a definition of Grid computing which is adhered to throughout this Theme Issue. We compare the evolution of the World Wide Web with current aspirations for Grid computing and indicate areas that need further research and development before a generally usable Grid infrastructure becomes available. We discuss work that has been done in order to make scientific Grid computing a viable proposition, including the building of Grids, middleware developments, computational steering and visualization. We review science that has been enabled by contemporary computational Grids, and associated progress made through the widening availability of high performance computing.
12. Ware, Colin, David Rogers, Mark Petersen, James Ahrens, and Erol Aygar. "Optimizing for Visual Cognition in High Performance Scientific Computing." Electronic Imaging 2016, no. 16 (February 14, 2016): 1–9. http://dx.doi.org/10.2352/issn.2470-1173.2016.16.hvei-130.

13. Barba, Lorena A., Andreas Klöckner, Prabhu Ramachandran, and Rollin Thomas. "Scientific Computing With Python on High-Performance Heterogeneous Systems." Computing in Science & Engineering 23, no. 4 (July 1, 2021): 5–7. http://dx.doi.org/10.1109/mcse.2021.3088549.
14. Epperly, Thomas G. W., Gary Kumfert, Tamara Dahlgren, Dietmar Ebner, Jim Leek, Adrian Prantl, and Scott Kohn. "High-performance language interoperability for scientific computing through Babel." International Journal of High Performance Computing Applications 26, no. 3 (July 7, 2011): 260–74. http://dx.doi.org/10.1177/1094342011414036.

15. Schuster, Micah D. "The Heat Equation: High-Performance Scientific Computing Case Study." Computing in Science & Engineering 20, no. 5 (September 2018): 114–27. http://dx.doi.org/10.1109/mcse.2018.05329820.

16. Paprzycki, M. "An Introduction To High-performance Scientific Computing [Book Review]." IEEE Concurrency 5, no. 3 (July 1997): 73–74. http://dx.doi.org/10.1109/mcc.1997.605921.

17. Di Gregorio, S., R. Rongo, W. Spataro, G. Spezzano, and D. Talia. "High performance scientific computing by a parallel cellular environment." Future Generation Computer Systems 12, no. 5 (April 1997): 357–69. http://dx.doi.org/10.1016/s0167-739x(96)00023-4.
18. Nielsen, Ida M. B., and Curtis L. Janssen. "Multicore Challenges and Benefits for High Performance Scientific Computing." Scientific Programming 16, no. 4 (2008): 277–85. http://dx.doi.org/10.1155/2008/450818.

Abstract: Until recently, performance gains in processors were achieved largely by improvements in clock speeds and instruction level parallelism. Thus, applications could obtain performance increases with relatively minor changes by upgrading to the latest generation of computing hardware. Currently, however, processor performance improvements are realized by using multicore technology and hardware support for multiple threads within each core, and taking full advantage of this technology to improve the performance of applications requires exposure of extreme levels of software parallelism. We will here discuss the architecture of parallel computers constructed from many multicore chips as well as techniques for managing the complexity of programming such computers, including the hybrid message-passing/multi-threading programming model. We will illustrate these ideas with a hybrid distributed memory matrix multiply and a quantum chemistry algorithm for energy computation using Møller–Plesset perturbation theory.
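The hybrid message-passing/multi-threading model this abstract discusses can be sketched in a single process (a stand-in under stated assumptions: the outer partition of rows plays the role of MPI ranks, a thread pool plays the role of threads within one multicore node; `blocked_matmul` is an illustrative name, not from the paper):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def blocked_matmul(a, b, n_ranks=2, n_threads=2):
    """Two-level parallel matrix multiply: the outer partition stands in for
    message-passing ranks (each owning a block of rows of A); the inner level
    uses a thread pool, mimicking multi-threading within one multicore node."""
    c = np.empty((a.shape[0], b.shape[1]))

    def thread_work(rows):
        c[rows] = a[rows] @ b            # one thread fills its slice of C

    for rank_rows in np.array_split(np.arange(a.shape[0]), n_ranks):
        # in a real MPI run, each of these row blocks lives on a separate rank
        with ThreadPoolExecutor(max_workers=n_threads) as pool:
            list(pool.map(thread_work, np.array_split(rank_rows, n_threads)))
    return c

rng = np.random.default_rng(0)
a, b = rng.random((8, 5)), rng.random((5, 6))
assert np.allclose(blocked_matmul(a, b), a @ b)
```

In the real hybrid model the outer loop disappears (each rank runs concurrently and exchanges blocks via messages); only the two-level decomposition is carried over here.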
19. Cameron, K. W., Rong Ge, and Xizhou Feng. "High-performance, power-aware distributed computing for scientific applications." Computer 38, no. 11 (November 2005): 40–47. http://dx.doi.org/10.1109/mc.2005.380.

20. Haney, S., and J. Crotinger. "How templates enable high-performance scientific computing in C++." Computing in Science & Engineering 1, no. 4 (1999): 66–72. http://dx.doi.org/10.1109/5992.774843.
21. Armstrong, Rob, Gary Kumfert, Lois Curfman McInnes, Steven Parker, Ben Allan, Matt Sottile, Thomas Epperly, and Tamara Dahlgren. "The CCA component model for high-performance scientific computing." Concurrency and Computation: Practice and Experience 18, no. 2 (2005): 215–29. http://dx.doi.org/10.1002/cpe.911.

22. Kritz, Mauricio Vieira. "Meeting report-colloquia on high performance scientific computing 1996." Complexity 2, no. 3 (January 1997): 3–4. http://dx.doi.org/10.1002/(sici)1099-0526(199701/02)2:3<3::aid-cplx1>3.0.co;2-c.
23. Cohen, Jeremy, Ioannis Filippis, Mark Woodbridge, Daniela Bauer, Neil Chue Hong, Mike Jackson, Sarah Butcher, et al. "RAPPORT: running scientific high-performance computing applications on the cloud." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 371, no. 1983 (January 28, 2013): 20120073. http://dx.doi.org/10.1098/rsta.2012.0073.

Abstract: Cloud computing infrastructure is now widely used in many domains, but one area where there has been more limited adoption is research computing, in particular for running scientific high-performance computing (HPC) software. The Robust Application Porting for HPC in the Cloud (RAPPORT) project took advantage of existing links between computing researchers and application scientists in the fields of bioinformatics, high-energy physics (HEP) and digital humanities, to investigate running a set of scientific HPC applications from these domains on cloud infrastructure. In this paper, we focus on the bioinformatics and HEP domains, describing the applications and target cloud platforms. We conclude that, while there are many factors that need consideration, there is no fundamental impediment to the use of cloud infrastructure for running many types of HPC applications and, in some cases, there is potential for researchers to benefit significantly from the flexibility offered by cloud platforms.
24. Grannan, A., K. Sood, B. Norris, and A. Dubey. "Understanding the landscape of scientific software used on high-performance computing platforms." International Journal of High Performance Computing Applications 34, no. 4 (January 14, 2020): 465–77. http://dx.doi.org/10.1177/1094342019899451.

Abstract: Scientific discovery increasingly relies on computation through simulations, analytics, and machine and deep learning. Of these, simulations on high-performance computing (HPC) platforms have been the cornerstone of scientific computing for more than two decades. However, the development of simulation software has, in general, occurred through accretion, with a few exceptions. With an increase in scientific understanding, models have become more complex, rendering an accretion mode untenable to the point where software productivity and sustainability have become active concerns in scientific computing. In this survey paper, we examine a modest set of HPC scientific simulation applications that are already using cutting-edge HPC platforms. Several have been in existence for a decade or more. Our objective in this survey is twofold: first, to understand the landscape of scientific computing on HPC platforms in order to distill the currently scattered knowledge about software practices that have helped both developer and software productivity, and second, to understand the kind of tools and methodologies that need attention for continued productivity.
25. Simon, Horst D. "The Recent Revolution in High Performance Computing." MRS Bulletin 22, no. 10 (October 1997): 5–6. http://dx.doi.org/10.1557/s0883769400034096.

Abstract: Recent events in the high-performance computing industry have concerned scientists and the general public regarding a crisis or a lack of leadership in the field. That concern is understandable considering the industry's history from 1993 to 1996. Cray Research, the historic leader in supercomputing technology, was unable to survive financially as an independent company and was acquired by Silicon Graphics. Two ambitious new companies that introduced new technologies in the late 1980s and early 1990s—Thinking Machines and Kendall Square Research—were commercial failures and went out of business. And Intel, which introduced its Paragon supercomputer in 1994, discontinued production only two years later. During the same time frame, scientists who had finished the laborious task of writing scientific codes to run on vector parallel supercomputers learned that those codes would have to be rewritten if they were to run on the next-generation, highly parallel architecture. Scientists who are not yet involved in high-performance computing are understandably hesitant about committing their time and energy to such an apparently unstable enterprise. However, beneath the commercial chaos of the last several years, a technological revolution has been occurring. The good news is that the revolution is over, leading to five to ten years of predictable stability, steady improvements in system performance, and increased productivity for scientific applications. It is time for scientists who were sitting on the fence to jump in and reap the benefits of the new technology.
26. Matkerim, Bazargul, Darhan Akhmed-Zaki, and Manuel Barata. "Development high performance scientific computing application using model-driven architecture." Applied Mathematical Sciences 7 (2013): 4961–74. http://dx.doi.org/10.12988/ams.2013.37426.

27. Adakin, A., S. Belov, D. Chubarov, V. Kalyuzhny, V. Kaplin, N. Kuchin, S. Lomakin, V. Nikultsev, A. Sukharev, and A. Zaytsev. "Building a High Performance Computing Infrastructure for Novosibirsk Scientific Center." Journal of Physics: Conference Series 331, no. 5 (December 23, 2011): 052020. http://dx.doi.org/10.1088/1742-6596/331/5/052020.
28. Nukala, Phani Kumar V. V., Srdan Simunović, and Richard T. Mills. "Statistical physics of fracture: scientific discovery through high-performance computing." Journal of Physics: Conference Series 46 (September 1, 2006): 278–91. http://dx.doi.org/10.1088/1742-6596/46/1/039.
29. Belletti, Francesco, Maria Cotallo, A. Cruz, Luis Antonio Fernandez, Antonio Gordillo-Guerrero, Marco Guidetti, Andrea Maiorano, et al. "Janus: An FPGA-Based System for High-Performance Scientific Computing." Computing in Science & Engineering 11, no. 1 (January 2009): 48–58. http://dx.doi.org/10.1109/mcse.2009.11.

30. Chen, Zizhong, and Jack Dongarra. "Highly Scalable Self-Healing Algorithms for High Performance Scientific Computing." IEEE Transactions on Computers 58, no. 11 (November 2009): 1512–24. http://dx.doi.org/10.1109/tc.2009.42.
31. Chang, Chia Cheng, Christopher Körber, and André Walker-Loud. "EspressoDB: A scientific database for managing high-performance computing workflows." Journal of Open Source Software 5, no. 46 (February 21, 2020): 2007. http://dx.doi.org/10.21105/joss.02007.
32. Jorissen, K., F. D. Vila, and J. J. Rehr. "A high performance scientific cloud computing environment for materials simulations." Computer Physics Communications 183, no. 9 (September 2012): 1911–19. http://dx.doi.org/10.1016/j.cpc.2012.04.010.
33. Greer, Bruce, John Harrison, Greg Henry, Wei Li, and Peter Tang. "Scientific Computing on the Itanium® Processor." Scientific Programming 10, no. 4 (2002): 329–37. http://dx.doi.org/10.1155/2002/193478.

Abstract: The 64-bit Intel® Itanium® architecture is designed for high-performance scientific and enterprise computing, and the Itanium processor is its first silicon implementation. Features such as extensive arithmetic support, predication, speculation, and explicit parallelism can be used to provide a sound infrastructure for supercomputing. A large number of high-performance computer companies are offering Itanium®-based systems, some capable of peak performance exceeding 50 GFLOPS. In this paper we give an overview of the most relevant architectural features and provide illustrations of how these features are used in both low-level and high-level support for scientific and engineering computing, including transcendental functions and linear algebra kernels.
34. Getov, Vladimir, Paul Gray, Sava Mintchev, and Vaidy Sunderam. "Multi-Language Programming Environments for High Performance Java Computing." Scientific Programming 7, no. 2 (1999): 139–46. http://dx.doi.org/10.1155/1999/975837.

Abstract: Recent developments in processor capabilities, software tools, programming languages and programming paradigms have brought about new approaches to high performance computing. A steadfast component of this dynamic evolution has been the scientific community's reliance on established scientific packages. As a consequence, programmers of high-performance applications are reluctant to embrace evolving languages such as Java. This paper describes the Java-to-C Interface (JCI) tool which provides application programmers wishing to use Java with immediate accessibility to existing scientific packages. The JCI tool also facilitates rapid development and reuse of existing code. These benefits are provided at minimal cost to the programmer. While beneficial to the programmer, the additional advantages of mixed-language programming in terms of application performance and portability are addressed in detail within the context of this paper. In addition, we discuss how the JCI tool is complementing other ongoing projects such as IBM's High-Performance Compiler for Java (HPCJ) and IceT's metacomputing environment.
35. Misra, Goldi, Sandeep Agrawal, Nisha Kurkure, Shweta Das, Kapil Mathur, and Sucheta Pawar. "ONAMA: A Quantum Leap in High Performance Computing." Advanced Materials Research 328-330 (September 2011): 2337–42. http://dx.doi.org/10.4028/www.scientific.net/amr.328-330.2337.

Abstract: The growth of serial and High Performance Computing (HPC) applications presents the challenge of porting scientific and engineering applications. A number of key issues and trends in High Performance Computing will impact the delivery of breakthrough science and engineering in the future. ONAMA was developed to cope with increasing demands for HPC. ONAMA, which means a new beginning, is a desktop-based graphical user interface developed using C and GTK. It aims to satisfy the research needs of academic institutions. ONAMA is a comprehensive package, comprising applications covering many engineering branches. ONAMA provides tools that have a close affinity with practical simulation, thus making the learning process for students more applied. Most of the software tools and libraries are open source and supported on Linux, thereby promoting the use of open source software. It also provides tools to researchers to solve their day-to-day as well as long-term problems accurately in less time. The Execution Model of ONAMA serves to execute engineering and scientific applications either sequentially or in parallel on Linux computing clusters.
36. Anzt, Hartwig, Goran Flegar, Thomas Grützmacher, and Enrique S. Quintana-Ortí. "Toward a modular precision ecosystem for high-performance computing." International Journal of High Performance Computing Applications 33, no. 6 (May 9, 2019): 1069–78. http://dx.doi.org/10.1177/1094342019846547.

Abstract: With the memory bandwidth of current computer architectures being significantly slower than the (floating point) arithmetic performance, many scientific computations only leverage a fraction of the computational power in today's high-performance architectures. At the same time, memory operations are the primary energy consumer of modern architectures, heavily impacting the resource cost of large-scale applications and the battery life of mobile devices. This article tackles this mismatch between floating point arithmetic throughput and memory bandwidth by advocating a disruptive paradigm change with respect to how data are stored and processed in scientific applications. Concretely, the goal is to radically decouple the data storage format from the processing format and, ultimately, design a "modular precision ecosystem" that allows for more flexibility in terms of customized data access. For memory-bounded scientific applications, dynamically adapting the memory precision to the numerical requirements allows for attractive resource savings. In this article, we demonstrate the potential of employing a modular precision ecosystem for the block-Jacobi preconditioner and the PageRank algorithm—two applications that are popular in the communities and at the same characteristic representatives for the field of numerical linear algebra and data analytics, respectively.
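The storage/processing decoupling this abstract advocates can be sketched for plain (not block-) Jacobi iteration, assuming NumPy; the iterate is kept in a compact float32 storage format while all arithmetic runs in float64, so memory traffic for the vector is halved without changing the processing precision:

```python
import numpy as np

def jacobi_decoupled(a, b, iters=200):
    """Jacobi iteration with the storage format of the iterate (float32)
    decoupled from the processing format (float64): values are promoted
    on read and demoted on write."""
    d = np.diag(a)
    r = a - np.diag(d)                           # off-diagonal part
    x_store = np.zeros_like(b, dtype=np.float32)  # compact storage format
    for _ in range(iters):
        x = x_store.astype(np.float64)            # promote for arithmetic
        x_new = (b - r @ x) / d                   # all flops in float64
        x_store = x_new.astype(np.float32)        # demote back to storage
    return x_store

# diagonally dominant test system, so Jacobi converges
a = np.array([[4.0, 1.0, 0.0], [1.0, 5.0, 2.0], [0.0, 2.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])
x = jacobi_decoupled(a, b)
assert np.allclose(a @ x.astype(np.float64), b, atol=1e-4)
```

The achievable accuracy is bounded by the storage precision, which is why the paper argues for adapting the memory precision dynamically to the numerical requirements.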
37. Hogervorst, Tom, Răzvan Nane, Giacomo Marchiori, Tong Dong Qiu, Markus Blatt, and Alf Birger Rustad. "Hardware Acceleration of High-Performance Computational Flow Dynamics Using High-Bandwidth Memory-Enabled Field-Programmable Gate Arrays." ACM Transactions on Reconfigurable Technology and Systems 15, no. 2 (June 30, 2022): 1–35. http://dx.doi.org/10.1145/3476229.

Abstract: Scientific computing is at the core of many High-Performance Computing applications, including computational flow dynamics. Because of the utmost importance to simulate increasingly larger computational models, hardware acceleration is receiving increased attention due to its potential to maximize the performance of scientific computing. Field-Programmable Gate Arrays could accelerate scientific computing because of the possibility to fully customize the memory hierarchy important in irregular applications such as iterative linear solvers. In this article, we study the potential of using Field-Programmable Gate Arrays in High-Performance Computing because of the rapid advances in reconfigurable hardware, such as the increase in on-chip memory size, increasing number of logic cells, and the integration of High-Bandwidth Memories on board. To perform this study, we propose a novel Sparse Matrix-Vector multiplication unit and an ILU0 preconditioner tightly integrated with a BiCGStab solver kernel. We integrate the developed preconditioned iterative solver in Flow from the Open Porous Media project, a state-of-the-art open source reservoir simulator. Finally, we perform a thorough evaluation of the FPGA solver kernel in both stand-alone mode and integrated in the reservoir simulator, using the NORNE field, a real-world case reservoir model using a grid with more than 10^5 cells and using three unknowns per cell.
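The Sparse Matrix-Vector multiplication kernel at the core of the article's FPGA design can be written down in a few lines for the standard CSR layout (a reference sketch in Python of the mathematical operation, not the hardware pipeline itself):

```python
import numpy as np

def spmv_csr(data, indices, indptr, x):
    """Sparse matrix-vector product y = A @ x with A in CSR form:
    data holds the nonzero values, indices their column positions,
    and indptr[i]:indptr[i+1] delimits the nonzeros of row i."""
    n_rows = len(indptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):                        # one output entry per row
        for k in range(indptr[i], indptr[i + 1]):  # nonzeros of row i
            y[i] += data[k] * x[indices[k]]
    return y

# A = [[10, 0, 2], [0, 3, 0], [1, 0, 4]]
data    = np.array([10.0, 2.0, 3.0, 1.0, 4.0])
indices = np.array([0, 2, 1, 0, 2])
indptr  = np.array([0, 2, 3, 5])
x = np.array([1.0, 2.0, 3.0])
assert np.allclose(spmv_csr(data, indices, indptr, x), [16.0, 6.0, 13.0])
```

The indirect access `x[indices[k]]` is the irregular memory pattern that motivates a customized memory hierarchy in the first place.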
38. Siek, J. G., and A. Lumsdaine. "The Matrix Template Library: generic components for high-performance scientific computing." Computing in Science & Engineering 1, no. 6 (1999): 70–71. http://dx.doi.org/10.1109/5992.805137.

39. Siegel, Stephen F., and Timothy K. Zirkel. "FEVS: A Functional Equivalence Verification Suite for High-Performance Scientific Computing." Mathematics in Computer Science 5, no. 4 (November 15, 2011): 427–35. http://dx.doi.org/10.1007/s11786-011-0101-6.

40. Möller, Matthias, and Cornelis Vuik. "On the impact of quantum computing technology on future developments in high-performance scientific computing." Ethics and Information Technology 19, no. 4 (August 31, 2017): 253–69. http://dx.doi.org/10.1007/s10676-017-9438-0.
41. Sorokin, Aleksei, Sergey Malkovsky, Georgiy Tsoy, Alexander Zatsarinnyy, and Konstantin Volovich. "Comparative Performance Evaluation of Modern Heterogeneous High-Performance Computing Systems CPUs." Electronics 9, no. 6 (June 23, 2020): 1035. http://dx.doi.org/10.3390/electronics9061035.

Abstract: The study presents a comparison of computing systems based on IBM POWER8, IBM POWER9, and Intel Xeon Platinum 8160 processors running parallel applications. Memory subsystem bandwidth was studied, parallel programming technologies were compared, and the operating modes and capabilities of simultaneous multithreading technology were analyzed. Performance analysis for the studied computing systems running parallel applications based on the OpenMP and MPI technologies was carried out by using the NAS Parallel Benchmarks. An assessment of the results obtained during experimental calculations led to the conclusion that IBM POWER8 and Intel Xeon Platinum 8160 systems have almost the same maximum memory bandwidth, but require a different number of threads for efficient utilization. The IBM POWER9 system has the highest maximum bandwidth, which can be attributed to the large number of memory channels per socket. Based on the results of numerical experiments, recommendations are given on how hardware of a similar grade can be utilized to solve various scientific problems, including recommendations on optimal processor architecture choice for leveraging the operation of high-performance hybrid computing platforms.
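A crude software analogue of the memory-bandwidth probing described here is a STREAM-style triad kernel (a rough sketch only; the study itself uses the NAS Parallel Benchmarks with OpenMP and MPI, and an interpreted-language timing like this merely bounds the hardware figure from below):

```python
import time
import numpy as np

def triad_bandwidth(n=10_000_000, s=2.0, repeats=5):
    """STREAM-style triad a = b + s*c: a crude probe of sustainable memory
    bandwidth, since the kernel does little arithmetic per byte moved."""
    b = np.ones(n)
    c = np.full(n, 3.0)
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        a = b + s * c                     # 3 arrays of 8-byte doubles moved
        best = min(best, time.perf_counter() - t0)
    gb_moved = 3 * 8 * n / 1e9
    return a, gb_moved / best             # GB/s, a rough lower bound

a, rate = triad_bandwidth(n=1_000_000)
assert np.allclose(a, 7.0)                # 1 + 2*3 everywhere
```

Kernels like this are bandwidth-bound, which is why the thread count needed to saturate memory differs between the POWER and Xeon systems in the study.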
42. Lu, Quanming, and Vladimir Getov. "Mixed-Language High-Performance Computing for Plasma Simulations." Scientific Programming 11, no. 1 (2003): 57–66. http://dx.doi.org/10.1155/2003/928543.

Abstract: Java is receiving increasing attention as the most popular platform for distributed computing. However, programmers are still reluctant to embrace Java as a tool for writing scientific and engineering applications due to its still noticeable performance drawbacks compared with other programming languages such as Fortran or C. In this paper, we present a hybrid Java/Fortran implementation of a parallel particle-in-cell (PIC) algorithm for plasma simulations. In our approach, the time-consuming components of this application are designed and implemented as Fortran subroutines, while less calculation-intensive components usually involved in building the user interface are written in Java. The two types of software modules have been glued together using the Java native interface (JNI). Our mixed-language PIC code was tested and its performance compared with pure Java and Fortran versions of the same algorithm on a Sun E6500 SMP system and a Linux cluster of Pentium III machines.
43. Bernholdt, David E. "Special Issue: Component and Framework Technology in High-Performance and Scientific Computing." Concurrency and Computation: Practice and Experience 19, no. 5 (2007): 571–72. http://dx.doi.org/10.1002/cpe.1054.
44

GARCÍA-RISUEÑO, PABLO, and PABLO E. IBÁÑEZ. "A REVIEW OF HIGH PERFORMANCE COMPUTING FOUNDATIONS FOR SCIENTISTS." International Journal of Modern Physics C 23, no. 07 (July 2012): 1230001. http://dx.doi.org/10.1142/s0129183112300011.

Full text
Abstract:
The increase of existing computational capabilities has made simulation emerge as a third discipline of Science, lying midway between experimental and purely theoretical branches [G. Makov, C. Gattinoni and A. D. Vita, Model. Simul. Mater. Sci. Eng.17, 084008 (2009); C. J. Cramer, Essentials of Computational Chemistry: Theories and Models, 2nd edn. (John Wiley & Sons, Chichester, 2002)]. Simulation enables the evaluation of quantities which otherwise would not be accessible, helps to improve experiments and provides new insights on systems which are analyzed [T. C. Germann, K. Kadau and S. Swaminarayan, Concurrency Comput. Pract. Exp.21, 2143 (2009); M. A. L. Marques, X. Lopez, D. Varsano, A. Castro and A. Rubio, Phys. Rev. Lett.90, 258101 (2003); D. E. Shaw, P. Maragakis, K. Lindorff-Larsen, S. Piana, R. O. Dror, M. P. Eastwood, J. A. Bank, J. M. Jumper, J. K. Salmon, Y. Shan and W. Wriggers, Science330, 341 (2010); D. Marx, Chem. Phys. Chem.7, 1848 (2006)]. Knowing the fundamentals of computation can be very useful for scientists, for it can help them to improve the performance of their theoretical models and simulations. This review includes some technical essentials that can be useful to this end, and it is devised as a complement for researchers whose education is focused on scientific issues and not on technological respects. In this document, we attempt to discuss the fundamentals of high performance computing (HPC) [G. Hager and G. Wellein, Introduction to High Performance Computing for Scientists and Engineers, 1st edn. (CRC Press, Taylor & Francis Group, 2011)] in a way which is easy to understand without much previous background. We sketch the way standard computers and supercomputers work, as well as discuss distributed computing and discuss essential aspects to take into account when running scientific calculations in computers.
APA, Harvard, Vancouver, ISO, and other styles
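One of the hardware fundamentals such a review covers is the tension between a processor's peak arithmetic rate and the rate at which memory can supply operands. A minimal sketch of the standard roofline estimate (an editorial illustration added here, not code from the paper; the machine numbers are hypothetical):

```python
def attainable_gflops(peak_gflops, bandwidth_gbs, intensity_flops_per_byte):
    """Roofline estimate: a kernel is capped either by the CPU's peak
    floating-point rate or by how fast memory can feed it with operands."""
    return min(peak_gflops, bandwidth_gbs * intensity_flops_per_byte)

# Hypothetical machine: 100 GFLOP/s peak, 50 GB/s memory bandwidth.
# A streaming kernel doing 1 flop per byte is memory-bound (50 GFLOP/s);
# at 4 flops per byte it saturates the compute peak (100 GFLOP/s).
```

Reasoning in these terms tells a scientist whether a slow simulation needs fewer arithmetic operations or better memory access patterns.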
45

Kumar, Rakesh. "Future for Scientific Computing Using Python." International Journal of Engineering Technologies and Management Research 2, no. 1 (January 29, 2020): 30–41. http://dx.doi.org/10.29121/ijetmr.v2.i1.2015.28.

Full text
Abstract:
Computational science (scientific computing or scientific computation) is concerned with constructing mathematical models and quantitative analysis techniques and using computers to analyze and solve scientific problems. In practical use, it is basically the application of computer simulation and other forms of computation from numerical analysis and theoretical computer science to problems in different scientific disciplines. The scientific computing approach is to gain understanding, basically through the analysis of mathematical models implemented on computers. Python is frequently used for high-performance scientific applications and is widely used in academia and scientific projects because it is easy to write and performs well. To achieve high performance, scientific computing in Python often relies on external libraries such as NumPy, SciPy, and Matplotlib.
APA, Harvard, Vancouver, ISO, and other styles
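To illustrate why the libraries this abstract names matter (an editorial sketch, not code from the article): a NumPy operation runs its inner loop in compiled code over a contiguous array, while the pure-Python version iterates element by element at interpreter speed.

```python
import numpy as np

def py_norm(xs):
    """Pure-Python Euclidean norm: a Python-level loop over every element."""
    return sum(x * x for x in xs) ** 0.5

def np_norm(xs):
    """NumPy version: the dot product runs in compiled C over a
    contiguous float64 array, typically far faster on large inputs."""
    a = np.asarray(xs, dtype=np.float64)
    return float(np.sqrt(a @ a))
```

Both return the same value; on arrays of millions of elements the NumPy version is usually orders of magnitude faster.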
46

Sharif, Md Haidar. "High-Performance Mathematical Functions for Single-Core Architectures." Journal of Circuits, Systems and Computers 23, no. 04 (April 2014): 1450051. http://dx.doi.org/10.1142/s0218126614500510.

Full text
Abstract:
Nowadays, high-performance computing (HPC) architectures are designed to resolve assorted sophisticated scientific and engineering problems across an ever-intensifying number of HPC and professional workloads. The key trigonometric functions sine and cosine are applied and computed in all spheres of daily life, yet remain a fairly time-consuming task in high-performance numerical simulations. In this paper, we deliver a detailed deliberation of how the micro-architecture of single-core Itanium® and Alpha 21264/21364 processors, together with manual optimization techniques, improves the computing performance of several mathematical functions. By describing the detailed algorithm and its execution pattern on the processor, we confirm that the processor micro-architecture, alongside manual optimization techniques, ameliorates computing performance significantly compared not only to the standard math library's built-in functions with compiler optimizing options but also to the Intel® Itanium® library's highly optimized mathematical functions.
APA, Harvard, Vancouver, ISO, and other styles
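The typical structure of such an optimized sine kernel is range reduction followed by a polynomial evaluated in nested (Horner) form. The sketch below is an illustrative reconstruction of that structure, not the paper's Itanium/Alpha implementation:

```python
import math

def fast_sin(x):
    """Sine via range reduction plus a truncated Taylor polynomial in
    nested form -- the textbook skeleton behind optimized libm kernels."""
    # Reduce the argument to [-pi, pi]; production libraries use a much
    # more careful (extended-precision) reduction for large arguments.
    x = math.remainder(x, 2 * math.pi)
    x2 = x * x
    # sin x = x * (1 - x^2/3! + x^4/5! - ...), with the factorials folded
    # into successive factors x^2/(2*3), x^2/(4*5), ..., x^2/(12*13).
    p = 1.0 - x2/6 * (1 - x2/20 * (1 - x2/42 *
        (1 - x2/72 * (1 - x2/110 * (1 - x2/156)))))
    return x * p
```

Real library kernels replace the Taylor coefficients with minimax-fit ones and schedule the multiplies to exploit the processor's pipelines, which is exactly where micro-architectural knowledge pays off.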
47

Zhang, Shu Fang, Jun Han, and Fei Jiang. "Internet-Oriented Environment for Scientific Computing Resource Sharing." Applied Mechanics and Materials 241-244 (December 2012): 2953–56. http://dx.doi.org/10.4028/www.scientific.net/amm.241-244.2953.

Full text
Abstract:
In this paper, we introduce a scientific computing environment for Internet-oriented computing resource sharing, abbreviated ISCEs: a high-performance computing environment that allows users to write and evaluate parallel distributed applications for different hardware and software configurations through a web interface. We describe the software architecture of ISCEs, emphasizing the application editor, the application scheduling components, and the application execution/runtime modules. The efficiency of ISCEs is strongly supported by its time-measurement scheduling policies, and system resource monitoring also benefits considerably from the application execution/runtime modules. The results obtained from performance analysis show that the scalability and speedup of ISCEs are good.
APA, Harvard, Vancouver, ISO, and other styles
48

Bhaskar, Mihir K., Stuart Hadfield, Anargyros Papageorgiou, and Iasonas Petras. "Quantum algorithms and circuits for scientific computing." Quantum Information and Computation 16, no. 3&4 (March 2016): 197–236. http://dx.doi.org/10.26421/qic16.3-4-2.

Full text
Abstract:
Quantum algorithms for scientific computing require modules implementing fundamental functions, such as the square root, the logarithm, and others. We require algorithms that have a well-controlled numerical error, that are uniformly scalable and reversible (unitary), and that can be implemented efficiently. We present quantum algorithms and circuits for computing the square root, the natural logarithm, and arbitrary fractional powers. We provide performance guarantees in terms of their worst-case accuracy and cost. We further illustrate their performance by providing tests comparing them to the respective floating point implementations found in widely used numerical software.
APA, Harvard, Vancouver, ISO, and other styles
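Classically, a square-root module of this kind builds on Newton's iteration run for a fixed, input-independent number of steps, since a reversible (unitary) circuit cannot branch on a data-dependent convergence test. A hedged classical sketch of such a fixed-count iteration (an illustration, not the paper's circuit construction):

```python
def newton_sqrt(a, iterations=40):
    """Newton's iteration x <- (x + a/x)/2 for sqrt(a), run for a fixed
    number of steps so the cost is independent of the input value."""
    if a == 0:
        return 0.0
    x = max(a, 1.0)  # crude initial guess; careful scalings tighten this
    for _ in range(iterations):
        x = 0.5 * (x + a / x)
    return x
```

Fixing the iteration count in advance is what makes the worst-case accuracy and cost analyzable, mirroring the performance guarantees the abstract describes.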
49

Mohammed, Ali, Ahmed Eleliemy, Florina M. Ciorba, Franziska Kasielke, and Ioana Banicescu. "An approach for realistically simulating the performance of scientific applications on high performance computing systems." Future Generation Computer Systems 111 (October 2020): 617–33. http://dx.doi.org/10.1016/j.future.2019.10.007.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Cesini, Daniele, Elena Corni, Antonio Falabella, Andrea Ferraro, Lucia Morganti, Enrico Calore, Sebastiano Fabio Schifano, et al. "Power-Efficient Computing: Experiences from the COSA Project." Scientific Programming 2017 (2017): 1–14. http://dx.doi.org/10.1155/2017/7206595.

Full text
Abstract:
Energy consumption is today one of the most relevant issues in operating HPC systems for scientific applications. The use of unconventional computing systems is therefore of great interest for several scientific communities looking for a better tradeoff between time-to-solution and energy-to-solution. In this context, the performance assessment of processors with a high ratio of performance per watt is necessary to understand how to realize energy-efficient computing systems for scientific applications, using this class of processors. Computing On SOC Architecture (COSA) is a three-year project (2015–2017) funded by the Scientific Commission V of the Italian Institute for Nuclear Physics (INFN), which aims to investigate the performance and the total cost of ownership offered by computing systems based on commodity low-power Systems on Chip (SoCs) and high energy-efficient systems based on GP-GPUs. In this work, we present the results of the project analyzing the performance of several scientific applications on several GPU- and SoC-based systems. We also describe the methodology we have used to measure energy performance and the tools we have implemented to monitor the power drained by applications while running.
APA, Harvard, Vancouver, ISO, and other styles
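The central metric in such comparisons, energy-to-solution, is simply the average drained power multiplied by the runtime. A toy comparison with hypothetical numbers (illustrative only, not COSA measurements):

```python
def energy_to_solution(avg_power_w, time_to_solution_s):
    """Energy-to-solution in joules: average power draw times runtime."""
    return avg_power_w * time_to_solution_s

# Hypothetical: a 10 W SoC needing 400 s vs a 200 W server node needing 30 s.
soc_joules = energy_to_solution(10, 400)      # 4000 J
server_joules = energy_to_solution(200, 30)   # 6000 J
# The slower SoC wins on energy-to-solution, the server on time-to-solution.
```

This is exactly the trade-off space the project explores across SoC- and GPU-based systems.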
