Academic literature on the topic 'MPI'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'MPI.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "MPI"

1

Fleming, Richard. "Reno Cardiologist Confirms FMTVDM – Opening New Opportunities for Nuclear Cardiologists." Clinical Medical Reviews and Reports 1, no. 1 (December 19, 2019): 01–04. http://dx.doi.org/10.31579/2690-8794/001.

Full text
Abstract:
Background: A quantitative myocardial perfusion imaging (MPI) and oncologic - including molecular breast imaging (MBI) - utility patent (FMTVDM*) previously validated at experienced MPI and MBI centers was independently tested for clinical application at a private-practice cardiologist's office in Reno, Nevada. Methods: Using FMTVDM, a private-practice cardiologist independently investigated forty-four regions of interest (ROI) in 12 women with varying transitional levels of breast changes – including breast cancer. Results: Using FMTVDM, a nuclear cardiologist without prior experience in MBI was able to easily measure changes in women's breast tissue, differentiating inflammatory and cancerous breast tissue from normal using the same camera used for MPI. These measured changes provided diagnostically useful information on cellular metabolism and regional blood flow (RBF) changes – the same properties which differentiate ischemic coronary artery disease (CAD) on myocardial perfusion imaging (MPI). Conclusions: Quantitative MBI using FMTVDM allows differentiation of tissue types through measurement of enhanced regional blood flow and metabolic differences. Nuclear cardiologists have previously reported cases of breast cancer while conducting MPI studies. This investigation demonstrated that nuclear cardiologists can independently conduct MBI in addition to MPI studies using the nuclear cameras they currently use for MPI.
APA, Harvard, Vancouver, ISO, and other styles
2

Overbeek, Femke C. M. S., Jeannette A. Goudzwaard, Judy van Hemmen, Rozemarijn L. van Bruchem-Visser, Janne M. Papma, Harmke A. Polinder-Bos, and Francesco U. S. Mattace-Raso. "The Multidimensional Prognostic Index Predicts Mortality in Older Outpatients with Cognitive Decline." Journal of Clinical Medicine 11, no. 9 (April 23, 2022): 2369. http://dx.doi.org/10.3390/jcm11092369.

Full text
Abstract:
Given the heterogeneity of the growing group of older outpatients with cognitive decline, it is challenging to evaluate survival rates in clinical shared decision making. The primary outcome was to determine whether the Multidimensional Prognostic Index (MPI) predicts mortality, whilst assessing the MPI distribution was considered secondary. This retrospective chart review included 311 outpatients aged ≥65 years and diagnosed with dementia or mild cognitive impairment (MCI). The MPI includes several domains of the comprehensive geriatric assessment (CGA). All characteristics and data to calculate the risk score and mortality data were extracted from administrative information in the database of the Alzheimer's Center and medical records. The study population (mean age 76.8 years, men = 51.4%) was divided as follows: 34.1% belonged to MPI category 1, 52.1% to MPI category 2 and 13.8% to MPI category 3. Patients with dementia had a higher mean MPI risk score than patients with MCI (0.47 vs. 0.32; p < 0.001). The HRs and corresponding 95% CIs for mortality in patients in MPI categories 2 and 3 were 1.67 (0.81–3.45) and 3.80 (1.56–9.24) compared with MPI category 1, respectively. This study shows that the MPI predicts mortality in outpatients with cognitive decline.
APA, Harvard, Vancouver, ISO, and other styles
3

Hilbrich, Tobias, Matthias S. Müller, and Bettina Krammer. "MPI Correctness Checking for OpenMP/MPI Applications." International Journal of Parallel Programming 37, no. 3 (April 22, 2009): 277–91. http://dx.doi.org/10.1007/s10766-009-0099-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Redden, Miles D., Daniel W. Shike, and Joshua C. McCann. "335 Metabolizable protein requirement of lightweight beef calves." Journal of Animal Science 97, Supplement_2 (July 2019): 136. http://dx.doi.org/10.1093/jas/skz122.241.

Full text
Abstract:
The objective was to assess the metabolizable protein (MP) requirements of lightweight beef steers. The 2016 Beef Cattle Nutrient Requirements Model (BCNRM) prediction of MP requirements in lightweight beef steers (less than 250 kg) is limited by available performance data in beef calves. Fall-born Angus × Simmental crossbred steers (n = 172; BW = 153 kg) were weaned at 70 d of age (± 26 d), backgrounded 73 d, implanted with Component TE-IS, blocked by BW as light (96–163 kg) or heavy (163–215 kg), and assigned to one of four treatments for 56 d. Treatment diets provided MP at: 0.59 (MP1), 0.69 (MP2), 0.85 (MP3), and 0.91 kg per d (MP4) based on observed DMI. Energy was similar across all treatments at 2.14 Mcal/kg ME. Shrunk body weights were collected on d 0 and 56 to estimate performance. Dry matter intake was affected (P < 0.01) by treatment and increased (linear; P < 0.01) with greater provision of MP. As a percent of shrunk body weight, DMI averaged 2.8% and tended (linear; P = 0.07) to increase with MP. Based on observed DMI, ADG for MP1, MP2, MP3, and MP4 was predicted using the BCNRM at 1.15, 1.55, 1.99, and 1.98 kg, respectively. Observed ADG increased quadratically (P = 0.01) with MP1, MP2, MP3, and MP4 gaining 1.86, 2.13, 2.3, and 2.3 kg, respectively. Final BW increased (quadratic; P = 0.02) with greater MP as MP1, MP2, MP3, and MP4 were 258, 273, 282, and 284 kg, respectively. Gain:feed increased quadratically (P = 0.04) with observed values for MP1, MP2, MP3, and MP4 being 0.323, 0.357, 0.359, and 0.360, respectively. Steer ADG response exceeded BCNRM predictions by 29%. Data suggest MP requirements of lightweight beef steers (BW = 213 kg) are 0.85 kg per d to achieve 2.3 kg ADG when fed a 2.14 Mcal/kg ME diet.
APA, Harvard, Vancouver, ISO, and other styles
5

Liu, Feilong, Claude Barthels, Spyros Blanas, Hideaki Kimura, and Garret Swart. "Beyond MPI." ACM SIGMOD Record 49, no. 4 (March 8, 2021): 12–17. http://dx.doi.org/10.1145/3456859.3456862.

Full text
Abstract:
Networks with Remote Direct Memory Access (RDMA) support are becoming increasingly common. RDMA, however, offers a limited programming interface to remote memory that consists of read, write and atomic operations. With RDMA alone, completing the most basic operations on remote data structures often requires multiple round-trips over the network. Data-intensive systems strongly desire higher-level communication abstractions that support more complex interaction patterns. A natural candidate to consider is MPI, the de facto standard for developing high-performance applications in the HPC community. This paper critically evaluates the communication primitives of MPI and shows that using MPI in the context of a data processing system comes with its own set of insurmountable challenges. Based on this analysis, we propose a new communication abstraction named RDMO, or Remote Direct Memory Operation, that dispatches a short sequence of reads, writes and atomic operations to remote memory and executes them in a single round-trip.
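The round-trip problem the abstract describes is easy to reproduce with standard MPI-3 one-sided operations. The sketch below is illustrative only (it uses plain MPI RMA, not the paper's proposed RDMO abstraction): claiming a slot on a remote rank and then writing into it takes two dependent operations, and the flush between them forces a second network round-trip. Run with at least two ranks.

/* Illustrative sketch: appending a value to a remote, counter-indexed slot with
 * plain MPI-3 RMA takes two dependent operations (two round-trips), whereas an
 * RDMO would bundle them into one. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    long *base;
    MPI_Win win;
    /* Every rank exposes a small window: slot 0 is a counter, slots 1..7 hold data. */
    MPI_Win_allocate(8 * sizeof(long), sizeof(long), MPI_INFO_NULL,
                     MPI_COMM_WORLD, &base, &win);
    for (int i = 0; i < 8; i++) base[i] = 0;
    MPI_Barrier(MPI_COMM_WORLD);             /* window contents initialized everywhere */

    if (rank == 1) {                          /* requires at least two ranks */
        const int target = 0;
        long one = 1, old_index, payload = 42;
        MPI_Win_lock(MPI_LOCK_SHARED, target, 0, win);
        /* Round-trip 1: atomically fetch-and-increment the remote counter. */
        MPI_Fetch_and_op(&one, &old_index, MPI_LONG, target, 0, MPI_SUM, win);
        MPI_Win_flush(target, win);           /* result needed before the next step */
        /* Round-trip 2: write the payload into the slot that was just claimed. */
        MPI_Put(&payload, 1, MPI_LONG, target, 1 + old_index, 1, MPI_LONG, win);
        MPI_Win_unlock(target, win);
    }

    MPI_Barrier(MPI_COMM_WORLD);
    if (rank == 0) {
        MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, win);   /* safe local read of the window */
        printf("counter=%ld slot1=%ld\n", base[0], base[1]);
        MPI_Win_unlock(0, win);
    }
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}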
APA, Harvard, Vancouver, ISO, and other styles
6

&NA;. "MPI-5003." Inpharma Weekly &NA;, no. 1133 (April 1998): 9. http://dx.doi.org/10.2165/00128413-199811330-00014.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Karwande, Amit, Xin Yuan, and David K. Lowenthal. "CC--MPI." ACM SIGPLAN Notices 38, no. 10 (October 2003): 95–106. http://dx.doi.org/10.1145/966049.781514.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

LOUCA, SOULLA, NEOPHYTOS NEOPHYTOU, ADRIANOS LACHANAS, and PARASKEVAS EVRIPIDOU. "MPI-FT: PORTABLE FAULT TOLERANCE SCHEME FOR MPI." Parallel Processing Letters 10, no. 04 (December 2000): 371–82. http://dx.doi.org/10.1142/s0129626400000342.

Full text
Abstract:
In this paper, we propose the design and development of a fault tolerant and recovery scheme for the Message Passing Interface (MPI). The proposed scheme consists of a detection mechanism for detecting process failures, and a recovery mechanism. Two different cases are considered, both assuming the existence of a monitoring process, the Observer which triggers the recovery procedure in case of failure. In the first case, each process keeps a buffer with its own message traffic to be used in case of failure, while the implementor uses periodical tests for notification of failure by the Observer. The recovery function simulates all the communication of the processes with the dead one by re-sending to the replacement process all the messages destined for the dead one. In the second case, the Observer receives and stores all message traffic, and sends to the replacement all the buffered messages destined for the dead process. Solutions are provided to the dead communicator problem caused by the death of a process. A description of the prototype developed is provided along with the results of the experiments performed for efficiency and performance.
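The detection and recovery machinery described here is specific to the MPI-FT prototype. As a rough, hedged sketch of the general idea in portable MPI terms, an application can replace the default MPI_ERRORS_ARE_FATAL handler with MPI_ERRORS_RETURN so that a failed operation surfaces as an error code instead of aborting; the recover() routine below is a hypothetical placeholder for the point where an MPI-FT-style runtime would replay buffered messages to a replacement process.

/* Minimal sketch, not the paper's MPI-FT API: switch the communicator's error
 * handler to MPI_ERRORS_RETURN so errors come back as return codes, then hand
 * off to an application-level recovery path. */
#include <mpi.h>
#include <stdio.h>

static void recover(int err) {
    char msg[MPI_MAX_ERROR_STRING];
    int len;
    MPI_Error_string(err, msg, &len);
    fprintf(stderr, "communication failed (%s); triggering recovery\n", msg);
    /* An MPI-FT-style runtime would now start a replacement process and
     * re-send the buffered messages destined for the dead rank. */
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int token = rank, err;
    /* Ring exchange; any failure surfaces as a non-MPI_SUCCESS return code. */
    err = MPI_Sendrecv_replace(&token, 1, MPI_INT,
                               (rank + 1) % size, 0,        /* send to right  */
                               (rank + size - 1) % size, 0, /* recv from left */
                               MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    if (err != MPI_SUCCESS)
        recover(err);
    else
        printf("rank %d received token %d\n", rank, token);

    MPI_Finalize();
    return 0;
}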
APA, Harvard, Vancouver, ISO, and other styles
9

Aji, Ashwin M., Lokendra S. Panwar, Feng Ji, Karthik Murthy, Milind Chabbi, Pavan Balaji, Keith R. Bisset, et al. "MPI-ACC: Accelerator-Aware MPI for Scientific Applications." IEEE Transactions on Parallel and Distributed Systems 27, no. 5 (May 1, 2016): 1401–14. http://dx.doi.org/10.1109/tpds.2015.2446479.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Fang, Niandong, and Helmar Burkhart. "MPI-DDL: A distributed-data library for MPI." Future Generation Computer Systems 12, no. 5 (April 1997): 407–19. http://dx.doi.org/10.1016/s0167-739x(96)00026-x.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "MPI"

1

Kamal, Humaira. "FG-MPI : Fine-Grain MPI." Thesis, University of British Columbia, 2013. http://hdl.handle.net/2429/44668.

Full text
Abstract:
The Message Passing Interface (MPI) is widely used to write sophisticated parallel applications ranging from cognitive computing to weather predictions and is almost universally adopted for High Performance Computing (HPC). Many popular MPI implementations bind MPI processes to OS-processes. This runtime model has closely matched single or multi-processor compute clusters. Since 2008, however, clusters of multicore nodes have been the predominant architecture for HPC, with the opportunity for parallelism inside one compute node. There are a number of popular parallel programming languages for multicore that use message passing. One notable difference between MPI and these languages is the granularity of the MPI processes. Processes written using MPI tend to be coarse-grained and designed to match the number of processes to the available hardware, rather than the program structure. Binding MPI processes to OS-processes fails to take full advantage of the finer-grain parallelism available on today's multicore systems. Our goal was to take advantage of the type of runtime systems used by fine-grain languages and integrate that into MPI to obtain the best of these programming models; the ability to have fine-grain parallelism, while maintaining MPI's rich support for communication inside clusters. Fine-Grain MPI (FG-MPI) is a system that extends the execution model of MPI to include interleaved concurrency through integration into the MPI middleware. FG-MPI is integrated into the MPICH2 middleware, which is an open source, production-quality implementation of MPI. The FG-MPI runtime uses coroutines to implement light-weight MPI processes that are non-preemptively scheduled by its MPI-aware scheduler. The use of coroutines enables fast context-switching time and low communication and synchronization overhead. FG-MPI enables expression of finer-grain function-level parallelism, which allows for flexible process mapping, scalability, and can lead to better program performance. We have demonstrated FG-MPI's ability to scale to over 100 million MPI processes on a large cluster of 6,480 cores. This is the first time any system has executed such a large number of MPI processes, and this capability will be useful in exploring scalability issues of the MPI middleware as systems move towards compute clusters with millions of processor cores.
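Because FG-MPI keeps the standard MPI interface, application code stays ordinary MPI; the granularity is chosen at launch time. The sketch below is a plain MPI program, and the -nfg launch flag mentioned in the comment is quoted from FG-MPI documentation as best we recall it, so treat it as an assumption rather than a confirmed option.

/* Ordinary MPI code: under FG-MPI the same program can be launched with many
 * coroutine-backed MPI processes co-scheduled inside each OS process. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (rank == 0)
        printf("running with %d MPI processes\n", size);
    MPI_Finalize();
    return 0;
}
/* Possible launch (assumed syntax): mpiexec -nfg 1000 -n 8 ./a.out
 * i.e., 1000 fine-grain processes in each of 8 OS processes = 8000 ranks. */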
APA, Harvard, Vancouver, ISO, and other styles
2

Ramesh, Srinivasan. "MPI Performance Engineering with the MPI Tools Information Interface." Thesis, University of Oregon, 2018. http://hdl.handle.net/1794/23779.

Full text
Abstract:
The desire for high performance on scalable parallel systems is increasing the complexity and the need to tune MPI implementations. The MPI Tools Information Interface (MPI_T), introduced in the MPI 3.0 standard, provides an opportunity for performance tools and external software to introspect and understand MPI runtime behavior at a deeper level to detect scalability issues. The interface also provides a mechanism to fine-tune the performance of the MPI library dynamically at runtime. This thesis describes the motivation, design, and challenges involved in developing an MPI performance engineering infrastructure using MPI_T for two performance toolkits: the TAU Performance System and Caliper. I validate the design of the infrastructure for TAU by developing optimizations for production and synthetic applications. I show that the MPI_T runtime introspection mechanism in Caliper enables a meaningful analysis of performance data. This thesis includes previously published co-authored material.
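A minimal sketch of the MPI_T usage pattern this work builds on, using only functions defined in the MPI 3.0 standard: initialize the tools interface, enumerate the control variables the library exposes, and print their names. The set of variables reported is implementation specific, which is precisely what makes MPI_T useful for introspection and tuning.

/* Enumerate the MPI_T control variables exposed by the underlying MPI library. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int provided, num_cvar;
    MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);  /* tools interface */
    MPI_Init(&argc, &argv);

    MPI_T_cvar_get_num(&num_cvar);
    printf("MPI library exposes %d control variables\n", num_cvar);

    for (int i = 0; i < num_cvar; i++) {
        char name[256], desc[256];
        int name_len = sizeof(name), desc_len = sizeof(desc);
        int verbosity, binding, scope;
        MPI_Datatype dtype;
        MPI_T_enum enumtype;
        MPI_T_cvar_get_info(i, name, &name_len, &verbosity, &dtype,
                            &enumtype, desc, &desc_len, &binding, &scope);
        printf("  cvar %d: %s\n", i, name);   /* names are implementation specific */
    }

    MPI_Finalize();
    MPI_T_finalize();
    return 0;
}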
APA, Harvard, Vancouver, ISO, and other styles
3

Massetto, Francisco Isidro. "Hybrid MPI - uma implementação MPI para ambientes distribuídos híbridos." Universidade de São Paulo, 2007. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-08012008-100937/.

Full text
Abstract:
The growing development of high-performance applications is a present-day reality. However, the diversity of machine architectures (uniprocessor and multiprocessor machines, clusters with or without a front-end node), operating systems, and MPI implementations keeps increasing. In this scenario, libraries that allow the integration of several MPI implementations, operating systems, and machine architectures are needed. This thesis presents HyMPI, an MPI implementation aimed at integrating, in a single high-performance distributed environment, nodes with different architectures, clusters with or without a front-end machine, operating systems, and MPI implementations. HyMPI offers a set of primitives compatible with the MPI specification, including point-to-point communication, collective operations, startup and termination, and other utility functions.
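The primitive classes listed in the abstract are the ones any MPI specification-compatible layer must provide. The sketch below is generic MPI code (not HyMPI-specific) exercising exactly that set: startup, point-to-point communication, a collective, and termination.

/* Generic MPI program touching the primitive classes the abstract lists. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);                       /* startup */
    int rank, size, value = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        value = 123;
        if (size > 1)
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);   /* point-to-point */
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);             /* collective */
    printf("rank %d of %d sees value %d\n", rank, size, value);

    MPI_Finalize();                               /* termination */
    return 0;
}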
APA, Harvard, Vancouver, ISO, and other styles
4

Subotic, Vladimir. "Evaluating techniques for parallelization tuning in MPI, OmpSs and MPI/OmpSs." Doctoral thesis, Universitat Politècnica de Catalunya, 2013. http://hdl.handle.net/10803/129573.

Full text
Abstract:
Parallel programming is used to partition a computational problem among multiple processing units and to define how they interact (communicate and synchronize) in order to guarantee the correct result. The performance that is achieved when executing the parallel program on a parallel architecture is usually far from optimal: computation unbalance and excessive interaction among processing units often cause lost cycles, reducing the efficiency of parallel computation. In this thesis we propose techniques oriented to better exploit parallelism in parallel applications, with emphasis on techniques that increase asynchronism. Theoretically, this type of parallelization tuning promises multiple benefits. First, it should mitigate communication and synchronization delays, thus increasing the overall performance. Furthermore, parallelization tuning should expose additional parallelism and therefore increase the scalability of execution. Finally, increased asynchronism would provide higher tolerance to slower networks and external noise. In the first part of this thesis, we study the potential for tuning MPI parallelism. More specifically, we explore automatic techniques to overlap communication and computation. We propose a speculative messaging technique that increases the overlap and requires no changes of the original MPI application. Our technique automatically identifies the application's MPI activity and reinterprets that activity using optimally placed non-blocking MPI requests. We demonstrate that this overlapping technique increases the asynchronism of MPI messages, maximizing the overlap, and consequently leading to execution speedup and higher tolerance to bandwidth reduction. However, in the case of realistic scientific workloads, we show that the overlapping potential is significantly limited by the pattern by which each MPI process locally operates on MPI messages. In the second part of this thesis, we study the potential for tuning hybrid MPI/OmpSs parallelism. We try to gain a better understanding of the parallelism of hybrid MPI/OmpSs applications in order to evaluate how these applications would execute on future machines and to predict the execution bottlenecks that are likely to emerge. We explore how MPI/OmpSs applications could scale on a parallel machine with hundreds of cores per node. Furthermore, we investigate how this high parallelism within each node would reflect on the network constraints. We especially focus on identifying critical code sections in MPI/OmpSs. We devised a technique that quickly evaluates, for a given MPI/OmpSs application and the selected target machine, which code section should be optimized in order to gain the highest performance benefits. Also, this thesis studies techniques to quickly explore the potential OmpSs parallelism inherent in applications. We provide mechanisms to easily evaluate the potential parallelism of any task decomposition. Furthermore, we describe an iterative trial-and-error approach to search for a task decomposition that will expose sufficient parallelism for a given target machine. Finally, we explore the potential of automating the iterative approach by capturing the programmers' experience into an expert system that can autonomously lead the search process. Also, throughout the work on this thesis, we designed development tools that can be useful to other researchers in the field. The most advanced of these tools is Tareador, a tool to help port MPI applications to the MPI/OmpSs programming model.
Tareador provides a simple interface for proposing a decomposition of the code into OmpSs tasks. Tareador dynamically calculates data dependencies among the annotated tasks, and automatically estimates the potential OmpSs parallelization. Furthermore, Tareador gives additional hints on how to complete the process of porting the application to OmpSs. Tareador has already proved itself useful by being included in academic classes on parallel programming at UPC.
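The transformation the thesis automates can also be written by hand. The sketch below shows the manual pattern the speculative technique targets, assuming a simple halo exchange: post non-blocking requests early, compute on data that does not depend on the messages, and complete the requests only where the halo values are needed. It is not the thesis's tool, just the underlying MPI idiom.

/* Hand-written communication/computation overlap with non-blocking requests. */
#include <mpi.h>
#include <stdio.h>

#define N 1000000

static double halo_in[1024], halo_out[1024], interior[N];

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    int left  = (rank + size - 1) % size;
    int right = (rank + 1) % size;

    MPI_Request reqs[2];
    /* Post the exchange as early as possible... */
    MPI_Irecv(halo_in, 1024, MPI_DOUBLE, left, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(halo_out, 1024, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[1]);

    /* ...and overlap it with work that does not depend on the halo. */
    double local = 0.0;
    for (int i = 0; i < N; i++)
        local += interior[i] * 0.5;

    /* Complete the communication only when the halo data is required. */
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
    printf("rank %d: interior sum %.1f, first halo value %.1f\n",
           rank, local, halo_in[0]);

    MPI_Finalize();
    return 0;
}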
APA, Harvard, Vancouver, ISO, and other styles
5

Träff, Jesper. "Aspects of the efficient implementation of the message passing interface (MPI)." Aachen Shaker, 2009. http://d-nb.info/994501803/04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Young, Bobby Dalton. "MPI WITHIN A GPU." UKnowledge, 2009. http://uknowledge.uky.edu/gradschool_theses/614.

Full text
Abstract:
GPUs offer high-performance floating-point computation at commodity prices, but their usage is hindered by programming models which expose the user to irregularities in the current shared-memory environments and require learning new interfaces and semantics. This thesis will demonstrate that the message-passing paradigm can be conceptually cleaner than the current data-parallel models for programming GPUs because it can hide the quirks of current GPU shared-memory environments, as well as GPU-specific features, behind a well-established and well-understood interface. This will be shown by demonstrating a proof-of-concept MPI implementation which provides cleaner, simpler code with a reasonable performance cost. This thesis will also demonstrate that, although there is a virtualization constraint imposed by MPI, this constraint is harmless as long as the virtualization was already chosen to be optimal in terms of a strong execution model and nearly-optimal execution time. This will be demonstrated by examining execution times with varying virtualization using a computationally-expensive micro-kernel.
APA, Harvard, Vancouver, ISO, and other styles
7

Angadi, Raghavendra. "Best effort MPI/RT as an alternative to MPI: design and performance comparison." Master's thesis, Mississippi State: Mississippi State University, 2002. http://library.msstate.edu/etd/show.asp?etd=etd-12032002-162333.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Sankarapandian, Dayala Ganesh R. Kamal Raj. "Profiling MPI Primitives in Real-time Using OSU INAM." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1587336162238284.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Hoefler, Torsten. "Communication/Computation Overlap in MPI." Universitätsbibliothek Chemnitz, 2006. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200600021.

Full text
Abstract:
This talk discusses optimized collective algorithms and the benefits of leveraging independent hardware entities in a pipelined manner. The resulting approach overlaps computation and communication to achieve this. Several examples are given.
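The overlap idea from the talk maps naturally onto non-blocking collectives, which were standardized later in MPI-3 (the talk itself predates them and relied on library-level collectives). A minimal sketch, assuming an MPI-3 library:

/* Start a reduction, keep computing, and complete it when the result is needed. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local = rank + 1.0, global = 0.0;
    MPI_Request req;
    MPI_Iallreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD, &req);

    /* Independent work proceeds while the reduction is in progress. */
    double busy = 0.0;
    for (int i = 0; i < 1000000; i++)
        busy += i * 1e-9;

    MPI_Wait(&req, MPI_STATUS_IGNORE);   /* reduction result needed from here on */
    if (rank == 0)
        printf("global sum = %.1f (plus %.3f of overlapped work)\n", global, busy);

    MPI_Finalize();
    return 0;
}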
APA, Harvard, Vancouver, ISO, and other styles
10

Chung, Ryan Ki Sing. "CMCMPI : Compose-Map-Configure MPI." Thesis, University of British Columbia, 2014. http://hdl.handle.net/2429/51185.

Full text
Abstract:
In order to manage the complexities of Multiple Program, Multiple Data (MPMD) program deployment and to optimize for performance, we propose (CM)²PI as a specification and tool that employs a four-stage approach to create a separation of concerns between distinct decisions: architecture interactions, software size, resource constraints, and function. With function-level parallelism in mind, to create a scalable architecture specification we use multi-level compositions to improve re-usability and encapsulation. We explore different ways to abstract communication away from the tight coupling of MPI ranks and placement. One of the proposed methods is flow-controlled channels, which also aims to tackle the common issues of buffer limitations and termination. The specification increases compatibility with optimization tools, which enables the automatic optimization of program run time with respect to resource constraints. Together these features simplify the development of MPMD MPI programs.
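For context on the MPMD setting (CM)²PI targets: standard MPI already lets a rank discover which section of an MPMD launch it belongs to through the predefined MPI_APPNUM attribute. The sketch below is generic MPI, not (CM)²PI itself, and the example mpiexec line in the comment follows the common colon-separated MPMD syntax.

/* Query which MPMD application section this rank was launched as. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, flag, *appnum;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* MPI_APPNUM identifies the colon-separated section of the mpiexec line,
     * e.g. "mpiexec -n 2 ./producer : -n 4 ./consumer" gives appnum 0 or 1. */
    MPI_Comm_get_attr(MPI_COMM_WORLD, MPI_APPNUM, &appnum, &flag);
    if (flag)
        printf("rank %d belongs to application %d\n", rank, *appnum);
    else
        printf("rank %d: launcher did not set MPI_APPNUM\n", rank);

    MPI_Finalize();
    return 0;
}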
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "MPI"

1

Duanghom, Srinuan. An Mpi dictionary. Bangkok: Indigenous Languages of Thailand Research Project, 1989.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Parallel programming with MPI. San Francisco, Calif: Morgan Kaufmann Publishers, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Ndụbisi, Oriaku Onyefụlụchukwu. Atụrụ ga-epu mpi--. Enugu: Generation Books, 2006.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Snir, Marc, ed. MPI: The complete reference. Cambridge, Mass: MIT Press, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Snir, Marc, ed. MPI--the complete reference. 2nd ed. Cambridge, Mass: MIT Press, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Moskovskiĭ gosudarstvennyĭ universitet pechati, ed. My iz MPI: Moskovskiĭ poligraficheskiĭ institut. Moskva: MGUP, 2005.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Corbett, Peter, and United States National Aeronautics and Space Administration, eds. MPI-IO: A parallel file I/O interface for MPI: [NAS technical report NAS-95-002 ...]. [Washington, DC]: National Aeronautics and Space Administration, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Nielsen, Frank. Introduction to HPC with MPI for Data Science. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-21903-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Research Institute for Advanced Computer Science (U.S.), ed. A portable MPI-based parallel vector template library. [Moffett Field, Calif.]: Research Institute for Advanced Computer Science, NASA Ames Research Center, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "MPI"

1

Ross, Robert, Robert Latham, William Gropp, Ewing Lusk, and Rajeev Thakur. "Processing MPI Datatypes Outside MPI." In Recent Advances in Parallel Virtual Machine and Message Passing Interface, 42–53. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-03770-2_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Pérache, Marc, Patrick Carribault, and Hervé Jourdren. "MPC-MPI: An MPI Implementation Reducing the Overall Memory Consumption." In Recent Advances in Parallel Virtual Machine and Message Passing Interface, 94–103. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-03770-2_16.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Knoth, Adrian. "Open MPI." In Grid-Computing, 117–26. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-79747-0_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Huang, Chao, Orion Lawlor, and L. V. Kalé. "Adaptive MPI." In Languages and Compilers for Parallel Computing, 306–22. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-24644-2_20.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Padua, David, Amol Ghoting, John A. Gunnels, Mark S. Squillante, José Meseguer, James H. Cownie, Duncan Roweth, et al. "MPI-IO." In Encyclopedia of Parallel Computing, 1191–99. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-09766-4_297.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Rabenseifner, Rolf. "MPI-GLUE: Interoperable high-performance MPI combining different vendor’s MPI worlds." In Euro-Par’98 Parallel Processing, 563–69. Berlin, Heidelberg: Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/bfb0057902.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Gropp, William, Ewing Lusk, and Rajeev Thakur. "Advanced MPI Including New MPI-3 Features." In Recent Advances in the Message Passing Interface, 14. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33518-1_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Huse, Lars Paul, and Ole W. Saastad. "The Network Agnostic MPI – Scali MPI Connect." In Recent Advances in Parallel Virtual Machine and Message Passing Interface, 294–301. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-39924-7_42.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Graham, Richard L., Timothy S. Woodall, and Jeffrey M. Squyres. "Open MPI: A Flexible High Performance MPI." In Parallel Processing and Applied Mathematics, 228–39. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11752578_29.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Szustak, Lukasz, Roman Wyrzykowski, Kamil Halbiniak, and Pawel Bratek. "Toward Heterogeneous MPI+MPI Programming: Comparison of OpenMP and MPI Shared Memory Models." In Euro-Par 2019: Parallel Processing Workshops, 270–81. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-48340-1_21.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "MPI"

1

Getov, Vladimir, Paul Gray, and Vaidy Sunderam. "MPI and Java-MPI." In the 1999 ACM/IEEE conference. New York, New York, USA: ACM Press, 1999. http://dx.doi.org/10.1145/331532.331553.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Green, Ronald W. "Beyond MPI---Beyond MPI." In the 2006 ACM/IEEE conference. New York, New York, USA: ACM Press, 2006. http://dx.doi.org/10.1145/1188455.1188494.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

"MPI." In the 1993 ACM/IEEE conference. New York, New York, USA: ACM Press, 1993. http://dx.doi.org/10.1145/169627.169855.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Squyres, Jeff, and Brian Barrett. "Open MPI---Open MPI community meeting." In the 2006 ACM/IEEE conference. New York, New York, USA: ACM Press, 2006. http://dx.doi.org/10.1145/1188455.1188461.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Graham, Richard, Galen Shipman, Brian Barrett, Ralph Castain, George Bosilca, and Andrew Lumsdaine. "Open MPI: A High-Performance, Heterogeneous MPI." In 2006 IEEE International Conference on Cluster Computing. IEEE, 2006. http://dx.doi.org/10.1109/clustr.2006.311904.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Du, Cong, and Xian-He Sun. "MPI-Mitten: Enabling Migration Technology in MPI." In Sixth IEEE International Symposium on Cluster Computing and the Grid. IEEE, 2006. http://dx.doi.org/10.1109/ccgrid.2006.71.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Booth, S., and E. Mourao. "Single sided MPI implementations for SUN MPI." In ACM/IEEE SC 2000 Conference. IEEE, 2000. http://dx.doi.org/10.1109/sc.2000.10022.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Wickramasinghe, U. S., Greg Bronevetsky, Andrew Lumsdaine, and Andrew Friedley. "Hybrid MPI." In the 4th International Workshop. New York, New York, USA: ACM Press, 2014. http://dx.doi.org/10.1145/2612262.2612267.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Droste, Alexander, Michael Kuhn, and Thomas Ludwig. "MPI-checker." In the Second Workshop. New York, New York, USA: ACM Press, 2015. http://dx.doi.org/10.1145/2833157.2833159.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Zhou, Huan, Yousri Mhedheb, Kamran Idrees, Colin W. Glass, José Gracia, and Karl Fürlinger. "DART-MPI." In the 8th International Conference. New York, New York, USA: ACM Press, 2014. http://dx.doi.org/10.1145/2676870.2676875.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "MPI"

1

Han, D., and T. Jones. MPI Profiling. Office of Scientific and Technical Information (OSTI), February 2005. http://dx.doi.org/10.2172/15014654.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Garrett, Charles Kristopher. Distributed Computing (MPI). Office of Scientific and Technical Information (OSTI), June 2016. http://dx.doi.org/10.2172/1258356.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Pritchard, Howard Porter Jr, Samuel Keith Gutierrez, Nathan Hjelm, Daniel Holmes, and Ralph Castain. MPI Sessions: Second Demonstration and Evaluation of MPI Sessions Prototype. Office of Scientific and Technical Information (OSTI), September 2019. http://dx.doi.org/10.2172/1566099.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Pritchard, Howard. MPI Sessions - Working Group activities post MPI 4.0 standard ratification. Office of Scientific and Technical Information (OSTI), December 2022. http://dx.doi.org/10.2172/1906014.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Hassanzadeh, Sara, Sina Neshat, Afshin Heidari, and Masoud Moslehi. Myocardial Perfusion Imaging in the Era of COVID-19. INPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, April 2022. http://dx.doi.org/10.37766/inplasy2022.4.0063.

Full text
Abstract:
Review question / Objective: This review studies all aspects of myocardial perfusion imaging with single-photon emission computed tomography (MPI SPECT) after the COVID-19 pandemic. Condition being studied: The use of many imaging modalities has declined since the COVID-19 pandemic. Our focus in this review is to see whether the number of MPIs has decreased and, if so, why. Furthermore, it is possible that a combination of CT attenuation correction and MPI could yield incidental findings; in this study, we will also look for these. Third, we know from previous studies that COVID-19 might cause cardiac injuries in some people. Since MPI is a cardiovascular imaging technique, it might show those injuries. We will therefore review articles to find out what MPI findings such cardiac injuries can cause in patients with active COVID-19 infection, long COVID, or previous COVID-19.
APA, Harvard, Vancouver, ISO, and other styles
6

Loewe, W. MPI I/O Testing Results. Office of Scientific and Technical Information (OSTI), September 2007. http://dx.doi.org/10.2172/925675.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

George, William L., John G. Hagedorn, and Judith E. Devaney. Parallel programming with interoperable MPI. Gaithersburg, MD: National Institute of Standards and Technology, 2003. http://dx.doi.org/10.6028/nist.ir.7066.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Pritchard, Howard, and Tom Herschberg. MPI Session: External Network Transport Implementation. Office of Scientific and Technical Information (OSTI), September 2020. http://dx.doi.org/10.2172/1669081.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Rao, Lakshman A., and Jon Weissman. MPI-Based Adaptive Parallel Grid Services. Fort Belvoir, VA: Defense Technical Information Center, August 2003. http://dx.doi.org/10.21236/ada439405.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Bronevetsky, G., A. Friedley, T. Hoefler, A. Lumsdaine, and D. Quinlan. Compiling MPI for Many-Core Systems. Office of Scientific and Technical Information (OSTI), June 2013. http://dx.doi.org/10.2172/1088441.

Full text
APA, Harvard, Vancouver, ISO, and other styles
