A collection of scholarly literature on the topic "Shared-Memory Machines"

Format a source in APA, MLA, Chicago, Harvard, and other citation styles


Consult the lists of current articles, books, dissertations, conference papers, and other scholarly sources on the topic "Shared-Memory Machines".

Next to every entry in the bibliography you will find an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style of your choice: APA, MLA, Harvard, Chicago, Vancouver, and so on.

You can also download the full text of a publication as a .pdf file and read its abstract online, whenever these details are present in the source's metadata.

Journal articles on the topic "Shared-Memory Machines"

1

Sun, Xian-He, and Jianping Zhu. "Performance considerations of shared virtual memory machines." IEEE Transactions on Parallel and Distributed Systems 6, no. 11 (1995): 1185–94. http://dx.doi.org/10.1109/71.476190.

2

Barton, Christopher, Călin Caşcaval, George Almási, Yili Zheng, Montse Farreras, Siddhartha Chatterjee, and José Nelson Amaral. "Shared memory programming for large scale machines." ACM SIGPLAN Notices 41, no. 6 (June 11, 2006): 108–17. http://dx.doi.org/10.1145/1133255.1133995.

3

Bonomo, John P., and Wayne R. Dyksen. "Pipelined iterative methods for shared memory machines." Parallel Computing 11, no. 2 (August 1989): 187–99. http://dx.doi.org/10.1016/0167-8191(89)90028-8.

4

Zaki, Mohammed J. "Parallel Sequence Mining on Shared-Memory Machines." Journal of Parallel and Distributed Computing 61, no. 3 (March 2001): 401–26. http://dx.doi.org/10.1006/jpdc.2000.1695.

5

Bircsak, John, Peter Craig, RaeLyn Crowell, Zarka Cvetanovic, Jonathan Harris, C. Alexander Nelson, and Carl D. Offner. "Extending OpenMP for NUMA Machines." Scientific Programming 8, no. 3 (2000): 163–81. http://dx.doi.org/10.1155/2000/464182.

Abstract:
This paper describes extensions to OpenMP that implement data placement features needed for NUMA architectures. OpenMP is a collection of compiler directives and library routines used to write portable parallel programs for shared-memory architectures. Writing efficient parallel programs for NUMA architectures, which have characteristics of both shared-memory and distributed-memory architectures, requires that a programmer control the placement of data in memory and the placement of the computations that operate on that data. Optimal performance is obtained when computations occur on processors that have fast access to the data needed by those computations. OpenMP, designed for shared-memory architectures, does not by itself address these issues. The extensions to OpenMP Fortran presented here have been taken mainly from High Performance Fortran. The paper describes some of the techniques that the Compaq Fortran compiler uses to generate efficient code based on these extensions. It also describes some additional compiler optimizations, and concludes with some preliminary results.
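The data placement the abstract describes can be pictured with High Performance Fortran's simplest policy, a BLOCK distribution: consecutive chunks of an array are pinned to consecutive NUMA nodes, and each loop iteration is scheduled on the node that owns its data. A minimal sketch, where the `block_owner` helper and the node counts are illustrative rather than taken from the paper:

```python
from math import ceil

def block_owner(i, n, nodes):
    """Node owning element i of an n-element array that is
    BLOCK-distributed over `nodes` NUMA nodes (hypothetical helper)."""
    block = ceil(n / nodes)   # elements per node; the last node may get fewer
    return i // block

# With 100 elements over 4 nodes, elements 0..24 live on node 0,
# 25..49 on node 1, and so on; iterations over element i would be
# scheduled on the node that owns i so accesses stay local.
owners = [block_owner(i, 100, 4) for i in (0, 24, 25, 99)]
print(owners)   # prints [0, 0, 1, 3]
```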
6

Stratulat, S., and D. J. Evans. "Virtual Shared Memory Machines—An Application of PVM." Parallel Algorithms and Applications 7, no. 1–2 (January 1995): 143–60. http://dx.doi.org/10.1080/10637199508915528.

7

Fantozzi, Carlo, Andrea Pietracaprina, and Geppino Pucci. "A General PRAM Simulation Scheme for Clustered Machines." International Journal of Foundations of Computer Science 14, no. 06 (December 2003): 1147–64. http://dx.doi.org/10.1142/s0129054103002230.

Abstract:
We present a general deterministic scheme to implement a shared memory abstraction on any distributed-memory machine which exhibits a clustered structure. More specifically, we develop a memory distribution strategy and an access protocol for the Decomposable BSP (D-BSP), a generic machine model whose bandwidth/latency parameters can be instantiated to closely reflect the characteristics of machines that admit a hierarchical decomposition into independent clusters. Our scheme achieves provably optimal slowdown for those machines where delays due to latency dominate over those due to bandwidth limitations. For machines where this is not the case, the slowdown is a mere logarithmic factor away from the natural bandwidth-based lower bound.
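A standard building block behind shared-memory simulations of this kind is to spread shared addresses over the memory modules with a hash function so that no module becomes a hot spot. The sketch below uses a random linear hash; the paper's own memory distribution strategy is deterministic and considerably more refined, so this only illustrates the flavor of the approach (all names are invented):

```python
import random

def build_hash(modules, seed=42, prime=2**31 - 1):
    """Pick a random linear hash h(x) = ((a*x + b) mod p) mod modules."""
    rng = random.Random(seed)
    a = rng.randrange(1, prime)
    b = rng.randrange(prime)
    return lambda x: ((a * x + b) % prime) % modules

h = build_hash(modules=16)
loads = [0] * 16
for addr in range(10_000):   # 10,000 shared addresses, one home module each
    loads[h(addr)] += 1

print(sum(loads))   # prints 10000: every address is mapped exactly once
```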
8

Habbas, Zineb, Michaël Krajecki, and Daniel Singer. "Shared Memory Implementation of Constraint Satisfaction Problem Resolution." Parallel Processing Letters 11, no. 04 (December 2001): 487–501. http://dx.doi.org/10.1142/s0129626401000749.

Abstract:
Many problems in Computer Science, especially in Artificial Intelligence, can be formulated as Constraint Satisfaction Problems (CSP). This paper presents a parallel implementation of the Forward-Checking algorithm for solving a binary CSP over finite domains. Its main contribution is to use a simple decomposition strategy in order to distribute the search tree dynamically among machines. The feasibility and benefit of this approach are studied for a Shared Memory model. An implementation is drafted using the emerging OpenMP standard library for shared memory, thus controlling load balancing. We mainly highlight satisfactory efficiencies without using any tricky load balancing policy. All the experiments were carried out on the Silicon Graphics Origin 2000 parallel machine.
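The decomposition strategy sketched in the abstract, splitting the root of the search tree among workers while each worker runs forward checking sequentially, can be illustrated on n-queens as the binary CSP. This is a generic sketch, not the authors' code:

```python
from concurrent.futures import ThreadPoolExecutor

def fc_count(n, row, domains):
    """Count solutions by forward checking: after assigning a value,
    prune it (and diagonal conflicts) from all future domains."""
    if row == n:
        return 1
    total = 0
    for col in domains[row]:
        pruned, feasible = [], True
        for r in range(row + 1, n):
            d = {c for c in domains[r] if c != col and abs(c - col) != r - row}
            if not d:              # domain wipe-out: abandon this branch early
                feasible = False
                break
            pruned.append(d)
        if feasible:
            total += fc_count(n, row + 1, domains[:row + 1] + pruned)
    return total

def parallel_count(n, workers=4):
    """Split the root of the search tree: each first-row value is an
    independent subtree handed to its own thread."""
    def subtree(col):
        return fc_count(n, 0, [{col}] + [set(range(n)) for _ in range(n - 1)])
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return sum(ex.map(subtree, range(n)))

print(parallel_count(6))   # prints 4: the 6-queens CSP has 4 solutions
```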
9

Choi, Yoonseo, and Hwansoo Han. "Shared heap management for memory-limited java virtual machines." ACM Transactions on Embedded Computing Systems 7, no. 2 (February 2008): 1–32. http://dx.doi.org/10.1145/1331331.1331337.

10

Limaye, Ajay C. "Parallel MP2-energy evaluation: Simulated shared memory approach on distributed memory parallel machines." Journal of Computational Chemistry 18, no. 4 (March 1997): 552–61. http://dx.doi.org/10.1002/(sici)1096-987x(199703)18:4<552::aid-jcc8>3.0.co;2-s.


Dissertations and theses on the topic "Shared-Memory Machines"

1

Roberts, Harriet. "Preconditioned iterative methods on virtual shared memory machines." Thesis, This resource online, 1994. http://scholar.lib.vt.edu/theses/available/etd-07292009-090522/.

2

Younge, Andrew J., Christopher Reidy, Robert Henschel, and Geoffrey C. Fox. "Evaluation of SMP Shared Memory Machines for Use with In-Memory and OpenMP Big Data Applications." IEEE, 2016. http://hdl.handle.net/10150/622702.

Abstract:
While distributed memory systems have shaped the field of distributed systems for decades, the demand for many-core shared memory resources is increasing. Symmetric Multiprocessor Systems (SMPs) have recently become important across a wide array of disciplines, ranging from bioinformatics to astrophysics and beyond. With the increase in big data computing, the size and scope of traditional commodity server systems are often outpaced. While some big data applications can be mapped to distributed memory systems found in many cluster and cloud technologies today, this effort represents a large barrier to entry that some projects cannot cross. Shared memory SMP systems look to fill this niche within distributed systems effectively and efficiently by providing high throughput and performance with minimal development effort, as the computing environment often matches what many researchers are already familiar with. In this paper, we look at the use of two common shared memory systems: the ScaleMP vSMP virtualized SMP deployment at Indiana University, and the SGI UV architecture deployed at the University of Arizona. While both systems are notably different in their design, their potential impact on computing is remarkably similar. As such, we compare each system first under a set of OpenMP threaded benchmarks from the SPEC group, and follow up with our experience using each machine for Trinity de novo assembly. We find both SMP systems are well suited to support various big data applications, with the newer vSMP deployment often slightly faster; however, certain caveats and performance considerations are necessary when considering such SMP systems.
3

Hines, Michael R. "Techniques for collective physical memory ubiquity within networked clusters of virtual machines." Diss., Online access via UMI:, 2009.

4

Huang, Wei. "High Performance Network I/O in Virtual Machines over Modern Interconnects." The Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=osu1218602792.

5

Melo, Alba Cristina M. A. "Conception d'un système supportant des modèles de cohérence multiples pour les machines parallèles à mémoire virtuelle partagée." Grenoble INPG, 1996. http://www.theses.fr/1996INPG0108.

Abstract:
Shared-variable programming is made available on parallel architectures without a common memory through a software layer that simulates physically shared memory. Maintaining the perfect abstraction of a single memory requires a large number of coherence operations and therefore causes a significant loss of performance. To mitigate this degradation, several systems use more relaxed memory consistency models, which allow greater concurrency between accesses but complicate the programming model. The choice of a consistency model is therefore a trade-off between performance and ease of programming. Both factors depend on users' expectations and on the data-access characteristics of each parallel application. This thesis presents DIVA, a shared virtual memory system that supports multiple memory consistency models. With DIVA, the user can choose the shared-memory semantics best suited to the correct and efficient execution of an application. Moreover, DIVA lets the user define custom consistency models. The existence of multiple models within DIVA guided the design of several other mechanisms: we propose a unified synchronization interface, as well as page replacement and prefetching mechanisms adapted to a multiple-model environment. A DIVA prototype was implemented on the Intel Paragon parallel machine. The analysis of an application running under different consistency models shows that the choice of consistency model directly affects application performance.
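The trade-off the thesis explores, relaxed consistency models that delay coherence operations until synchronization points, can be shown with a toy release-consistency-style store. The class and its API are invented for illustration; they are not DIVA's interface:

```python
class ReleaseConsistentStore:
    """Toy store: writes go to a per-thread buffer and become visible to
    other threads only at a release (synchronization) point."""
    def __init__(self):
        self.shared = {}
        self.buffers = {}

    def write(self, tid, key, value):
        self.buffers.setdefault(tid, {})[key] = value

    def read(self, tid, key):
        # a thread sees its own buffered writes first, then shared memory
        return self.buffers.get(tid, {}).get(key, self.shared.get(key))

    def release(self, tid):
        # one coherence operation publishes the whole buffer at once
        self.shared.update(self.buffers.pop(tid, {}))

store = ReleaseConsistentStore()
store.write("t1", "x", 1)
print(store.read("t2", "x"))   # prints None: t1 has not released yet
store.release("t1")
print(store.read("t2", "x"))   # prints 1
```

Batching writes this way is exactly why relaxed models need fewer coherence messages than a sequentially consistent store, at the price of a stricter programming discipline.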
6

Moreaud, Stéphanie. "Mouvement de données et placement des tâches pour les communications haute performance sur machines hiérarchiques." Phd thesis, Université Sciences et Technologies - Bordeaux I, 2011. http://tel.archives-ouvertes.fr/tel-00635651.

Abstract:
The architectures of computing systems are increasingly complex and hierarchical, with multicore processors, distributed memory banks, and multiple I/O buses. In high-performance computing, the efficiency of parallel applications depends on the communication cost between the participating tasks, which is affected by the organization of the resources, in particular by NUMA and cache effects. This thesis studies and optimizes high-performance communication on modern hierarchical architectures. It first evaluates the impact of the hardware topology on the performance of data movement, both within a machine and across fast networks, for different transfer strategies, hardware types, and platforms. To improve performance and portability, it then proposes taking the affinities between communication and hardware into account inside communication libraries, either by adapting task placement to the transfer patterns and machine topology, or conversely by adapting data-movement strategies to a given task mapping. This work, integrated into the main MPI libraries, significantly reduces communication costs and thereby improves application performance. The results demonstrate the need to take the hardware characteristics of modern machines into account in order to exploit them fully.
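The placement problem the thesis tackles can be reduced to a simple cost model: the traffic each pair of tasks exchanges, weighted by the distance between the cores they run on. A sketch with an invented two-node topology (the distances and traffic volumes are illustrative, not measured):

```python
def comm_cost(placement, traffic, distance):
    """Total cost of a placement: bytes exchanged by each task pair,
    weighted by the distance between the cores they are placed on."""
    return sum(vol * distance[placement[a]][placement[b]]
               for (a, b), vol in traffic.items())

# Hypothetical machine: cores 0,1 share a NUMA node (distance 1),
# cores 2,3 share another; crossing nodes costs 10.
distance = [[0, 1, 10, 10],
            [1, 0, 10, 10],
            [10, 10, 0, 1],
            [10, 10, 1, 0]]

traffic = {("A", "B"): 100, ("C", "D"): 100, ("A", "C"): 1}

collocated = {"A": 0, "B": 1, "C": 2, "D": 3}   # heavy pairs share a node
scattered  = {"A": 0, "B": 2, "C": 1, "D": 3}   # heavy pairs cross nodes

print(comm_cost(collocated, traffic, distance))  # prints 210
print(comm_cost(scattered, traffic, distance))   # prints 2001
```

Collocating the heavily communicating pairs cuts the modeled cost by nearly an order of magnitude, which is the intuition behind topology-aware task placement.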
7

Wen, Yuzhong. "Replication of Concurrent Applications in a Shared Memory Multikernel." Thesis, Virginia Tech, 2016. http://hdl.handle.net/10919/71813.

Abstract:
State Machine Replication (SMR) has become the de facto methodology for building replication-based fault-tolerant systems. Current SMR systems usually involve multiple machines, each acting as a replica of the others. However, using multiple machines increases infrastructure cost, in both hardware and power consumption. For tolerating non-critical CPU and memory failures that will not crash the entire machine, there is no need for extra machines, so intra-machine replication is a good fit for this scenario. However, current intra-machine replication approaches do not provide strong isolation among the replicas, which allows faults to propagate from one replica to another. In order to provide an intra-machine replication technique with strong isolation, this thesis presents an SMR system on a multi-kernel OS. We implemented a replication system that is capable of replicating concurrent applications on different kernel instances of a multi-kernel OS. Modern concurrent applications can be deployed on our system with minimal code modification. Additionally, our system provides two different replication modes that allow the user to switch freely according to the application type. Evaluating multiple real-world applications, we show that they can be deployed on our system with 0 to 60 lines of changes to the source code. From the performance perspective, our system introduces only 0.23% to 63.39% overhead compared to non-replicated execution.
Master of Science
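The core idea of State Machine Replication is that replicas applying the same deterministic operations in the same order end up in identical states. A minimal sketch, with classes and operations invented for illustration:

```python
class Replica:
    """A replica applies deterministic operations to its private state."""
    def __init__(self):
        self.state = {}

    def apply(self, op):
        kind, key, arg = op
        if kind == "set":
            self.state[key] = arg
        elif kind == "add":
            self.state[key] = self.state.get(key, 0) + arg

def replicate(log, replicas):
    """Deliver the agreed-upon operation log, in order, to every replica."""
    for op in log:
        for r in replicas:
            r.apply(op)

log = [("set", "x", 1), ("add", "x", 4), ("set", "y", 2)]
a, b = Replica(), Replica()
replicate(log, [a, b])
print(a.state == b.state, a.state)   # prints True {'x': 5, 'y': 2}
```

If one replica suffers a non-crashing fault, the other still holds the correct state; the hard parts the thesis addresses are isolating the replicas from each other and keeping concurrent applications deterministic.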
8

Lam, King-tin (林擎天). "Efficient shared object space support for distributed Java virtual machine." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2012. http://hub.hku.hk/bib/B47752877.

Abstract:
Given the popularity of Java, extending the standard Java virtual machine (JVM) to become cluster-aware effectively brings the vision of transparent horizontal scaling of applications to fruition. With a set of cluster-wide JVMs orchestrated as a virtually single system, thread-level parallelism in Java is no longer confined to one multiprocessor. An unmodified multithreaded Java application running on such a Distributed JVM (DJVM) can scale out transparently, tapping into the vast computing power of the cluster. While this notion creates an easy-to-use and powerful parallel programming paradigm, research on DJVMs has remained largely at the proof-of-concept stage where successes were proven using trivial scientific computing workloads only. Real-life Java applications with commercial server workloads have not been well-studied on DJVMs. Their natures including complex and sometimes huge object graphs, irregular access patterns and frequent synchronizations are key scalability hurdles. To design a scalable DJVM for real-life applications, we identify three major unsolved issues calling for a top-to-bottom overhaul of traditional systems. First, we need a more time- and space-efficient cache coherence protocol to support fine-grained object sharing over the distributed shared heap. The recent prevalence of concurrent data structures with heavy use of volatile fields has added complications to the matter. Second, previous generations of DJVMs lack true support for memory-intensive applications. While the network-wide aggregated physical memory can be huge, mutual sharing of huge object graphs like Java collections may cause nodes to eventually run out of local heap space because the cached copies of remote objects, linked by active references, can’t be arbitrarily discarded. Third, thread affinity, which determines the overall communication cost, is vital to the DJVM performance. 
Data access locality can be improved by collocating highly-correlated threads, via dynamic thread migration. Tracking inter-thread correlations trades profiling costs for reduced object misses. Unfortunately, profiling techniques like active correlation tracking used in page-based DSMs would entail prohibitively high overheads and low accuracy when ported to fine-grained object-based DJVMs. This dissertation presents technical contributions towards all these problems. We use a dual-protocol approach to address the first problem. Synchronized (lock-based) and volatile accesses are handled by a home-based lazy release consistency (HLRC) protocol and a sequential consistency (SC) protocol respectively. The two protocols’ metadata are maintained in a conflict-free, memory-efficient manner. With further techniques like hierarchical passing of lock ownerships, the overall communication overheads of fine-grained distributed object sharing are pruned to a minimal level. For the second problem, we develop a novel uncaching mechanism to safely break a huge active object graph. When a JVM instance runs low on free memory, it initiates an uncaching policy, which eagerly assigns nulls to selected reference fields, thus detaching some older or less useful cached objects from the root set for reclamation. Careful orchestration is made between uncaching, local garbage collection and the coherence protocol to avoid possible data races. Lastly, we devise lightweight sampling-based profiling methods to derive inter-thread correlations, and a profile-guided thread migration policy to boost the system performance. Extensive experiments have demonstrated the effectiveness of all our solutions.
Doctor of Philosophy
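The uncaching mechanism described above eagerly assigns null to selected reference fields so that cached objects detach from the root set and become collectable. The Python analogue below nulls a reference field and observes reclamation through a weak reference; it is a toy model of the idea, not the DJVM mechanism itself:

```python
import gc
import weakref

class CachedObject:
    """Stand-in for a remotely cached object kept alive by a reference field."""
    def __init__(self, name, next_obj=None):
        self.name = name
        self.next = next_obj   # the reference field that uncaching may null out

tail = CachedObject("tail")
head = CachedObject("head", tail)
probe = weakref.ref(tail)      # observe whether the tail is collectable
del tail

print(probe() is not None)     # prints True: still reachable through head.next

head.next = None               # "uncaching": eagerly null the reference field
gc.collect()                   # the detached object is now garbage
print(probe() is None)         # prints True: the cached copy was reclaimed
```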
9

Fross, Bradley K. "Splash-2 shared-memory architecture for supporting high level language compilers." Thesis, Virginia Tech, 1995. http://hdl.handle.net/10919/42064.

Abstract:
Modern computer technology has been evolving for nearly fifty years, and has seen many architectural innovations along the way. One of the latest technologies to come about is the reconfigurable processor-based custom computing machine (CCM). CCMs use field programmable gate arrays (FPGAs) as their processing cores, giving them the flexibility of software systems with performance comparable to that of dedicated custom hardware. Hardware description languages are currently used to program CCMs. However, research is being performed to investigate the use of high-level languages (HLLs), such as the C programming language, to create CCM programs. Many aspects of CCM architectures, such as local memory systems, are not conducive to HLL compiler usage. This thesis proposes and evaluates the use of a shared-memory architecture on a Splash-2 CCM to promote the development and usage of HLL compilers for CCM systems.


Master of Science
10

Lee, Dong Ryeol. "A distributed kernel summation framework for machine learning and scientific applications." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44727.

Abstract:
The class of computational problems I consider in this thesis share the common trait of requiring consideration of pairs (or higher-order tuples) of data points. I focus on the problem of kernel summation operations ubiquitous in many data mining and scientific algorithms. In machine learning, kernel summations appear in popular kernel methods which can model nonlinear structures in data. Kernel methods include many non-parametric methods such as kernel density estimation, kernel regression, Gaussian process regression, kernel PCA, and kernel support vector machines (SVM). In computational physics, kernel summations occur inside the classical N-body problem for simulating positions of a set of celestial bodies or atoms. This thesis attempts to marry, for the first time, the best relevant techniques in parallel computing, where kernel summations are in low dimensions, with the best general-dimension algorithms from the machine learning literature. We provide a unified, efficient parallel kernel summation framework that can utilize: (1) various types of deterministic and probabilistic approximations that may be suitable for both low and high-dimensional problems with a large number of data points; (2) indexing the data using any multi-dimensional binary tree with both distributed memory (MPI) and shared memory (OpenMP/Intel TBB) parallelism; (3) a dynamic load balancing scheme to adjust work imbalances during the computation. I first summarize my previous research in serial kernel summation algorithms. This work started from Greengard and Rokhlin's earlier work on fast multipole methods for the purpose of approximating potential sums of many particles. The contributions of this part of the thesis include the following: (1) reinterpretation of Greengard and Rokhlin's work for the computer science community; (2) the extension of the algorithms to use a larger class of approximation strategies, i.e. probabilistic error bounds via Monte Carlo techniques; (3) the multibody series expansion: the generalization of the theory of fast multipole methods to handle interactions of more than two entities; (4) the first O(N) proof of the batch approximate kernel summation using a notion of intrinsic dimensionality. I then move on to the problem of parallelizing the kernel summations and tackling the scaling of two other kernel methods, Gaussian process regression (kernel matrix inversion) and kernel PCA (kernel matrix eigendecomposition). The artifact of this thesis has contributed to an open-source machine learning package called MLPACK, which was first demonstrated at NIPS 2008 and subsequently at the NIPS 2011 Big Learning Workshop. Completing a portion of this thesis involved utilization of high performance computing resources at XSEDE (eXtreme Science and Engineering Discovery Environment) and NERSC (National Energy Research Scientific Computing Center).
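The kernel summation operation the thesis accelerates is, in its naive form, a quadratic double loop: every query point accumulates a kernel value contributed by every reference point. A minimal Gaussian-kernel sketch (the function name and parameters are illustrative; the thesis is about approximating and parallelizing exactly this computation):

```python
from math import exp

def kernel_sums(references, queries, bandwidth=1.0):
    """Naive O(N*M) Gaussian kernel summation: for every query point,
    sum the kernel values contributed by every reference point."""
    h2 = 2.0 * bandwidth * bandwidth
    return [sum(exp(-(q - r) ** 2 / h2) for r in references)
            for q in queries]

# A query coincident with the single reference contributes exp(0) = 1.
print(kernel_sums([0.0], [0.0]))   # prints [1.0]
```

For N reference and M query points this costs N*M kernel evaluations, which is why tree-based approximations and distributed/shared-memory parallelism, as in the thesis, are needed at scale.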

Books on the topic "Shared-Memory Machines"

1

Liu, Yue-Sheng. Simulation and analysis of different switch architectures for interconnection networks in MIMD shared memory machines. New York: Courant Institute of Mathematical Sciences, New York University, 1988.

2

Greenbaum, Anne. Parallelizing the adaptive fast multipole method on a shared memory MIMD machine. New York: Courant Institute of Mathematical Sciences, New York University, 1989.

3

Huster, Carl R. A parallel/vector Monte Carlo MESFET model for shared memory machines. 1992.

4

Liu, Yue-Sheng, and Susan Dickey. Simulation and Analysis of Different Switch Architectures for Interconnection Networks in MIMD Shared Memory Machines. Creative Media Partners, LLC, 2018.

5

Petersen, Wesley, and Peter Arbenz. Introduction to Parallel Computing. Oxford University Press, 2004. http://dx.doi.org/10.1093/oso/9780198515760.001.0001.

Abstract:
In the last few years, courses on parallel computation have been developed and offered in many institutions in the UK, Europe and US as a recognition of the growing significance of this topic in mathematics and computer science. There is a clear need for texts that meet the needs of students and lecturers, and this book, based on the authors' lectures at ETH Zurich, is an ideal practical student guide to scientific computing on parallel computers, working up from the hardware instruction level to shared memory machines and finally to distributed memory machines. Aimed at advanced undergraduate and graduate students in applied mathematics, computer science and engineering, subjects covered include linear algebra, fast Fourier transform, and Monte-Carlo simulations, including examples in C and, in some cases, Fortran. This book is also ideal for practitioners and programmers.
6

Allen, Michael P., and Dominic J. Tildesley. Parallel simulation. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198803195.003.0007.

Abstract:
Parallelization is essential for the effective use of modern high-performance computing facilities. This chapter summarizes some of the basic approaches that are commonly used in molecular simulation programs. The underlying shared-memory and distributed-memory architectures are explained. The concept of program threads and their use in parallelizing nested loops on a shared memory machine is described. Parallel tempering using message passing on a distributed memory machine is discussed and illustrated with an example code. Domain decomposition, and the implementation of constraints on parallel computers, are also explained.
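The shared-memory loop parallelization the chapter describes, private per-thread work followed by one synchronized update of a shared result, can be sketched with Python's threading module (a sketch of the pattern, not the book's code, which works with OpenMP-style threaded loops):

```python
import threading

def parallel_sum(data, nthreads=4):
    """Split a loop across threads on shared memory: each thread reduces
    its own chunk privately, then a lock guards the shared accumulator."""
    if not data:
        return 0
    total = 0
    lock = threading.Lock()

    def worker(chunk):
        nonlocal total
        partial = sum(chunk)   # private accumulation, no synchronization
        with lock:             # one synchronized update per thread
            total += partial

    chunk_size = (len(data) + nthreads - 1) // nthreads
    threads = [threading.Thread(target=worker, args=(data[i:i + chunk_size],))
               for i in range(0, len(data), chunk_size)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return total

print(parallel_sum(list(range(1000))))   # prints 499500
```

Accumulating privately and synchronizing once per thread, rather than locking on every iteration, is the same reduction idiom the chapter attributes to threaded nested loops on shared memory.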

Book chapters on the topic "Shared-Memory Machines"

1

Zaki, Mohammed J. "Parallel Sequence Mining on Shared-Memory Machines." In Large-Scale Parallel Data Mining, 161–89. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/3-540-46502-2_8.

2

Nikhil, Rishiyur S. "Cid: A parallel, “shared-memory” C for distributed-memory machines." In Languages and Compilers for Parallel Computing, 376–90. Berlin, Heidelberg: Springer Berlin Heidelberg, 1995. http://dx.doi.org/10.1007/bfb0025891.

3

Meyer auf der Heide, Friedhelm. "Hashing strategies for simulating shared memory on distributed memory machines." In Lecture Notes in Computer Science, 20–29. Berlin, Heidelberg: Springer Berlin Heidelberg, 1993. http://dx.doi.org/10.1007/3-540-56731-3_3.

4

Zlatev, Zahari. "Running Models on Parallel Machines with Shared Memory." In Computer Treatment of Large Air Pollution Models, 225–49. Dordrecht: Springer Netherlands, 1995. http://dx.doi.org/10.1007/978-94-011-0311-4_8.

5

Grün, Thomas, and Mark A. Hillebrand. "NAS Integer sort on multi-threaded shared memory machines." In Euro-Par’98 Parallel Processing, 999–1009. Berlin, Heidelberg: Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/bfb0057960.

6

Deo, Narsingh. "Data Structures for Parallel Computation on Shared-Memory Machines." In Supercomputing, 341–55. Berlin, Heidelberg: Springer Berlin Heidelberg, 1990. http://dx.doi.org/10.1007/978-3-642-75771-6_23.

7

Andreev, Alexander E., Andrea E. F. Clementi, Paolo Penna, and José D. P. Rolim. "Memory Organization Schemes for Large Shared Data: A Randomized Solution for Distributed Memory Machines." In STACS 99, 68–77. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/3-540-49116-3_6.

8

Hoffmann, Ralf, and Thomas Rauber. "Profiling of Task-Based Applications on Shared Memory Machines: Scalability and Bottlenecks." In Euro-Par 2007 Parallel Processing, 118–28. Berlin, Heidelberg: Springer Berlin Heidelberg, 2007. http://dx.doi.org/10.1007/978-3-540-74466-5_14.

9

Benkner, Siegfried, and Thomas Brandes. "Exploiting Data Locality on Scalable Shared Memory Machines with Data Parallel Programs." In Euro-Par 2000 Parallel Processing, 647–57. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/3-540-44520-x_90.

10

González-Escribano, Arturo, Arjan J. C. van Gemund, Valentín Cardeñoso-Payo, Judith Alonso-López, David Martín-García, and Alberto Pedrosa-Calvo. "Measuring the Performance Impact of SP-Restricted Programming in Shared-Memory Machines." In Vector and Parallel Processing — VECPAR 2000, 128–41. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44942-6_10.


Conference papers on the topic "Shared-Memory Machines"

1

Barton, Christopher, Călin Caşcaval, George Almási, Yili Zheng, Montse Farreras, Siddhartha Chatterjee, and José Nelson Amaral. "Shared memory programming for large scale machines." In Proceedings of the 2006 ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI). New York, New York, USA: ACM Press, 2006. http://dx.doi.org/10.1145/1133981.1133995.

2

Garcia, Philip, and Katherine Compton. "Shared Memory Cache Organizations for Reconfigurable Computing Systems." In 2009 17th IEEE Symposium on Field Programmable Custom Computing Machines. IEEE, 2009. http://dx.doi.org/10.1109/fccm.2009.28.

3

Mitra, Reshmi, Bharat S. Joshi, Arun Ravindran, Arindam Mukherjee, and Ryan Adams. "Performance Modeling of Shared Memory Multiple Issue Multicore Machines." In 2012 41st International Conference on Parallel Processing Workshops (ICPPW). IEEE, 2012. http://dx.doi.org/10.1109/icppw.2012.64.

4

Zhang, Yan, Fan Zhang, and Jason Bakos. "Frequent Itemset Mining on Large-Scale Shared Memory Machines." In 2011 IEEE International Conference on Cluster Computing (CLUSTER). IEEE, 2011. http://dx.doi.org/10.1109/cluster.2011.69.

5

Huang, Wei, Matthew J. Koop, and Dhabaleswar K. Panda. "Efficient one-copy MPI shared memory communication in Virtual Machines." In 2008 IEEE International Conference on Cluster Computing (CLUSTER). IEEE, 2008. http://dx.doi.org/10.1109/clustr.2008.4663761.

6

Cierniak, Michał, and Wei Li. "Unifying data and control transformations for distributed shared-memory machines." In Proceedings of the ACM SIGPLAN 1995 Conference on Programming Language Design and Implementation (PLDI). New York, New York, USA: ACM Press, 1995. http://dx.doi.org/10.1145/207110.207145.

7

Mahmoudi, Ramzi, and Mohamed Akil. "Real-time topological image smoothing on shared memory parallel machines." In IS&T/SPIE Electronic Imaging, edited by Nasser Kehtarnavaz and Matthias F. Carlsohn. SPIE, 2011. http://dx.doi.org/10.1117/12.872275.

8

Heirman, W. "Wavelength tunable reconfigurable optical interconnection network for shared-memory machines." In 31st European Conference on Optical Communications (ECOC 2005). IEE, 2005. http://dx.doi.org/10.1049/cp:20050592.

9

Reis, Verônica L. M., Luis Miguel Campos, and Isaac D. Scherson. "Parallel Virtual Memory for Time Shared Environments." In International Symposium on Computer Architecture and High Performance Computing. Sociedade Brasileira de Computação, 1996. http://dx.doi.org/10.5753/sbac-pad.1996.19830.

Abstract:
This paper analyses the issues involved in providing virtual distributed shared memory for time-shared parallel machines. We study the performance of two different page management policies, namely, static and dynamic page allocation under two widely accepted scheduling policies: Gang scheduling and independent processor scheduling. The performance of each page management policy is studied under different replacement scopes (local versus global replacement). Results obtained after extensive simulations show that dynamic page allocation performs better throughout all the environments simulated. We also observe a better performance of independent processor over gang scheduling as well as a similar performance between local and global replacement scope.
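The replacement-scope comparison in the abstract can be mimicked with a toy LRU simulation: under a local scope each process is confined to its own fixed frames, while a global scope lets a process with a large working set borrow frames from a lighter one. The workloads and frame counts below are invented for illustration:

```python
from collections import OrderedDict

def lru_faults(refs, frames):
    """Count page faults for a reference string under LRU replacement."""
    resident = OrderedDict()
    faults = 0
    for page in refs:
        if page in resident:
            resident.move_to_end(page)        # refresh recency on a hit
        else:
            faults += 1
            if len(resident) == frames:
                resident.popitem(last=False)  # evict least recently used
            resident[page] = True
    return faults

# Two processes time-share 4 frames. Local scope gives each a fixed half;
# global scope lets them share all 4 frames.
p1 = ["a", "b", "c"] * 3    # working set of 3 pages: thrashes in 2 frames
p2 = ["x"] * 9              # working set of 1 page

local_faults = lru_faults(p1, 2) + lru_faults(p2, 2)
interleaved = [page for pair in zip(p1, p2) for page in pair]
global_faults = lru_faults(interleaved, 4)

print(local_faults, global_faults)   # prints 10 4
```

In this contrived case the global scope wins because the heavy process borrows the frames the light one never uses; the paper measures how such policy choices interact with gang and independent-processor scheduling.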
10

Bataineh, Ozguner, and Szauter. "Parallel logic and fault simulation algorithms for shared memory vector machines." In IEEE/ACM International Conference on Computer-Aided Design. IEEE Comput. Soc. Press, 1992. http://dx.doi.org/10.1109/iccad.1992.279345.

