Dissertations on the topic "Shared-Memory Machines"
Format your citation in APA, MLA, Chicago, Harvard, and other styles
Consult the top 15 dissertations for your research on the topic "Shared-Memory Machines".
Next to every entry in the bibliography you will find an "Add to bibliography" button. Press it, and we will automatically compose the bibliographic reference to the chosen work in the citation style of your choice: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, whenever these details are available in the metadata.
Browse dissertations from a wide variety of disciplines and compile your bibliography correctly.
Roberts, Harriet. "Preconditioned iterative methods on virtual shared memory machines." Thesis, This resource online, 1994. http://scholar.lib.vt.edu/theses/available/etd-07292009-090522/.
Younge, Andrew J., Christopher Reidy, Robert Henschel, and Geoffrey C. Fox. "Evaluation of SMP Shared Memory Machines for Use with In-Memory and OpenMP Big Data Applications." IEEE, 2016. http://hdl.handle.net/10150/622702.
Hines, Michael R. "Techniques for collective physical memory ubiquity within networked clusters of virtual machines." Diss., Online access via UMI:, 2009.
Huang, Wei. "High Performance Network I/O in Virtual Machines over Modern Interconnects." The Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=osu1218602792.
Melo, Alba Cristina M. A. "Conception d'un système supportant des modèles de cohérence multiples pour les machines parallèles à mémoire virtuelle partagée." Grenoble INPG, 1996. http://www.theses.fr/1996INPG0108.
Moreaud, Stéphanie. "Mouvement de données et placement des tâches pour les communications haute performance sur machines hiérarchiques." Phd thesis, Université Sciences et Technologies - Bordeaux I, 2011. http://tel.archives-ouvertes.fr/tel-00635651.
Wen, Yuzhong. "Replication of Concurrent Applications in a Shared Memory Multikernel." Thesis, Virginia Tech, 2016. http://hdl.handle.net/10919/71813.
Master of Science
Lam, King-tin, and 林擎天. "Efficient shared object space support for distributed Java virtual machine." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2012. http://hub.hku.hk/bib/B47752877.
Computer Science
Doctoral
Doctor of Philosophy
Fross, Bradley K. "Splash-2 shared-memory architecture for supporting high level language compilers." Thesis, Virginia Tech, 1995. http://hdl.handle.net/10919/42064.
Modern computer technology has been evolving for nearly fifty years and has seen many architectural innovations along the way. One of the latest technologies is the reconfigurable processor-based custom computing machine (CCM). CCMs use field-programmable gate arrays (FPGAs) as their processing cores, giving them the flexibility of software systems with performance comparable to that of dedicated custom hardware. Hardware description languages are currently used to program CCMs. However, research is under way to investigate the use of high-level languages (HLLs), such as the C programming language, to create CCM programs. Many aspects of CCM architectures, such as local memory systems, are not conducive to HLL compiler usage. This thesis proposes and evaluates the use of a shared-memory architecture on a Splash-2 CCM to promote the development and usage of HLL compilers for CCM systems.
Master of Science
Lee, Dong Ryeol. "A distributed kernel summation framework for machine learning and scientific applications." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44727.
Huster, Carl R. "A parallel/vector Monte Carlo MESFET model for shared memory machines." Thesis, 1992. http://hdl.handle.net/1957/37306.
Graduation date: 1993
Hsu, Po-Hsueh, and 許博學. "Run-Time Parallelization Techniques for Irregular Scientific Computations on Shared-Memory Machines." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/30203154122182106494.
National Sun Yat-sen University
Department of Electrical Engineering
87
High performance computing power is crucial for the advanced calculations of scientific applications. A multiprocessor system derives its high performance from the fact that some computations can proceed in parallel. A parallelizing compiler can take a sequential program as input and automatically translate it into parallel form for the target multiprocessor system. The compiler checks for data dependences in the program to determine parallel executable loops. But for loops with arrays of irregular (i.e., indirectly indexed), nonlinear or dynamic access patterns, no state-of-the-art compilers can determine whether data dependences exist. Either the necessary information is not statically available or access patterns are too complex to analyze. Unfortunately, many scientific and engineering programs that perform complex simulations usually contain such loops. This phenomenon greatly limits the applicability of parallelizing compilers. Since all information is available during program execution, we can resort to the run-time data dependence analysis to overcome the above problem. Two kinds of approaches have been developed in run-time parallelization: the speculative doall parallelization and the run-time doacross parallelization. The former assumes full parallelism and executes the loop speculatively, then examines the correctness of parallel execution after loop termination. If the speculation succeeds, a significant speedup is obtained. Otherwise, those altered variables should be restored and the original loop is re-executed in sequence. The latter, known as the inspector/executor method, constructs a proper execution schedule to enforce parallelism in doacross loops. The inspector examines cross-iteration dependences and produces a parallel execution schedule at run-time. The executor then performs the actual operations of the loop based on the schedule arranged by the inspector. 
In this thesis, we present three practical run-time techniques to fully exploit loop-level parallelism. The first is speculative parallelization with new technology (SPNT). Two main characteristics distinguish the SPNT test. One is improving the success rate of speculative parallelization by eliminating all cross-iteration data dependences except cross-processor flow dependences. The other is reducing the failure penalty by aborting the speculative parallel execution immediately once a cross-processor flow dependence is detected during execution. The second is parallel group analysis (PGA), which schedules maximal sets of contiguous iterations with no cross-iteration flow dependence into parallel groups. The PGA technique has the advantages of a simple but efficient algorithm and a small memory footprint. The third is the optimal parallel scheduler (OPS), which can obtain an optimal schedule in parallel. In addition, we make use of an atomic bitwise-OR instruction to remove the overhead of global synchronization and obtain satisfactory speedup. Both PGA and OPS serve in run-time doacross parallelization. Applying run-time parallelization to irregular scientific computations can uncover much parallelism not found before. This meets the requirements of high-performance computing for scientific applications. We believe it is worthwhile to invest more resources in the development of run-time parallelization techniques.
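The inspector/executor method summarized in this abstract can be illustrated with a small sketch. The following is a minimal, hypothetical Python model (not code from the thesis; the names inspector and executor are our own): the inspector scans the index arrays of a loop of the form a[writes[i]] = f(a[reads[i]]) and assigns each iteration to a "wavefront" so that no dependence crosses a wavefront boundary; the executor then runs the wavefronts in order, and the iterations inside one wavefront could safely run in parallel.

```python
# Minimal inspector/executor sketch (illustrative, not the thesis's code).
# Models a loop "for i in range(n): a[writes[i]] = f(a[reads[i]])" whose
# index arrays are only known at run time.

def inspector(writes, reads):
    """Assign every iteration to a wavefront so that flow, anti and output
    dependences never occur within a single wavefront."""
    last_write = {}   # element -> latest wavefront that wrote it
    last_access = {}  # element -> latest wavefront that read or wrote it
    waves = []
    for w_i, r_i in zip(writes, reads):
        wave = max(last_write.get(r_i, -1) + 1,    # flow dep. on the read
                   last_access.get(w_i, -1) + 1)   # anti/output dep. on write
        waves.append(wave)
        last_access[r_i] = max(last_access.get(r_i, -1), wave)
        last_write[w_i] = wave
        last_access[w_i] = max(last_access.get(w_i, -1), wave)
    return waves

def executor(a, writes, reads, waves, f):
    """Run the loop wavefront by wavefront; within one wavefront the
    iterations are independent and could execute in parallel."""
    for cur in range((max(waves) + 1) if waves else 0):
        for i, wave in enumerate(waves):
            if wave == cur:
                a[writes[i]] = f(a[reads[i]])
    return a
```

For example, with writes = [0, 1, 2, 0] and reads = [3, 0, 1, 2] every iteration depends on its predecessor, so the inspector serializes the loop into wavefronts [0, 1, 2, 3], whereas fully independent index arrays all land in wavefront 0.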
Antony, Joseph. "Performance Models for Electronic Structure Methods on Modern Computer Architectures." Phd thesis, 2009. http://hdl.handle.net/1885/49420.
Yih, Wang Chung, and 王崇懿. "Design of A Parallel Virtual Machine with Support of Distributed* Shared Memory." Thesis, 1995. http://ndltd.ncl.edu.tw/handle/60621672842615961380.
National Tsing Hua University
Department of Computer Science
83
Distributed computing environments (DCEs) are becoming more and more important as computing technology progresses. There is a trend toward speeding up computation tasks with DCE tools that cooperatively merge networked workstations into a powerful parallel virtual machine. When developing parallel programs on such a parallel virtual machine, it is unfriendly for programmers to maintain data consistency with the elementary communication primitives provided by most DCE tools. To simplify the programming style, the concept of distributed shared memory (DSM) is introduced in place of message passing. In this thesis, we discuss the evolution of distributed shared memory systems and implement the DSM mechanism on the DOS-PVM platform. DOS-PVM, which integrates distributed shared memory and a virtual memory function on a cluster of PCs, is a new DCE tool of our design. The goal of our DOS-PVM environment is to make a cluster of PCs work like a bus-based multiprocessor system.
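The bus-based multiprocessor behaviour that this abstract aims for can be sketched with a toy write-invalidate protocol. The following is a hypothetical Python model (the class ToyDSM and its methods are inventions for illustration, not DOS-PVM's actual API): each node caches a page on its first read, and a write invalidates every other node's cached copy, so later reads fetch the fresh value from the home copy.

```python
class ToyDSM:
    """Toy page-based DSM with a write-invalidate consistency protocol
    (an illustrative sketch, not the DOS-PVM implementation)."""

    def __init__(self, num_nodes, num_pages):
        self.home = [0] * num_pages                   # home copy of each page
        self.cache = [{} for _ in range(num_nodes)]   # per-node page caches

    def read(self, node, page):
        if page not in self.cache[node]:              # cache miss:
            self.cache[node][page] = self.home[page]  # fetch from home copy
        return self.cache[node][page]

    def write(self, node, page, value):
        for other, cached in enumerate(self.cache):   # invalidate every
            if other != node:                         # remote cached copy
                cached.pop(page, None)
        self.home[page] = value                       # write through to home
        self.cache[node][page] = value
```

A write by node 0 is then visible to a later read by node 1, just as a store on one processor of a bus-based shared-memory machine is seen by loads on another.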
Wang, Ying-Lung, and 王應龍. "Design and Implementation of a Parallel Virtual Machine with Distributed Shared Memory by Using Java." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/62491381815597181663.
National Tsing Hua University
Department of Computer Science
88
In this thesis, we design and implement a system with a Distributed Shared Memory (DSM) mechanism for distributed computing environments by using Java. Processor speed has its limits, and many complex, time-consuming computation tasks, such as astronomical, chemical, or meteorological data processing, need distributed systems to increase their computing power. A distributed system may consist of many different types of machines, and because of these differences their instruction sets may differ as well. When developing parallel programs over such a heterogeneous distributed system, programmers must be careful and may spend considerable effort writing several versions of their programs for different platforms; porting programs is not an easy job. Hence, the portability and cross-platform nature of Java make it a good solution for simplifying heterogeneous programming. For distributed computing, Java provides the Remote Method Invocation (RMI) mechanism, which lets a program invoke methods of a remote object as if it were a local object. However, programmers still need to handle data replication, and maintaining data consistency slows the development of parallel programs. A DSM mechanism provides a shared-memory abstraction over a cluster of physically distributed machines: communication between programs running on different machines is achieved through ordinary memory accesses, and the complicated procedures of network communication and data consistency are hidden by the DSM. Therefore, the goal of this thesis is to combine the advantages of Java and DSM to provide a more powerful distributed computing environment.
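The shared-object abstraction over physically distributed processes that this abstract describes is similar in spirit to Python's standard multiprocessing.Manager: workers update a proxy object with ordinary assignments while the inter-process communication stays hidden. A minimal sketch of that idea (an analogy only, not the thesis's Java/RMI system; the function names are ours):

```python
from multiprocessing import Process, Manager

def worker(shared, i):
    # An ordinary assignment on the shared dict; the Manager proxy
    # forwards it to the server process behind the scenes.
    shared[i] = i * i

def square_in_parallel(n):
    """Spawn n processes that each write one entry of a shared dict."""
    with Manager() as mgr:
        shared = mgr.dict()
        procs = [Process(target=worker, args=(shared, i)) for i in range(n)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        return dict(shared)   # snapshot before the manager shuts down
```

Each worker communicates only through what looks like a local dictionary access, which is exactly the programming style a DSM system aims to offer over message passing.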