Academic literature on the topic 'Parallel computers'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Parallel computers.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication as a PDF and read its abstract online whenever these are available in the record's metadata.

Journal articles on the topic "Parallel computers"

1. Yamaguchi, Yoshinori. "Parallel Computers and Parallel Computation." Journal of the Institute of Electrical Engineers of Japan 118, no. 9 (1998): 526–29. http://dx.doi.org/10.1541/ieejjournal.118.526.

2. Zhao, Yongwei, Yunji Chen, and Zhiwei Xu. "Fractal Parallel Computing." Intelligent Computing 2022 (September 5, 2022): 1–10. http://dx.doi.org/10.34133/2022/9797623.

Abstract:
As machine learning (ML) becomes the prominent technology for many emerging problems, dedicated ML computers are being developed at a variety of scales, from clouds to edge devices. However, the heterogeneous, parallel, and multilayer characteristics of conventional ML computers concentrate the cost of development on the software stack, namely, ML frameworks, compute libraries, and compilers, which limits the productivity of new ML computers. Fractal von Neumann architecture (FvNA) is proposed to address the programming productivity issue for ML computers. FvNA is scale-invariant to program, thus making the development of a family of scaled ML computers as easy as a single node. In this study, we generalize FvNA to the field of general-purpose parallel computing. We model FvNA as an abstract parallel computer, referred to as the fractal parallel machine (FPM), to demonstrate several representative general-purpose tasks that are efficiently programmable. FPM limits the entropy of programming by applying constraints on the control pattern of the parallel computing systems. However, FPM is still general-purpose and cost-optimal. We settle some preliminary results showing that FPM is as powerful as many fundamental parallel computing models such as BSP and alternating Turing machine. Therefore, FvNA is also generally applicable to various fields other than ML.
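
For context on the comparison drawn in this abstract: the standard cost model of BSP (Valiant's bulk-synchronous parallel model, one of the models FPM is shown equivalent to; the formula below is the textbook model, not taken from this paper) charges each superstep as follows.

```latex
% Cost of one BSP superstep with at most w local operations per processor
% and an h-relation of communication (standard model; not from the paper):
%   g = per-word communication cost, l = barrier synchronisation cost
T_{\text{superstep}} = w + g \cdot h + l,
\qquad
T_{\text{program}} = \sum_{s=1}^{S} \left( w_s + g \cdot h_s + l \right)
```
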
3. Moriarty, K. J. M., and T. Trappenberg. "Programming Tools for Parallel Computers." International Journal of Modern Physics C 04, no. 06 (December 1993): 1285–94. http://dx.doi.org/10.1142/s0129183193001002.

Abstract:
Although software tools already have a place on serial and vector computers they are becoming increasingly important for parallel computing. Message passing libraries, parallel operating systems and high level parallel languages are the basic software tools necessary to implement a parallel processing program. These tools up to now have been specific to each parallel computer system and a short survey will be given. The aim of another class of software tools for parallel computers is to help in writing or rewriting application programs. Because automatic parallelization tools are not very successful, an interactive component has to be incorporated. We will concentrate here on the discussion of SPEFY, a parallel program development facility.
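
As a minimal illustration of the message-passing style of programming this abstract surveys, here is a sketch using MPI, a standard that emerged around the time of this article. It stands in for the system-specific libraries the authors discuss; SPEFY itself is not shown.

```c
/* Minimal sketch of message-passing programming with MPI (assumed here as a
 * representative library; the article surveys earlier, system-specific tools). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

    if (size >= 2) {
        if (rank == 0) {
            /* Process 0 sends a token to process 1. */
            int token = 42;
            MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            int token;
            MPI_Recv(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 of %d received %d\n", size, token);
        }
    }

    MPI_Finalize();
    return 0;
}
```

With an MPI implementation installed, this compiles with mpicc and runs under mpirun -np 2.
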
4. Shafarenko, A. "Software for Parallel Computers." Computing & Control Engineering Journal 3, no. 4 (1992): 194. http://dx.doi.org/10.1049/cce:19920049.

5. Lea, R. M., and I. P. Jalowiecki. "Associative massively parallel computers." Proceedings of the IEEE 79, no. 4 (April 1991): 469–79. http://dx.doi.org/10.1109/5.92041.

6. Kallstrom, M., and S. S. Thakkar. "Programming three parallel computers." IEEE Software 5, no. 1 (January 1988): 11–22. http://dx.doi.org/10.1109/52.1990.

7. Moulic, J. R., and M. Kumar. "Highly parallel computers: Perspectives." Computing Systems in Engineering 3, no. 1-4 (January 1992): 1–5. http://dx.doi.org/10.1016/0956-0521(92)90088-z.

8. Wood, Alan. "Parallel computers and computations." European Journal of Operational Research 27, no. 3 (December 1986): 385–86. http://dx.doi.org/10.1016/0377-2217(86)90338-3.

9. Popov, Oleksandr, and Oleksiy Chystiakov. "On the Efficiency of Algorithms with Multi-level Parallelism." Physico-mathematical modelling and informational technologies, no. 33 (September 5, 2021): 133–37. http://dx.doi.org/10.15407/fmmit2021.33.133.

Abstract:
The paper investigates the efficiency of algorithms for solving computational mathematics problems that use a multilevel model of parallel computing on heterogeneous computer systems. A methodology for estimating the acceleration of algorithms for computers using a multilevel model of parallel computing is proposed. As an example, the parallel algorithm of the iteration method on a subspace for solving the generalized algebraic problem of eigenvalues of symmetric positive definite matrices of sparse structure is considered. For the presented algorithms, estimates of acceleration coefficients and efficiency were obtained on computers of hybrid architecture using graphics accelerators, on multi-core computers with shared memory and multi-node computers of MIMD-architecture.
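
The acceleration (speedup) and efficiency coefficients the abstract refers to have the following standard definitions (assumed here; the paper's own methodology refines these for multilevel parallelism on heterogeneous systems).

```latex
% Standard speedup and efficiency (assumed; the paper's multilevel
% estimates build on these):
S_p = \frac{T_1}{T_p}, \qquad E_p = \frac{S_p}{p}
% where T_1 is the serial run time and T_p the run time on p processors.
```
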
10. Bevilacqua, A., and E. Loli Piccolomini. "Parallel image restoration on parallel and distributed computers." Parallel Computing 26, no. 4 (March 2000): 495–506. http://dx.doi.org/10.1016/s0167-8191(99)00115-5.

Dissertations / Theses on the topic "Parallel computers"

1. Yousif, Hilal M. "Parallel algorithms for MIMD parallel computers." Thesis, Loughborough University, 1986. https://dspace.lboro.ac.uk/2134/15113.

Abstract:
This thesis mainly covers the design and analysis of asynchronous parallel algorithms that can be run on MIMD (Multiple Instruction Multiple Data) parallel computers, in particular the NEPTUNE system at Loughborough University. Initially the fundamentals of parallel computer architectures are introduced with different parallel architectures being described and compared. The principles of parallel programming and the design of parallel algorithms are also outlined. Also the main characteristics of the 4 processor MIMD NEPTUNE system are presented, and performance indicators, i.e. the speed-up and the efficiency factors, are defined for the measurement of parallelism in a given system. Both numerical and non-numerical algorithms are covered in the thesis. In the numerical solution of partial differential equations, a new parallel 9-point block iterative method is developed. Here, the organization of the blocks is done in such a way that each process contains its own group of 9 points on the network; therefore, they can be run in parallel. The parallel implementations of both 9-point and 4-point block iterative methods were programmed using natural and red-black ordering with synchronous and asynchronous approaches. The results obtained for these different implementations were compared and analysed. Next the parallel version of the A.G.E. (Alternating Group Explicit) method is developed in which the explicit nature of the difference equation is revealed and exploited when applied to derive the solution of both linear and non-linear 2-point boundary value problems. Two strategies have been used in the implementation of the parallel A.G.E. method using the synchronous and asynchronous approaches. The results from these implementations were compared. Also for comparison reasons the results obtained from the parallel A.G.E. were compared with the corresponding results obtained from the parallel versions of the Jacobi, Gauss-Seidel and S.O.R. methods. Finally, a computational complexity analysis of the parallel A.G.E. algorithms is included. In the area of non-numeric algorithms, the problems of sorting and searching were studied. The sorting methods which were investigated were the shell and the digit sort methods. With each method, different parallel strategies and approaches were used and compared to find the best results which can be obtained on the parallel machine. In the searching methods, the sequential search algorithm in an unordered table and the binary search algorithms were investigated and implemented in parallel with a presentation of the results. Finally, a complexity analysis of these methods is presented. The thesis concludes with a chapter summarizing the main results.
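
As an illustration of the red-black ordering the thesis applies to its block iterative methods, here is a sketch (assumed, not the thesis's NEPTUNE code) of a red-black Gauss-Seidel sweep for the simpler 5-point stencil: every point of one colour depends only on points of the other colour, so each half-sweep is parallelizable.

```c
/* Illustrative sketch (not the thesis's code): red-black ordering lets all
 * points of one colour update concurrently, since a 5-point update of a red
 * point reads only black neighbours, and vice versa. */
void redblack_sweep(int n, double u[n][n], const double f[n][n], double h2) {
    for (int colour = 0; colour < 2; colour++) {      /* 0 = red, 1 = black */
        for (int i = 1; i < n - 1; i++) {
            for (int j = 1; j < n - 1; j++) {
                if ((i + j) % 2 != colour) continue;  /* skip other colour */
                /* Gauss-Seidel update for the 5-point Laplacian stencil;
                 * updates within one colour are independent and could be
                 * distributed across processors. */
                u[i][j] = 0.25 * (u[i-1][j] + u[i+1][j] +
                                  u[i][j-1] + u[i][j+1] - h2 * f[i][j]);
            }
        }
    }
}
```
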
2. Su, (Philip) Shin-Chen. "Parallel subdomain method for massively parallel computers." Diss., Georgia Institute of Technology, 1992. http://hdl.handle.net/1853/17376.

3. Miller, R. Quentin. "Programming bulk-synchronous parallel computers." Thesis, University of Oxford, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.318894.

4. Kalaiselvi, S. "Checkpointing Algorithms for Parallel Computers." Thesis, Indian Institute of Science, 1997. https://etd.iisc.ac.in/handle/2005/3908.

Abstract:
Checkpointing is a technique widely used in parallel/distributed computers for rollback error recovery. Checkpointing is defined as the coordinated saving of process state information at specified time instances. Checkpoints help in restoring the computation from the latest saved state in case of failure. In addition to fault recovery, checkpointing has applications in fault detection, distributed debugging and process migration. Checkpointing in uniprocessor systems is easy due to the fact that there is a single clock and events occur with respect to this clock. There is a clear demarcation of events that happen before a checkpoint and events that happen after a checkpoint. In parallel computers a large number of computers coordinate to solve a single problem. Since there might be multiple streams of execution, checkpoints have to be introduced along all these streams simultaneously. Absence of a global clock necessitates explicit coordination to obtain a consistent global state. Events occurring in a distributed system can be ordered partially using Lamport's happens-before relation. Lamport's happens-before relation -> is a partial ordering relation to identify dependent and concurrent events occurring in a distributed system. It is defined as follows: if two events a and b happen in the same process, and if a happens before b, then a -> b; if a is the sending event of a message and b is the receiving event of the same message, then a -> b. If neither a -> b nor b -> a, then a and b are said to be concurrent. A consistent global state may have concurrent checkpoints. In the first chapter of the thesis we discuss issues regarding ordering of events in a parallel computer, the need for coordination among checkpoints and other aspects related to checkpointing. Checkpointing locations can either be identified statically or dynamically. The static approach assumes that a representation of a program to be checkpointed is available with information that enables a programmer to specify the places where checkpoints are to be taken. The dynamic approach identifies the checkpointing locations at run time. In this thesis, we have proposed algorithms for both static and dynamic checkpointing. The main contributions of this thesis are as follows: 1. Parallel computers that are being built now have faster communication and hence more efficient clock synchronisation compared to those built a few years ago. Based on efficient clock synchronisation protocols, the clock drift in current machines can be maintained within a few microseconds. We have proposed a dynamic checkpointing algorithm for parallel computers assuming bounded clock drifts. 2. The shared memory paradigm is convenient for programming while the message passing paradigm is easy to scale. Distributed Shared Memory (DSM) systems combine the advantages of both paradigms and can be visualized easily on top of a network of workstations. IEEE has recently proposed an interconnect standard called Scalable Coherent Interface (SCI) to configure computers as a Distributed Shared Memory system. A periodic dynamic checkpointing algorithm has been proposed in the thesis for a DSM system which uses the SCI standard. 3. When information about a parallel program is available one can make use of this knowledge to perform efficient checkpointing. A static checkpointing approach based on task graphs is proposed for parallel programs. The proposed task graph based static checkpointing approach has been implemented on a Parallel Virtual Machine (PVM) platform.
We now give a gist of the various chapters of the thesis. Chapter 2 of the thesis gives a classification of existing checkpointing algorithms. The chapter surveys algorithms that have been reported in the literature for checkpointing parallel/distributed systems. A point to be noted is that most of the algorithms published for checkpointing message passing systems are based on the seminal article by Chandy & Lamport. A large number of checkpointing algorithms have been published by relaxing the assumptions made in the above mentioned article and by extending the features to minimise the overheads of coordination and context saving. Checkpointing for shared memory systems primarily extends cache coherence protocols to maintain a consistent memory. All of them assume that the main memory is safe for storing the context. Recently algorithms have been published for distributed shared memory systems, which extend the cache coherence protocols used in shared memory systems. They however also include methods for storing the status of distributed memory in stable storage. Chapter 2 concludes with brief comments on the desirable features of a checkpointing algorithm. In Chapter 3, we develop a dynamic checkpointing algorithm for message passing systems assuming that the clock drift of processors in the system is bounded. Efficient clock synchronisation protocols have been implemented on recent parallel computers owing to the fact that communication between processors is very fast. Based on efficient clock synchronisation protocols, clock skew can be limited to a few microseconds. The algorithm proposed in the thesis uses clocks for checkpoint coordination and vector counts for identifying messages to be logged. The algorithm is a periodic, distributed algorithm. We prove correctness of the algorithm and compare it with similar clock based algorithms. Distributed Shared Memory (DSM) systems provide the benefit of ease of programming in a scalable system. The recently proposed IEEE Scalable Coherent Interface (SCI) standard facilitates the construction of scalable coherent systems. In Chapter 4 we discuss a checkpointing algorithm for an SCI based DSM system. SCI maintains cache coherence in hardware using a distributed cache directory which scales with the number of processors in the system. SCI recommends a two phase transaction protocol for communication. Our algorithm is a two phase centralised coordinated algorithm. Phase one initiates checkpoints and the checkpointing activity is completed in phase two. The correctness of the algorithm is established theoretically. The chapter concludes with a discussion of the features of SCI exploited by the checkpointing algorithm proposed in the thesis. In Chapter 5, a static checkpointing algorithm is developed assuming that the program to be executed on a parallel computer is given as a directed acyclic task graph. We assume that estimates of the time to execute each task in the task graph are given. Given the timing at which checkpoints are to be taken, the algorithm identifies a set of edges where checkpointing tasks can be placed, ensuring that they form a consistent global checkpoint. The proposed algorithm eliminates coordination overhead at run time. It significantly reduces the context saving overhead by taking checkpoints along edges of the task graph. The algorithm is used as a preprocessing step before scheduling the tasks to processors.
The algorithm complexity is O(km) where m is the number of edges in the graph and k the maximum number of global checkpoints to be taken. The static algorithm is implemented on a parallel computer with a PVM environment as it is widely available and portable. The task graph of a program can be constructed manually or through program development tools. Our implementation is a collection of preprocessing and run time routines. The preprocessing routines operate on the task graph information to generate a set of edges to be checkpointed for each global checkpoint and write the information on disk. The run time routines save the context along the marked edges. In case of recovery, the recovery algorithms read the information from stable storage and reconstruct the context. The limitation of our static checkpointing algorithm is that it can operate only on deterministic task graphs. To demonstrate the practical feasibility of the proposed approach, case studies of checkpointing some parallel programs are included in the thesis. We conclude the thesis with a summary of proposed algorithms and possible directions to continue research in the area of checkpointing.
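
In symbols, the happens-before relation defined in the abstract reads as follows (the transitive clause, which Lamport's definition also includes, is added here for completeness).

```latex
% Lamport's happens-before relation (transitivity added; the abstract's
% two clauses are otherwise restated directly):
a \to b \iff \begin{cases}
a \text{ and } b \text{ occur in the same process and } a \text{ precedes } b, \text{ or} \\
a \text{ is the send and } b \text{ the receive of the same message, or} \\
\exists\, c:\ a \to c \ \text{and}\ c \to b.
\end{cases}
% If neither a -> b nor b -> a holds, a and b are concurrent; a consistent
% global checkpoint may pair only mutually concurrent local checkpoints.
```
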
5. Harrison, Ian. "Locality and parallel optimizations for parallel supercomputing." Diss., 2003. http://hdl.handle.net/10066/1274.

6. Lin, Wai-sum (練偉森). "Adaptive parallel rendering." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1999. http://hub.hku.hk/bib/B31221415.

7. Lin, Wai-sum. "Adaptive parallel rendering." Hong Kong: University of Hong Kong, 1999. http://sunzi.lib.hku.hk/hkuto/record.jsp?B20868236.

8. Sundar, N. S. "Data access optimizations for parallel computers." The Ohio State University, 1998. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487950658548697.

9. Katiker, Rushikesh. "Automatic generation of dynamic parallel architectures." 2007. http://proquest.umi.com/pqdweb?did=1475182071&sid=1&Fmt=2&clientId=3260&RQT=309&VName=PQD.

Books on the topic "Parallel computers"

1. Treleaven, P., and M. Vanneschi, eds. Future Parallel Computers. Berlin, Heidelberg: Springer Berlin Heidelberg, 1987. http://dx.doi.org/10.1007/3-540-18203-9.

2. Maresca, M., and T. J. Fountain, eds. Massively parallel computers. New York: IEEE, 1991.

3. Lerman, Gil. Parallel evolution of parallel processors. New York: Plenum Press, 1993.

4. Snyder, Lawrence, and Purdue Workshop on Algorithmically-Specialized Computer Organizations (1982: West Lafayette, Ind.), eds. Algorithmically specialized parallel computers. Orlando: Academic Press, 1985.

5. Perrott, R. H., ed. Software for parallel computers. London: Chapman and Hall, 1992.

6. Li, Hungwen, and Quentin F. Stout, eds. Reconfigurable massively parallel computers. Englewood Cliffs, N.J.: Prentice Hall, 1991.

7. Gibbons, Alan. Efficient parallel algorithms. Cambridge [England]: Cambridge University Press, 1988.

8. Institute for Computer Applications in Science and Engineering, ed. Parallel rendering. Hampton, VA: Institute for Computer Applications in Science and Engineering, NASA Langley Research Center, 1995.

9. Choudhary, Alok N. Parallel architectures and parallel algorithms for integrated vision systems. Urbana, Ill.: Coordinated Science Laboratory, College of Engineering, University of Illinois at Urbana-Champaign, 1990.

10. Choudhary, Alok N. Parallel architectures and parallel algorithms for integrated vision systems. Boston: Kluwer Academic Publishers, 1990.

Book chapters on the topic "Parallel computers"

1. Roosta, Seyed H. "Components of Parallel Computers." In Parallel Processing and Parallel Algorithms, 57–108. New York, NY: Springer New York, 2000. http://dx.doi.org/10.1007/978-1-4612-1220-1_2.

2. Treleaven, Philip C. "Future parallel computers." In Lecture Notes in Computer Science, 40–47. Berlin, Heidelberg: Springer Berlin Heidelberg, 1986. http://dx.doi.org/10.1007/3-540-16811-7_151.

3. von Praun, Christoph, Jeremy T. Fineman, Charles E. Leiserson, Efstratios Gallopoulos, Marc Snir, Michael Heath, et al. "Reconfigurable Computers." In Encyclopedia of Parallel Computing, 1728. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-09766-4_4.

4. Miura, Kenichi. "Fujitsu Vector Computers." In Encyclopedia of Parallel Computing, 735–44. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-09766-4_11.

5. Falsafi, Babak, Samuel Midkiff, Jack B. Dennis, Amol Ghoting, Roy H. Campbell, Christof Klausecker, et al. "Distributed Memory Computers." In Encyclopedia of Parallel Computing, 573. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-09766-4_2404.

6. Steele, Guy L., Xiaowei Shen, Josep Torrellas, Mark Tuckerman, Eric J. Bohm, Laxmikant V. Kalé, Glenn Martyna, et al. "Cray Vector Computers." In Encyclopedia of Parallel Computing, 441–53. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-09766-4_479.

7. Dally, William J., D. Scott Wills, and Richard Lethin. "Mechanisms for Parallel Computers." In Parallel Computing on Distributed Memory Multiprocessors, 3–25. Berlin, Heidelberg: Springer Berlin Heidelberg, 1993. http://dx.doi.org/10.1007/978-3-642-58066-6_1.

8. Jamieson, Leah H. "Making Parallel Computers Usable." In Opportunities and Constraints of Parallel Computing, 69–71. New York, NY: Springer US, 1989. http://dx.doi.org/10.1007/978-1-4613-9668-0_17.

9. Valiant, L. G. "Optimally Universal Parallel Computers." In Opportunities and Constraints of Parallel Computing, 155–58. New York, NY: Springer US, 1989. http://dx.doi.org/10.1007/978-1-4613-9668-0_39.

10. Cypher, Robert, and Jorge L. C. Sanz. "Hypercube Computers." In The SIMD Model of Parallel Computation, 61–68. New York, NY: Springer New York, 1994. http://dx.doi.org/10.1007/978-1-4612-2612-3_8.

Conference papers on the topic "Parallel computers"

1. Fulton, Robert E., and Philip S. Su. "Parallel Substructure Approach for Massively Parallel Computers." In ASME 1992 International Computers in Engineering Conference and Exposition. American Society of Mechanical Engineers, 1992. http://dx.doi.org/10.1115/cie1992-0093.

Abstract:
New massively parallel computer architectures have revolutionized the design of computer algorithms and promise to have significant influence on algorithms for engineering computations. The traditional global model parallel method has limited benefit for massively parallel computers. An alternative is the substructure approach. This paper explores the potential of the substructure strategy through actual examples. Each substructure is mapped onto some processors of a MIMD parallel computer. The internal node variables are condensed into boundary node variables in each substructure. All substructure computations can be performed in parallel until the global boundary system equation is formed. A direct solution strategy for the global boundary displacements is performed. The final solution for the internal node displacements in each substructure can be performed in parallel. Examples for two-dimensional static analysis are presented on a BBN Butterfly GP1000 parallel computer.
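
The condensation step described above matches standard static condensation (a Schur complement); the following is a sketch under that assumption, not taken verbatim from the paper, with each substructure's unknowns partitioned into internal (i) and boundary (b) sets.

```latex
% Substructure equations partitioned into internal (i) and boundary (b)
% unknowns (standard static condensation; assumed, not from the paper):
\begin{bmatrix} K_{ii} & K_{ib} \\ K_{bi} & K_{bb} \end{bmatrix}
\begin{bmatrix} u_i \\ u_b \end{bmatrix}
=
\begin{bmatrix} f_i \\ f_b \end{bmatrix}
\quad\Longrightarrow\quad
\left(K_{bb} - K_{bi} K_{ii}^{-1} K_{ib}\right) u_b
= f_b - K_{bi} K_{ii}^{-1} f_i
% Each substructure condenses independently (in parallel); after the global
% boundary solve for u_b, the internal solution
% u_i = K_{ii}^{-1} (f_i - K_{ib} u_b) is recovered in parallel.
```
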
2. Zhi, Yuzhe, Yi Liu, Lin Jiao, and Peng Zhang. "A Parallel Simulator for Large-Scale Parallel Computers." In 2010 9th International Conference on Grid and Cloud Computing (GCC 2010). IEEE, 2010. http://dx.doi.org/10.1109/gcc.2010.48.

3. Cappello, F., J.-L. Bechennec, F. Delaplace, C. Germain, J.-L. Giavitto, V. Neri, and D. Etiemble. "Balanced Distributed Memory Parallel Computers." In 1993 International Conference on Parallel Processing - ICPP'93 Vol1. IEEE, 1993. http://dx.doi.org/10.1109/icpp.1993.52.

4. McLatchie, R. C. F. "The impact of parallel computers." In IEE Seminar Practical Electromagnetic Design Synthesis. IEE, 1999. http://dx.doi.org/10.1049/ic:19990052.

5. Levinthal, Adam, Pat Hanrahan, Mike Paquette, and Jim Lawson. "Parallel computers for graphics applications." In ASPLOS II: Architectural support for programming languages and operating systems. New York, NY, USA: ACM, 1987. http://dx.doi.org/10.1145/36206.36202.

6. Vornberger, O. "Parallel computing on personal computers." In Proceedings of the 1986 ACM SIGSMALL/PC symposium. New York, New York, USA: ACM Press, 1986. http://dx.doi.org/10.1145/317559.322756.

7. Aluru, Srinivas. "Computational biology on parallel computers." In 2003 European Control Conference (ECC). IEEE, 2003. http://dx.doi.org/10.23919/ecc.2003.7086561.

8. Kamada, Tomio, Satoshi Matsuoka, and Akinori Yonezawa. "Efficient parallel global garbage collection on massively parallel computers." In Proceedings of the 1994 ACM/IEEE conference. New York, New York, USA: ACM Press, 1994. http://dx.doi.org/10.1145/602770.602790.

9. Jeon, Minsoo, and Dongseung Kim. "Load-balanced parallel merge sort on distributed memory parallel computers." In Proceedings 16th International Parallel and Distributed Processing Symposium (IPDPS 2002). IEEE, 2002. http://dx.doi.org/10.1109/ipdps.2002.1016670.

10. Kodi, Avinash Karanth, and Ahmed Louri. "Switchless Photonic Architecture for Parallel Computers." In Frontiers in Optics. Washington, D.C.: OSA, 2005. http://dx.doi.org/10.1364/fio.2005.ftuw5.

Reports on the topic "Parallel computers"

1. Leung, Vitus J., and John M. DeLaurentis. Maximum Utilization of Parallel Computers. Office of Scientific and Technical Information (OSTI), May 2002. http://dx.doi.org/10.2172/800811.

2. Harrison, R. J., R. Shepard, and A. F. Wagner. Computational chemistry on parallel computers. Office of Scientific and Technical Information (OSTI), March 1994. http://dx.doi.org/10.2172/10132716.

3. Vichniac, Gerard, and Kim Molvig. Innovative Uses of Parallel Computers. Fort Belvoir, VA: Defense Technical Information Center, May 1990. http://dx.doi.org/10.21236/ada223578.

4. Kedem, Zvi. Algorithms and Techniques for Parallel Computers. Fort Belvoir, VA: Defense Technical Information Center, November 1989. http://dx.doi.org/10.21236/ada252780.

5. Esener, Sadik C. Technological Development for Interfacing Parallel Access Memories to Parallel Computers. Fort Belvoir, VA: Defense Technical Information Center, March 1999. http://dx.doi.org/10.21236/ada362650.

6. Sullivan, F. Measuring performance of parallel computers. Final report. Office of Scientific and Technical Information (OSTI), July 1994. http://dx.doi.org/10.2172/10163107.

7. Womble, D. E. A time stepping algorithm for parallel computers. Office of Scientific and Technical Information (OSTI), February 1990. http://dx.doi.org/10.2172/7083952.

8. Scherson, Isaac D. Orthogonal Interconnection Networks for Massively Parallel Computers. Fort Belvoir, VA: Defense Technical Information Center, June 1995. http://dx.doi.org/10.21236/ada299987.

9. Foster, I. T., B. Toonen, and P. H. Worley. Performance of parallel computers for spectral atmospheric models. Office of Scientific and Technical Information (OSTI), June 1995. http://dx.doi.org/10.2172/113769.

10. Sullivan, F. Measuring performance of parallel computers. Progress report, 1989. Office of Scientific and Technical Information (OSTI), July 1994. http://dx.doi.org/10.2172/10163100.