
Journal articles on the topic 'Parallel computers'



Consult the top 50 journal articles for your research on the topic 'Parallel computers.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

YAMAGUCHI, YOSHINORI. "Parallel Computers and Parallel Computation." Journal of the Institute of Electrical Engineers of Japan 118, no. 9 (1998): 526–29. http://dx.doi.org/10.1541/ieejjournal.118.526.

2

Zhao, Yongwei, Yunji Chen, and Zhiwei Xu. "Fractal Parallel Computing." Intelligent Computing 2022 (September 5, 2022): 1–10. http://dx.doi.org/10.34133/2022/9797623.

Abstract:
As machine learning (ML) becomes the prominent technology for many emerging problems, dedicated ML computers are being developed at a variety of scales, from clouds to edge devices. However, the heterogeneous, parallel, and multilayer characteristics of conventional ML computers concentrate the cost of development on the software stack, namely, ML frameworks, compute libraries, and compilers, which limits the productivity of new ML computers. Fractal von Neumann architecture (FvNA) is proposed to address the programming productivity issue for ML computers. FvNA is scale-invariant to program, thus making the development of a family of scaled ML computers as easy as a single node. In this study, we generalize FvNA to the field of general-purpose parallel computing. We model FvNA as an abstract parallel computer, referred to as the fractal parallel machine (FPM), to demonstrate several representative general-purpose tasks that are efficiently programmable. FPM limits the entropy of programming by applying constraints on the control pattern of the parallel computing systems. However, FPM is still general-purpose and cost-optimal. We settle some preliminary results showing that FPM is as powerful as many fundamental parallel computing models such as BSP and alternating Turing machine. Therefore, FvNA is also generally applicable to various fields other than ML.
3

MORIARTY, K. J. M., and T. TRAPPENBERG. "PROGRAMMING TOOLS FOR PARALLEL COMPUTERS." International Journal of Modern Physics C 04, no. 06 (December 1993): 1285–94. http://dx.doi.org/10.1142/s0129183193001002.

Abstract:
Although software tools already have a place on serial and vector computers they are becoming increasingly important for parallel computing. Message passing libraries, parallel operating systems and high level parallel languages are the basic software tools necessary to implement a parallel processing program. These tools up to now have been specific to each parallel computer system and a short survey will be given. The aim of another class of software tools for parallel computers is to help in writing or rewriting application programs. Because automatic parallelization tools are not very successful, an interactive component has to be incorporated. We will concentrate here on the discussion of SPEFY, a parallel program development facility.
4

Shafarenko, A. "Software for Parallel Computers." Computing & Control Engineering Journal 3, no. 4 (1992): 194. http://dx.doi.org/10.1049/cce:19920049.

5

Lea, R. M., and I. P. Jalowiecki. "Associative massively parallel computers." Proceedings of the IEEE 79, no. 4 (April 1991): 469–79. http://dx.doi.org/10.1109/5.92041.

6

Kallstrom, M., and S. S. Thakkar. "Programming three parallel computers." IEEE Software 5, no. 1 (January 1988): 11–22. http://dx.doi.org/10.1109/52.1990.

7

Moulic, J. R., and M. Kumar. "Highly parallel computers: Perspectives." Computing Systems in Engineering 3, no. 1-4 (January 1992): 1–5. http://dx.doi.org/10.1016/0956-0521(92)90088-z.

8

Wood, Alan. "Parallel computers and computations." European Journal of Operational Research 27, no. 3 (December 1986): 385–86. http://dx.doi.org/10.1016/0377-2217(86)90338-3.

9

Popov, Oleksandr, and Oleksiy Chystiakov. "On the Efficiency of Algorithms with Multi-level Parallelism." Physico-mathematical modelling and informational technologies, no. 33 (September 5, 2021): 133–37. http://dx.doi.org/10.15407/fmmit2021.33.133.

Abstract:
The paper investigates the efficiency of algorithms for solving computational mathematics problems that use a multilevel model of parallel computing on heterogeneous computer systems. A methodology for estimating the acceleration of algorithms for computers using a multilevel model of parallel computing is proposed. As an example, the parallel algorithm of the iteration method on a subspace for solving the generalized algebraic problem of eigenvalues of symmetric positive definite matrices of sparse structure is considered. For the presented algorithms, estimates of acceleration coefficients and efficiency were obtained on computers of hybrid architecture using graphics accelerators, on multi-core computers with shared memory and multi-node computers of MIMD-architecture.
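For reference, the acceleration (speedup) and efficiency coefficients discussed in this abstract are conventionally defined as follows (these are the standard definitions; the paper's exact estimation methodology may differ):

```latex
S_p = \frac{T_1}{T_p}, \qquad E_p = \frac{S_p}{p},
```

where \(T_1\) is the runtime of the serial algorithm, \(T_p\) the runtime on \(p\) processing elements, and, in a multilevel model, \(p\) counts all processing elements across nodes, cores, and accelerators.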
10

Bevilacqua, A., and E. Loli Piccolomini. "Parallel image restoration on parallel and distributed computers." Parallel Computing 26, no. 4 (March 2000): 495–506. http://dx.doi.org/10.1016/s0167-8191(99)00115-5.

11

Hattori, Masamitsu, Nobuhiro Ito, Wei Chen, and Koichi Wada. "Parallel matrix-multiplication algorithm for distributed parallel computers." Systems and Computers in Japan 36, no. 4 (April 2005): 48–59. http://dx.doi.org/10.1002/scj.10551.

12

Caraiman, Simona, and Vasile Manta. "Parallel Simulation of Quantum Search." International Journal of Computers Communications & Control 5, no. 5 (December 1, 2010): 634. http://dx.doi.org/10.15837/ijccc.2010.5.2219.

Abstract:
Simulation of quantum computers using classical computers is a computationally hard problem, requiring a huge amount of operations and storage. Parallelization can alleviate this problem, allowing the simulation of more qubits at the same time or the same number of qubits to be simulated in less time. A promising approach is represented by executing these simulators in Grid systems that can provide access to high performance resources. In this paper we present a parallel implementation of the QC-lib quantum computer simulator deployed as a Grid service. Using a specific scheme for partitioning the terms describing quantum states and efficient parallelization of the general single-qubit operator and of the controlled operators, very good speed-ups were obtained for the simulation of the quantum search problem.
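As a rough illustration of the pairing structure that makes single-qubit operators amenable to state-vector partitioning, here is a minimal serial sketch in Python/NumPy. This is not the QC-lib implementation; the function name and structure are hypothetical:

```python
import numpy as np

def apply_single_qubit_gate(state, gate, k):
    """Apply a 2x2 gate to qubit k of an n-qubit state vector.

    Amplitudes whose indices differ only in bit k are updated in pairs;
    this fixed pairing is what lets the state vector be partitioned
    across processes with predictable communication.
    """
    state = state.copy()
    step = 1 << k                    # distance between paired amplitudes
    for i in range(state.size):
        if not (i & step):           # i has bit k clear; its partner has it set
            j = i | step
            a, b = state[i], state[j]
            state[i] = gate[0, 0] * a + gate[0, 1] * b
            state[j] = gate[1, 0] * a + gate[1, 1] * b
    return state
```

For example, applying the Hadamard gate to qubit 0 of the two-qubit state |00⟩ yields equal amplitudes on |00⟩ and |01⟩.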
13

Groma, I., M. Verdier, P. Balogh, and B. Bakó. "Dislocation dynamics on parallel computers." Modelling and Simulation in Materials Science and Engineering 7, no. 5 (September 1, 1999): 795–803. http://dx.doi.org/10.1088/0965-0393/7/5/311.

14

Qiu, B. "Book Review: Parallel Computers 2." International Journal of Electrical Engineering & Education 27, no. 1 (January 1990): 89. http://dx.doi.org/10.1177/002072099002700131.

15

Ševčíková, Hana. "Statistical Simulations on Parallel Computers." Journal of Computational and Graphical Statistics 13, no. 4 (December 2004): 886–906. http://dx.doi.org/10.1198/106186004x12605.

16

Li, H., and Q. F. Stout. "Reconfigurable SIMD massively parallel computers." Proceedings of the IEEE 79, no. 4 (April 1991): 429–43. http://dx.doi.org/10.1109/5.92038.

17

Jevtovic, Milojko. "Parallel networking of the computers." Vojnotehnicki glasnik 55, no. 2 (2007): 198–205. http://dx.doi.org/10.5937/vojtehg0702198j.

18

Deo, Narsingh. "Parallel Computers in Signal Processing." Defence Science Journal 35, no. 3 (January 24, 1985): 375–82. http://dx.doi.org/10.14429/dsj.35.6031.

19

Venkataraman, G. "Parallel Computers : A Personal Overview." Defence Science Journal 46, no. 4 (January 1, 1996): 257–67. http://dx.doi.org/10.14429/dsj.46.4086.

20

Parkinson, D. "Computers: More parallel than others." Nature 316, no. 6031 (August 1985): 765–66. http://dx.doi.org/10.1038/316765a0.

21

Anderson, Alun. "Why should computers go parallel?" Nature 317, no. 6038 (October 1985): 565. http://dx.doi.org/10.1038/317565b0.

22

Thole, Clemens-August, and Klaus Stüben. "Industrial simulation on parallel computers." Parallel Computing 25, no. 13-14 (December 1999): 2015–37. http://dx.doi.org/10.1016/s0167-8191(99)00065-4.

23

Neta, Beny, and Heng-Ming Tai. "LU factorization on parallel computers." Computers & Mathematics with Applications 11, no. 6 (June 1985): 573–79. http://dx.doi.org/10.1016/0898-1221(85)90039-2.

24

Nelson, Mark E., and James M. Bower. "Brain maps and parallel computers." Trends in Neurosciences 13, no. 10 (October 1990): 403–8. http://dx.doi.org/10.1016/0166-2236(90)90119-u.

25

Gropp, William D., and David E. Keyes. "Domain decomposition on parallel computers." IMPACT of Computing in Science and Engineering 1, no. 4 (December 1989): 421–39. http://dx.doi.org/10.1016/0899-8248(89)90003-7.

26

Crenell, K. M. "Molecular dynamics on parallel computers." Journal of Molecular Graphics 9, no. 2 (June 1991): 128–30. http://dx.doi.org/10.1016/0263-7855(91)85012-n.

27

Levinthal, Adam, Pat Hanrahan, Mike Paquette, and Jim Lawson. "Parallel computers for graphics applications." ACM SIGARCH Computer Architecture News 15, no. 5 (November 1987): 193–98. http://dx.doi.org/10.1145/36177.36202.

28

Levinthal, Adam, Pat Hanrahan, Mike Paquette, and Jim Lawson. "Parallel computers for graphics applications." ACM SIGOPS Operating Systems Review 21, no. 4 (October 1987): 193–98. http://dx.doi.org/10.1145/36204.36202.

29

Levinthal, Adam, Pat Hanrahan, Mike Paquette, and Jim Lawson. "Parallel computers for graphics applications." ACM SIGPLAN Notices 22, no. 10 (October 1987): 193–98. http://dx.doi.org/10.1145/36205.36202.

30

Lickly, Daniel J., and Philip J. Hatcher. "C++ and Massively Parallel Computers." Scientific Programming 2, no. 4 (1993): 193–202. http://dx.doi.org/10.1155/1993/450517.

Abstract:
Our goal is to apply the software engineering advantages of object-oriented programming to the raw power of massively parallel architectures. To do this we have constructed a hierarchy of C++ classes to support the data-parallel paradigm. Feasibility studies and initial coding can be supported by any serial machine that has a C++ compiler. Parallel execution requires an extended Cfront, which understands the data-parallel classes and generates C* code. (C* is a data-parallel superset of ANSI C developed by Thinking Machines Corporation). This approach provides potential portability across parallel architectures and leverages the existing compiler technology for translating data-parallel programs onto both SIMD and MIMD hardware.
31

Fincham, David. "Parallel Computers and Molecular Simulation." Molecular Simulation 1, no. 1-2 (November 1987): 1–45. http://dx.doi.org/10.1080/08927028708080929.

32

Carlson, D. A. "Modified-mesh connected parallel computers." IEEE Transactions on Computers 37, no. 10 (1988): 1315–21. http://dx.doi.org/10.1109/12.5998.

33

de O. Cruz, Adriano J. "Parallel algorithms for SIMD computers." Microprocessing and Microprogramming 28, no. 1-5 (March 1990): 85–90. http://dx.doi.org/10.1016/0165-6074(90)90154-2.

34

Ramesh, R., R. M. Verma, T. Krishnaprasad, and I. V. Ramakrishnan. "Term matching on parallel computers." Journal of Logic Programming 6, no. 3 (May 1989): 213–28. http://dx.doi.org/10.1016/0743-1066(89)90014-9.

35

Blech, R. A., E. J. Milner, A. Quealy, and S. E. Townsend. "Turbomachinery CFD on parallel computers." Computing Systems in Engineering 3, no. 6 (December 1992): 613–23. http://dx.doi.org/10.1016/0956-0521(92)90013-9.

36

Doyle, John. "Serial, parallel and neural computers." Futures 23, no. 6 (July 1991): 577–93. http://dx.doi.org/10.1016/0016-3287(91)90080-l.

37

Li, Keqin. "Scalable Parallel Matrix Multiplication on Distributed Memory Parallel Computers." Journal of Parallel and Distributed Computing 61, no. 12 (December 2001): 1709–31. http://dx.doi.org/10.1006/jpdc.2001.1768.

38

Chronopoulos, Anthony Theodore, and Gang Wang. "Traffic Flow Simulation through Parallel Processing." Transportation Research Record: Journal of the Transportation Research Board 1566, no. 1 (January 1996): 31–38. http://dx.doi.org/10.1177/0361198196156600104.

Abstract:
Numerical methods for solving traffic flow continuum models have been studied and efficiently implemented in traffic simulation codes in the past, using both explicit and implicit schemes. Implicit methods allow a much larger time step size than explicit methods to achieve the same accuracy; however, a nonlinear system must be solved at each time step. The Newton method, coupled with a linear iterative method (Orthomin), is used. The efficient implementation of explicit and implicit numerical methods for solving the high-order flow conservation traffic model on parallel computers was studied. Simulation tests were run with traffic data from an 18-mile freeway section in Minnesota on the nCUBE2 parallel computer. These tests gave the same accuracy as past tests, which were performed on one-processor computers, and the overall execution time was significantly reduced.
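The Newton-plus-inner-linear-solver structure described in this abstract can be sketched generically as follows. This is a minimal illustration, not the paper's implementation; a direct solve stands in for the Krylov-type Orthomin iteration, and all names are hypothetical:

```python
import numpy as np

def newton(f, jac, x0, tol=1e-10, max_iter=50):
    """Newton's method for F(x) = 0.

    At each step the linear system J(x) dx = -F(x) is solved; in the
    paper an iterative method (Orthomin) plays the role that
    np.linalg.solve plays here, which is what makes each implicit
    time step amenable to parallelization.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        dx = np.linalg.solve(jac(x), -fx)   # inner linear solve
        x = x + dx
    return x
```

For example, solving x² − 2 = 0 from the starting guess x = 1 converges to √2 in a handful of iterations.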
39

Ayuningtyas, Astika. "Pemrosesan Paralel pada Low Pass Filtering Menggunakan Transform Cosinus di MPI (Message Passing Interface)." Conference SENATIK STT Adisutjipto Yogyakarta 2 (November 15, 2016): 115. http://dx.doi.org/10.28989/senatik.v2i0.68.

Abstract:
Parallel processing is the computation of two or more tasks simultaneously through optimization of computer system resources; one such model is a network of desktop systems, which allows parallel processing across computers with different specifications. A network-of-workstations model was implemented using MPI (Message Passing Interface). In this study it was applied to low-pass filtering (LPF), an image-filtering process that retains the low-frequency components of the data. The low-pass filtering program using the cosine transform was implemented in MPI by modifying the algorithm in the process run on each node (computer). The test results show that the processing speed of the parallel system is influenced by the number of nodes/processes and by the number of frequency components processed. In single-process execution, larger workloads take increasingly more time, and the cut-off parameter affects only the amount of high-frequency data filtered out; in parallel execution, as more computers are involved in the low-pass filter computation, additional time is required for the calculation.
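The data-partitioning scheme this abstract describes can be sketched as follows. This is a minimal illustration under stated assumptions: a thread pool stands in for MPI processes, an FFT-based frequency cut-off stands in for the paper's cosine transform, and all function names are hypothetical:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def lowpass_rows(rows, keep):
    """Keep only the lowest `keep` frequency components of each row."""
    spec = np.fft.rfft(rows, axis=1)
    spec[:, keep:] = 0                      # zero out high frequencies
    return np.fft.irfft(spec, n=rows.shape[1], axis=1)

def parallel_lowpass(image, keep, workers=4):
    """Low-pass filter an image by splitting its rows across workers,
    analogous to distributing row blocks to MPI nodes."""
    chunks = np.array_split(image, workers, axis=0)   # one row block per worker
    with ThreadPoolExecutor(max_workers=workers) as ex:
        filtered = list(ex.map(lambda c: lowpass_rows(c, keep), chunks))
    return np.vstack(filtered)
```

A constant image contains only the DC component, so it passes through the filter unchanged for any cut-off of at least one component.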
40

Hanuliak, Peter, and Michal Hanuliak. "Unique Analytical Model of Parallel Computers." International Review on Modelling and Simulations (IREMOS) 9, no. 4 (August 31, 2016): 246. http://dx.doi.org/10.15866/iremos.v9i4.9716.

41

Byun, Chansup, and Guru P. Guruswamy. "Wing-body aeroelasticity on parallel computers." Journal of Aircraft 33, no. 2 (March 1996): 421–28. http://dx.doi.org/10.2514/3.46954.

42

S., L. R., Philip J. Hatcher, Michael J. Quinn, Piyush Mehrotra, Joel Saltz, Robert Voigt, and Horst D. Simon. "Data-Parallel Programming on MIMD Computers." Mathematics of Computation 63, no. 207 (July 1994): 424. http://dx.doi.org/10.2307/2153588.

43

Barbosa, V. C., R. Donangelo, and R. S. Wedemann. "A BUU Code for Parallel Computers." International Journal of Modern Physics C 09, no. 04 (June 1998): 573–83. http://dx.doi.org/10.1142/s0129183198000479.

Abstract:
We have developed a distributed algorithm for the simulation of systems that evolve continuously, but which are also subject to discrete events that affect their evolution. As an example we consider the description through the BUU equations of the time evolution of a highly excited nuclear system, and show that this algorithm attains almost optimal speedups.
44

EVANGELINOS, CONSTANTINOS, and GEORGE EM KARNIADAKIS. "PARALLEL CFD BENCHMARKS ON CRAY COMPUTERS." Parallel Algorithms and Applications 9, no. 3-4 (June 1996): 273–98. http://dx.doi.org/10.1080/10637199608915581.

45

LI, TAO, and LIXIN TAO. "TOPOLOGICAL FEATURE MAPS ON PARALLEL COMPUTERS." International Journal of High Speed Computing 07, no. 04 (December 1995): 531–46. http://dx.doi.org/10.1142/s0129053395000294.

46

Petersen, Johnny. "Computation on Parallel Message-Passing Computers." Physica Scripta T38 (January 1, 1991): 33. http://dx.doi.org/10.1088/0031-8949/1991/t38/006.

47

Tanimoto, S. L. "Memory systems for highly parallel computers." Proceedings of the IEEE 79, no. 4 (April 1991): 403–15. http://dx.doi.org/10.1109/5.92036.

48

Madisetti, V. K., and D. G. Messerschmitt. "Seismic migration algorithms on parallel computers." IEEE Transactions on Signal Processing 39, no. 7 (July 1991): 1642–54. http://dx.doi.org/10.1109/78.134401.

49

Bailey, David H. "How Useful are Today's Parallel Computers?" Computers in Physics 6, no. 2 (1992): 216. http://dx.doi.org/10.1063/1.4823066.

50

America, P. H. M., B. J. A. Hulshof, E. A. M. Odijk, F. Sijstermans, R. A. H. van Twist, and R. h. H. Wester. "Parallel computers for advanced information processing." IEEE Micro 10, no. 6 (December 1990): 12–15. http://dx.doi.org/10.1109/40.62724.

