Academic literature on the topic 'Parallel'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Parallel.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Parallel"

1

Kloppenborg, John. "Gospel Parallels/Parallel Gospels." Biblical Theology Bulletin: Journal of Bible and Culture 44, no. 3 (July 22, 2014): 156–61. http://dx.doi.org/10.1177/0146107914540490.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Failinger, Marie. "Parallel Justice: Creating Causes of Action for Mandatory Mediation." University of Michigan Journal of Law Reform, no. 47.2 (2014): 359. http://dx.doi.org/10.36646/mjlr.47.2.parallel.

Full text
Abstract:
The American common law system should adopt court-connected mandatory mediation as a parallel system of justice for some cases that are currently not justiciable, such as wrongs caused by constitutionally protected behavior. As evidence that such a system is practical, this Article describes systemic and ethical parallels between court-connected mediation and the rise of the equity courts in medieval England, demonstrating that there are no insurmountable practical objections to the creation of “mediation-only” causes of action. The Article then explores the constitutional concerns surrounding the idea of “mandatory mediation-only” causes of action, using constitutional hate speech and invasion of privacy cases to test the validity of these concerns.
APA, Harvard, Vancouver, ISO, and other styles
3

Lapaine, Miljenko. "Parallels in Cartography: Standard, Equidistantly Mapped and True Length Parallels." Geographies 4, no. 1 (January 10, 2024): 52–65. http://dx.doi.org/10.3390/geographies4010004.

Full text
Abstract:
In the literature on map projections, we regularly encounter the name standard parallel or standard parallels. However, it is obvious that a unique definition of a standard parallel is not universally accepted. To fully clarify the meaning of standard parallels, the author proposes the notion of equidistantly mapped parallels, which has not been common in the literature so far. Equidistantly mapped parallels can be in the direction of the parallel or in the direction of the meridian. Here, it is shown that every standard parallel is also an equidistantly mapped parallel, but that the reverse need not be true. If the parallel is mapped equidistantly in the direction of the parallel, then its length in the projection plane is equal to the length of that parallel on the sphere. The opposite does not have to be true, i.e., if the length of the image of the parallel in the projection plane is equal to the length of the parallel on the sphere, this does not mean that the parallel was mapped equidistantly. In addition to standard and equidistant parallels, the concept of parallels of true length also appears in the theory of map projections. They should also be distinguished from standard and equidistant parallels.
APA, Harvard, Vancouver, ISO, and other styles
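To make the distinction drawn in the abstract above concrete, the relevant quantities can be written out (a reader's sketch in standard map-projection notation, not taken from the paper itself). On a sphere of radius R, the parallel at latitude φ has length

L(φ) = 2πR cos φ.

Let n denote the local linear scale factor along the parallel in the projection plane. The parallel is equidistantly mapped in its own direction when n = 1 at every one of its points, so its image necessarily has length L(φ); it is a true-length parallel when only the total image length equals L(φ), while n may deviate from 1 pointwise; and a standard parallel, as usually understood, is free of distortion at each of its points, which in particular forces n = 1 along it. That is why every standard parallel is an equidistantly mapped parallel, while the converse need not hold.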
4

Cho, Tae Won, S. S. Pyo, and J. R. Heath. "PARALLEX: a parallel approach to switchbox routing." IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 13, no. 6 (June 1994): 684–93. http://dx.doi.org/10.1109/43.285241.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Dempsey, P. "Parallel worlds [parallel programming]." Engineering & Technology 3, no. 12 (July 5, 2008): 44–45. http://dx.doi.org/10.1049/et:20081208.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Mitchell, Lisa. "Parallel Languages, Parallel Cultures." Indian Economic & Social History Review 42, no. 4 (December 2005): 445–67. http://dx.doi.org/10.1177/001946460504200402.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Wilson, D. R. "Parallel processing—parallel thinking." Electronic Systems News 1987, no. 1 (1987): 35. http://dx.doi.org/10.1049/esn.1987.0014.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Merriam, T. "Parallel Nephews, Parallel Uncles." Notes and Queries 54, no. 3 (September 1, 2007): 272–74. http://dx.doi.org/10.1093/notesj/gjm134.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Thilagavathi, M. "Enhancing Blockchain Performance Using Parallel Merkle Root and Parallel Proof of Work." Journal of Advanced Research in Dynamical and Control Systems 12, no. 01-Special Issue (February 13, 2020): 609–20. http://dx.doi.org/10.5373/jardcs/v12sp1/20201110.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Lapaine, M. "Secant Parallels in Azimuthal Projections." Geodesy and Cartography 946, no. 4 (May 20, 2019): 39–54. http://dx.doi.org/10.22389/0016-7126-2019-946-4-39-54.

Full text
Abstract:
Map projections are commonly approached as mapping onto developable surfaces: cylindrical projections onto the lateral surface of a cylinder, conic projections onto the lateral surface of a cone, and azimuthal projections onto a plane. If an intermediate developable surface intersects the Earth’s sphere or ellipsoid, the projection is referred to as a secant projection. The intersection of a developable surface and the Earth’s sphere or ellipsoid, e.g. a secant parallel, is considered a standard parallel. In this paper the definitions of secant and standard parallels in azimuthal projections are given. The first conclusion is that the secant and standard parallels are two distinct notions. The second one is that a standard parallel, if such a parallel exists in an azimuthal projection, is a secant parallel, while the converse statement is not true in general. Furthermore, it is shown that there are azimuthal projections with only one secant parallel that is not standard, with only one standard parallel which is also a secant one, with two different secant parallels, and with one standard and one secant parallel.
APA, Harvard, Vancouver, ISO, and other styles
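A brief reader's note on the distinction the abstract draws, using the usual normal-aspect azimuthal notation rather than anything taken from the paper itself: the parallel at latitude φ maps to a circle of radius ρ(φ) about the projection centre, so the linear scale along it is

n(φ) = ρ(φ) / (R cos φ).

A standard parallel φ0 is a metric notion, n(φ0) = 1, i.e. ρ(φ0) = R cos φ0, whereas a secant parallel is a geometric notion, the parallel along which the image plane intersects the sphere. As the abstract stresses, the two conditions need not single out the same parallel.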

Dissertations / Theses on the topic "Parallel"

1

Ferlin, Edson Pedro. "Avaliação de métodos de paralelização automática." Universidade de São Paulo, 1997. http://www.teses.usp.br/teses/disponiveis/76/76132/tde-09102008-111750/.

Full text
Abstract:
This work addresses some concepts and definitions of parallel processing as they apply to automatic parallelization, together with the analyses and conditions for data dependences, in order to apply the parallelization methods Hyperplane, Unimodular Transformation, Communication-Free Data Allocation, and Partitioning & Labeling and thereby transform a sequential program into its parallel equivalent. The resulting programs are run on a distributed-memory system communicating through MPI (Message-Passing Interface) message passing, and a number of metrics are collected to evaluate and compare the methods.
APA, Harvard, Vancouver, ISO, and other styles
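For readers unfamiliar with the target programming model, the kind of transformation the abstract describes, i.e., running the independent iterations of a sequential loop on a distributed-memory machine via MPI, looks roughly like the minimal sketch below. It is a generic illustration in Python with mpi4py (assumed installed), not code from the thesis, which evaluates the Hyperplane, Unimodular Transformation, Communication-Free Data Allocation and Partitioning & Labeling methods.

# Minimal sketch of a dependence-free loop distributed over MPI ranks.
# Generic illustration only -- not code from the thesis.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

N = 1_000_000

def f(i):
    # Stand-in for the loop body; it carries no loop-carried dependence,
    # so the iterations can be partitioned freely among processes.
    return i * i

# Block partitioning: each rank owns one contiguous chunk of the index space.
lo = rank * N // size
hi = (rank + 1) * N // size
local = [f(i) for i in range(lo, hi)]

# Message passing replaces shared memory: collect the partial results on rank 0.
chunks = comm.gather(local, root=0)
if rank == 0:
    result = [x for chunk in chunks for x in chunk]
    print(len(result))

Run with, e.g., mpirun -n 4 python parallel_loop.py (a hypothetical file name); the parallelization methods named in the abstract automate the choice of such partitionings from a data-dependence analysis of the sequential program.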
2

Oliver, William R. "The Matrix a metaphorical paralell [i.e. parallel] to language." View electronic thesis, 2008. http://dl.uncw.edu/etd/2008-3/oliverw/williamoliver.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Yousif, Hilal M. "Parallel algorithms for MIMD parallel computers." Thesis, Loughborough University, 1986. https://dspace.lboro.ac.uk/2134/15113.

Full text
Abstract:
This thesis mainly covers the design and analysis of asynchronous parallel algorithms that can be run on MIMD (Multiple Instruction Multiple Data) parallel computers, in particular the NEPTUNE system at Loughborough University. Initially the fundamentals of parallel computer architectures are introduced, with different parallel architectures being described and compared. The principles of parallel programming and the design of parallel algorithms are also outlined, as are the main characteristics of the 4-processor MIMD NEPTUNE system, and performance indicators, i.e. the speed-up and efficiency factors, are defined for the measurement of parallelism in a given system. Both numerical and non-numerical algorithms are covered in the thesis. In the numerical solution of partial differential equations, a new parallel 9-point block iterative method is developed. Here, the organization of the blocks is done in such a way that each process contains its own group of 9 points on the network, so that they can be run in parallel. The parallel implementations of both the 9-point and 4-point block iterative methods were programmed using natural and red-black ordering with synchronous and asynchronous approaches, and the results obtained for these different implementations were compared and analysed. Next, a parallel version of the A.G.E. (Alternating Group Explicit) method is developed, in which the explicit nature of the difference equation is revealed and exploited when applied to derive the solution of both linear and non-linear 2-point boundary value problems. Two strategies have been used in the implementation of the parallel A.G.E. method, using the synchronous and asynchronous approaches, and the results from these implementations were compared. For comparison, the results obtained from the parallel A.G.E. method were also compared with the corresponding results from the parallel versions of the Jacobi, Gauss-Seidel and S.O.R. methods. Finally, a computational complexity analysis of the parallel A.G.E. algorithms is included. In the area of non-numeric algorithms, the problems of sorting and searching were studied. The sorting methods investigated were the shell and digit sort methods. With each method, different parallel strategies and approaches were used and compared to find the best results that can be obtained on the parallel machine. Among the searching methods, the sequential search algorithm in an unordered table and the binary search algorithm were investigated and implemented in parallel, with a presentation of the results. Finally, a complexity analysis of these methods is presented. The thesis concludes with a chapter summarizing the main results.
APA, Harvard, Vancouver, ISO, and other styles
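The performance indicators named in the abstract have the standard definitions (generic definitions, not specific to the thesis): for p processors,

S_p = T_1 / T_p and E_p = S_p / p,

where T_1 is the run time of the sequential algorithm and T_p the run time on p processors. For example, a hypothetical run taking 100 s sequentially and 40 s on the 4 processors of a system such as NEPTUNE gives S_4 = 2.5 and E_4 = 0.625.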
4

Harrison, Ian. "Locality and parallel optimizations for parallel supercomputing." Diss., Connect to the thesis, 2003. http://hdl.handle.net/10066/1274.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Su, (Philip) Shin-Chen. "Parallel subdomain method for massively parallel computers." Diss., Georgia Institute of Technology, 1992. http://hdl.handle.net/1853/17376.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Gecgel, Murat. "Parallel, Navier." Master's thesis, METU, 2003. http://etd.lib.metu.edu.tr/upload/12604807/index.pdf.

Full text
Abstract:
The aim of this study is to extend a parallel Fortran90 code to compute three-dimensional laminar and turbulent flowfields over rotary wing configurations. The code employs finite volume discretization and the compact, four-step Runge-Kutta type time integration technique to solve the unsteady, thin-layer Navier-Stokes equations. The zero-order Baldwin-Lomax turbulence model is utilized to model the turbulence for the computation of turbulent flowfields. A fine, viscous, H-type structured grid is employed in the computations. To reduce the computational time and memory requirements, parallel processing with distributed memory is used. The data communication among the processors is executed by using the MPI (Message Passing Interface) communication libraries. Laminar and turbulent solutions around a two-bladed UH-1 helicopter rotor and a turbulent solution around a flat plate are obtained. For the rotary wing configurations, nonlifting and lifting rotor cases are handled separately for subsonic and transonic blade tip speeds. The results are, generally, in good agreement with the experimental data.
APA, Harvard, Vancouver, ISO, and other styles
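The "data communication among the processors ... using the MPI communication libraries" mentioned in the abstract typically boils down to exchanging ghost (halo) cells between neighbouring subdomains every time step. The following is a minimal, hypothetical sketch of that pattern in Python with mpi4py; the thesis itself uses a parallel Fortran90 flow solver, so this only illustrates the communication idea.

# Hypothetical 1-D halo exchange between neighbouring subdomains (mpi4py).
# Illustrates the communication pattern only -- not the thesis's Fortran90 code.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

n_local = 100                      # interior cells owned by this rank
u = np.zeros(n_local + 2)          # plus one ghost cell at each end
u[1:-1] = rank                     # dummy field data

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# Swap boundary cells with both neighbours; PROC_NULL turns the
# communications at the domain edges into no-ops.
comm.Sendrecv(sendbuf=u[1:2], dest=left, recvbuf=u[-1:], source=right)
comm.Sendrecv(sendbuf=u[-2:-1], dest=right, recvbuf=u[0:1], source=left)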
7

Windowmaker, Tricia. "Parallel adolescents." Honors in the Major Thesis, University of Central Florida, 2010. http://digital.library.ucf.edu/cdm/ref/collection/ETH/id/1525.

Full text
Abstract:
This item is only available in print in the UCF Libraries. If this is your Honors Thesis, you can help us make it available online for use by researchers around the world by following the instructions on the distribution consent form at http://library.ucf.edu/Systems/DigitalInitiatives/DigitalCollections/InternetDistributionConsentAgreementForm.pdf. You may also contact the project coordinator, Kerri Bottorff, at kerri.bottorff@ucf.edu for more information.
Bachelors
Arts and Humanities
English
APA, Harvard, Vancouver, ISO, and other styles
8

Hassel, Karen Louise. "Parallel memories." The Ohio State University, 1993. http://rave.ohiolink.edu/etdc/view?acc_num=osu1314801102.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Andersson, Håkan. "Parallel Simulation : Parallel computing for high performance LTE radio network simulations." Thesis, Mittuniversitetet, Institutionen för informationsteknologi och medier, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-12390.

Full text
Abstract:
Radio access technologies for cellular mobile networks are continuously being evolved to meet the future demands for higher data rates, and lower end‐to‐end delays. In the research and development of LTE, radio network simulations play an essential role. The evolution of parallel processing hardware makes it desirable to exploit the potential gains of parallelizing LTE radio network simulations using multithreading techniques in contrast to distributing experiments over processors as independent simulation job processes. There is a hypothesis that parallel speedup gain diminishes when running many parallel simulation jobs concurrently on the same machine due to the increased memory requirements. A proposed multithreaded prototype of the Ericsson LTE simulator has been constructed, encapsulating scheduling, execution and synchronization of asynchronous physical layer computations. In order to provide implementation transparency, an algorithm has been proposed to sort and synchronize log events enabling a sequential logging model on top of non‐deterministic execution. In order to evaluate and compare multithreading techniques to parallel simulation job distribution, a large number of experiments have been carried out for four very diverse simulation scenarios. The evaluation of the results from these experiments involved analysis of average measured execution times and comparison with ideal estimates derived from Amdahl’s law in order to analyze overhead. It has been shown that the proposed multithreaded task‐oriented framework provides a convenient way to execute LTE physical layer models asynchronously on multi‐core processors, still providing deterministic results that are equivalent to the results of a sequential simulator. However, it has been indicated that distributing parallel independent jobs over processors is currently more efficient than multithreading techniques, even though the achieved speedup is far from ideal. This conclusion is based on the observation that the overhead caused by increased memory requirements, memory access and system bus congestion is currently smaller than the thread management and synchronization overhead of the proposed multithreaded Java prototype.
APA, Harvard, Vancouver, ISO, and other styles
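For reference, the "ideal estimates derived from Amdahl's law" used in the evaluation follow the usual form (the standard formula, not a result of the thesis): if a fraction f of the work can be parallelized over p threads, the ideal speedup is

S(p) = 1 / ((1 - f) + f / p),

so, for instance, f = 0.9 caps the achievable speedup at 10 no matter how many cores are available. Measured execution times are compared against such ideal estimates to quantify the overhead of thread management, synchronization and memory contention.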
10

Dai, Jiehua. "Automatic Parallel Memory Address Generation for Parallel DSP Computing." Thesis, Linköping University, Department of Electrical Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-11110.

Full text
Abstract:

The concept of Parallel Vector (scratch pad) Memories (PVM) was introduced as one solution for parallel computing in DSP, providing parallel memory addressing efficiently and with minimum latency. Parallel programming is made more efficient by using the parallel addressing generator for the parallel vector memory (PVM) proposed in this thesis. However, without the complexities being hidden by a cache, the cost of programming is high. To minimize the programming cost, automatic parallel memory address generation is needed to hide the complexities of memory access.

This thesis investigates methods for implementing conflict-free vector addressing algorithms on a parallel hardware structure. In particular, vector addressing requirements extracted from the behaviour model are matched to a prepared parallel memory addressing template, in order to supply data in parallel from the main memory to the on-chip vector memory.

According to the template and the usage of the main and on-chip parallel vector memory, models for data pre-allocation and permutation in the scratch pad memories of an ASIP can be decided and configured. By exposing the parallel memory accesses of the source code, a memory access flow graph (MFG) is generated. The MFG is then combined with hardware information to match templates in the template library. When a template is matched, a suitable permutation equation is obtained, and a permutation table containing the target addresses for data pre-allocation and permutation is created. Thus it is possible to automatically generate memory addresses for parallel memory accesses.

A tool for achieving this goal, Permutator, has been created and implemented in C++ combined with XML. A memory access coding template is selected, and as a result the permutation formulas are specified. A PVM address table can then be generated for data pre-allocation, so that efficient parallel memory access is possible.

The results show that the memory access complexities are hidden by using Permutator, so that the programming cost is reduced. It works well in the context where each algorithm, with its related hardware information, corresponds to a template case, so that extra memory cost is eliminated.

APA, Harvard, Vancouver, ISO, and other styles
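The "conflict-free vector addressing" referred to in the abstract is commonly achieved by permuting (skewing) addresses across memory banks so that typical access patterns, such as a whole row or a whole column, never hit the same bank twice in one parallel access. The sketch below shows one generic skewing scheme for an 8 x 8 tile in Python; it is illustrative only, and its names are hypothetical rather than taken from the thesis's Permutator tool or its template library.

# Generic bank-skewing sketch for conflict-free parallel access to an
# N_BANKS x N_BANKS tile. Illustrative only -- not the Permutator tool.
N_BANKS = 8

def bank(row: int, col: int) -> int:
    # Skewed storage: shifting each row by its index spreads a column
    # access over all banks instead of hitting one bank N_BANKS times.
    return (row + col) % N_BANKS

def offset(row: int) -> int:
    # Address inside the selected bank; for a square tile the row index
    # serves as the in-bank offset, so (bank, offset) pairs stay unique.
    return row

# Both a row access and a column access now touch every bank exactly once:
print(sorted(bank(0, c) for c in range(N_BANKS)))   # row 0    -> [0, 1, ..., 7]
print(sorted(bank(r, 3) for r in range(N_BANKS)))   # column 3 -> [0, 1, ..., 7]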

Books on the topic "Parallel"

1

Gibbons, Alan. Efficient parallel algorithms. Cambridge [England]: Cambridge University Press, 1988.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Mussi-Dias, Vicente. Restinga Paralela = Parallel Restinga. Campos dos Goytacazes, Rio de Janeiro, Brasil: Maria das Graças Machado Freire, 2021.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Maund, Carol, Stella Rollig, Monika Oechsler, Site Gallery (Sheffield, England), and Lentos Kunstmuseum Linz, eds. Monika Oechsler: Parallel worlds = parallele Welten. Sheffield: Site Gallery, 2004.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Perrott, Ronald H. Parallel programming. Wokingham, England: Addison-Wesley Pub. Co., 1987.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Jamieson, Leah H., Dennis B. Gannon, and Robert J. Douglass, eds. The characteristics of parallel algorithms. Cambridge, Mass: MIT Press, 1987.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Byrne, John. Parallel. Milwaukie, OR: Dark Horse Comics, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Campbell, Colin. Parallel. Cambridge: Cambridge University Press, 2005.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Roosta, Seyed H. Parallel Processing and Parallel Algorithms. New York, NY: Springer New York, 2000. http://dx.doi.org/10.1007/978-1-4612-1220-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Lerman, Gil, and Larry Rudolph. Parallel Evolution of Parallel Processors. Boston, MA: Springer US, 1993. http://dx.doi.org/10.1007/978-1-4615-2856-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Iannucci, Robert A. Parallel Machines: Parallel Machine Languages. Boston, MA: Springer US, 1990. http://dx.doi.org/10.1007/978-1-4613-1543-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Parallel"

1

Weik, Martin H. "parallel." In Computer Science and Communications Dictionary, 1224. Boston, MA: Springer US, 2000. http://dx.doi.org/10.1007/1-4020-0613-6_13601.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Hossfeld, F. "Parallel Processes and Parallel Algorithms." In Complex Systems — Operational Approaches in Neurobiology, Physics, and Computers, 194–211. Berlin, Heidelberg: Springer Berlin Heidelberg, 1985. http://dx.doi.org/10.1007/978-3-642-70795-7_14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Lerman, Gil, and Larry Rudolph. "Introduction." In Parallel Evolution of Parallel Processors, 1–3. Boston, MA: Springer US, 1993. http://dx.doi.org/10.1007/978-1-4615-2856-2_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Lerman, Gil, and Larry Rudolph. "Classification of Parallel Processors." In Parallel Evolution of Parallel Processors, 5–19. Boston, MA: Springer US, 1993. http://dx.doi.org/10.1007/978-1-4615-2856-2_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Lerman, Gil, and Larry Rudolph. "Emergent Trends." In Parallel Evolution of Parallel Processors, 21–97. Boston, MA: Springer US, 1993. http://dx.doi.org/10.1007/978-1-4615-2856-2_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Lerman, Gil, and Larry Rudolph. "Popular Machine Models." In Parallel Evolution of Parallel Processors, 99–113. Boston, MA: Springer US, 1993. http://dx.doi.org/10.1007/978-1-4615-2856-2_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Lerman, Gil, and Larry Rudolph. "The Shape of Things to Come?" In Parallel Evolution of Parallel Processors, 115–22. Boston, MA: Springer US, 1993. http://dx.doi.org/10.1007/978-1-4615-2856-2_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Roosta, Seyed H. "Computer Architecture." In Parallel Processing and Parallel Algorithms, 1–56. New York, NY: Springer New York, 2000. http://dx.doi.org/10.1007/978-1-4612-1220-1_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Roosta, Seyed H. "Asynchronous Parallel Programming." In Parallel Processing and Parallel Algorithms, 439–76. New York, NY: Springer New York, 2000. http://dx.doi.org/10.1007/978-1-4612-1220-1_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Roosta, Seyed H. "Data Parallel Programming." In Parallel Processing and Parallel Algorithms, 477–99. New York, NY: Springer New York, 2000. http://dx.doi.org/10.1007/978-1-4612-1220-1_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Parallel"

1

Pettey, C. C., and M. R. Leuze. "Parallel placement of parallel processes." In the third conference. New York, New York, USA: ACM Press, 1988. http://dx.doi.org/10.1145/62297.62325.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Rubin, Robert, Larry Rudolph, and Dror Zernik. "Debugging parallel programs in parallel." In the 1988 ACM SIGPLAN and SIGOPS workshop. New York, New York, USA: ACM Press, 1988. http://dx.doi.org/10.1145/68210.69236.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Vandevoorde, M. T., and D. Kapur. "Parallel user interfaces for parallel applications." In Proceedings of 5th IEEE International Symposium on High Performance Distributed Computing. IEEE, 1996. http://dx.doi.org/10.1109/hpdc.1996.546185.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Gross, T., A. Sobel, and M. Zolg. "Parallel compilation for a parallel machine." In the ACM SIGPLAN 1989 Conference. New York, New York, USA: ACM Press, 1989. http://dx.doi.org/10.1145/73141.74826.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Schwartz-Narbonne, Daniel, Feng Liu, Tarun Pondicherry, David August, and Sharad Malik. "Parallel assertions for debugging parallel programs." In 2011 9th IEEE/ACM International Conference on Formal Methods and Models for Codesign (MEMOCODE 2011). IEEE, 2011. http://dx.doi.org/10.1109/memcod.2011.5970525.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Gao, Guang R., Thomas Sterling, Rick Stevens, Mark Hereld, and Weirong Zhu. "ParalleX: A Study of A New Parallel Computation Model." In 2007 IEEE International Parallel and Distributed Processing Symposium. IEEE, 2007. http://dx.doi.org/10.1109/ipdps.2007.370484.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Clapp, R. M., and T. N. Mudge. "Parallel language constructs for efficient parallel processing." In Proceedings of the Twenty-Fifth Hawaii International Conference on System Sciences. IEEE, 1992. http://dx.doi.org/10.1109/hicss.1992.183295.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Bailey, D. H., E. D. Brooks, D. W. Forslund, R. J. Harrison, D. Heller, T. Kraay, and G. Mountry. "Massively parallel vs. parallel vector supercomputers (Panel)." In the 1993 ACM/IEEE conference. New York, New York, USA: ACM Press, 1993. http://dx.doi.org/10.1145/169627.169858.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Pingali, Keshav. "Parallel program = operator + schedule + parallel data structure." In 2015 International Conference on Embedded Computer Systems: Architectures, Modeling, and Simulation (SAMOS). IEEE, 2015. http://dx.doi.org/10.1109/samos.2015.7363652.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Steele, G. L. "Parallel programming and parallel abstractions in Fortress." In 14th International Conference on Parallel Architectures and Compilation Techniques (PACT'05). IEEE, 2005. http://dx.doi.org/10.1109/pact.2005.34.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Parallel"

1

Sahni, Sartaj. Parallel Algorithms. Fort Belvoir, VA: Defense Technical Information Center, June 1999. http://dx.doi.org/10.21236/ada369856.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Chandy, K. M. Parallel Computation. Fort Belvoir, VA: Defense Technical Information Center, September 1994. http://dx.doi.org/10.21236/ada284831.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Minker, Jack. Parallel Problem Solving System, PRISM (Parallel Inference System). Fort Belvoir, VA: Defense Technical Information Center, April 1988. http://dx.doi.org/10.21236/ada204915.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Foulser, David E. Highly Parallel Iterative Methods for Massively Parallel Multiprocessors. Fort Belvoir, VA: Defense Technical Information Center, February 1989. http://dx.doi.org/10.21236/ada206305.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Neifeld, Mark A. Parallel Error Coding Decoding for Highly Parallel Memories. Fort Belvoir, VA: Defense Technical Information Center, August 1997. http://dx.doi.org/10.21236/ada329704.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Fan, W. C., and J. A. Halbleib Sr. Parallel processing ITS. Office of Scientific and Technical Information (OSTI), September 1996. http://dx.doi.org/10.2172/402520.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Phillips, Cynthia A. Parallel Graph Contraction. Fort Belvoir, VA: Defense Technical Information Center, May 1989. http://dx.doi.org/10.21236/ada211916.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Oldehoeft, Rodney R. Parallel Functional Computation. Fort Belvoir, VA: Defense Technical Information Center, November 1989. http://dx.doi.org/10.21236/ada214627.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Stewart, G. W., and Dianne P. O'Leary. Parallel Matrix Computations. Fort Belvoir, VA: Defense Technical Information Center, March 1988. http://dx.doi.org/10.21236/ada196246.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Ribbens, C., and L. Watson. Parallel mathematical software. Office of Scientific and Technical Information (OSTI), October 1989. http://dx.doi.org/10.2172/5587283.

Full text
APA, Harvard, Vancouver, ISO, and other styles