Academic literature on the topic 'Parallel'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Parallel.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Parallel"
Kloppenborg, John. "Gospel Parallels/Parallel Gospels." Biblical Theology Bulletin: Journal of Bible and Culture 44, no. 3 (July 22, 2014): 156–61. http://dx.doi.org/10.1177/0146107914540490.
Failinger, Marie. "Parallel Justice: Creating Causes of Action for Mandatory Mediation." University of Michigan Journal of Law Reform, no. 47.2 (2014): 359. http://dx.doi.org/10.36646/mjlr.47.2.parallel.
Lapaine, Miljenko. "Parallels in Cartography: Standard, Equidistantly Mapped and True Length Parallels." Geographies 4, no. 1 (January 10, 2024): 52–65. http://dx.doi.org/10.3390/geographies4010004.
Cho, Tae Won, S. S. Pyo, and J. R. Heath. "PARALLEX: a parallel approach to switchbox routing." IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 13, no. 6 (June 1994): 684–93. http://dx.doi.org/10.1109/43.285241.
Dempsey, P. "Parallel worlds [parallel programming]." Engineering & Technology 3, no. 12 (July 5, 2008): 44–45. http://dx.doi.org/10.1049/et:20081208.
Mitchell, Lisa. "Parallel Languages, Parallel Cultures." Indian Economic & Social History Review 42, no. 4 (December 2005): 445–67. http://dx.doi.org/10.1177/001946460504200402.
Wilson, D. R. "Parallel processing—parallel thinking." Electronic Systems News 1987, no. 1 (1987): 35. http://dx.doi.org/10.1049/esn.1987.0014.
Merriam, T. "Parallel Nephews, Parallel Uncles." Notes and Queries 54, no. 3 (September 1, 2007): 272–74. http://dx.doi.org/10.1093/notesj/gjm134.
Thilagavathi, M. "Enhancing Blockchain Performance Using Parallel Merkle Root and Parallel Proof of Work." Journal of Advanced Research in Dynamical and Control Systems 12, no. 01-Special Issue (February 13, 2020): 609–20. http://dx.doi.org/10.5373/jardcs/v12sp1/20201110.
Lapaine, M. "Secant Parallels in Azimuthal Projections." Geodesy and Cartography 946, no. 4 (May 20, 2019): 39–54. http://dx.doi.org/10.22389/0016-7126-2019-946-4-39-54.
Full textDissertations / Theses on the topic "Parallel"
Ferlin, Edson Pedro. "Avaliação de métodos de paralelização automática [Evaluation of automatic parallelization methods]." Universidade de São Paulo, 1997. http://www.teses.usp.br/teses/disponiveis/76/76132/tde-09102008-111750/.
This work reviews concepts and definitions of parallel processing that apply to automatic parallelization, together with the analysis of data-dependence conditions, and then applies four parallelization methods: Hyperplane, Unimodular Transformation, Communication-Free Data Allocation, and Partitioning & Labeling. In this way, a sequential program is transformed into an equivalent parallel one. The resulting programs are run on a distributed-memory system communicating through MPI (Message-Passing Interface) message passing, and measurements are obtained to evaluate and compare the methods.
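The abstract above evaluates the parallelized programs on a distributed-memory system with MPI message passing. The following minimal C sketch is only an illustration of that execution model, not the thesis code or one of its parallelization methods: a loop's iterations are distributed cyclically over MPI ranks and the partial results are combined on rank 0.

/* Minimal MPI sketch (illustrative only): a loop distributed over ranks,
 * with the partial results combined on rank 0 by a reduction. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n = 1000;                  /* hypothetical problem size */
    long local = 0, total = 0;
    for (int i = rank; i < n; i += size) /* cyclic distribution of iterations */
        local += i;

    MPI_Reduce(&local, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("sum of 0..%d = %ld\n", n - 1, total);

    MPI_Finalize();
    return 0;
}

Compiled with mpicc and launched with mpirun, each process computes its share of the loop independently; the only communication is the final reduction, which is the pattern a message-passing evaluation of a parallelized loop relies on.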
Oliver, William R. "The Matrix: a metaphorical paralell [i.e. parallel] to language." View electronic thesis, 2008. http://dl.uncw.edu/etd/2008-3/oliverw/williamoliver.pdf.
Yousif, Hilal M. "Parallel algorithms for MIMD parallel computers." Thesis, Loughborough University, 1986. https://dspace.lboro.ac.uk/2134/15113.
Harrison, Ian. "Locality and parallel optimizations for parallel supercomputing." Diss., 2003. http://hdl.handle.net/10066/1274.
Su, (Philip) Shin-Chen. "Parallel subdomain method for massively parallel computers." Diss., Georgia Institute of Technology, 1992. http://hdl.handle.net/1853/17376.
Full textGecgel, Murat. "Parallel, Navier." Master's thesis, METU, 2003. http://etd.lib.metu.edu.tr/upload/12604807/index.pdf.
Full textdimensional laminar and turbulent flowfields over rotary wing configurations. The code employs finite volume discretization and the compact, four step Runge-Kutta type time integration technique to solve unsteady, thin&ndash
layer Navier&ndash
Stokes equations. Zero&ndash
order Baldwin&ndash
Lomax turbulence model is utilized to model the turbulence for the computation of turbulent flowfields. A fine, viscous, H type structured grid is employed in the computations. To reduce the computational time and memory requirements parallel processing with distributed memory is used. The data communication among the processors is executed by using the MPI ( Message Passing Interface ) communication libraries. Laminar and turbulent solutions around a two bladed UH &ndash
1 helicopter rotor and turbulent solution around a flat plate is obtained. For the rotary wing configurations, nonlifting and lifting rotor cases are handled seperately for subsonic and transonic blade tip speeds. The results are, generally, in good agreement with the experimental data.
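The "four-step Runge-Kutta type time integration" mentioned in the abstract is, in finite-volume flow solvers of this kind, usually a low-storage multistage update of the semi-discrete equations dw/dt = -R(w). One common form, written here with the standard Jameson-type stage coefficients as an assumption (the thesis may use different values), is:

\begin{aligned}
w^{(0)} &= w^{\,n},\\
w^{(k)} &= w^{(0)} - \alpha_k\,\Delta t\,R\!\left(w^{(k-1)}\right), \qquad k = 1,\dots,4,\\
w^{\,n+1} &= w^{(4)}, \qquad \alpha_1 = \tfrac14,\ \alpha_2 = \tfrac13,\ \alpha_3 = \tfrac12,\ \alpha_4 = 1,
\end{aligned}

where w is the vector of conserved variables in a cell and R(w) is the finite-volume residual assembled from the thin-layer Navier–Stokes fluxes.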
Windowmaker, Tricia. "Parallel adolescents." Honors in the Major Thesis, University of Central Florida, 2010. http://digital.library.ucf.edu/cdm/ref/collection/ETH/id/1525.
Bachelor's thesis, Arts and Humanities, English.
Hassel, Karen Louise. "Parallel memories." The Ohio State University, 1993. http://rave.ohiolink.edu/etdc/view?acc_num=osu1314801102.
Full textAndersson, Håkan. "Parallel Simulation : Parallel computing for high performance LTE radio network simulations." Thesis, Mittuniversitetet, Institutionen för informationsteknologi och medier, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-12390.
Full textDai, Jiehua. "Automatic Parallel Memory Address Generation for Parallel DSP Computing." Thesis, Linköping University, Department of Electrical Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-11110.
The concept of Parallel Vector (scratch pad) Memories (PVM) was introduced as one solution for parallel computing in DSP, since it can provide parallel memory addressing efficiently with minimum latency. Parallel programming becomes more efficient with the parallel addressing generator for parallel vector memory (PVM) proposed in this thesis. However, without a cache to hide these complexities, the cost of programming is high. To minimize the programming cost, automatic parallel memory address generation is needed to hide the complexities of memory access.
This thesis investigates methods for implementing conflict-free vector addressing algorithms on a parallel hardware structure. In particular, vector addressing requirements extracted from the behaviour model are matched to a prepared parallel memory addressing template, in order to supply data in parallel from the main memory to the on-chip vector memory.
According to the template and the usage of the main and on-chip parallel vector memory, models for data pre-allocation and permutation in the scratch pad memories of an ASIP can be decided and configured. By exposing the parallel memory accesses of the source code, a memory access flow graph (MFG) is generated. The MFG is then combined with hardware information to match templates in the template library. When a template is matched, the appropriate permutation equation is obtained, and a permutation table containing the target addresses for data pre-allocation and permutation is created. Thus it is possible to automatically generate memory addresses for parallel memory accesses.
A tool for achieving this goal, Permutator, was created; it is implemented in C++ combined with XML. A memory access coding template is selected, from which the permutation formulas are specified. The PVM address table can then be generated to perform the data pre-allocation, so that efficient parallel memory access is possible.
The results show that the memory access complexities are hidden by using Permutator, so that the programming cost is reduced. It works well when each algorithm, with its related hardware information, corresponds to a template case, so that extra memory cost is eliminated.
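The conflict-free parallel memory addressing that the thesis automates relies on permuting (skewing) data across memory banks so that the elements needed in one cycle come from distinct banks. The C sketch below shows only the generic bank-skewing idea under assumed values of BANKS and N; it is not the Permutator tool, the PVM hardware, or the thesis's permutation tables.

/* Generic bank-skewing sketch (illustrative only, not the thesis's Permutator).
 * Element (row, col) of an N x N matrix is assigned to bank (row + col) % BANKS,
 * so BANKS consecutive elements of any row or any column land in distinct banks
 * and can be fetched in a single parallel access instead of serializing. */
#include <stdio.h>

enum { BANKS = 4, N = 8 };   /* assumed example sizes */

static int bank_of(int row, int col) { return (row + col) % BANKS; }

int main(void) {
    printf("banks hit by row 2, cols 0..%d: ", BANKS - 1);
    for (int col = 0; col < BANKS; ++col)
        printf("%d ", bank_of(2, col));
    printf("\nbanks hit by col 3, rows 0..%d: ", BANKS - 1);
    for (int row = 0; row < BANKS; ++row)
        printf("%d ", bank_of(row, 3));
    printf("\n");             /* both access patterns touch each bank exactly once */
    return 0;
}

In the thesis, selecting such a permutation and emitting the corresponding address tables is what the matched template and the generated permutation table automate.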
Books on the topic "Parallel"
Gibbons, Alan. Efficient parallel algorithms. Cambridge [England]: Cambridge University Press, 1988.
Mussi-Dias, Vicente. Restinga Paralela = Parallel Restinga. Campos dos Goytacazes, Rio de Janeiro, Brasil: Maria das Graças Machado Freire, 2021.
Maund, Carol, Stella Rollig, Monika Oechsler, Site Gallery (Sheffield, England), and Lentos Kunstmuseum Linz, eds. Monika Oechsler: Parallel worlds = parallele Welten. Sheffield: Site Gallery, 2004.
Perrott, Ronald H. Parallel programming. Wokingham, England: Addison-Wesley Pub. Co., 1987.
Jamieson, Leah H., Dennis B. Gannon, and Robert J. Douglass, eds. The characteristics of parallel algorithms. Cambridge, Mass: MIT Press, 1987.
Byrne, John. Parallel. Milwaukie, OR: Dark Horse Comics, 1994.
Campbell, Colin. Parallel. Cambridge: Cambridge University Press, 2005.
Roosta, Seyed H. Parallel Processing and Parallel Algorithms. New York, NY: Springer New York, 2000. http://dx.doi.org/10.1007/978-1-4612-1220-1.
Lerman, Gil, and Larry Rudolph. Parallel Evolution of Parallel Processors. Boston, MA: Springer US, 1993. http://dx.doi.org/10.1007/978-1-4615-2856-2.
Iannucci, Robert A. Parallel Machines: Parallel Machine Languages. Boston, MA: Springer US, 1990. http://dx.doi.org/10.1007/978-1-4613-1543-8.
Full textBook chapters on the topic "Parallel"
Weik, Martin H. "parallel." In Computer Science and Communications Dictionary, 1224. Boston, MA: Springer US, 2000. http://dx.doi.org/10.1007/1-4020-0613-6_13601.
Hossfeld, F. "Parallel Processes and Parallel Algorithms." In Complex Systems — Operational Approaches in Neurobiology, Physics, and Computers, 194–211. Berlin, Heidelberg: Springer Berlin Heidelberg, 1985. http://dx.doi.org/10.1007/978-3-642-70795-7_14.
Lerman, Gil, and Larry Rudolph. "Introduction." In Parallel Evolution of Parallel Processors, 1–3. Boston, MA: Springer US, 1993. http://dx.doi.org/10.1007/978-1-4615-2856-2_1.
Lerman, Gil, and Larry Rudolph. "Classification of Parallel Processors." In Parallel Evolution of Parallel Processors, 5–19. Boston, MA: Springer US, 1993. http://dx.doi.org/10.1007/978-1-4615-2856-2_2.
Lerman, Gil, and Larry Rudolph. "Emergent Trends." In Parallel Evolution of Parallel Processors, 21–97. Boston, MA: Springer US, 1993. http://dx.doi.org/10.1007/978-1-4615-2856-2_3.
Lerman, Gil, and Larry Rudolph. "Popular Machine Models." In Parallel Evolution of Parallel Processors, 99–113. Boston, MA: Springer US, 1993. http://dx.doi.org/10.1007/978-1-4615-2856-2_4.
Lerman, Gil, and Larry Rudolph. "The Shape of Things to Come?" In Parallel Evolution of Parallel Processors, 115–22. Boston, MA: Springer US, 1993. http://dx.doi.org/10.1007/978-1-4615-2856-2_5.
Roosta, Seyed H. "Computer Architecture." In Parallel Processing and Parallel Algorithms, 1–56. New York, NY: Springer New York, 2000. http://dx.doi.org/10.1007/978-1-4612-1220-1_1.
Roosta, Seyed H. "Asynchronous Parallel Programming." In Parallel Processing and Parallel Algorithms, 439–76. New York, NY: Springer New York, 2000. http://dx.doi.org/10.1007/978-1-4612-1220-1_10.
Roosta, Seyed H. "Data Parallel Programming." In Parallel Processing and Parallel Algorithms, 477–99. New York, NY: Springer New York, 2000. http://dx.doi.org/10.1007/978-1-4612-1220-1_11.
Full textConference papers on the topic "Parallel"
Pettey, C. C., and M. R. Leuze. "Parallel placement of parallel processes." In the third conference. New York, New York, USA: ACM Press, 1988. http://dx.doi.org/10.1145/62297.62325.
Rubin, Robert, Larry Rudolph, and Dror Zernik. "Debugging parallel programs in parallel." In the 1988 ACM SIGPLAN and SIGOPS workshop. New York, New York, USA: ACM Press, 1988. http://dx.doi.org/10.1145/68210.69236.
Vandevoorde, M. T., and D. Kapur. "Parallel user interfaces for parallel applications." In Proceedings of 5th IEEE International Symposium on High Performance Distributed Computing. IEEE, 1996. http://dx.doi.org/10.1109/hpdc.1996.546185.
Gross, T., A. Sobel, and M. Zolg. "Parallel compilation for a parallel machine." In the ACM SIGPLAN 1989 Conference. New York, New York, USA: ACM Press, 1989. http://dx.doi.org/10.1145/73141.74826.
Schwartz-Narbonne, Daniel, Feng Liu, Tarun Pondicherry, David August, and Sharad Malik. "Parallel assertions for debugging parallel programs." In 2011 9th IEEE/ACM International Conference on Formal Methods and Models for Codesign (MEMOCODE 2011). IEEE, 2011. http://dx.doi.org/10.1109/memcod.2011.5970525.
Gao, Guang R., Thomas Sterling, Rick Stevens, Mark Hereld, and Weirong Zhu. "ParalleX: A Study of A New Parallel Computation Model." In 2007 IEEE International Parallel and Distributed Processing Symposium. IEEE, 2007. http://dx.doi.org/10.1109/ipdps.2007.370484.
Clapp, R. M., and T. N. Mudge. "Parallel language constructs for efficient parallel processing." In Proceedings of the Twenty-Fifth Hawaii International Conference on System Sciences. IEEE, 1992. http://dx.doi.org/10.1109/hicss.1992.183295.
Bailey, D. H., E. D. Brooks, D. W. Forslund, R. J. Harrison, D. Heller, T. Kraay, and G. Mountry. "Massively parallel vs. parallel vector supercomputers (Panel)." In the 1993 ACM/IEEE conference. New York, New York, USA: ACM Press, 1993. http://dx.doi.org/10.1145/169627.169858.
Pingali, Keshav. "Parallel program = operator + schedule + parallel data structure." In 2015 International Conference on Embedded Computer Systems: Architectures, Modeling, and Simulation (SAMOS). IEEE, 2015. http://dx.doi.org/10.1109/samos.2015.7363652.
Steele, G. L. "Parallel programming and parallel abstractions in Fortress." In 14th International Conference on Parallel Architectures and Compilation Techniques (PACT'05). IEEE, 2005. http://dx.doi.org/10.1109/pact.2005.34.
Full textReports on the topic "Parallel"
Sahni, Sartaj. Parallel Algorithms. Fort Belvoir, VA: Defense Technical Information Center, June 1999. http://dx.doi.org/10.21236/ada369856.
Chandy, K. M. Parallel Computation. Fort Belvoir, VA: Defense Technical Information Center, September 1994. http://dx.doi.org/10.21236/ada284831.
Minker, Jack. Parallel Problem Solving System, PRISM (Parallel Inference System). Fort Belvoir, VA: Defense Technical Information Center, April 1988. http://dx.doi.org/10.21236/ada204915.
Foulser, David E. Highly Parallel Iterative Methods for Massively Parallel Multiprocessors. Fort Belvoir, VA: Defense Technical Information Center, February 1989. http://dx.doi.org/10.21236/ada206305.
Neifeld, Mark A. Parallel Error Coding Decoding for Highly Parallel Memories. Fort Belvoir, VA: Defense Technical Information Center, August 1997. http://dx.doi.org/10.21236/ada329704.
Fan, W. C., and J. A. Halbleib Sr. Parallel processing ITS. Office of Scientific and Technical Information (OSTI), September 1996. http://dx.doi.org/10.2172/402520.
Phillips, Cynthia A. Parallel Graph Contraction. Fort Belvoir, VA: Defense Technical Information Center, May 1989. http://dx.doi.org/10.21236/ada211916.
Oldehoeft, Rodney R. Parallel Functional Computation. Fort Belvoir, VA: Defense Technical Information Center, November 1989. http://dx.doi.org/10.21236/ada214627.
Stewart, G. W., and Dianne P. O'Leary. Parallel Matrix Computations. Fort Belvoir, VA: Defense Technical Information Center, March 1988. http://dx.doi.org/10.21236/ada196246.
Ribbens, C., and L. Watson. Parallel mathematical software. Office of Scientific and Technical Information (OSTI), October 1989. http://dx.doi.org/10.2172/5587283.