Dissertations / Theses on the topic 'Parallel computers'
Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Parallel computers.'
Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Yousif, Hilal M. "Parallel algorithms for MIMD parallel computers." Thesis, Loughborough University, 1986. https://dspace.lboro.ac.uk/2134/15113.
Su, (Philip) Shin-Chen. "Parallel subdomain method for massively parallel computers." Diss., Georgia Institute of Technology, 1992. http://hdl.handle.net/1853/17376.
Miller, R. Quentin. "Programming bulk-synchronous parallel computers." Thesis, University of Oxford, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.318894.
Kalaiselvi, S. "Checkpointing Algorithms for Parallel Computers." Thesis, Indian Institute of Science, 1997. https://etd.iisc.ac.in/handle/2005/3908.
Kalaiselvi, S. "Checkpointing Algorithms for Parallel Computers." Thesis, Indian Institute of Science, 1997. http://hdl.handle.net/2005/67.
Harrison, Ian. "Locality and parallel optimizations for parallel supercomputing." Diss., 2003. http://hdl.handle.net/10066/1274.
練偉森 and Wai-sum Lin. "Adaptive parallel rendering." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1999. http://hub.hku.hk/bib/B31221415.
Lin, Wai-sum. "Adaptive parallel rendering /." Hong Kong : University of Hong Kong, 1999. http://sunzi.lib.hku.hk/hkuto/record.jsp?B20868236.
Sundar, N. S. "Data access optimizations for parallel computers /." The Ohio State University, 1998. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487950658548697.
Katiker, Rushikesh. "Automatic generation of dynamic parallel architectures." 2007. http://proquest.umi.com/pqdweb?did=1475182071&sid=1&Fmt=2&clientId=3260&RQT=309&VName=PQD.
Elabed, Jamal. "Implementing parallel sorting algorithms." Virtual Press, 1989. http://liblink.bsu.edu/uhtbin/catkey/543997.
Full textDepartment of Computer Science
Hybinette, Maria. "Interactive parallel simulation environments." Diss., Georgia Institute of Technology, 2000. http://hdl.handle.net/1853/9174.
Izadi, Mohammad. "Hierarchical Matrix Techniques on Massively Parallel Computers." Doctoral thesis, Universitätsbibliothek Leipzig, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-101164.
Pan, Yinfei. "Parallel XML parsing." Diss., Online access via UMI, 2009.
Velusamy, Vijay. "Adapting Remote Direct Memory Access based file system to parallel Input-/Output." Master's thesis, Mississippi State : Mississippi State University, 2003. http://library.msstate.edu/etd/show.asp?etd=etd-11112003-092209.
邱祖淇 and Cho-ki Joe Yau. "Efficient solutions for the load distribution problem." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1999. http://hub.hku.hk/bib/B31222031.
Yau, Cho-ki Joe. "Efficient solutions for the load distribution problem /." Hong Kong : University of Hong Kong, 1999. http://sunzi.lib.hku.hk/hkuto/record.jsp?B20971953.
Farreras Esclusa, Montse. "Optimizing programming models for massively parallel computers." Doctoral thesis, Universitat Politècnica de Catalunya, 2008. http://hdl.handle.net/10803/31776.
Wang, Diangin. "Solving the algebraic eigenproblem on parallel computers." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/NQ31909.pdf.
Davis, Martin H. Jr. "Optical waveguides in general purpose parallel computers." Diss., Georgia Institute of Technology, 1992. http://hdl.handle.net/1853/8153.
Wilson, Gregory V. "Structuring and supporting programs on parallel computers." Thesis, University of Edinburgh, 1992. http://hdl.handle.net/1842/12151.
Booth, Stephen Peter. "Application of parallel computers to particle physics." Thesis, University of Edinburgh, 1992. http://hdl.handle.net/1842/15213.
Nordström, Tomas. "Highly parallel computers for artificial neural networks." Doctoral thesis, Luleå tekniska universitet, 1995. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-25655.
Full textGodkänd; 1995; 20070426 (ysko)
Levin, Matthew D. "Parallel algorithms for SIMD and MIMD computers." Thesis, Loughborough University, 1990. https://dspace.lboro.ac.uk/2134/32962.
Bhalerao, Rohit Dinesh. "Parallel XML parsing." Diss., Online access via UMI, 2007.
Palmer, Joseph McRae. "The Hybrid Architecture Parallel Fast Fourier Transform (HAPFFT) /." Diss., 2005. http://contentdm.lib.byu.edu/ETD/image/etd855.pdf.
Byrd, Jonathan Michael Robert. "Parallel Markov Chain Monte Carlo." Thesis, University of Warwick, 2010. http://wrap.warwick.ac.uk/3634/.
Perks, Oliver F. J. "Addressing parallel application memory consumption." Thesis, University of Warwick, 2013. http://wrap.warwick.ac.uk/58493/.
Francis, Nicholas David. "Parallel architectures for image analysis." Thesis, University of Warwick, 1991. http://wrap.warwick.ac.uk/108844/.
Lewis, E. Christopher. "Achieving robust performance in parallel programming languages /." Thesis, UW restricted, 2001. http://hdl.handle.net/1773/6996.
Kondo, Boubacar. "An investigation of parallel algorithms developed for graph problems and their implementation on parallel computers." Virtual Press, 1991. http://liblink.bsu.edu/uhtbin/catkey/770951.
Aljabri, Malak Saleh. "GUMSMP : a scalable parallel Haskell implementation." Thesis, University of Glasgow, 2015. http://theses.gla.ac.uk/6822/.
Grove, Duncan A. "Performance modelling of message-passing parallel programs." Title page, contents and abstract only, 2003. http://web4.library.adelaide.edu.au/theses/09PH/09phg8832.pdf.
Alahmadi, Marwan Ibrahim. "Optimizing data parallelism in applicative languages." Diss., Georgia Institute of Technology, 1990. http://hdl.handle.net/1853/8457.
Salam, Mohammed Abdul. "A study of hypercube graph and its application to parallel computing." Virtual Press, 1991. http://liblink.bsu.edu/uhtbin/catkey/774739.
Full textDepartment of Computer Science
Lehmann, Uwe. "Schedules for Dynamic Bidirectional Simulations on Parallel Computers." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2003. http://nbn-resolving.de/urn:nbn:de:swb:14-1054281056187-31742.
The reversal of a program evaluation can be used for computing adjoints, for debugging, and for similar applications. The simplest approach, recording a complete trace of the forward computation and then reading it backwards, requires an enormous amount of memory. Alternatively, the trace can be generated piecewise by restarting the program evaluation from suitably chosen checkpoints. This thesis extends the theory of optimal parallel reversal schedules. First, adaptive parallel reversal schedules are constructed: an algorithm is described that, by using multiple processes, distributes checkpoints so that the reversal of the program can begin at any time without loss of time, while the number of checkpoints and processes used stays within the known optimality bounds. Second, an algorithm was developed for the adaptive parallel reversal schedules that allows the original program evaluation to be restarted from the ongoing program reversal. This restart can again occur at any time without loss of time, and the resulting checkpoint distributions again satisfy both the optimality and the adaptivity criteria. In summary, this thesis constructs schedules that enable bidirectional simulations.
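The checkpointing idea described in this abstract, trading recomputation for memory when a computation must be read backwards, can be sketched in a few lines. The uniform checkpoint spacing below is a deliberate simplification (the thesis constructs optimal, adaptive, parallel schedules), and all names are illustrative:

```python
def reverse_with_checkpoints(step, x0, T, interval):
    """Yield the states x_T, x_{T-1}, ..., x_0 of an iteration
    x_{t+1} = step(x_t) without storing all T+1 states.

    Only every `interval`-th state is kept as a checkpoint; states
    in between are recomputed from the nearest earlier checkpoint.
    """
    # Forward sweep: record sparse checkpoints.
    checkpoints = {}
    x = x0
    for t in range(T + 1):
        if t % interval == 0:
            checkpoints[t] = x
        if t < T:
            x = step(x)

    # Reverse sweep: rebuild each state from its checkpoint.
    for t in range(T, -1, -1):
        base = (t // interval) * interval
        x = checkpoints[base]
        for _ in range(t - base):
            x = step(x)
        yield x
```

With `interval` around the square root of `T`, both memory and recomputation grow only with the square root of the trace length, which is the basic trade-off that optimal reversal schedules refine.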
Goehlich, Ralph Dietmar. "Design of finite element systems for parallel computers." Diss., Georgia Institute of Technology, 1989. http://hdl.handle.net/1853/16817.
Andersen, Johannes Harder. "Linear programming methods for fine grain parallel computers." Thesis, Brunel University, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.241255.
Zhang, Zhao. "Enabling efficient parallel scripting on large-scale computers." Thesis, The University of Chicago, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=3627911.
Many-task computing (MTC) applications assemble existing sequential (or parallel) programs, using POSIX files for intermediate data. The parallelism of such applications often comes from data parallelism. MTC applications can be grouped into stages, and dependencies between tasks in different stages can take the form of file production and consumption. A computation stage can contain a large number of tasks, and thus generate a large amount of highly concurrent I/O traffic (both metadata operations and data transfers). Some MTC applications are iterative, where the computation iterates over a dataset and exits when some condition is reached. Some are interactive, where the application requires human action between computation stages.
In this dissertation we develop a complete parallel scripting framework called AMFORA, which has a shared in-RAM file system and task execution engine. It implements the multi-read single-write consistency model, preserves the POSIX interface for original applications, and provides an interface for collective data movement and functional data transformation. It is interoperable with many existing serial scripting languages (e.g., Bash, Python). AMFORA runs on thousands of compute nodes on an IBM BG/P supercomputer. It also runs on cloud environments such as Amazon EC2 and Google Compute Engine. To understand the baseline MTC application performance on large-scale computers, we define MTC Envelope, which is a file system benchmark to measure the capacity of a given software/hardware stack in the context of MTC applications.
The main contributions of this dissertation are: A system independent approach to profile and understand the concurrency of MTC applications' I/O behavior; A benchmark definition that measures the file system's capacity for MTC applications; A theoretical model to estimate the I/O overhead of MTC applications on large-scale computers; A scalable distributed file system design, with no centralized component, that achieves good scalability; A collective file system management toolkit to enable fast data movement; A functional file system management toolkit to enable fast file content transformation; A new parallel scripting programming model that extends a scripting language (e.g., Bash); A novel file system access interface design that combines both POSIX and non-POSIX interfaces to ease programming without loss of efficiency; An automated method for identifying data flow patterns that are amenable to collective optimizations at runtime; The open source implementation of the entire framework to enable MTC applications on large-scale computers. (Abstract shortened by UMI.)
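The staged, file-coupled structure this abstract describes can be illustrated with a toy stage runner. This is not AMFORA's actual API; `run_stage` and its parameters are invented for illustration, and real MTC tasks would typically be external programs rather than Python callables:

```python
import os
from concurrent.futures import ThreadPoolExecutor

def run_stage(task, inputs, outdir):
    """Run one MTC stage: apply `task` to each input file in
    parallel, writing one output file per task (data parallelism).

    The output files are the only coupling to the next stage, i.e.
    the file-production/consumption dependency between stages.
    """
    def one(path):
        out = os.path.join(outdir, os.path.basename(path) + ".out")
        with open(path) as f, open(out, "w") as g:
            g.write(task(f.read()))
        return out

    # Each task is independent, so the whole stage can run concurrently.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(one, inputs))
```

Chaining calls to such a runner, where one stage's output paths become the next stage's inputs, reproduces the dependency pattern that a parallel scripting framework manages automatically, along with the highly concurrent metadata and data traffic the abstract highlights.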
Goldberg, Andrew Vladislav. "Efficient graph algorithms for sequential and parallel computers." Thesis, Massachusetts Institute of Technology, 1987. http://hdl.handle.net/1721.1/14912.
Full textMICROFICHE COPY AVAILABLE IN ARCHIVES AND ENGINEERING.
Bibliography: p. 117-123.
by Andrew Vladislav Goldberg.
Ph.D.
Baker, James McCall Jr. "Run-time systems for fine-grain message-passing parallel computers." Diss., Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/15366.
Das, Samir Ranjan. "Performance issues in time warp parallel simulations." Diss., Georgia Institute of Technology, 1994. http://hdl.handle.net/1853/8152.
Wai, Siu-kit, and 衛兆傑. "Virtual links for multicomputers." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1996. http://hub.hku.hk/bib/B18038050.
Full textabstract
toc
Computer Science
Master
Master of Philosophy
Wai, Siu-kit. "Virtual links for multicomputers /." Hong Kong : University of Hong Kong, 1996. http://sunzi.lib.hku.hk/hkuto/record.jsp?B18038050.
Rajaram, Kumaran. "Principal design criteria influencing the performance of a portable, high performance parallel I/O implementation." Master's thesis, Mississippi State : Mississippi State University, 2002. http://library.msstate.edu/etd/show.asp?etd=etd-04052002-105711.
Huang, Chun-Hsi. "Communication-efficient bulk synchronous parallel algorithms." Buffalo, N.Y. : Dept. of Computer Science, State University of New York at Buffalo, 2001. http://www.cse.buffalo.edu/tech%2Dreports/2001%2D06.ps.Z.
Choi, Jong-Deok. "Parallel program debugging with flowback analysis." Madison, Wis. : University of Wisconsin-Madison, Computer Sciences Dept, 1989. http://catalog.hathitrust.org/api/volumes/oclc/20839575.html.
Hopper, Michael A. "A compiler framework for multithreaded parallel systems." Diss., Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/15638.
Bissland, Lesley. "Hardware and software aspects of parallel computing." Thesis, University of Glasgow, 1996. http://theses.gla.ac.uk/3953/.
Child, Christopher H. T. "Approximate dynamic programming with parallel stochastic planning operators." Thesis, City University London, 2011. http://openaccess.city.ac.uk/1109/.