Dissertations / Theses on the topic 'Memory management (Computer science)'

Consult the top 50 dissertations / theses for your research on the topic 'Memory management (Computer science).'


1

Wilhelmsson, Jesper. "Efficient memory management for message-passing concurrency." Licentiate thesis, Uppsala University, Dept. of Information Technology, 2005. http://www.it.uu.se/research/reports/lic/2005-001/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Zhang, Yang. "Dynamic Memory Management for the Loci Framework." MSSTATE, 2004. http://sun.library.msstate.edu/ETD-db/theses/available/etd-04062004-215627/.

Full text
Abstract:
Resource management is a critical part of high-performance computing software. While management of processing resources to increase performance is the most critical, efficient management of memory resources plays an important role in solving large problems. This thesis research seeks to create an effective dynamic memory management scheme for a declarative data-parallel programming system; in such systems, some form of automatic resource management is a requirement. Using the Loci framework, this thesis research focuses on exploring such opportunities. We believe there exists an automatic memory management scheme for such declarative data-parallel systems that provides a good compromise between memory utilization and performance. In addition to basic memory management, this thesis research also seeks to develop methods that take advantage of the cache memory subsystem and explore the balance between memory utilization and parallel communication costs in such declarative data-parallel frameworks.
APA, Harvard, Vancouver, ISO, and other styles
3

Holk, Eric. "Region-based memory management for expressive GPU programming." Thesis, Indiana University, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10132089.

Full text
Abstract:

Over the last decade, graphics processing units (GPUs) have seen their use broaden from purely graphical tasks to general purpose computation. The increased programmability required by demanding graphics applications has proven useful for a number of non-graphical problems as well. GPUs' high memory bandwidth and floating point performance make them attractive for general computation workloads, yet these benefits come at the cost of added complexity. One particular problem is the fact that GPUs and their associated high performance memory typically lie on discrete cards that are separated from the host CPU by the PCI-Express bus. This requires programmers to carefully manage the transfer of data between the CPU and GPU memory so that the right data is in the right place at the right time. Programmers must design data structures with serialization in mind in order to efficiently move data across the PCI bus. In practice, this leads to programmers working with only simple data structures, such as one- or two-dimensional arrays, and only the applications that can be easily expressed in terms of these structures. CPU programmers have long had access to richer data structures, such as trees or first-class procedures, which enable new and simpler approaches to solving certain problems.

This thesis explores the use of region-based memory management (RBMM) to overcome these data movement challenges. RBMM is a technique in which data is assigned to regions and these regions can then be operated on as a unit. One of the first uses of regions was to amortize the cost of deallocation: many small objects would be allocated in a single region, and the region could be deallocated as a single operation independent of the number of items in the region. In this thesis, regions are used as the unit of data movement between the CPU and GPU. Data structures are assigned to a region, and thus the runtime system does not have to be aware of the internal layout of a data structure. The runtime system can simply move the entire region from one device to another, keeping the internal layout intact and allowing code running on either device to operate on the data in the same way.

These ideas are explored through a new programming language called Harlan. Harlan is designed to simplify programming GPUs and other data parallel processors. It provides kernel expressions as its fundamental mechanism for parallelism. Kernels function similarly to a parallel map or zipWith operation from other functional programming languages. For example, the expression (kernel ([x xs] [y ys]) (+ x y)) evaluates to a vector where each element is the sum of the corresponding elements in xs and ys. Kernels can have arbitrary body expressions that can even include kernels, thereby supporting nested data parallelism. Harlan uses a region-based memory system to enable higher-level programming features such as trees, algebraic data types (ADTs), and even first-class procedures. Like all data in Harlan, first-class procedures are device-independent, so a procedure created in GPU code can be applied in CPU code and vice versa.

Besides providing the design and a description of the implementation of Harlan, this thesis includes a type safety proof for a small model of Harlan's region system as well as a number of small application case studies. The type safety proof provides formal support that Harlan ensures programs will have the right data in the right place at the right time. The application case studies show that Harlan and the ideas embodied within it are useful both for a number of traditional applications and for problems that are problematic for previous GPU programming languages. The design and implementation of Harlan, its proof of type safety, and the set of application case studies together show that region-based memory management is an effective way of enabling high-level features in languages targeting CPU/GPU systems and other machines with disjoint memories.
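
The core RBMM idea summarized above, allocating many objects into a region that is later freed (or, in Harlan's case, moved between devices) as a single unit, can be pictured with a minimal arena-style allocator. The sketch below is a generic C illustration of region allocation under names of our own choosing, not Harlan's runtime.

```c
#include <stdlib.h>

/* Minimal region (arena) allocator: objects are carved out of large blocks
 * owned by a region, and the whole region is released in one operation.
 * Illustrative sketch of the general RBMM idea, not Harlan's runtime. */

typedef struct Block {
    struct Block *next;
    size_t used, cap;
    char data[];
} Block;

typedef struct {
    Block *head;
    size_t block_size;
} Region;

static Block *block_new(size_t cap) {
    Block *b = malloc(sizeof(Block) + cap);
    b->next = NULL;
    b->used = 0;
    b->cap = cap;
    return b;
}

void region_init(Region *r, size_t block_size) {
    r->block_size = block_size;
    r->head = block_new(block_size);
}

void *region_alloc(Region *r, size_t n) {
    n = (n + 7) & ~(size_t)7;                 /* 8-byte alignment */
    if (r->head->used + n > r->head->cap) {   /* start a new block when full */
        Block *b = block_new(n > r->block_size ? n : r->block_size);
        b->next = r->head;
        r->head = b;
    }
    void *p = r->head->data + r->head->used;
    r->head->used += n;
    return p;
}

/* Deallocation cost depends on the number of blocks, not the object count. */
void region_free(Region *r) {
    for (Block *b = r->head; b; ) {
        Block *next = b->next;
        free(b);
        b = next;
    }
    r->head = NULL;
}
```

Moving such a region between CPU and GPU memories as one unit, as the abstract describes, additionally requires that references inside the region be position-independent (for example, stored as offsets from the region base) so the internal layout survives the copy.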

APA, Harvard, Vancouver, ISO, and other styles
4

Feeley, Michael Joseph. "Global memory management for workstation networks /." Thesis, Connect to this title online; UW restricted, 1996. http://hdl.handle.net/1773/6997.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Modzelewski, Kevin (Kevin Paul). "Scalable memory management using a distributed buddy allocator." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/61002.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (p. 53-56).
The recent rise of multicore processors has forced us to reexamine old computer science problems in a new light. As multicores turn into manycores, we need to revisit these problems to find solutions that will work on these drastically different architectures. This thesis presents the design of a new page allocator based on a distributed buddy allocator algorithm, one made with future processor architectures in mind. The page allocator is a vital and heavily used part of an operating system, and making it more scalable is a necessary step toward building a scalable operating system. This design was implemented in the fos [34] research operating system and evaluated on 8- and 16-core machines. The results show that this design has performance comparable with Linux at small core counts and, with its better scalability, surpasses the performance of Linux at higher core counts.
by Kevin Modzelewski.
M.Eng.
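
As background for the entry above, the sketch below shows the classic sequential buddy scheme such a design builds on: free lists are kept per power-of-two order and larger blocks are split on allocation (a full implementation also coalesces freed blocks with their buddies). This is an illustrative C sketch under our own naming, not the fos allocator; the thesis's contribution is distributing allocators of this kind.

```c
#include <stddef.h>
#include <stdint.h>

/* Classic (sequential) buddy allocation over a fixed pool of 2^MAX_ORDER pages.
 * Illustrative only: a distributed design gives each core an instance of this
 * kind of allocator and rebalances free blocks between them. */

#define MIN_ORDER 0          /* 2^0  = 1 page     */
#define MAX_ORDER 10         /* 2^10 = 1024 pages */

typedef struct FreeBlock { struct FreeBlock *next; } FreeBlock;

static FreeBlock *free_list[MAX_ORDER + 1];
static uint8_t pool[(1u << MAX_ORDER) * 4096];   /* backing pages */

void buddy_init(void) {
    for (int o = 0; o <= MAX_ORDER; o++) free_list[o] = NULL;
    free_list[MAX_ORDER] = (FreeBlock *)pool;    /* one maximal block */
    free_list[MAX_ORDER]->next = NULL;
}

/* Allocate 2^order pages by splitting a larger free block if needed. */
void *buddy_alloc(int order) {
    int o = order;
    while (o <= MAX_ORDER && free_list[o] == NULL) o++;
    if (o > MAX_ORDER) return NULL;              /* out of memory */
    FreeBlock *blk = free_list[o];
    free_list[o] = blk->next;
    while (o > order) {                          /* split, freeing the upper buddy */
        o--;
        FreeBlock *buddy = (FreeBlock *)((uint8_t *)blk + ((size_t)4096 << o));
        buddy->next = free_list[o];
        free_list[o] = buddy;
    }
    return blk;
}

/* Free without coalescing (a real allocator merges a block with its buddy
 * when both are free, restoring larger blocks). */
void buddy_free(void *p, int order) {
    FreeBlock *blk = p;
    blk->next = free_list[order];
    free_list[order] = blk;
}
```

In a distributed variant, each core owns a private instance of this structure and exchanges free blocks with its peers, so no single shared free list becomes a scalability bottleneck.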
APA, Harvard, Vancouver, ISO, and other styles
6

Beebee, William S. (William Scripps) 1977. "Region-based memory management for real-time Java." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/86801.

Full text
Abstract:
Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2001.
Includes bibliographical references (p. 81-82).
by William S. Beebee, Jr.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
7

Panthulu, Pradeep. "Intelligent Memory Management Heuristics." Thesis, University of North Texas, 2003. https://digital.library.unt.edu/ark:/67531/metadc4399/.

Full text
Abstract:
Automatic memory management is crucial in the implementation of runtime systems, even though it induces a significant computational overhead. In this thesis I explore the use of statistical properties of the directed graph describing the set of live data to decide between garbage collection and heap expansion in a memory management algorithm that combines dynamic-array-represented heaps with a mark-and-sweep garbage collector, in order to enhance its performance. The sampling method predicting the density and the distribution of useful data is implemented as a partial marking algorithm. The algorithm randomly marks the nodes of the directed graph representing the live data at different depths with a variable probability factor p. Using the information gathered by the partial marking algorithm in the current step and the knowledge gathered in previous iterations, the proposed empirical formula predicts with reasonable accuracy the density of live nodes on the heap, to decide between garbage collection and heap expansion. The resulting heuristics are tested empirically and shown to improve overall execution performance significantly in the context of the Jinni Prolog compiler's runtime system.
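
The partial-marking idea described in the abstract, randomly following edges of the live-data graph with probability p up to a bounded depth and scaling the sample into a density estimate, can be sketched as follows. This is our own illustrative C sketch of the general technique, not the Jinni implementation; the scaling step is a deliberately simple stand-in for the thesis's empirical formula.

```c
#include <stdlib.h>

/* Partial-marking sampler: starting from the roots, descend the object graph,
 * following each edge with probability p, and count the nodes reached. The
 * ratio of sampled nodes to heap size gives a (noisy) estimate of live-data
 * density that a collector can use to choose between garbage collection and
 * heap expansion. Sketch of the general technique only. */

typedef struct Node {
    struct Node **children;
    int n_children;
    int marked;
} Node;

static size_t sample_mark(Node *n, double p, int depth_limit) {
    if (n == NULL || n->marked || depth_limit == 0) return 0;
    n->marked = 1;
    size_t count = 1;
    for (int i = 0; i < n->n_children; i++)
        if ((double)rand() / RAND_MAX < p)        /* follow edge with prob. p */
            count += sample_mark(n->children[i], p, depth_limit - 1);
    return count;
}

/* Returns an estimated fraction of heap cells that are live. */
double estimate_live_density(Node **roots, int n_roots,
                             size_t heap_cells, double p, int depth_limit) {
    size_t sampled = 0;
    for (int i = 0; i < n_roots; i++)
        sampled += sample_mark(roots[i], p, depth_limit);
    /* Scale the sample up by 1/p; a single-level correction is shown here
     * for simplicity. */
    double estimate = (double)sampled / (p * (double)heap_cells);
    return estimate > 1.0 ? 1.0 : estimate;
}
```

A real collector would also clear the marked flags before the next sample and, as the abstract notes, combine the current sample with information from previous iterations before deciding between collection and heap expansion.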
APA, Harvard, Vancouver, ISO, and other styles
8

McNamee, Dylan James. "Virtual memory alternatives for transaction buffer management in a single-level store /." Thesis, Connect to this title online; UW restricted, 1996. http://hdl.handle.net/1773/6961.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Yoon, Myungchul. "Development and analysis of weak memory consistency models to accelerate shared memory multiprocessor systems /." Digital version accessible at:, 1998. http://wwwlib.umi.com/cr/utexas/main.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Karlsson, Martin. "Memory System Design for Chip-Multiprocessors." Doctoral thesis, Uppsala : Acta Universitatis Upsaliensis : Universitetsbiblioteket [distributör], 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-6250.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Li, Wentong Kavi Krishna M. "High performance architecture using speculative threads and dynamic memory management hardware." [Denton, Tex.] : University of North Texas, 2007. http://digital.library.unt.edu/permalink/meta-dc-5150.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Yu, Ching-han. "Memory management strategies to improve the space-time performance of Java programs." View the Table of Contents & Abstract, 2006. http://sunzi.lib.hku.hk/hkuto/record/B36396679.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Yu, Ching-han, and 余靜嫺. "Memory management strategies to improve the space-time performance of Java programs." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2006. http://hub.hku.hk/bib/B37031569.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Kim, Jinwoo. "Memory hierarchy management through off-line computational learning." Diss., Georgia Institute of Technology, 2003. http://hdl.handle.net/1853/8194.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Crowell, Jonathan B. "Multifractal analysis of memory usage patterns." Morgantown, W. Va. : [West Virginia University Libraries], 2001. http://etd.wvu.edu/templates/showETD.cfm?recnum=2240.

Full text
Abstract:
Thesis (M.S.)--West Virginia University, 2001.
Title from document title page. Document formatted into pages; contains vii, 47 p. : ill. Includes abstract. Includes bibliographical references (p. 45-47).
APA, Harvard, Vancouver, ISO, and other styles
16

Akritidis, Periklis. "Practical memory safety for C." Thesis, University of Cambridge, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.609600.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Jiang, Song. "Efficient caching algorithms for memory management in computer systems." W&M ScholarWorks, 2004. https://scholarworks.wm.edu/etd/1539623446.

Full text
Abstract:
As disk performance continues to lag behind that of memory systems and processors, fully utilizing memory to reduce disk accesses is a highly effective way to improve overall system performance. Furthermore, to serve the applications running on a computer in distributed systems, not only the local memory but also the memory on remote servers must be effectively managed to minimize I/O operations. The critical challenges in effective memory cache management include: (1) insightfully understanding and quantifying the locality inherent in the memory access requests; (2) effectively utilizing the locality information in replacement algorithms; (3) intelligently placing and replacing data in the multi-level caches of a distributed system; (4) ensuring that the overheads of the proposed schemes are acceptable. This dissertation provides solutions and makes unique and novel contributions in application locality quantification, general replacement algorithms, low-cost replacement policy, thrashing protection, as well as multi-level cache management in a distributed system. First, the dissertation proposes a new method to quantify locality strength and to accurately identify the data with strong locality. It also provides a new replacement algorithm, which significantly outperforms existing algorithms. Second, considering the extremely low-cost requirements on replacement policies in virtual memory management, the dissertation proposes a policy meeting the requirements and considerably exceeding the performance of existing policies. Third, the dissertation provides an effective scheme to protect the system from thrashing when running memory-intensive applications. Finally, the dissertation provides a multi-level block placement and replacement protocol in a distributed client-server environment, exploiting non-uniform locality strengths in the I/O access requests. The methodology used in this study includes careful application behavior characterization, system requirement analysis, algorithm design, trace-driven simulation, and system implementations. A main conclusion of the work is that there is still much room for innovation and significant performance improvement for the seemingly mature and stable policies that have been broadly used in current operating system design.
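
One widely used way to quantify the locality the abstract refers to is reuse distance: the number of distinct blocks touched between two consecutive accesses to the same block. The short C sketch below computes reuse distances for a toy access trace; it only illustrates the kind of locality signal a replacement algorithm can consume, and is not the dissertation's specific metric or policy.

```c
#include <stdio.h>

/* Compute reuse distances for a small block-access trace. Blocks whose reuse
 * distances stay small have strong locality and are good candidates to keep
 * cached. Illustrative sketch only; the trace and block-id range are made up. */

#define NBLOCKS 1024

int main(void) {
    int trace[] = {1, 2, 3, 1, 2, 4, 1, 5, 2, 1};
    int n = sizeof trace / sizeof trace[0];

    long last_seen[NBLOCKS];
    for (int b = 0; b < NBLOCKS; b++) last_seen[b] = -1;

    for (long t = 0; t < n; t++) {
        int b = trace[t];
        if (last_seen[b] >= 0) {
            /* Count distinct blocks touched since the previous access to b. */
            int distinct = 0, seen[NBLOCKS] = {0};
            for (long u = last_seen[b] + 1; u < t; u++)
                if (!seen[trace[u]]) { seen[trace[u]] = 1; distinct++; }
            printf("block %d at time %ld: reuse distance %d\n", b, t, distinct);
        }
        last_seen[b] = t;
    }
    return 0;
}
```

A policy that acts on such a signal rather than on recency alone can retain strong-locality blocks even when a burst of one-time accesses would flush a plain LRU cache.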
APA, Harvard, Vancouver, ISO, and other styles
18

Nussbaum, Daniel Seth. "Run-time thread management for large-scale distributed-memory multiprocessors." Thesis, Massachusetts Institute of Technology, 1993. http://hdl.handle.net/1721.1/37178.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1993.
Includes bibliographical references (p. 214-216).
by Daniel Nussbaum.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
19

Zhou, Qinan. "Memory Management and Garbage Collection Algorithms for Java-Based Prolog." Thesis, University of North Texas, 2001. https://digital.library.unt.edu/ark:/67531/metadc2825/.

Full text
Abstract:
Implementing a Prolog runtime system in a language like Java, which provides its own automatic memory management and safety features such as built-in index checking and array initialization, requires a consistent approach to memory management based on a simple ultimate goal: minimizing the total memory management time and the extra space involved. The total memory management time for Jinni is made up of garbage collection time both for Java and for Jinni itself. Extra space is usually requested at Jinni's garbage collection. This goal motivates us to find a simple and practical garbage collection algorithm and implementation for our Prolog engine. In this thesis we survey various algorithms already proposed and offer our own contribution to the study of garbage collection through improvements and optimizations of some classic algorithms. We implemented these algorithms based on the dynamic array algorithm for an all-dynamic Prolog engine (JINNI 2000). Comparisons of our implementations against the originally proposed algorithms allow us to draw informative conclusions about their theoretical complexity model and their empirical effectiveness.
APA, Harvard, Vancouver, ISO, and other styles
20

Janapsatya, Andhi (Computer Science & Engineering, Faculty of Engineering, UNSW). "Optimization of instruction memory for embedded systems." Awarded by: University of New South Wales, School of Computer Science and Engineering, 2005. http://handle.unsw.edu.au/1959.4/24210.

Full text
Abstract:
This thesis presents methodologies for improving system performance and energy consumption by optimizing memory hierarchy performance. The processor-memory performance gap is a well-known problem that is predicted to get worse as the gap continues to widen. The author describes a method to estimate the best L1 cache configuration for a given application. In addition, three methods are presented to improve performance and reduce energy in embedded systems by optimizing the instruction memory. Performance estimation is an important procedure for assessing the performance of the system and the effectiveness of any applied optimizations. A cache memory performance estimation methodology is presented in this thesis. The methodology is designed to quickly and accurately estimate the performance of multiple cache memory configurations. Experimental results showed that the methodology is on average 45 times faster than a widely used tool (Dinero IV). The first optimization method, a software-only technique called code placement, was implemented to improve the performance of the instruction cache. The method involves careful placement of code within memory to ensure a high cache hit rate when code is brought into the cache. Experimental results show that applying the code placement method reduces the cache miss rate by up to 71% and energy consumption by up to 63% compared to the application without code placement. The second method involves a novel architecture for utilizing scratchpad memory. The scratchpad memory is designed as a replacement for the instruction cache. A hardware modification allows data to be written into the scratchpad memory during program execution, enabling dynamic control of the scratchpad memory content. Scratchpad memory has a faster access time and a lower energy consumption per access than cache memory; its use aims to improve performance and lower energy consumption compared to a system with cache memory. Experimental results show an average energy reduction of 26.59% and an average performance improvement of 25.63% when compared to a system with cache memory. The third is an application profiling method that uses statistical information to identify an application's hot-spots. Application profiling is important for identifying sections of the application where performance degradation might occur and/or where maximum performance gain can be obtained through optimization. The method was applied and tested on the scratchpad-based system described in this thesis. Experimental results show the effectiveness of the analysis method in reducing energy and improving performance when compared to the previous method for utilizing the scratchpad-memory-based system (an average performance improvement of 23.6% and an average energy reduction of 27.1% are observed).
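
For reference, the kind of per-configuration, trace-driven cache simulation that tools such as Dinero IV perform (and that the estimation methodology above is compared against) looks roughly like the following. The cache geometry and instruction trace are invented for illustration; this is not the thesis's estimation algorithm, which avoids simulating each configuration separately.

```c
#include <stdint.h>
#include <stdio.h>

/* Baseline trace-driven simulation of a direct-mapped instruction cache.
 * Geometry is illustrative: 32-byte lines, 256 sets => 8 KB cache. */

#define LINE_BITS 5
#define SET_BITS  8
#define NSETS     (1u << SET_BITS)

int main(void) {
    uint32_t tags[NSETS];
    int valid[NSETS] = {0};
    unsigned hits = 0, misses = 0;

    /* Toy instruction-address trace: a 512-byte loop body executed 100 times. */
    for (int iter = 0; iter < 100; iter++) {
        for (uint32_t pc = 0x1000; pc < 0x1200; pc += 4) {
            uint32_t set = (pc >> LINE_BITS) & (NSETS - 1);
            uint32_t tag = pc >> (LINE_BITS + SET_BITS);
            if (valid[set] && tags[set] == tag) {
                hits++;
            } else {                    /* miss: fill the line */
                misses++;
                valid[set] = 1;
                tags[set] = tag;
            }
        }
    }
    printf("hits=%u misses=%u hit rate=%.3f\n",
           hits, misses, (double)hits / (hits + misses));
    return 0;
}
```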
APA, Harvard, Vancouver, ISO, and other styles
21

Huang, Jipeng. "Efficient Context Sensitivity for Dynamic Analyses via Calling Context Uptrees and Customized Memory Management." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1397231571.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Ananthanarayanan, R. (Rajagopal). "High performance distributed shared memory." Diss., Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/8129.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Romer, Theodore H. "Using virtual memory to improve cache and TLB performance /." Thesis, Connect to this title online; UW restricted, 1998. http://hdl.handle.net/1773/6913.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Sinha, Aman. "Memory management and transaction scheduling for large-scale databases /." Digital version accessible at:, 1999. http://wwwlib.umi.com/cr/utexas/main.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

McDonald, Ian Lindsay. "Memory management in a distributed system of single address space operating systems supporting quality of service." Thesis, University of Glasgow, 2001. http://theses.gla.ac.uk/5427/.

Full text
Abstract:
The choices provided by an operating system to the application developer for managing memory come in two forms: no choice at all, with the operating system making all decisions about managing memory; or the choice to implement virtual memory management specific to the individual application. The second of these choices is, for all intents and purposes, the same as the first: no choice at all. For many application developers, the cost of implementing a customised virtual memory management system is just too high. The result is that, regardless of the level of flexibility available, the developer ends up using the system-provided default. Further exacerbating the problem is the tendency for operating system developers to be extremely unimaginative when providing that same default. Advancements in virtual memory techniques such as prefetching, remote paging, compressed caching, and user-level page replacement, coupled with the provision of user-level virtual memory management, should have heralded a new era of choice and an application-centric approach to memory management. Unfortunately, this has failed to materialise. This dissertation describes the design and implementation of the Heracles virtual memory management system. The Heracles approach is one of inclusion rather than exclusion. The main goal of Heracles is to provide an extensible environment that is configurable to the extent of providing application-centric memory management without the need for application developers to implement their own. However, should the application developer wish to provide a more specialised implementation for all or any part of Heracles, the system is constructed around well-defined interfaces that allow new implementations to be "plugged in" where required. The result is a virtual memory management hierarchy that is highly configurable, highly flexible, and can be adapted at run-time to meet new phases in the application's behaviour. Furthermore, different parts of an application's address space can have different hierarchies associated with managing their memory.
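
The "plugged in" interface style the abstract describes can be pictured as a table of policy operations that the memory manager calls through, with each application free to supply its own table. The C sketch below is hypothetical, using our own names rather than Heracles' actual interfaces, and shows a trivial FIFO replacement policy as a plug-in.

```c
#include <stddef.h>

/* Hypothetical plug-in interface for a page-replacement policy: the VM layer
 * is written against the function-pointer table only, so an application can
 * substitute its own policy. Names are illustrative, not Heracles' API. */

typedef unsigned long pfn_t;            /* page frame number */

typedef struct vm_policy {
    void  (*page_referenced)(void *state, pfn_t pfn);  /* notification hook    */
    pfn_t (*pick_victim)(void *state);                 /* choose page to evict */
    void  (*page_evicted)(void *state, pfn_t pfn);
    void  *state;                                      /* policy-private data  */
} vm_policy;

/* The memory manager never looks inside the policy. */
pfn_t vm_reclaim_one_page(vm_policy *p) {
    pfn_t victim = p->pick_victim(p->state);
    p->page_evicted(p->state, victim);
    return victim;
}

/* Example plug-in: a trivial FIFO policy over a fixed set of frames. */
typedef struct { pfn_t frames[64]; int head, count; } fifo_state;

static void fifo_ref(void *s, pfn_t pfn) {
    fifo_state *f = s;
    if (f->count < 64) f->frames[(f->head + f->count++) % 64] = pfn;
}
static pfn_t fifo_victim(void *s) {
    fifo_state *f = s;
    pfn_t v = f->frames[f->head];
    f->head = (f->head + 1) % 64;
    f->count--;
    return v;
}
static void fifo_evicted(void *s, pfn_t pfn) { (void)s; (void)pfn; }

vm_policy make_fifo_policy(fifo_state *f) {
    f->head = f->count = 0;
    vm_policy p = { fifo_ref, fifo_victim, fifo_evicted, f };
    return p;
}
```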
APA, Harvard, Vancouver, ISO, and other styles
26

Subramaniam, Samantika. "Improving processor efficiency by exploiting common-case behaviors of memory instructions." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/28165.

Full text
Abstract:
Thesis (M. S.)--Computing, Georgia Institute of Technology, 2009.
Committee Chair: Loh, Gabriel H.; Committee Member: Clark, Nathan; Committee Member: Jaleel, Aamer; Committee Member: Kim, Hyesoon; Committee Member: Lee, Hsien-Hsin S.; Committee Member: Prvulovic, Milos.
APA, Harvard, Vancouver, ISO, and other styles
27

Ghosh, Mrinmoy. "Microarchitectural techniques to reduce energy consumption in the memory hierarchy." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/28265.

Full text
Abstract:
Thesis (M. S.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2009.
Committee Chair: Lee, Hsien-Hsin S.; Committee Member: Chatterjee, Abhijit; Committee Member: Mukhopadhyay, Saibal; Committee Member: Pande, Santosh; Committee Member: Yalamanchili, Sudhakar.
APA, Harvard, Vancouver, ISO, and other styles
28

Leslie, Rebekah. "A Functional Approach to Memory-Safe Operating Systems." PDXScholar, 2011. https://pdxscholar.library.pdx.edu/open_access_etds/499.

Full text
Abstract:
Purely functional languages--with static type systems and dynamic memory management using garbage collection--are a known tool for helping programmers to reduce the number of memory errors in programs. By using such languages, we can establish correctness properties relating to memory-safety through our choice of implementation language alone. Unfortunately, the language characteristics that make purely functional languages safe also make them more difficult to apply in a low-level domain like operating systems construction. The low-level features that support the kinds of hardware manipulations required by operating systems are not typically available in memory-safe languages with garbage collection. Those that are provided may have the ability to violate memory- and type-safety, destroying the guarantees that motivate using such languages in the first place. This work demonstrates that it is possible to bridge the gap between the requirements of operating system implementations and the features of purely functional languages without sacrificing type- and memory-safety. In particular, we show that this can be achieved by isolating the potentially unsafe memory operations required by operating systems in an abstraction layer that is well integrated with a purely functional language. The salient features of this abstraction layer are that the operations it exposes are memory-safe and yet sufficiently expressive to support the implementation of realistic operating systems. The abstraction layer enables systems programmers to perform all of the low-level tasks necessary in an OS implementation, such as manipulating an MMU and executing user-level programs, without compromising the static memory-safety guarantees of programming in a purely functional language. A specific contribution of this work is an analysis of memory-safety for the abstraction layer by formalizing a meaning for memory-safety in the presence of virtual-memory using a novel application of noninterference security policies. In addition, we evaluate the expressiveness of the abstraction layer by implementing the L4 microkernel API, which has a flexible set of virtual memory management operations.
APA, Harvard, Vancouver, ISO, and other styles
29

Kimbrel, Tracy. "Parallel prefetching and caching /." Thesis, Connect to this title online; UW restricted, 1997. http://hdl.handle.net/1773/6943.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Van, Vleet Taylor. "Dynamic cache-line sizes /." Thesis, Connect to this title online; UW restricted, 2000. http://hdl.handle.net/1773/6899.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Wong, Wayne A. "Techniques utilizing memory reference characteristics for improved performance /." Thesis, Connect to this title online; UW restricted, 2002. http://hdl.handle.net/1773/6934.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Österlund, Erik. "Automatic memory management system for automatic parallelization." Thesis, Linnéuniversitetet, Institutionen för datavetenskap, fysik och matematik, DFM, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-13693.

Full text
Abstract:
With Moore’s law coming to an end and the era of multiprocessor chips emerging, the need for ways of dealing with the essential problems of concurrency is becoming pressing. Automatic parallelization for imperative languages and pure functions in functional programming languages both try to prove independence statically. This thesis argues that independence is dynamic in nature. Static analysis for automatic parallelization has failed to achieve anything but trivial optimizations. This thesis shows a new approach where dynamic analysis of the system is provided at very low cost using a garbage collector that has to traverse all live cells anyway. Immutable sub-graphs of objects that cannot change state are found; their methods become pure functions that can be parallelized. The garbage collector implemented is a kind of replicating collector. It is about three times faster than Boehm’s collector in garbage collection, fully concurrent, and provides the dynamic analysis almost for free.
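
The dynamic analysis sketched in the abstract can be imagined as a classification pass piggybacked on the collector's traversal: an object heads an immutable subgraph if it has not been written since construction and everything it references is itself immutable, and methods confined to such subgraphs behave as pure functions that are safe to run in parallel. The C sketch below is our own rough illustration of that idea, not the thesis's collector; cycles are handled only optimistically here.

```c
/* Classify objects during a GC-style traversal. A write barrier is assumed to
 * maintain written_since_construction; all field names are illustrative. */

typedef struct Obj {
    struct Obj **refs;
    int n_refs;
    int written_since_construction;   /* maintained by a write barrier */
    int visited;
    int immutable_subgraph;           /* result of the analysis */
} Obj;

int analyze_immutable(Obj *o) {
    if (o == NULL) return 1;
    if (o->visited) return o->immutable_subgraph;    /* already classified */
    o->visited = 1;
    /* Tentative answer; cycles are treated optimistically. A production
     * analysis would resolve strongly connected components properly. */
    o->immutable_subgraph = !o->written_since_construction;
    for (int i = 0; i < o->n_refs && o->immutable_subgraph; i++)
        if (!analyze_immutable(o->refs[i]))
            o->immutable_subgraph = 0;
    return o->immutable_subgraph;
}
```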
APA, Harvard, Vancouver, ISO, and other styles
33

Van, Kleek Max 1980. "Effort, memory, attention and time : paths to more effective personal information management." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/66466.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 256-272).
With the widespread availability of digital tools for storing, accessing, and sharing information, why is so much information still lost, forgotten, or kept on paper? The work in this thesis finds that such disorganization results from problems in the designs of the personal information management (PIM) tools in common use today. Such problems impede information capture, force many information forms to be left out, and cause information to be forgotten. How can these problems be mitigated? Our Information Scraps study identifies the need to support more diverse kinds of information, while conserving time, attention, and memory for retained information items. Our first approach to achieving these goals is to eliminate the artificial separation and homogeneity that structured PIM tools impose, so that arbitrary information can be captured in any way desired. A two-year study of List-it, our short-note-taking tool, discovers that people keep notes serving 5 primary roles: reminders, reference items, progress trackers, places to think, and archives of personal value. The second reintroduces structured data to support more effective use and management of information collections. Jourknow addresses the manageability of large note collections with lightweight-structured note contents and contextual retrieval, the access of notes by the contexts and activities at the time of creation. Poyozo reinforces recollection of previously seen information, by providing visualizations of all of a person's past information activities. Finally, Atomate addresses the challenge of managing the ever-increasing deluge of new information, by letting people delegate to software behaviors actions to be automatically taken when new information arrives. These studies identify critical needs of PIM tools and offer viable solutions.
by Max Goodwin Van Kleek.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
34

Chan, Kinson, and 陳傑信. "Distributed software transactional memory with clock validation on clusters." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2013. http://hub.hku.hk/bib/B5053404X.

Full text
Abstract:
Within a decade, multicore processors emerged and revolutionised the world of computing. Nowadays, even a low-end computer comes with a multi-core processor and is capable of running multiple threads simultaneously. It has become impossible to extract the full computational power of a computer with a single-threaded program. Meanwhile, writing multi-threaded software is daunting to a lot of programmers, as the threads share data and involve complicated synchronisation techniques such as locks and conditions. Software transactional memory is a promising alternative model in which programmers simply need to understand transactional consistency and segment code into transactions. Programming becomes exciting again, without races, deadlocks and other issues that are common in lock-based paradigms. To pursue high throughput, performance-oriented computers have several multicore processors each. A processor’s cache is not directly accessible by the cores in other processors, leading to non-uniform latency when the threads share data. These computers no longer behave like the classical symmetric multiprocessor computers. Although old programs continue to work, they do not necessarily benefit from the added cores and caches. Most software transactional memory implementations fall into this category. They rely on a centralised and shared meta-variable (such as a logical clock) in order to provide single-lock atomicity. On a computer with two or more multicore processors, the single shared meta-variable gets regularly updated by different processors. This leads to a tremendous amount of cache contention: much time is spent on inter-processor cache invalidations rather than useful computation. Nevertheless, as computers with four processors or more are exponentially complex and expensive, people would prefer to solve sophisticated problems with several smaller computers whenever possible. Supporting software transactional consistency across multiple computers is a rarely explored research area. Although there are similar mature research topics such as distributed shared memory and distributed relational databases, they have remarkably different characteristics, so most of the implementation techniques and tricks are not applicable to the new system. There are several existing distributed software transactional memory systems, but we feel there is much room for improvement. One crucial area is the conflict detection mechanism. Some of these systems make use of broadcast messages to commit transactions, which are certainly not scalable for large-scale clusters. Others use directories to direct messages to the relevant nodes only, but they also keep visible reader lists for invalidation per node. Updating a shared reader list involves cache invalidations on processors, so reading shared data on such systems is more expensive than in conventional low-cost invisible-reader validation systems. In this research, we aim to build a distributed software transactional memory system with distributed clock validation for conflict detection. As preparation, we first investigate issues such as concurrency control and conflict detection in single-node systems. Finally, we combine the techniques with a tailor-made cache coherence protocol that is differentiated from typical distributed shared memory.
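
For context, the shared logical-clock pattern the abstract criticises looks roughly like the C sketch below (in the style of TL2-like single-node STMs): a single global clock is bumped on every commit and used by readers to validate that what they read has not changed since the transaction began. This is our own simplified illustration of the conventional pattern, not the system built in the thesis, which works to avoid exactly this kind of shared hot spot across processors and cluster nodes.

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Global-clock validation: every commit increments one shared logical clock,
 * and readers check that the locations they read were not written after their
 * transaction started. On multi-socket machines this single counter becomes a
 * cache-contention hot spot. Write-set bookkeeping and locking are elided. */

static atomic_ulong global_clock = 0;

typedef struct {
    _Atomic unsigned long version;   /* last commit timestamp of this location */
    _Atomic long value;
} stm_word;

typedef struct {
    unsigned long start_ts;          /* clock value when the transaction began */
} stm_tx;

void tx_begin(stm_tx *tx) {
    tx->start_ts = atomic_load(&global_clock);
}

/* Invisible read: succeeds only if the location was not written after start. */
bool tx_read(stm_tx *tx, stm_word *w, long *out) {
    unsigned long v1 = atomic_load(&w->version);
    long val = atomic_load(&w->value);
    unsigned long v2 = atomic_load(&w->version);
    if (v1 != v2 || v1 > tx->start_ts)
        return false;                /* conflict: caller aborts and retries */
    *out = val;
    return true;
}

/* Writer commit (write-back elided): take a new timestamp from the clock. */
unsigned long tx_commit_timestamp(void) {
    return atomic_fetch_add(&global_clock, 1) + 1;
}
```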
Published or final version.
Doctor of Philosophy, Computer Science.
APA, Harvard, Vancouver, ISO, and other styles
35

Lee, Min. "Memory region: a system abstraction for managing the complex memory structures of multicore platforms." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/50398.

Full text
Abstract:
The performance of modern many-core systems depends on the effective use of their complex cache and memory structures, and this will likely become more pronounced with the impending arrival of on-chip 3D-stacked memory and non-volatile, off-chip, byte-addressable memory. Yet to date, operating systems have not treated memory as a first-class schedulable resource or embraced memory heterogeneity. This dissertation presents a new software abstraction, called ‘memory region’, which denotes the current set of physical memory pages actively used by workloads. Using this abstraction, memory resources can be scheduled for applications to fully exploit a platform's underlying cache and memory system, thereby gaining improved performance and predictability in execution, particularly for the consolidated workloads seen in virtualized and cloud computing infrastructures. The abstraction's implementation in the Xen hypervisor involves the run-time detection of memory regions, the scheduled mapping of these regions to caches to match performance goals, and the maintenance of region-to-cache mappings using per-cache page tables. This dissertation makes the following specific contributions. First, it proposes a new scheduling method, region scheduling, in which the location of memory blocks rather than CPU utilization is the principal determinant of where workloads are run. Second, treating memory blocks as first-class resources, new methods for efficient cache management are shown to improve application performance as well as the performance of certain operating system functions. Third, explicit memory scheduling makes it possible to disaggregate operating systems, without the need to change OS sources and with only small markups of target guest OS functionality. With this method, OS functions can be mapped to specific desired platform components, such as a file system confined to running on specific cores and using only certain memory resources designated for its use. This can improve performance for applications heavily dependent on certain OS functions, by dynamically providing those functions with the resources needed for their current use, and it can prevent performance-critical application functionality from being needlessly perturbed by OS functions used for other purposes or by other jobs. Fourth, extensions of region scheduling can also help applications deal with the heterogeneous memory resources present in future systems, including on-chip stacked DRAM and NUMA or even NVRAM memory modules. More generally, region scheduling is shown to apply to memory structures with well-defined differences in memory access latencies.
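
The region-to-cache mapping mentioned above relies on a property of physically indexed caches: the physical page number fixes which group of cache sets (the page's "color") the page can occupy, so constraining a region to pages of particular colors confines its workload to a chosen slice of the shared cache. The C sketch below illustrates that computation with an example cache geometry of our own choosing; it is not the thesis's Xen implementation.

```c
#include <stdint.h>

/* Page-color computation for an example last-level cache: 16 MB, 16-way,
 * 64-byte lines, 4 KB pages. Geometry is illustrative only. */

#define PAGE_SHIFT   12
#define LINE_SHIFT   6
#define CACHE_BYTES  (16u * 1024 * 1024)
#define CACHE_WAYS   16
#define CACHE_SETS   (CACHE_BYTES / CACHE_WAYS / (1u << LINE_SHIFT))

/* Number of distinct page colors = cache sets / sets covered by one page. */
#define SETS_PER_PAGE ((1u << PAGE_SHIFT) >> LINE_SHIFT)
#define NUM_COLORS    (CACHE_SETS / SETS_PER_PAGE)

static inline unsigned page_color(uint64_t phys_addr) {
    uint64_t pfn = phys_addr >> PAGE_SHIFT;
    return (unsigned)(pfn % NUM_COLORS);
}

/* A region scheduler can then back a region only with page frames whose color
 * falls inside the cache partition assigned to that region. */
int page_allowed_for_region(uint64_t phys_addr,
                            unsigned first_color, unsigned n_colors) {
    unsigned c = page_color(phys_addr);
    return c >= first_color && c < first_color + n_colors;
}
```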
APA, Harvard, Vancouver, ISO, and other styles
36

Gilgeous, Latoya Tabita. "An integrated software/hardware approach to detecting memory bounds violations." Diss., Online access via UMI:, 2007.

Find full text
Abstract:
Thesis (M.S.)--State University of New York at Binghamton, Thomas J. Watson School of Engineering and Applied Science, Department of Electrical and Computer Engineering, 2007.
Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
37

Gao, Shen. "Transaction logging and recovery on phase-change memory." HKBU Institutional Repository, 2013. http://repository.hkbu.edu.hk/etd_ra/1549.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Chan, Kinson, and 陳傑信. "An adaptive software transactional memory support for multi-core programming." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2009. http://hub.hku.hk/bib/B43278759.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Chan, Kinson. "An adaptive software transactional memory support for multi-core programming." Click to view the E-thesis via HKUTO, 2009. http://sunzi.lib.hku.hk/hkuto/record/B43278759.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Tsegaye, Melekam Asrat. "A model for a context aware machine-based personal memory manager and its implementation using a visual programming environment." Thesis, Rhodes University, 2007. http://hdl.handle.net/10962/d1006563.

Full text
Abstract:
Memory is a part of cognition. It is essential for an individual to function normally in society. It encompasses an individual's lifetime experience, thus defining his identity. This thesis develops the concept of a machine-based personal memory manager which captures and manages an individual's day-to-day external memories. Rather than accumulating large amounts of data which has to be mined for useful memories, the machine-based memory manager automatically organizes memories as they are captured to enable their quick retrieval and use. The main functions of the machine-based memory manager envisioned in this thesis are the support and the augmentation of an individual's biological memory system. In the thesis, a model for a machine-based memory manager is developed. A visual programming environment, which can be used to build context aware applications as well as a proof-of-concept machine-based memory manager, is conceptualized and implemented. An experimental machine-based memory manager is implemented and evaluated. The model describes a machine-based memory manager which manages an individual's external memories by context. It addresses the management of external memories which accumulate over long periods of time by proposing a context aware file system which automatically organizes external memories by context. It describes how personal memory management can be facilitated by machine using six entities (life streams, memory producers, memory consumers, a memory manager, memory fragments and context descriptors) and the processes in which these entities participate (memory capture, memory encoding and decoding, memory decoding and retrieval). The visual programming environment represents a development tool which contains facilities that support context aware application programming. For example, it provides facilities which enable the definition and use of virtual sensors. It enables rapid programming with a focus on component re-use and dynamic composition of applications through a visual interface. The experimental machine-based memory manager serves as an example implementation of the machine-based memory manager which is described by the model developed in this thesis. The hardware used in its implementation consists of widely available components such as a camera, microphone and sub-notebook computer which are assembled in the form of a wearable computer. The software is constructed using the visual programming environment developed in this thesis. It contains multiple sensor drivers, context interpreters, a context aware file system as well as memory retrieval and presentation interfaces. The evaluation of the machine-based memory manager shows that it is possible to create a machine which monitors the states of an individual and his environment, and manages his external memories, thus supporting and augmenting his biological memory.
APA, Harvard, Vancouver, ISO, and other styles
41

Cheung, Wang-leung Benny. "Large object space support for software distributed shared memory." Click to view the E-thesis via HKUTO, 2005. http://sunzi.lib.hku.hk/hkuto/record/B31601741.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Blomster, Katie Ann. "Schemes for reducing power and delay in SRAMs." Online access for everyone, 2006. http://www.dissertations.wsu.edu/Thesis/Summer2006/k%5Fblomster%5F071706.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

張宏亮 and Wang-leung Benny Cheung. "Migrating-home protocol for software distributed shared-memory system." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B31222377.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Cheung, Wang-leung Benny, and 張宏亮. "Large object space support for software distributed shared memory." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2005. http://hub.hku.hk/bib/B31601741.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Ammari, Rami J. "A study for reducing conflict misses in data cache." Master's thesis, Mississippi State : Mississippi State University, 2004. http://library.msstate.edu/etd/show.asp?etd=etd-04032004-211908.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Clark, Thomas K. "Logging Subsystem Performance: Model and Evaluation." PDXScholar, 1994. https://pdxscholar.library.pdx.edu/open_access_etds/4724.

Full text
Abstract:
Transaction logging is an integral part of ensuring proper transformation of data from one state to another in modern data management. Because of this, the throughput of the logging subsystem can be critical to the throughput of an application. The purpose of this research is to break the log bottleneck at minimum cost. We first present a model for evaluating a logging subsystem, where a logging subsystem is made up of a log device, a log backup device, and the interconnect algorithm between the two, which we term the log backup method. Included in the logging model is a set of criteria for evaluating a logging subsystem and a system for weighting the criteria in order to facilitate comparisons of two logging subsystem configurations to determine the better of the two. We then present an evaluation of each of the pieces of the logging subsystem in order to increase the bandwidth of both the log device and log backup device, while selecting the best log backup method, at minimum cost. We show that the use of striping and RAID is the best alternative for increasing log device bandwidth. Along with our discussion of RAID, we introduce a new RAID algorithm that is designed to overcome the performance problems of small writes in a RAID log. In order to increase the effective bandwidth of the log backup device, we suggest the use of inexpensive magnetic tape drives and striping in the log backup device, where the bandwidth of the log backup device is increased to the point that it matches the bandwidth of the log device. For the log backup interconnect algorithm, we present the novel approach of backing up the log synchronously, where the log backup device is essentially a mirror of the log device, as well as evaluating other log backup interconnect algorithms. Finally, we present a discussion of a prototype implementation of some of the ideas in the thesis. The prototype was implemented in a commercial database system, using a beta version of INFORMIX-OnLine Dynamic Server™ version 6.0.
APA, Harvard, Vancouver, ISO, and other styles
47

Muthu, Srinivas. "A Context-Aware Approach to Android Memory Management." University of Toledo / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1449665506.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Zhang, Xiushan. "L2 cache replacement based on inter-access time per access count prediction." Diss., Online access via UMI:, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
49

Wu, Jiesheng. "Communication and memory management in networked storage systems." The Ohio State University, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=osu1095696917.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Li, Wentong. "High Performance Architecture using Speculative Threads and Dynamic Memory Management Hardware." Thesis, University of North Texas, 2007. https://digital.library.unt.edu/ark:/67531/metadc5150/.

Full text
Abstract:
With the advances in very large scale integration (VLSI) technology, hundreds of billions of transistors can be packed into a single chip. With this increased hardware budget, how to take advantage of the available hardware resources becomes an important research area. Some researchers have shifted from the control-flow von Neumann architecture back to dataflow architectures in order to explore scalable designs leading to multi-core systems with several hundred processing elements. In this dissertation, I address how the performance of modern processing systems can be improved while attempting to reduce hardware complexity and energy consumption. My research described here tackles both central processing unit (CPU) performance and memory subsystem performance. More specifically, I describe my research related to the design of an innovative decoupled multithreaded architecture that can be used in multi-core processor implementations. I also address how memory management functions can be off-loaded from processing pipelines to further improve system performance and eliminate cache pollution caused by runtime management functions.
APA, Harvard, Vancouver, ISO, and other styles