
Dissertations / Theses on the topic 'Managed memory'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 37 dissertations / theses for your research on the topic 'Managed memory.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Jost, Tiago Trevisan. "SoMMA: a software managed memory architecture for multi-issue processors." Biblioteca Digital de Teses e Dissertações da UFRGS, 2017. http://hdl.handle.net/10183/170975.

Full text
Abstract:
Embedded processors rely on the efficient use of instruction-level parallelism to answer the performance and energy needs of modern applications. Though improving performance is the primary goal for processors in general, it might lead to a negative impact on energy consumption, a particularly critical constraint for current systems. In this dissertation, we present SoMMA, a software-managed memory architecture for embedded multi-issue processors that can reduce energy consumption and energy-delay product (EDP), while still providing an increase in memory bandwidth. We combine the use of software-managed memories (SMM) with the data cache, and leverage the lower energy access cost of SMMs to provide a processor with reduced energy consumption and EDP. SoMMA also provides a better overall performance, as memory accesses can be performed in parallel, with no cost in extra memory ports. Compiler-automated code transformations minimize the programmer’s effort to benefit from the proposed architecture. Our experimental results show that SoMMA is more energy- and performance-efficient not only for the processing cores, but also at full-system level. Comparisons were done using the VEX processor, a VLIW reconfigurable processor. The approach shows average speedups of 1.118x and 1.121x, while consuming up to 11% and 12.8% less energy when comparing two modified processors and their baselines. SoMMA also shows reduction of up to 41.5% on full-system EDP, maintaining the same processor area as baseline processors. Lastly, even with SoMMA halving the data cache size, we still reduce the number of data cache misses in comparison to baselines.
APA, Harvard, Vancouver, ISO, and other styles
2

Barua, Rajeev K. (Rajeev Kumar). "Maps : a compiler-managed memory system for software-exposed architectures." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/37194.

Full text
Abstract:
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000.
Includes bibliographical references (p. 155-161).
Microprocessors must exploit both instruction-level parallelism (ILP) and memory parallelism for high performance. Sophisticated techniques for ILP have boosted the ability of modern-day microprocessors to exploit ILP when available. Unfortunately, improvements in memory parallelism in microprocessors have lagged behind. This thesis explains why memory parallelism is hard to exploit in microprocessors and advocates bank-exposed architectures as an effective way to exploit more memory parallelism. Bank-exposed architectures are a kind of software-exposed architecture: one in which the low-level details of the hardware are visible to the software. In a bank-exposed architecture, the memory banks are visible to the software, enabling the compiler to exploit a high degree of memory parallelism in addition to ILP. Bank-exposed architectures can be employed by general-purpose processors, and by embedded chips, such as those used for digital-signal processing. This thesis presents Maps, an enabling compiler technology for bank-exposed architectures. Maps solves the problem of bank disambiguation, i.e., how to distribute data in sequential programs among several banks to best exploit memory parallelism, while retaining the ability to disambiguate each data reference to a particular bank. Two methods for bank disambiguation are presented: equivalence-class unification and modulo unrolling. Taking a sequential program as input, a bank-disambiguation method produces two outputs: first, a distribution of each program object among the memory banks; and second, a bank number for every reference that can be proven to access a single, known bank for that data distribution. Finally, the thesis shows why non-disambiguated accesses are sometimes desirable. Dependences between disambiguated and non-disambiguated accesses are enforced through explicit synchronization and software serial ordering. The MIT Raw machine is an example of a software-exposed architecture.
Raw exposes its ILP, memory and communication mechanisms. The Maps system has been implemented in the Raw compiler. Results on Raw using sequential codes demonstrate that using bank disambiguation in addition to ILP improves performance by a factor of 3 to 5 over using ILP alone.
by Rajeev Barua.
Ph.D.
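The modulo-unrolling idea in the abstract above can be sketched in a few lines (an illustrative reconstruction, not code from the thesis; `N_BANKS`, `bank_of`, and `distribute` are invented names): distributing array element i to bank i mod N means that, once a loop is unrolled by a factor of N, each unrolled copy of the body touches exactly one statically known bank.

```python
# Illustrative sketch of bank disambiguation via modulo unrolling
# (assumed names; not code from the thesis). Element i of an array is
# stored on memory bank i % N_BANKS, so the k-th copy of a loop body
# unrolled by N_BANKS always accesses bank k -- a statically known bank.

N_BANKS = 4

def bank_of(index: int) -> int:
    """Bank that holds array element `index` under modulo distribution."""
    return index % N_BANKS

def distribute(array):
    """Split a flat array into per-bank storage lists."""
    banks = [[] for _ in range(N_BANKS)]
    for i, value in enumerate(array):
        banks[bank_of(i)].append(value)
    return banks

banks = distribute(list(range(8)))
# Copy k of the unrolled loop body only ever touches banks[k]:
assert banks == [[0, 4], [1, 5], [2, 6], [3, 7]]
```

With this distribution, each unrolled reference can be routed to its bank at compile time, which is the disambiguation guarantee the abstract describes.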
3

Nagel, Fabian Oliver. "Efficient query processing in managed runtimes." Thesis, University of Edinburgh, 2015. http://hdl.handle.net/1842/15869.

Full text
Abstract:
This thesis presents strategies to improve the query evaluation performance over huge volumes of relational-like data that is stored in the memory space of managed applications. Storing and processing application data in the memory space of managed applications is motivated by the convergence of two recent trends in data management. First, dropping DRAM prices have led to memory capacities that allow the entire working set of an application to fit into main memory and to the emergence of in-memory database systems (IMDBs). Second, language-integrated query transparently integrates query processing syntax into programming languages and, therefore, allows complex queries to be composed in the application. IMDBs typically serve as data stores to applications written in an object-oriented language running on a managed runtime. In this thesis, we propose a deeper integration of the two by storing all application data in the memory space of the application and using language-integrated query, combined with query compilation techniques, to provide fast query processing. As a starting point, we look into storing data as runtime-managed objects in collection types provided by the programming language. Queries are formulated using language-integrated query and dynamically compiled to specialized functions that produce the result of the query in a more efficient way by leveraging query compilation techniques similar to those used in modern database systems. We show that the generated query functions significantly improve query processing performance compared to the default execution model for language-integrated query. However, we also identify additional inefficiencies that can only be addressed by processing queries using low-level techniques which cannot be applied to runtime-managed objects. To address this, we introduce a staging phase in the generated code that makes query-relevant managed data accessible to low-level query code. 
Our experiments in .NET show an improvement in query evaluation performance of up to an order of magnitude over the default language-integrated query implementation. Motivated by additional inefficiencies caused by automatic garbage collection, we introduce a new collection type, the black-box collection. Black-box collections integrate the in-memory storage layer of a relational database system to store data and hide the internal storage layout from the application by employing existing object-relational mapping techniques (hence, the name black-box). Our experiments show that black-box collections provide better query performance than runtime-managed collections by allowing the generated query code to directly access the underlying relational in-memory data store using low-level techniques. Black-box collections also outperform a modern commercial database system. By removing huge volumes of collection data from the managed heap, black-box collections further improve the overall performance and response time of the application and improve the application’s scalability when facing huge volumes of collection data. To enable a deeper integration of the data store with the application, we introduce self-managed collections. Self-managed collections are a new type of collection for managed applications that, in contrast to black-box collections, store objects. As the data elements stored in the collection are objects, they are directly accessible from the application using references which allows for better integration of the data store with the application. Self-managed collections manually manage the memory of objects stored within them in a private heap that is excluded from garbage collection. We introduce a special collection syntax and a novel type-safe manual memory management system for this purpose. 
As was the case for black-box collections, self-managed collections improve query performance by utilizing a database-inspired data layout and allowing the use of low-level techniques. By also supporting references between collection objects, they outperform black-box collections.
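The query-compilation approach described in the abstract above can be illustrated with a toy sketch (assumed names throughout; this is not the thesis's .NET implementation): a default language-integrated query walks a chain of closures per element, while a compiled query is generated as one fused, specialized function.

```python
# Toy sketch of query compilation (assumed names, not the thesis's code).
# interpret_query pays one closure call per operator per row; compile_query
# generates a single fused function from the operator source fragments.

def interpret_query(rows, predicate, projection):
    """Generic execution: operators are opaque closures."""
    return [projection(r) for r in rows if predicate(r)]

def compile_query(predicate_src, projection_src):
    """Generate one specialized function with the operators inlined."""
    src = (
        "def _q(rows):\n"
        "    out = []\n"
        "    for r in rows:\n"
        f"        if {predicate_src}:\n"
        f"            out.append({projection_src})\n"
        "    return out\n"
    )
    ns = {}
    exec(src, ns)  # compile the generated source into a function
    return ns["_q"]

rows = [(1, 10.0), (2, 99.5), (3, 42.0)]
generic = interpret_query(rows, lambda r: r[1] > 40, lambda r: r[0])
fused = compile_query("r[1] > 40", "r[0]")(rows)
assert generic == fused == [2, 3]
```

The fused function avoids per-element indirection, the same kind of overhead the generated query functions in the thesis are designed to eliminate.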
4

Rose, Benjamin Aaron. "Intra- and Inter-chip Communication Support for Asymmetric Multicore Processors with Explicitly Managed Memory Hierarchies." Thesis, Virginia Tech, 2009. http://hdl.handle.net/10919/32824.

Full text
Abstract:
The use of asymmetric multi-core processors with on-chip computational accelerators is becoming common in a variety of environments ranging from scientific computing to enterprise applications. The focus of current research has been on making efficient use of individual systems, and porting applications to asymmetric processors. The use of these asymmetric processors, like the Cell processor, in a cluster setting is the inspiration for the Cell Connector framework presented in this thesis. Cell Connector adopts a streaming approach for providing data to compute nodes with high computing potential but limited memory resources. Instead of dividing very large data sets once among computation resources, Cell Connector slices, distributes, and collects work units off of a master data set held by a single large-memory machine. Using this methodology, Cell Connector is able to maximize the use of limited resources and produces results that are up to 63.3% better compared to standard non-streaming approaches.
Master of Science
5

Sandhu, Tahir S. "Beyond American Memory: technologies of library and office automation and their impact on multimedia computing for public education in the United States, 1963-present." Normal, Ill.: Illinois State University, 2001. http://wwwlib.umi.com/cr/ilstu/fullcit?p3006627.

Full text
Abstract:
Thesis (D.A.)--Illinois State University, 2001.
Title from title page screen, viewed April 18, 2006. Dissertation Committee: Frederick D. Drake (chair), Lawrence McBride, John B. Freed. Includes bibliographical references (leaves 351-398) and abstract. Also available in print.
6

Nejadfard, Kian. "Context-aware automated refactoring for unified memory allocation in NVIDIA CUDA programs." Cleveland State University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=csu1624622944458295.

Full text
7

Ayers, Andrew Edward. "M̲, a memory manager for Ḻ." Thesis, Massachusetts Institute of Technology, 1988. http://hdl.handle.net/1721.1/45691.

Full text
Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1988.
Characters with an underscore appear as italic on the t.p.
Bibliography: leaves 94-95.
by Andrew Edward Ayers.
M.S.
8

Harrison, Pascale. "How individuals with Fibromyalgia manage their memory problems." Thesis, University of the West of England, Bristol, 2012. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.572883.

Full text
Abstract:
Background: Previous studies have reported both objective and subjective evidence that individuals with Fibromyalgia (FM) suffer from memory problems (Landro et al. 1997; Katz et al. 2004; Munoz et al. 2005). The current literature has not examined how individuals cope with these difficulties. Aims of the study: The three aims were to: 1) measure the perceptions of managing memory problems in an FM population; 2) examine their coping responses/strategies to establish whether the responses can be explained by models of stress, coping, and adjustment (the SRM and the Transactional Model of Stress and Coping); and 3) explore the relationships between memory problems, mood, employment, health, and pain. Methods: A mixed-methodology design was adopted within an internet survey. 100 participants (91 female, 4 male, and 5 who preferred not to answer) with FM were recruited from two FM charity websites. The modal age of the sample was 51-55 years. 61% of participants were from 'other' countries and 38% from the UK. Eight subjective questions (with comments) and psychometric tests were employed (Hospital Anxiety and Depression Scale, Pain Catastrophizing Scale, Memory Aids Checklist, and the Memory Complaint Questionnaire, plus demographic questions including employment). The questions were analysed using thematic analysis and inductive/deductive application of two theoretical models (the Self-Regulatory Model (Leventhal et al. 2003) and the Transactional Model of Stress and Coping (Lazarus et al. 1984b)). The data from the psychometric tests were analysed using SPSS (multiple regression and ANOVA). Results: Six themes emerged from the qualitative data: "external memory aids", "impact upon daily life", "barriers that impact upon using strategies", "emotional well-being", "other strategies", and "denial of memory problems".
The evidence from the themes implies that memory problems had a global impact upon a person's daily life: well-being, home life, employment, and coping behaviour. The SRM provided a framework for explaining the coping process. The regression analysis indicated that depression, but not pain or anxiety, significantly influenced perceptions of memory problems. Conclusion: The outcome of this study suggests that both FM and memory problems impact upon well-being by causing difficulties in the home and in employment. The six themes suggest that memory problems can be managed by using external aids, family members/professionals, and hobbies and interests. A recommendation from this research is that psycho-educational interventions would help to reduce memory problems and improve well-being in this population.
9

Rezaei, Mehran. "Intelligent Memory Manager: Towards improving the locality behavior of allocation-intensive applications." Thesis, University of North Texas, 2004. https://digital.library.unt.edu/ark:/67531/metadc4491/.

Full text
Abstract:
Dynamic memory management required by allocation-intensive (i.e., object-oriented and linked-data-structure) applications has led to a large number of research trends. Memory performance in these applications, measured in execution cycles lost to cache misses, continues to lag as the ever-increasing CPU-memory speed gap grows. Sophisticated prefetching techniques, data relocation, and multithreaded architectures have tried to address memory latency. These techniques are not completely successful, since they require either extra hardware/software in the system or special properties in the applications. Software needed for prefetching and data-relocation strategies, aimed at improving cache performance, pollutes the cache so that the technique itself becomes counter-productive. On the other hand, the extra hardware complexity needed in multithreaded architectures decelerates the CPU's clock, since "Simpler is Faster." This dissertation, directed at finding the cause of the poor locality behavior of allocation-intensive applications, studies allocators and their impact on the cache performance of these applications. Our study concludes that service functions in general, and memory management functions in particular, entangle with the application's code and become the major cause of cache pollution. In this dissertation, we present a novel technique that transfers the allocation and de-allocation functions entirely to a separate processor residing on chip with DRAM (Intelligent Memory Manager). Our empirical results show that, on average, 60% of the cache misses caused by allocation and de-allocation service functions are eliminated using our technique.
10

Muthukumaraswamy, Sivakumar Vijay. "An Evaluation of the Linux Virtual Memory Manager to Determine Suitability for Runtime Variation of Memory." Thesis, Virginia Tech, 2007. http://hdl.handle.net/10919/31608.

Full text
Abstract:
Systems that support virtual memory virtualize the available physical memory such that the applications running on them operate under the assumption that these systems have a larger amount of memory available than is actually present. The memory managers of these systems manage the virtual and the physical address spaces and are responsible for converting the virtual addresses used by the applications to the physical addresses used by the hardware. The memory managers assume that the amount of physical memory is constant and does not change during their period of operation. Some operating scenarios however, such as the power conservation mechanisms and virtual machine monitors, require the ability to vary the physical memory available at runtime, thereby making invalid the assumptions made by these memory managers. In this work we evaluate the suitability of the Linux Memory Manager, which assumes that the available physical memory is constant, for the purposes of varying the memory at run time. We have implemented an infrastructure over the Linux 2.6.11 kernel that enables the user to vary the physical memory available to the system. The available physical memory is logically divided into banks and each bank can be turned on or off independent of the others, using the new system calls we have added to the kernel. Apart from adding support for the new system calls, other changes had to be made to the Linux memory manager to support the runtime variation of memory. To evaluate the suitability for varying memory we have performed experiments with varying memory sizes on both the modified and the unmodified kernels. We have observed that the design of the existing memory manager is not well suited to support the runtime variation of memory; we provide suggestions to make it better suited for such purposes. 
Even though applications running on systems that support virtual memory do not use the physical memory directly and are not aware of the physical addresses they use, the amount of physical memory available for use affects the performance of the applications. The results of our experiments have helped us study the influence that the amount of physical memory available for use has on the performance of various types of applications. These results can be used in scenarios requiring the ability to vary the memory at runtime to do so with the least degradation in application performance.
Master of Science
11

Zhang, Q. "Memory management architecture for next generation networks traffic managers." Thesis, Queen's University Belfast, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.557859.

Full text
Abstract:
The trend of moving conventional IP networks towards Next Generation Networks (NGNs) has highlighted the need for more sophisticated Traffic Managers (TMs) to guarantee better network performance and Quality of Service (QoS); these have to be scalable to support increasing link bandwidth and to cater for more diverse emerging applications. Current TM solutions, though, are limited and not flexible enough to support new TM functionality or QoS with increasing diversity at faster speeds. This thesis investigates efficient and flexible memory management architectures that are critical in determining scalability and the upper limits of TM performance. The approach presented takes advantage of current FPGA technology, which now offers a high density of computational resources and flexible memory configurations, leading to what the author contends to be an ideal, programmable platform for distributed network management. The thesis begins with a survey of current TM solutions and their underlying technologies/architectures, the outcome of which indicates that memory and memory interfacing are the major factors in determining the scalability and upper limits of TM performance. An analysis of the implementation cost for a new TM with the capability of integrated queuing and scheduling further highlights the need to develop a more effective memory management architecture. A new on-demand queue manager (QM) architecture for a programmable TM is then proposed that can dynamically map the ongoing active flows to a limited number of physical queues. Compared to traditional QMs, it consumes far fewer memory resources, leading to a more scalable and efficient TM solution. Based on an analysis of the effect of varying Internet traffic on the proposed QM, a more robust and resilient QM architecture is derived that achieves higher scalability and performance by adapting its functionality to the changing network conditions.
12

Beu, Jesse Garrett. "Design of heterogeneous coherence hierarchies using manager-client pairing." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/47710.

Full text
Abstract:
Over the past ten years, the architecture community has witnessed the end of single-threaded performance scaling and a subsequent shift in focus toward multicore and manycore processing. While this is an exciting time for architects, with many new opportunities and design spaces to explore, this brings with it some new challenges. One area that is especially impacted is the memory subsystem. Specifically, the design, verification, and evaluation of cache coherence protocols becomes very challenging as cores become more numerous and more diverse. This dissertation examines these issues and presents Manager-Client Pairing as a solution to the challenges facing next-generation coherence protocol design. By defining a standardized coherence communication interface and permissions checking algorithm, Manager-Client Pairing enables coherence hierarchies to be constructed and evaluated quickly without the high design-cost previously associated with hierarchical composition. Further, Manager-Client Pairing also allows for verification composition, even in the presence of protocol heterogeneity. As a result, this rapid development of diverse protocols is ensured to be bug-free, enabling architects to focus on performance optimization, rather than debugging and correctness concerns, while comparing diverse coherence configurations for use in future heterogeneous systems.
13

El-Shambakey, Mohammed Talat. "Real-Time Software Transactional Memory: Contention Managers, Time Bounds, and Implementations." Diss., Virginia Tech, 2013. http://hdl.handle.net/10919/23867.

Full text
Abstract:
Lock-based concurrency control suffers from programmability, scalability, and composability challenges. These challenges are exacerbated in emerging multicore architectures, on which improved software performance must be achieved by exposing greater concurrency. Transactional memory (TM) is an emerging alternative synchronization model for shared memory objects that promises to alleviate these difficulties. In this dissertation, we consider software transactional memory (STM) for concurrency control in multicore real-time software, and present a suite of real-time STM contention managers for resolving transactional conflicts. The contention managers are called ECM, RCM, LCM, PNF, and FBLT. RCM and ECM resolve conflicts using fixed and dynamic priorities of real-time tasks, respectively, and are naturally intended to be used with the fixed priority (e.g., G-RMA) and dynamic priority (e.g., G-EDF) multicore real-time schedulers, respectively. LCM resolves conflicts based on task priorities as well as atomic section lengths, and can be used with G-EDF or G-RMA schedulers. Transactions under ECM, RCM, and LCM may retry due to conflicts with higher priority tasks even when there are no shared objects, i.e., transitive retry. PNF avoids transitive retry and optimizes processor usage by lowering the priority of retrying transactions, thereby enabling other non-conflicting transactions to proceed. PNF, however, requires a priori knowledge of all requested objects for each atomic section, which is inconsistent with the semantics of dynamic STM. Moreover, its centralized design increases overhead. FBLT avoids transitive retry, does not require a priori knowledge of requested objects, and has a decentralized design. We establish upper bounds on transactional retry costs and task response times under the contention managers through schedulability analysis.
Since ECM and RCM preserve the semantics of the underlying real-time scheduler, their maximum transactional retry cost is double the maximum atomic section length. This is improved in the design of LCM, which achieves shorter retry costs and tighter upper bounds. As PNF avoids transitive retry and improves processor usage, it yields shorter retry costs and tighter upper bounds than ECM, RCM, and LCM. FBLT's upper bounds are similarly tight because it combines the advantages of PNF and LCM. We formally compare the proposed contention managers with each other, with lock-free synchronization, and with multiprocessor real-time locking protocols. Our analysis reveals that, for most cases, ECM, RCM, and LCM achieve higher schedulability than lock-free synchronization only when the atomic section length does not exceed half of lock-free synchronization's retry loop length. With equal periods and greater access times for shared objects, atomic section length under ECM, RCM, and LCM can be much larger than the retry loop length while still achieving better schedulability. With proper values for LCM's design parameters, atomic section length can be larger than the retry loop length for better schedulability. Under PNF, atomic section length can exceed lock-free's retry loop length and still achieve better schedulability in certain cases. FBLT achieves equal or better schedulability than lock-free with appropriate values for design parameters. The schedulability advantage of the contention managers over multiprocessor real-time locking protocols such as Global OMLP and RNLP depends upon the value of $s_{max}/L_{max}$, the ratio of the maximum transaction length to the maximum critical section length. FBLT's schedulability is equal or better than Global OMLP and RNLP if $s_{max}/L_{max} \le 2$.
Checkpointing enables partial roll-back of transactions by recording transaction execution states (i.e., checkpoints) during execution, allowing roll-back to a previous checkpoint instead of transaction start, improving task response time. We extend FBLT with checkpointing and develop CP-FBLT, and identify the conditions under which CP-FBLT achieves equal or better schedulability than FBLT. We implement the contention managers in the Rochester STM framework and conduct experimental studies using a multicore real-time Linux kernel. Our studies reveal that among the contention managers, CP-FBLT has the best average-case performance. CP-FBLT's higher performance is due to the fact that PNF's and LCM's advantages are combined into the design of FBLT, which is the base of CP-FBLT. Moreover, checkpointing improves task response time. The contention managers were also found to have equal or better average-case performance than lock-free synchronization: more jobs meet their deadlines using CP-FBLT, FBLT, and PNF than lock-free synchronization by 34.6%, 28.5%, and 32.4% (on average), respectively. The superiority of the contention managers is directly due to their better conflict resolution policies. Locking protocols such as OMLP and RNLP were found to perform better: more jobs meet their deadlines under OMLP and RNLP than any contention manager by 12.4% and 13.7% (on average), respectively. However, the proposed contention managers have numerous qualitative advantages over locking protocols. Locks do not compose, whereas STM transactions do. To allow multiple objects to be accessed in a critical section, OMLP assigns objects to non-conflicting groups, where each group is protected by a distinct lock. RNLP assumes that objects are accessed in a specific order to prevent deadlocks. In contrast, STM allows multiple objects to be accessed in a transaction in any order, while guaranteeing deadlock-freedom, which significantly increases programmability.
Moreover, STM offers platform independence: the proposed contention managers can be entirely implemented in the user-space as a library. In contrast, real-time locking protocols such as OMLP and RNLP must be supported by the underlying platform (i.e., operating system or virtual machine).
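As a minimal sketch of how such a contention manager resolves a conflict (assumed names and a simplified EDF-style rule modeled on the ECM described above; not the dissertation's implementation): when two transactions conflict, the one whose owning task has the later absolute deadline aborts and retries.

```python
# Minimal contention-manager sketch (assumed names; simplified EDF-style
# rule, not the dissertation's code). On a conflict, the transaction of
# the task with the later deadline loses and must retry.

from dataclasses import dataclass

@dataclass
class Txn:
    task: str
    deadline: float   # absolute deadline of the owning task
    retries: int = 0

def resolve_conflict(attacker: Txn, holder: Txn) -> Txn:
    """Return the transaction that must abort and retry."""
    loser = attacker if attacker.deadline > holder.deadline else holder
    loser.retries += 1
    return loser

t1 = Txn("T1", deadline=5.0)
t2 = Txn("T2", deadline=9.0)
assert resolve_conflict(t2, t1) is t2   # later deadline aborts
```

Counting retries per transaction is what makes the retry-cost bounds above analyzable: each loss adds at most one more atomic-section length to the retry cost.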
Ph. D.
14

Tsegaye, Melekam Asrat. "A model for a context aware machine-based personal memory manager and its implementation using a visual programming environment." Thesis, Rhodes University, 2007. http://hdl.handle.net/10962/d1006563.

Full text
Abstract:
Memory is a part of cognition. It is essential for an individual to function normally in society. It encompasses an individual's lifetime experience, thus defining his identity. This thesis develops the concept of a machine-based personal memory manager which captures and manages an individual's day-to-day external memories. Rather than accumulating large amounts of data which has to be mined for useful memories, the machine-based memory manager automatically organizes memories as they are captured to enable their quick retrieval and use. The main functions of the machine-based memory manager envisioned in this thesis are the support and the augmentation of an individual's biological memory system. In the thesis, a model for a machine-based memory manager is developed. A visual programming environment, which can be used to build context aware applications as well as a proof-of-concept machine-based memory manager, is conceptualized and implemented. An experimental machine-based memory manager is implemented and evaluated. The model describes a machine-based memory manager which manages an individual's external memories by context. It addresses the management of external memories which accumulate over long periods of time by proposing a context aware file system which automatically organizes external memories by context. It describes how personal memory management can be facilitated by machine using six entities (life streams, memory producers, memory consumers, a memory manager, memory fragments and context descriptors) and the processes in which these entities participate (memory capture, memory encoding and decoding, memory decoding and retrieval). The visual programming environment represents a development tool which contains facilities that support context aware application programming. For example, it provides facilities which enable the definition and use of virtual sensors. 
It enables rapid programming with a focus on component re-use and dynamic composition of applications through a visual interface. The experimental machine-based memory manager serves as an example implementation of the machine-based memory manager which is described by the model developed in this thesis. The hardware used in its implementation consists of widely available components such as a camera, microphone and sub-notebook computer which are assembled in the form of a wearable computer. The software is constructed using the visual programming environment developed in this thesis. It contains multiple sensor drivers, context interpreters, a context aware file system as well as memory retrieval and presentation interfaces. The evaluation of the machine-based memory manager shows that it is possible to create a machine which monitors the states of an individual and his environment, and manages his external memories, thus supporting and augmenting his biological memory.
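Purely as a hypothetical illustration of the model's entities (the thesis's actual data structures are not given in this abstract), a memory fragment tagged with context descriptors, so that the context aware file system can organize and retrieve it by context rather than by path, might be represented as follows. All type and field names here are assumptions, not the thesis's API.

```c
#include <string.h>

/* Hypothetical representation of the model's entities: a captured
   memory fragment carries context descriptors that a context aware
   file system can match against when retrieving memories. */
typedef struct {
    const char *key;            /* e.g. "location", "person"        */
    const char *value;
} context_descriptor;

typedef struct {
    const char *payload;        /* reference to captured media      */
    context_descriptor ctx[4];  /* descriptors attached at capture  */
    int nctx;
} memory_fragment;

/* Retrieval by context: return 1 if the fragment carries the
   requested descriptor, 0 otherwise. */
static int matches(const memory_fragment *m, const char *k, const char *v) {
    for (int i = 0; i < m->nctx; i++)
        if (!strcmp(m->ctx[i].key, k) && !strcmp(m->ctx[i].value, v))
            return 1;
    return 0;
}
```

A query over a life stream would then filter fragments with `matches` instead of walking a directory tree.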
APA, Harvard, Vancouver, ISO, and other styles
15

Manickavasagam, Senthilkumar. ""a+b" arithmetic - Theory and implementation." Ohio University / OhioLINK, 1996. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1178051605.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

BINHOTE, JULIANA MOLINA. "WHO BUILDS AND MANAGES THE HISTORY AND MEMORY OF ORGANIZATIONS?: A STUDY OF THE PRODUCTION COMPANIES OF ORGANIZATIONAL HISTORIES." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2017. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=33116@1.

Full text
Abstract:
The mainstream management literature holds that the competitive advantage of organizations depends on their ability to learn how to learn and to streamline their processes and organizational routines so as to internalize the knowledge generated by the interaction of their organizational actors with the external environment. This concern has generated several studies focused on the way this knowledge is generated, organized and stored, turning increasingly to the role of history in the company's strategy and in the construction of internal and external organizational sense through the creation of memory centers and the management of historical organizational documents. In the practical field, aware of the external and internal strategic value of memory, and even linking it to a duty of memory, various organizations underwent memorialization processes, with the emergence of several Memory and Documentation Centers (MDC) and the management of organizational historical documents. However, few studies discuss the importance of the way in which this information is collected, selected and incorporated into the organization's knowledge system, and by whom it is built. The contribution of this study is to increase understanding of organizational memory management policies, both for the purpose of retrieving and disseminating stories and memories and for their strategic use for social legitimation and market differentiation, by identifying what we call producers of business history. In this study, the following companies were considered producers of history and memory: Grifo, Tempo e Memória, Expomus, Memória e Identidade and Museu da Pessoa.
Regarding the methodological procedures, the data were collected from (a) interviews with the companies identified as the main builders of history and organizational memory; (b) documentary research through the websites and institutional books of the organizations analyzed; (c) interviews and technical visits with clients/contractors of these history and memory building services; and (d) interviews with representatives of the Brazilian business communication association. The analysis of the data was based on the procedures of content analysis and generated categories identified a priori (history and team, portfolio of products, portfolio of clients, relationship with the market and difficulties encountered) and a posteriori (the perspective of the client/contractor of a product or service offered by each company examined). Finally, the analysis revealed a homogeneity of memory management policies; the dissemination of these policies inside and outside the company according to the intention of the contracting organization; the use of narratives built for the purpose intended by the organization; and an advocacy of a duty of memory and of historical social responsibility. However, we identified that the strategic use of MDCs has not yet reached its full potential, which allowed us to list some managerial implications and suggestions for future research.
APA, Harvard, Vancouver, ISO, and other styles
17

Jardim, Mara Publio de Souza Veiga. "O guardião da Memória." Pontifícia Universidade Católica de Goiás, 2004. http://localhost:8080/tede/handle/tede/2279.

Full text
Abstract:
After a survey on the feast of the Holy Ghost, carried out in the city of Santa Cruz de Goiás, a small city in the interior of the State, it was possible to see the shaping of a figure who became both the theme and the object of this dissertation. Centering the discussion upon cultural assets, memory and identity, I identified and nominated a character who represents a new category: the NATIVE MANAGER. It is known that the activity carried out by this manager has always existed and that he has already been recognized by organizations such as UNESCO and IPHAN, even though they might have given him a different title. Cognizant of the rituals, with a view of his own, oftentimes acrimonious and disparaging, of the society where he lived, the character did not possess political power which would give him status, but he exerted a specific kind of power that emanated from his acknowledged wisdom, albeit not an erudite or academic one. Cultural manifestations were kept alive because within the community there was this manager who recognized the importance of their cultural asset. His unchallenged leadership and charisma, along with the ability to administrate people and events, made his actions possible. Within a simple and resourceless universe, deep in the hinterland, the manager promoted scenically elaborate feasts, commemorations and pageants, challenging congeneric activities held by the erudite city dwellers. I was able to see these characteristics for myself, bearing witness, throughout more than 20 years in this community, to the way in which he was respected and followed by everyone. The manager interpreted and analyzed not only History, but also his own history, thereby preventing documents, music, rites and mores from becoming lost. The person who incorporated this character was Alberto da Paz. His life history has its origins in his slave ancestors and is still in the making as, even blind, he persists in his role as manager.
An indispensable figure in rehearsals, chants, performances, in fact, everywhere, he is "the guardian of memory" who gives these people their identity.
APA, Harvard, Vancouver, ISO, and other styles
18

Martins, Tiago Ferreira. "Processamento analítico de dados em aplicações de monitorização de performance de redes utilizando in-memory data grids." Master's thesis, Universidade de Aveiro, 2016. http://hdl.handle.net/10773/22719.

Full text
Abstract:
Master's in Information Systems
In recent years, there has been a strong increase in the volume of data and its valuation by organizations, especially in network performance monitoring applications, as seen in the Nokia Performance Manager (NPM) tool. The current situation is explained by the growth of the telecommunications sector and the need to respond to the constant evolution of 2G, 3G and currently 4G/LTE network technologies in the process of centralizing, processing and storing data or network performance indicators collected by antennas distributed across different geographic areas. In this context, the NPM tool provides the customer with a global view of network performance in real time, through the generation of pre-configured reports and their later storage for future visualizations. Given the need to store these reports for a longer period, without storage limitations and more efficiently, traditional storage and management systems, RDBMS on disk and cache, do not best meet these needs. That said, the "In-Memory Data Grid" concept emerged as a completely distributed data platform on an in-memory architecture. In order to prove this concept, a software prototype was implemented for the report caching process, an NPM functionality, using the Apache Ignite/GridGain and Apache Geode/Pivotal Gemfire technologies, and a test environment was then set up to evaluate the two solutions and determine the ideal one to integrate into the NPM tool.
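Stripped of the distributed machinery that Ignite or Geode provide, the report-caching functionality evaluated here reduces to a get-or-compute pattern: serve a stored report if present, otherwise generate it once and keep it for future visualizations. The sketch below is a backend-neutral toy with hypothetical names, not the NPM prototype's code.

```c
#include <stdio.h>

#define SLOTS 8

/* Hypothetical in-memory report cache; in the NPM prototype this
   role is played by a distributed in-memory data grid. */
static struct { int key; char report[64]; int valid; } grid[SLOTS];

/* Stand-in for the expensive report-generation query. */
static void build_report(int key, char *out) {
    sprintf(out, "report-%d", key);
}

/* Get-or-compute: on a miss (empty slot or key collision), rebuild
   the report and cache it; on a hit, return the cached copy. */
static const char *get_report(int key) {
    int slot = key % SLOTS;
    if (!grid[slot].valid || grid[slot].key != key) {
        grid[slot].key = key;
        build_report(key, grid[slot].report);
        grid[slot].valid = 1;
    }
    return grid[slot].report;
}
```

A data grid generalizes this by partitioning the slots across cluster nodes, which is what removes the single-machine storage limitation discussed in the abstract.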
APA, Harvard, Vancouver, ISO, and other styles
19

Lindh, Jayesh. "Våra viktigaste arbeten och arbetsplatser i livet : En självbiografisk minnesstudie." Thesis, Högskolan i Gävle, Avdelningen för arbetshälsovetenskap och psykologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-30709.

Full text
Abstract:
The purpose of the study was to investigate the most important positive and negative autobiographical work-related memories of managers and non-managers, related to work-related identity and the phenomenology of workplace memories. The study included 194 people, 124 men and 70 women, aged 25-65. The measuring instrument consisted of a survey in three parts containing questions based on previous research. The positive work-related memories (80%) mainly contained two themes, work environment and my working duties, while the negative workplace memories (20%) showed more clearly that a majority of the memories were linked to leadership, followed by work environment. The design for the analyses of the quantitative data was a between-subject design, with two independent variables, Work-related memories (positive/negative) and Position (manager/non-manager), and two dependent variables, Work-related self (emotion/cognition) and the ten phenomenological dimensions of autobiographical memory. The results showed that managers, compared to non-managers, experienced a stronger work-related self on both the emotional and cognitive components. Furthermore, the results showed significant differences between the positive and negative work-related memories in three out of the ten phenomenological dimensions. An interaction between Positive/Negative work-related memories and Manager/Non-manager on three out of the ten phenomenological dimensions could also be observed.
APA, Harvard, Vancouver, ISO, and other styles
20

Houvet-Carrau, Christiane. "Capital social de l'entreprise familiale : les patrimoines individuels d'habitudes des dirigeants membres de la famille comme clé d'exploration des dimensions cognitive et relationnelle." Thesis, Bordeaux, 2015. http://www.theses.fr/2015BORD0013.

Full text
Abstract:
The objective of this thesis is to explore the link between family social capital and the cognitive and relational dimensions of the family firm (FF) social capital. At the crossroads of management sciences, sociology and psychoanalysis, this work contributes to a better knowledge of the FF, which intermingles, because of its very nature, emotions and affects in a very specific and ambivalent way. We consider that the family, through its culture, provides a shared framework to tackle reality and therefore influences the patterns of action and thought of its members. Thus, the question arises of the impact of the confrontation or superposition of these patterns (between generations, between brothers and sisters, or family members and non-family employees) on the organizational culture of the FF. To address the ambivalences animating these family-individual-business synergies, we adapt the concept of ICH (Individual Capitals of Habits) developed by Kaufmann (2001) to family-member managers (FMM), throughout a single case (preceded by an exploratory case), and we enrich it with emotion-psyche-affect dimensions. Thus we use a dialectical cube model as a framework of analysis to shed light, from the inter- and transgenerational angle, on an ICH management process. The analysis of the risks induced by this transversal process provides FMM with means to improve internal organizational social capital and FF governance. The design of the research is abductive, based on a qualitative and interpretative approach. It combines FMM life stories, cognitive maps, semi-directive interviews (non-family-member managers), matrices of process analysis, genograms, and risk analysis.
APA, Harvard, Vancouver, ISO, and other styles
21

"Scratchpad Management in Software Managed Manycore Architectures." Doctoral diss., 2017. http://hdl.handle.net/2286/R.I.46214.

Full text
Abstract:
abstract: Caches have long been used to reduce memory access latency. However, the increased complexity of cache coherence brings significant challenges in processor design as the number of cores increases. While making caches scalable is still an important research problem, some researchers are exploring the possibility of a more power-efficient SRAM structure called scratchpad memory (SPM). SPMs consume significantly less area and are more energy-efficient per access than caches, and therefore make the design of on-chip memories much simpler. Unlike caches, which fetch data from memories automatically, an SPM requires explicit instructions for data transfers. SPM-only architectures are thus named software managed manycore (SMM) architectures, since the data movements of such architectures rely on software. SMM processors have been widely used in different areas, such as embedded computing, network processing, or even high performance computing. While SMM processors provide a low-power platform, the hardware alone does not guarantee power efficiency if applications on such processors deliver low performance. Efficient software techniques are therefore required. A big body of management techniques for SMM architectures is compiler-directed, as inserting data movement operations by hand forces programmers to trace the flow of data, which can be error-prone and sometimes difficult if not impossible. This thesis develops compiler-directed techniques to manage data transfers for embedded applications on SMMs efficiently. The techniques analyze and find the proper program points and insert data movement instructions accordingly. The techniques manage code, stack and heap data of applications, and reduce execution time by 14%, 52% and 80% respectively compared to their predecessors on typical embedded applications. On top of managing local data, a technique is also developed for shared data in SMM architectures.
Experimental results show it achieves more than a 2X speedup over the previous technique on average.
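The explicit data movement the abstract describes can be sketched concretely. The snippet below is a minimal illustration, not the dissertation's technique: `dma_get`/`dma_put` are hypothetical helpers (emulated with `memcpy` so the sketch runs anywhere) standing in for real DMA commands, and a large buffer is streamed through a small scratchpad tile by tile, exactly the kind of code a compiler-directed technique would insert automatically.

```c
#include <string.h>

#define SPM_SIZE 64                          /* bytes of scratchpad */

static int spm[SPM_SIZE / sizeof(int)];      /* core-local SPM      */

/* Hypothetical DMA helpers: a real SMM core would program a DMA
   engine; memcpy stands in here. */
static void dma_get(void *spm_dst, const void *main_src, size_t n) {
    memcpy(spm_dst, main_src, n);            /* main memory -> SPM  */
}
static void dma_put(void *main_dst, const void *spm_src, size_t n) {
    memcpy(main_dst, spm_src, n);            /* SPM -> main memory  */
}

/* Stream a large buffer through the small SPM, tile by tile:
   explicit fetch, compute on local data, explicit write-back. */
static void scale_buffer(int *data, int count, int factor) {
    int tile = SPM_SIZE / (int)sizeof(int);
    for (int i = 0; i < count; i += tile) {
        int n = (count - i < tile) ? count - i : tile;
        dma_get(spm, &data[i], n * sizeof(int));
        for (int j = 0; j < n; j++)
            spm[j] *= factor;
        dma_put(&data[i], spm, n * sizeof(int));
    }
}
```

On a cache-based core the inner loop alone would suffice; on an SMM core every `dma_get`/`dma_put` must appear explicitly, which is the programming burden the compiler techniques remove.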
Dissertation/Thesis
Doctoral Dissertation Computer Science 2017
APA, Harvard, Vancouver, ISO, and other styles
22

"Compiler and Runtime for Memory Management on Software Managed Manycore Processors." Doctoral diss., 2014. http://hdl.handle.net/2286/R.I.24758.

Full text
Abstract:
abstract: We are expecting hundreds of cores per chip in the near future. However, scaling the memory architecture in manycore architectures becomes a major challenge. Cache coherence provides a single image of memory at any time in execution to all the cores, yet coherent cache architectures are believed not to scale to hundreds and thousands of cores. In addition, caches and coherence logic already take 20-50% of the total power consumption of the processor and 30-60% of die area. Therefore, a more scalable architecture is needed for manycore architectures. Software Managed Manycore (SMM) architectures emerge as a solution. They have a scalable memory design in which each core has direct access only to its local scratchpad memory, and any data transfers to/from other memories must be done explicitly in the application using Direct Memory Access (DMA) commands. Lack of automatic memory management in the hardware makes such architectures extremely power-efficient, but they also become difficult to program. If the code/data of the task mapped onto a core cannot fit in the local scratchpad memory, then DMA calls must be added to bring in the code/data before it is required, and it may need to be evicted after its use. However, doing this adds a lot of complexity to the programmer's job: programmers must now worry about data management on top of the functional correctness of the program, which is already quite complex. This dissertation presents a comprehensive compiler and runtime integration to automatically manage the code and data of each task in the limited local memory of the core. We first developed Complete Circular Stack Management, which manages stack frames between the local memory and the main memory, and addresses the stack pointer problem as well. Though it works, we found we could further optimize the management for most cases. Thus Smart Stack Data Management (SSDM) is provided.
In this work, we formulate the stack data management problem and propose a greedy algorithm for it. Later on, we propose a general cost estimation algorithm, based on which the CMSM heuristic for the code mapping problem is developed. Finally, heap data is dynamic in nature and therefore hard to manage. We provide two schemes to manage an unlimited amount of heap data in a constant-sized region of the local memory. In addition to those separate schemes for different kinds of data, we also provide a memory partition methodology.
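As a rough illustration of the circular stack management idea (the dissertation's actual CCSM/SSDM algorithms are not reproduced here), the sketch below pushes frames into a fixed-size local region and spills the oldest bytes to a main-memory buffer when a new frame does not fit; on function exit, a frame whose oldest bytes were spilled is reconciled and the caller's frames are refilled. All names and sizes are assumptions.

```c
#include <string.h>

#define LOCAL_STACK 128      /* bytes of stack space in the SPM    */

static char local[LOCAL_STACK];  /* scratchpad-resident stack      */
static char spill[1024];         /* main-memory backing store      */
static int  top;                 /* bytes used in local            */
static int  spilled;             /* bytes evicted to main memory   */

/* Prologue: make room for a `size`-byte frame, evicting the oldest
   local bytes to main memory (a DMA in a real SMM) if needed. */
static void *frame_enter(int size) {
    if (top + size > LOCAL_STACK) {
        int evict = top + size - LOCAL_STACK;
        memcpy(&spill[spilled], local, evict);       /* oldest out  */
        memmove(local, local + evict, top - evict);  /* compact     */
        spilled += evict;
        top -= evict;
    }
    void *frame = &local[top];
    top += size;
    return frame;
}

/* Epilogue: pop the frame; if part of it had been spilled, discard
   that part and refill the caller's frames from main memory. */
static void frame_exit(int size) {
    if (top >= size) { top -= size; return; }
    int need = size - top;       /* bytes of this frame in spill    */
    top = 0;
    spilled -= need;
    int refill = spilled < LOCAL_STACK ? spilled : LOCAL_STACK;
    memcpy(local, &spill[spilled - refill], refill);
    spilled -= refill;
    top = refill;
}
```

The point of the sketch is the invariant: the local region never overflows, while the logical stack can grow past it into main memory.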
Dissertation/Thesis
Ph.D. Computer Science 2014
APA, Harvard, Vancouver, ISO, and other styles
23

Lashgar, Ahmad. "Addressing software-managed cache development effort in GPGPUs." Thesis, 2017. https://dspace.library.uvic.ca//handle/1828/8483.

Full text
Abstract:
GPU Computing promises very high performance per watt for highly-parallelizable workloads. Nowadays, there are various programming models developed to utilize the computational power of GPGPUs. Low-level programming models provide full control over GPU resources and allow programmers to achieve peak performance of the chip. In contrast, high-level programming models hide GPU-specific programming details and allow programmers to mainly express parallelism. Later, the compiler parses the parallelization annotations and translates them to low-level programming models. This saves tremendous development effort and improves productivity, often achieved at the cost of sacrificing performance. In this dissertation, we investigate the limitations of high-level programming models in achieving performance near that of low-level models. Specifically, we study the performance and productivity gap between high-level OpenACC and low-level CUDA programming models and aim at reducing the performance gap, while maintaining the productivity advantages. We start this study by developing our in-house OpenACC compiler. Our compiler, called IPMACC, translates OpenACC for C to CUDA and uses the system compiler to generate GPU binaries. We develop various micro-benchmarks to understand GPU structure and implement a more efficient OpenACC compiler. By using IPMACC, we evaluate the performance and productivity gap between a wide set of OpenACC and CUDA kernels. From our findings, we conclude that one of the major reasons behind the big performance gap between OpenACC and CUDA is CUDA's flexibility in exploiting the GPU software-managed cache. Identifying this key benefit in low-level CUDA, we follow three effective paths in utilizing the software-managed cache as CUDA does, but at a lower development effort (e.g. using OpenACC instead). In the first path, we explore the possibility of employing existing OpenACC directives in utilizing software-managed cache.
Specifically, the cache directive is devised in the OpenACC API standard to allow the use of software-managed cache in GPUs. We introduce an efficient implementation of the OpenACC cache directive that performs very close to CUDA. However, we show that the use of the cache directive is limited and the directive may not offer the full functionality associated with the software-managed cache as it exists in CUDA. In the second path, we build on our observations of the limitations of the cache directive and propose a new OpenACC directive, called the fcw directive, to address the shortcomings of the cache directive, while maintaining OpenACC productivity advantages. We show that the fcw directive overcomes the cache directive limitations and narrows down the performance gap between CUDA and OpenACC significantly. In the third path, we propose a fully-automated hardware/software approach, called TELEPORT, for software-managed cache programming. On the software side, TELEPORT statically analyzes CUDA kernels and identifies opportunities in utilizing the software-managed cache. The required information is passed to the GPU via API calls. Based on this information, on the hardware side, TELEPORT prefetches the data to the software-managed cache at runtime. We show that TELEPORT can improve performance by 32% on average, while lowering the development effort by 2.5X, compared to the hand-written CUDA equivalent.
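For reference, the cache directive discussed here is part of the OpenACC standard. A minimal use might look like the following 3-point stencil, where the directive hints that the accessed window of `in` be staged in the GPU's software-managed (shared) memory, as a CUDA kernel would do by hand. Compiled without OpenACC support the pragmas are simply ignored, so the sketch also runs as plain serial C; the kernel itself is an illustrative example, not one of the dissertation's benchmarks.

```c
/* 3-point stencil; the OpenACC cache directive asks the compiler to
   keep the accessed window of `in` in software-managed cache. */
void stencil3(const float *restrict in, float *restrict out, int n) {
    #pragma acc parallel loop copyin(in[0:n]) copyout(out[1:n-1])
    for (int i = 1; i < n - 1; i++) {
        #pragma acc cache(in[i-1:3])          /* subarray start:len */
        out[i] = (in[i-1] + in[i] + in[i+1]) / 3.0f;
    }
}
```

The `in[i-1:3]` window is exactly the kind of reuse pattern a hand-written CUDA kernel would tile into `__shared__` memory, which is the performance gap the abstract is probing.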
Graduate
APA, Harvard, Vancouver, ISO, and other styles
24

Sartor, Jennifer Bedke. "Exploiting language abstraction to optimize memory efficiency." Thesis, 2010. http://hdl.handle.net/2152/ETD-UT-2010-08-1919.

Full text
Abstract:
The programming language and underlying hardware determine application performance, and both are undergoing revolutionary shifts. As applications have become more sophisticated and capable, programmers have chosen managed languages in many domains for ease of development. These languages abstract memory management from the programmer, which can introduce time and space overhead but also provide opportunities for dynamic optimization. Optimizing memory performance is paramount in part because hardware is reaching physical limits. Recent trends towards chip multiprocessor machines exacerbate the memory system bottleneck because they are adding cores without adding commensurate bandwidth. Both language and architecture trends add stress to the memory system and degrade application performance. This dissertation exploits the language abstraction to analyze and optimize memory efficiency on emerging hardware. We study the sources of memory inefficiencies on two levels: heap data and hardware storage traffic. We design and implement optimizations that change the heap layout of arrays, and use program semantics to eliminate useless memory traffic. These techniques improve memory system efficiency and performance. We first quantitatively characterize the problem by comparing many data compression algorithms and their combinations in a limit study of Java benchmarks. We find that arrays are a dominant source of heap inefficiency. We introduce z-rays, a new array layout design, to bridge the gap between fast access, space efficiency and predictability. Z-rays facilitate compression and offer flexibility, time efficiency and space efficiency. We find that there is a semantic mismatch between managed languages, with their rapid allocation rates, and current hardware, causing unnecessary and excessive traffic in the memory subsystem.
We take advantage of the garbage collector's identification of dead data regions, communicating information to the caches to eliminate useless traffic to memory. By reducing traffic and bandwidth, we improve performance. We show that the memory abstraction in managed languages is not just a cost to be borne, but an opportunity to alleviate the memory bottleneck. This thesis shows how to exploit this abstraction to improve space and time efficiency and overcome the memory wall. We enhance the productivity and performance of ubiquitous managed languages on current and future architectures.
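The discontiguous array layout this abstract describes can be sketched roughly as follows. This is an illustrative Python model of the spine-plus-arraylet idea behind z-rays, including compression of all-zero arraylets; the arraylet size and all names are invented here, and the thesis's actual implementation lives inside a Java virtual machine, not in Python.

```python
ARRAYLET = 8  # elements per arraylet; illustrative, not the thesis's size

class ZRay:
    """Discontiguous array: a spine of pointers to fixed-size arraylets.

    All-zero arraylets stay compressed as None and are materialized
    only on the first non-zero write (lazy allocation)."""

    def __init__(self, length):
        self.length = length
        self.spine = [None] * ((length + ARRAYLET - 1) // ARRAYLET)

    def get(self, i):
        leaf = self.spine[i // ARRAYLET]
        return 0 if leaf is None else leaf[i % ARRAYLET]

    def set(self, i, v):
        idx = i // ARRAYLET
        if self.spine[idx] is None:
            if v == 0:
                return              # writing zero keeps the arraylet compressed
            self.spine[idx] = [0] * ARRAYLET
        self.spine[idx][i % ARRAYLET] = v

z = ZRay(100)
z.set(42, 7)
print(z.get(42), z.get(43))                        # 7 0
print(sum(leaf is not None for leaf in z.spine))   # 1 arraylet materialized
```

The spine indirection is what buys flexibility: individual arraylets can be compressed, shared, or relocated without moving the whole array.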
text
APA, Harvard, Vancouver, ISO, and other styles
25

"Optimizing Heap Data Management on Software Managed Manycore Architectures." Master's thesis, 2017. http://hdl.handle.net/2286/R.I.45507.

Full text
Abstract:
abstract: Caches pose a serious limitation in scaling many-core architectures since the demand for area and power to maintain cache coherence increases rapidly with the number of cores. Scratch-Pad Memories (SPMs) provide a cheaper and lower-power alternative that can be used to build a more scalable many-core architecture. The trade-off of substituting SPMs for caches is, however, that the data must be explicitly managed in software. Heap management on SPM poses a major challenge due to the highly dynamic nature of heap data access. Most existing heap management techniques implement a software caching scheme on SPM, emulating the behavior of hardware caches. The state-of-the-art heap management scheme implements a 4-way set-associative software cache on SPM for a single program running with one thread on one core. While the technique works correctly, it suffers from significant performance overhead. This paper presents a series of compiler-based, efficient heap management approaches that reduce heap management overhead through several optimization techniques. Experimental results on benchmarks from MiBench (Guthaus et al., 2001) executed on an SMM processor modeled in gem5 (Binkert et al., 2011) demonstrate that our approach (implemented in LLVM v3.8; Lattner and Adve, 2004) can improve execution time by 80% on average compared to the previous state-of-the-art.
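The set-associative software cache that the abstract identifies as the baseline can be modeled roughly as below. This Python sketch is illustrative only: the line size, set count, and write-back policy are assumptions, and a real software-managed-memory implementation performs these tag lookups in compiled code against the SPM rather than with dictionaries.

```python
from collections import OrderedDict

LINE = 16   # bytes per cache line (assumed)
SETS = 4    # number of sets (assumed)
WAYS = 4    # associativity, matching the 4-way scheme described

class SoftwareCache:
    """Software-managed cache: the tag check runs in software on every
    heap access, evicting LRU lines back to a 'main memory' dict."""

    def __init__(self, backing):
        self.backing = backing                            # line number -> data
        self.sets = [OrderedDict() for _ in range(SETS)]  # tag -> data, LRU order
        self.hits = self.misses = 0

    def access(self, addr):
        line = addr // LINE
        s, tag = line % SETS, line // SETS
        ways = self.sets[s]
        if tag in ways:
            self.hits += 1
            ways.move_to_end(tag)        # refresh LRU position
        else:
            self.misses += 1
            if len(ways) >= WAYS:
                victim, data = ways.popitem(last=False)   # evict LRU way
                self.backing[victim * SETS + s] = data    # write back (assumed policy)
            ways[tag] = self.backing.get(line, b"\x00" * LINE)
        return ways[tag]

mem = {}
c = SoftwareCache(mem)
for a in (0, 16, 0, 512):
    c.access(a)
print(c.hits, c.misses)   # 1 3
```

The per-access tag check is exactly the software overhead the thesis's compiler optimizations aim to reduce.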
Dissertation/Thesis
Masters Thesis Computer Science 2017
APA, Harvard, Vancouver, ISO, and other styles
26

Ji, Wei-Jhong, and 紀韋仲. "Hardware Garbage Collecting Memory Manager for a Java Processor." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/36uqu5.

Full text
Abstract:
Master's
National Chiao Tung University
Institute of Computer Science and Engineering
106
This thesis is developed based on a Java Application IP (JAIP). We propose two components to manage the heap space of JAIP: a Garbage Collection Unit based on a reference-counting algorithm, and a Memory Management Unit. The Memory Management Unit provides an interface for creating objects and manages the heap space. The Garbage Collection Unit collects garbage objects in the background in order to reduce overheads. In this thesis, we use free lists (in BRAM) to record free space. If the heap allocation size is less than 32 words, JAIP can efficiently allocate small memory blocks. This design improves the utilization of the heap memory.
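The reference-counting discipline that the Garbage Collection Unit implements in hardware can be illustrated in software. This toy Python sketch shows only the invariant, reclaiming a block to the free pool as soon as its count drops to zero and releasing the references it holds; it is not the JAIP hardware design, and all names and sizes are invented.

```python
class Obj:
    def __init__(self, size):
        self.size = size
        self.refcount = 1     # one reference held by the creator
        self.fields = []      # references to other objects

class RCHeap:
    """Toy reference-counting collector: frees eagerly at count zero.
    Plain reference counting, so cyclic garbage is not handled."""

    def __init__(self):
        self.free_words = 0   # words returned to the free list

    def new(self, size):
        return Obj(size)

    def incref(self, o):
        o.refcount += 1

    def decref(self, o):
        o.refcount -= 1
        if o.refcount == 0:
            for child in o.fields:     # release references held by o
                self.decref(child)
            self.free_words += o.size  # return the block to the free list

h = RCHeap()
a, b = h.new(8), h.new(16)
a.fields.append(b); h.incref(b)   # a now references b
h.decref(b)          # b still referenced by a, stays live
h.decref(a)          # frees a, which in turn frees b
print(h.free_words)  # 24
```

Because reclamation happens incrementally at each decrement, a hardware unit can do it in the background, which is the overhead-hiding point the abstract makes.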
APA, Harvard, Vancouver, ISO, and other styles
27

Tsegaye, Melekam Asrat. "A model for a context aware machine-based personal memory manager and its implementation using a visual programming environment /." 2006. http://eprints.ru.ac.za/887/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Gerace, Adam. "The influence of past experience on the process of perspective taking." 2009. http://arrow.unisa.edu.au/vital/access/manager/Repository/unisa:38313.

Full text
Abstract:
Perspective taking, the main cognitive component of empathy, is considered within the psychological literature to be a significant part of human interaction. Despite extensive investigation into the outcomes of this construct, the process by which people take another's psychological point of view has received comparatively little attention. The purpose of this thesis is to investigate in three studies what the individual does when attempting to take the perspective of another person. The first study investigated the particular strategies which individuals use when engaging in perspective-taking behaviours. Dominant themes to emerge from this qualitative assessment of the perspective-taking process were the use of self- (e.g., switching places) and other-information (e.g., the target's personal characteristics), of which the former appears to play the central role. Further elements of the perspective-taking process were also identified within the self-information theme. Of these, having experienced a situation similar to that of a target (similar past experience) was found to be a particularly strong aspect of the perspective-taking process and subsequently informed the rationale for the remaining two studies. In view of the findings from Study 1, the second study tested whether past experiences made it easier for participants to take the perspective of another person in a new, but similar situation and the extent to which other aspects, such as switching places with the target, made the perspective-taking process less effortful. Results revealed that similar past experience was the strongest predictor of the ease of the perspective-taking task. Moreover, the extent to which similar past experience increased ease of the perspective-taking process was moderated by the extent to which the participant reflected on that past experience.
The final study in this thesis picked up the notion of reflection and examined whether increasing the extent to which an individual engages in self-reflection leads to a concomitant increase in the tendency to take another perspective and the ease with which this can be accomplished. After completion of an intervention program which had a strong focus on self-reflection and understanding of self, there was a noticeable trend for participants to improve on their capacity for self-reflection. The findings also revealed that individuals with a tendency to self-reflect in an insightful and non-ruminative manner demonstrated a similarly high level of general perspective-taking propensity and ease of perspective taking. The studies support the vital role of reflection on similar past experience to that of a target when engaging in perspective taking. This research underscores the importance of examining perspective taking as a process.
APA, Harvard, Vancouver, ISO, and other styles
29

Ramashekar, Thejas. "Automatic Data Allocation, Buffer Management And Data Movement For Multi-GPU Machines." Thesis, 2013. http://etd.iisc.ernet.in/handle/2005/2627.

Full text
Abstract:
Multi-GPU machines are being increasingly used in high performance computing. These machines are used both as standalone workstations to run computations on medium to large data sizes (tens of gigabytes) and as a node in a CPU-Multi-GPU cluster handling very large data sizes (hundreds of gigabytes to a few terabytes). Each GPU in such a machine has its own memory and does not share the address space either with the host CPU or other GPUs. Hence, applications utilizing multiple GPUs have to manually allocate and manage data on each GPU. A significant body of scientific applications that utilize multi-GPU machines contain computations inside affine loop nests, i.e., loop nests that have affine bounds and affine array access functions. These include stencils, linear-algebra kernels, dynamic programming codes and data-mining applications. Data allocation, buffer management, and coherency handling are critical steps that need to be performed to run affine applications on multi-GPU machines. Existing works that propose to automate these steps have limitations and inefficiencies in terms of allocation sizes, exploiting reuse, transfer costs and scalability. An automatic multi-GPU memory manager that can overcome these limitations and enable applications to achieve scalable performance is highly desired. One technique that has been used in certain memory management contexts in the literature is that of bounding boxes. The bounding box of an array, for a given tile, is the smallest hyper-rectangle that encapsulates all the array elements accessed by that tile. In this thesis, we exploit the potential of bounding boxes for memory management far beyond their current usage in the literature. We propose a scalable and fully automatic data allocation and buffer management scheme for affine loop nests on multi-GPU machines, which we call the Bounding Box based Memory Manager (BBMM). BBMM is a compiler-assisted runtime memory manager.
At compile time, it uses static analysis techniques to identify the set of bounding boxes accessed by a computation tile. At run time, it uses bounding box set operations such as union, intersection, difference, and finding subset and superset relations to compute a set of disjoint bounding boxes from the set of bounding boxes identified at compile time. It also exploits the architectural capability provided by GPUs to perform fast transfers of rectangular (strided) regions of memory and hence performs all data transfers in terms of bounding boxes. BBMM uses these techniques to automatically allocate and manage data required by applications (suitably tiled and parallelized for GPUs). This allows it to (1) allocate only as much data (or close to it) as is required by computations running on each GPU, (2) efficiently track buffer allocations and hence maximize data reuse across tiles and minimize the data transfer overhead, and (3) as a result, enable applications to maximize the utilization of the combined memory on multi-GPU machines. BBMM can work with any choice of parallelizing transformations, computation placement, and scheduling schemes, whether static or dynamic. Experiments run on a system with four GPUs with various scientific programs showed that BBMM is able to reduce data allocations on each GPU by up to 75% compared to current allocation schemes, yield at least 88% of the performance of hand-optimized OpenCL codes, and allow excellent weak scaling.
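The bounding-box set operations the abstract lists reduce to interval arithmetic per dimension. This Python fragment is an illustration of that idea only, not BBMM's implementation; the box representation and the tile extents are invented for the example.

```python
# A bounding box is a list of (lo, hi) inclusive intervals, one per dimension.

def intersect(a, b):
    """Intersection of two hyper-rectangles, or None if they are disjoint."""
    out = []
    for (al, ah), (bl, bh) in zip(a, b):
        lo, hi = max(al, bl), min(ah, bh)
        if lo > hi:
            return None        # empty in this dimension => disjoint boxes
        out.append((lo, hi))
    return out

def is_subset(a, b):
    """True if box a lies entirely inside box b."""
    return all(bl <= al and ah <= bh for (al, ah), (bl, bh) in zip(a, b))

def volume(a):
    """Number of array elements a box covers (its allocation size)."""
    v = 1
    for lo, hi in a:
        v *= hi - lo + 1
    return v

# Regions of a 2-D array touched by two hypothetical tiles:
t0 = [(0, 63), (0, 31)]
t1 = [(32, 95), (16, 47)]
overlap = intersect(t0, t1)
print(overlap, volume(overlap))   # [(32, 63), (16, 31)] 512
```

A nonempty intersection between a new tile's box and an already-resident box is exactly what lets a runtime like BBMM reuse data instead of re-transferring it.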
APA, Harvard, Vancouver, ISO, and other styles
30

Panwar, Ashish. "A Memory Allocation Framework for Optimizing Power Consumption and Controlling Fragmentation." Thesis, 2015. http://etd.iisc.ernet.in/2005/3873.

Full text
Abstract:
Large physical memory modules are necessary to meet the performance demands of today's applications but can be a major bottleneck in terms of power consumption during idle periods or when systems are running workloads which do not stress all the plugged memory resources. The contribution of physical memory to overall system power consumption becomes even more significant when CPU cores run in low-power modes during idle periods with hardware support like Dynamic Voltage and Frequency Scaling. Our experiments show that even 10% of memory allocations can make references to all the banks of physical memory on a long-running system, primarily due to the randomness in page allocation. We also show that memory hot-remove or memory migration for large blocks is often restricted, in a long-running system, due to allocation policies of the current Linux VM which mix movable and unmovable pages. Hence it is crucial to improve page migration for large contiguous blocks for a practical realization of the power management support provided by the hardware. Operating systems can play a decisive role in effectively utilizing the power management support of modern DIMMs, like PASR (Partial Array Self Refresh), in these situations, but have not been using it so far. We propose three different approaches for optimizing memory power consumption by inducing bank-boundary awareness in the standard buddy allocator of the Linux kernel, as well as distinguishing user and kernel memory allocations to improve the movability of memory sections (and hence memory hotplug) by page migration techniques. Through a set of minimal changes in the standard buddy system of the Linux VM, we have been able to reduce the number of active memory banks significantly (up to 80%) as well as to improve the performance of the memory-hotplug framework (up to 85%).
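The bank-boundary-awareness idea can be illustrated with a toy allocation policy that prefers free pages in banks that are already active, so that untouched banks can stay in Partial Array Self Refresh. The bank size, addresses, and free-list representation below are invented for illustration and do not reflect the thesis's buddy-allocator changes.

```python
BANK_SIZE = 1 << 15   # bytes per DRAM bank; illustrative only

def bank_of(page_addr):
    return page_addr // BANK_SIZE

def allocate(free_pages, active_banks):
    """Bank-aware policy: satisfy the request from an already-active
    bank if possible; only wake a new bank as a last resort."""
    for p in free_pages:
        if bank_of(p) in active_banks:
            free_pages.remove(p)
            return p
    p = free_pages.pop(0)             # no choice: power up another bank
    active_banks.add(bank_of(p))
    return p

free = [0x0000, 0x8000, 0x10000, 0x4000]
active = {0}                          # bank 0 is already powered/refreshed
allocate(free, active)                # served from bank 0
allocate(free, active)                # 0x4000: still bank 0, skipping banks 1-2
print(free, active)
```

A random-placement allocator would have taken 0x8000 next and activated bank 1; the bank-aware policy keeps the active-bank set at just {0}.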
APA, Harvard, Vancouver, ISO, and other styles
31

La, Porta Louisa Carlotta. "Why some teams work better than others: An investigation of voice, TMS and leadership on team performance." Master's thesis, 2020. http://hdl.handle.net/10071/20694.

Full text
Abstract:
Teams have become the standard way of working in organizations and therefore the question of what differentiates high-performing units from other teams has attracted much research attention. Especially the utilization of knowledge, such as team members sharing their knowledge and the awareness of other members' expertise, can be decisive for team performance. Looking into the relationship between team members' behaviors, team cognition and self-management encouraged by the leader, this study connects three streams of literature, namely voice, TMS and leadership, to reach a better understanding of how these interact and impact team performance. Specifically, in this study, we investigate the mediating role of TMS in the relationship between voice and team performance, and whether the leadership function of promoting team self-management moderates this relationship. A sample of 46 consulting teams was analyzed through a moderated mediation model without finding support for the full model. However, this multi-level research showed that voice is positively related with performance but that the relationship between TMS and performance is less clear and depends on the extent to which leaders encourage the team to manage itself. In light of findings on both the individual and organizational level indicating that voice can have positive effects, this research extends the literature in terms of applicability at the meso level. Also, further light is shed on the interaction between team cognition and team leadership, warranting further research. Practical and theoretical implications of the findings and potential questions for future research are discussed.
APA, Harvard, Vancouver, ISO, and other styles
32

Patel, Naman. "A Case for Protecting Huge Pages from the Kernel." Thesis, 2016. http://etd.iisc.ernet.in/handle/2005/2936.

Full text
Abstract:
Modern architectures support multiple page sizes to facilitate applications that use large chunks of contiguous memory either for buffer allocation, application-specific memory management, in-memory caching or garbage collection. Most general purpose processors support larger page sizes, e.g., the x86 architecture supports 2MB and 1GB pages while the PowerPC architecture supports 64KB, 16MB, and 16GB pages. Such larger pages are also known as superpages or huge pages. With the help of huge pages, TLB reach can be increased significantly. The Linux kernel can transparently use these huge pages to significantly bring down the cost of TLB translations. With Transparent Huge Pages (THP) support in the Linux kernel, end users or application developers need not make any change to their applications. Memory fragmentation, which has been one of the classical problems in computing systems for decades, is a key problem for the allocation of huge pages. Ubiquitous huge page support across architectures makes effective fragmentation management even more critical for modern systems. Applications tend to stress the system TLB in the absence of huge pages, for virtual to physical address translation, which adversely affects performance/energy characteristics in long-running systems. Since most kernel pages tend to be unmovable, fragmentation created due to their misplacement is more problematic and nearly impossible to recover from with memory compaction. In this work, we explore the physical memory manager of Linux and the interaction of kernel page placement with fragmentation avoidance and recovery mechanisms. Our analysis reveals that not only does a random kernel page layout thwart the progress of memory compaction; it can actually induce more fragmentation in the system. To address this problem, we propose a new allocator which takes special care with the placement of kernel pages. We propose a new region which represents a memory area having kernel as well as user pages.
Using this new region we introduce a staged allocator which adapts to changes in the fragmentation level and optimizes kernel page placement. Later we introduce Illuminator, which with zero overhead outperforms the default kernel in terms of huge page allocation success rate and compaction overhead with respect to each huge page. We also show that huge page allocation is not a one-dimensional problem but a two-fold concern, given how the fragmentation recovery mechanism may potentially interfere with the page clustering policy of the allocator and worsen fragmentation. Our results show that with effective kernel page placement the mixed page block count reduces by up to 70%, which allows our system to allocate 3x-4x more huge pages than the default kernel. Using these additional huge pages we show up to 38% improvement in energy consumed and a reduction in execution time of up to 39% on standard benchmarks.
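The core observation, that scattering unmovable kernel pages across page blocks defeats compaction while clustering them preserves huge-page-sized free regions, can be demonstrated with a small simulation; the block and page counts below are arbitrary and the model ignores everything except placement.

```python
import random

PAGES_PER_BLOCK = 8   # pages per huge-page-sized block; arbitrary
BLOCKS = 64
UNMOVABLE = 40        # number of unmovable (kernel) page allocations

def compactable_blocks(unmovable_pages):
    """A block holding no unmovable page can be emptied by migrating
    its movable pages away, yielding one huge-page-sized free region."""
    tainted = {p // PAGES_PER_BLOCK for p in unmovable_pages}
    return BLOCKS - len(tainted)

random.seed(0)
# Random placement: unmovable pages land in many different blocks.
scattered = random.sample(range(BLOCKS * PAGES_PER_BLOCK), UNMOVABLE)
# Clustered placement: the same 40 pages packed into the fewest blocks.
packed = list(range(UNMOVABLE))

# Clustering taints only ceil(40/8) = 5 blocks, leaving 59 compactable;
# random placement taints far more.
print(compactable_blocks(scattered), compactable_blocks(packed))
```

The same page count produces very different compaction potential purely as a function of placement, which is the motivation for segregating kernel pages into their own region.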
APA, Harvard, Vancouver, ISO, and other styles
33

Farrell, Robert. "Making Change Happen in the Middle." 2013. http://hdl.handle.net/10150/299589.

Full text
Abstract:
This paper seeks to provide library managers with a theoretical framework for thinking about how change is effected by those in middle management positions. Starting from the principles that change takes place within socio-culturally bounded contexts and is most successful when approached indirectly, two scenarios, characteristic of many situations requiring change that middle managers commonly face, are then put forward. Following each scenario, a possible solution or path towards change is advanced in order to provide the reader with models for putting into practice the theoretical ideas presented. A methodology that combines theoretical frameworks and practical scenarios is adopted in order to ground theory in practice and thereby lead readers toward what might be called a “praxis” of change making.
APA, Harvard, Vancouver, ISO, and other styles
34

Kraai, Vuyokazi, Sinval Benjamin Kahn, and Ramokhojoane Paul Motsoeneng. "Succession planning a development tool for developing middle managers in the Department of Science and Technology." Diss., 2015. http://hdl.handle.net/10500/19172.

Full text
Abstract:
The aim of this research is to investigate how succession planning can be used as a developmental tool in developing middle managers in the Department of Science and Technology (DST). The research shows that the DST has good strategies and policies in place for the development and retention of its employees, although there is no succession planning tool or system in place. Proposed interventions to address identified gaps in terms of succession planning include: a quota system of positions that should be filled by internal candidates to encourage employees' buy-in to succession planning; inculcating it in the institutional culture and thereby preserving and retaining institutional memory; and creating a robust reward management programme based upon a pay scheme that seeks to reward middle managers for developing their competencies in line with institutional requirements.
Public Administration
M.A. (Public Administration)
APA, Harvard, Vancouver, ISO, and other styles
35

Sanchez, Sandra. "Maestría En Gestión Pública: Una Contribución de Memoria para la Cualificación de la Administración Pública." Master's thesis, 2019. http://hdl.handle.net/10400.26/31319.

Full text
Abstract:
The following work analyzes the academic courses offered by Colombian universities for public servants or those aspiring to become public servants. The undergraduate, specialization and Master's programme curricula in universities running these programmes have been compared to similar courses offered in some of the best national and international universities to assess their educational effectiveness. In the last decade, public management has taken on great relevance in the administration and control of public goods, for which specific knowledge is required for the performance of public servants; universities therefore offer specialized programs in interdisciplinary areas related to the activities of the State. The implementation of the New Public Management intends that the projects executed by public institutions be carried out with transparent ethics, with the purpose of satisfying the needs of citizens in a profitable, efficient and effective manner; that is, with Quality Management. As a professor at the Business School of Rosario University, this topic of public administration becomes one of the main motivations to investigate the current state of the educational offer in Colombia.
APA, Harvard, Vancouver, ISO, and other styles
36

Kraai, Vuyokazi. "Succession planning : a development tool for developing middle managers in the Department of Science and Technology." Diss., 2015. http://hdl.handle.net/10500/19172.

Full text
Abstract:
The aim of this research is to investigate how succession planning can be used as a developmental tool in developing middle managers in the Department of Science and Technology (DST). The research shows that the DST has good strategies and policies in place for the development and retention of its employees, although there is no succession planning tool or system in place. Proposed interventions to address identified gaps in terms of succession planning include: a quota system of positions that should be filled by internal candidates to encourage employees' buy-in to succession planning; inculcating it in the institutional culture and thereby preserving and retaining institutional memory; and creating a robust reward management programme based upon a pay scheme that seeks to reward middle managers for developing their competencies in line with institutional requirements.
Public Administration and Management
M.A. (Public Administration)
APA, Harvard, Vancouver, ISO, and other styles
37

Dharmadeep, M. C. "Optimizations In Storage Area Networks And Direct Attached Storage." Thesis, 2007. http://hdl.handle.net/2005/574.

Full text
Abstract:
The thesis consists of three parts. In the first part, we introduce the notion of device-cache-aware schedulers. Modern disk subsystems have many megabytes of memory for various purposes such as prefetching and caching. Current disk scheduling algorithms make decisions oblivious of the underlying device cache algorithms. In this thesis, we propose a scheduler architecture that is aware of the underlying device cache. We also describe how the underlying device cache parameters can be automatically deduced and incorporated into the scheduling algorithm. In this thesis, we have only considered adaptive caching algorithms, as modern high-end disk subsystems are by default configured to use such algorithms. We implemented a prototype for the Linux anticipatory scheduler, where we observed, compared with the anticipatory scheduler, up to 3 times improvement in query execution times with the Benchw benchmark and up to 10 percent improvement with the Postmark benchmark. The second part deals with implementing cooperative caching for the Red Hat Global File System. The Red Hat Global File System (GFS) is a clustered shared disk file system. The coordination between multiple accesses is through a lock manager. On a read, a lock on the inode is acquired in shared mode and the data is read from the disk. For a write, an exclusive lock on the inode is acquired and data is written to the disk; this requires all nodes holding the lock to write their dirty buffers/pages to disk and invalidate all the related buffers/pages. A DLM (Distributed Lock Manager) is a module that implements the functions of a lock manager. GFS's DLM has some support for range locks, although it is not being used by GFS. While it is clear that data sourced from a memory copy is likely to have lower latency, GFS currently reads from the shared disk after acquiring a lock (just as in other designs such as IBM's GPFS) rather than from remote memory that just recently had the correct contents.
The difficulties are mainly due to the circular relationships that can result between GFS and the generic DLM architecture while integrating the DLM locking framework with cooperative caching. For example, the page/buffer cache should be accessible from the DLM and yet the DLM's generality has to be preserved. The symmetric nature of the DLM (including the SMP concurrency model) makes it even more difficult to understand and integrate cooperative caching into it (note that GPFS has an asymmetrical design). In this thesis, we describe the design of a cooperative caching scheme in GFS. To make it more effective, we have also introduced changes to the locking protocol and the DLM to handle range locks more efficiently. Experiments with micro-benchmarks on our prototype implementation reveal that reading from a remote node over gigabit Ethernet can be up to 8 times faster than reading from an enterprise-class SCSI disk for random disk reads. Our contributions are an integrated design for cooperative caching and the lock manager for GFS, devising a novel method to do interval searches, and determining when sequential reads from remote memory perform better than sequential reads from a disk. The third part deals with selecting a primary network partition in a clustered shared disk system when node/network failures occur. Clustered shared disk file systems like GFS and GPFS use methods that can fail in the case of multiple network partitions and also in the case of a 2-node cluster. In this thesis, we give an algorithm for fault-tolerant proactive leader election in asynchronous shared memory systems, and later its formal verification. Roughly speaking, a leader election algorithm is proactive if it can tolerate failure of nodes even after a leader is elected, and (stable) leader election happens periodically.
This is needed in systems where a leader is required after every failure to ensure the availability of the system and there might be no explicit events such as messages in the (shared memory) system. Previous algorithms like Disk Paxos are not proactive. In our model, individual nodes can fail and reincarnate at any point in time. Each node has a counter which is incremented every period, which is the same across all the nodes (modulo a maximum drift). Different nodes can be in different epochs at the same time. Our algorithm ensures that per epoch there can be at most one leader. So if the counter values of some set of nodes match, then there can be at most one leader among them. If the nodes satisfy certain timeliness constraints, then the leader for the epoch with the highest counter also becomes the leader for the next epoch (stable property). Our algorithm uses shared memory proportional to the number of processes, the best possible. We also show how our protocol can be used in clustered shared disk systems to select a primary network partition. We have used the state machine approach to represent our protocol in the Isabelle HOL logic system and have proved the safety property of the protocol.
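The safety property, at most one leader per epoch, can be modeled with a toy registry. A lock stands in for the careful shared-memory read/write ordering of the actual protocol, so this Python sketch illustrates only the invariant being verified, not the algorithm itself; all names are invented.

```python
import threading

class EpochRegistry:
    """Toy model of the safety property: per epoch, at most one leader.
    The real protocol achieves this with plain shared-memory reads and
    writes; the lock here is a stand-in for that ordering."""

    def __init__(self):
        self._leader = {}            # epoch -> winning node id
        self._lock = threading.Lock()

    def try_lead(self, epoch, node):
        with self._lock:
            # First claimant for the epoch wins; everyone else sees it.
            return self._leader.setdefault(epoch, node) == node

reg = EpochRegistry()
results = {}

def contend(node):
    results[node] = reg.try_lead(epoch=7, node=node)

threads = [threading.Thread(target=contend, args=(n,)) for n in range(5)]
for t in threads: t.start()
for t in threads: t.join()
print(sum(results.values()))   # 1: exactly one node led epoch 7
```

Five nodes race for the same epoch and exactly one succeeds; a node that "reincarnates" in a later epoch simply contends again under the new epoch number.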
APA, Harvard, Vancouver, ISO, and other styles