Academic literature on the topic 'Managed memory'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Managed memory.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Managed memory"

1

Schneider, Scott, Jae-Seung Yeom, and Dimitrios S. Nikolopoulos. "Programming Multiprocessors with Explicitly Managed Memory Hierarchies." Computer 42, no. 12 (December 2009): 28–34. http://dx.doi.org/10.1109/mc.2009.407.

2

Akram, Shoaib. "Performance Evaluation of Intel Optane Memory for Managed Workloads." ACM Transactions on Architecture and Code Optimization 18, no. 3 (June 2021): 1–26. http://dx.doi.org/10.1145/3451342.

Abstract:
Intel Optane memory offers non-volatility, byte addressability, and high capacity. It suits managed workloads that prefer large main memory heaps. We investigate Optane as the main memory for managed (Java) workloads, focusing on performance scalability. As the workload (core count) increases, we note Optane’s performance relative to DRAM. A few workloads incur a slight slowdown on Optane memory, which helps conserve limited DRAM capacity. Unfortunately, other workloads scale poorly beyond a few core counts. This article investigates scaling bottlenecks for Java workloads on Optane memory, analyzing the application, runtime, and microarchitectural interactions. Poorly scaling workloads allocate objects rapidly and access objects in Optane memory frequently. These characteristics slow down the mutator and substantially slow down garbage collection (GC). At the microarchitecture level, load, store, and instruction miss penalties rise. To regain performance, we partition heaps across DRAM and Optane memory, a hybrid that scales considerably better than Optane alone. We exploit state-of-the-art GC approaches to partition heaps. Unfortunately, existing GC approaches needlessly waste DRAM capacity because they ignore runtime behavior. This article also introduces performance impact-guided memory allocation (PIMA) for hybrid memories. PIMA maximizes Optane utilization, allocating in DRAM only if it improves performance. It estimates the performance impact of allocating heaps in either memory type by sampling. We target PIMA at graph analytics workloads, offering a novel performance estimation method and detailed evaluation. PIMA identifies workload phases that benefit from DRAM with high (94.33%) accuracy, incurring only a 2% sampling overhead. PIMA operates stand-alone or combines with prior approaches to offer new performance versus DRAM capacity trade-offs. This work opens up Optane memory to a real-life role as the main memory for Java workloads.
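The placement policy sketched in this abstract (PIMA) reduces, at its core, to a per-phase decision: keep allocations in Optane to conserve DRAM unless sampling shows the phase slows down too much. The C sketch below is an editor's illustration of that decision rule only; the arena functions, the sampled timings, and the 10% threshold are assumptions made for the example and are not taken from the paper, whose implementation lives inside a Java runtime.

```c
/* Minimal sketch of a PIMA-style placement decision for a hybrid
 * DRAM/NVM heap.  The arena functions and the sampled numbers are
 * hypothetical stand-ins; the paper samples real workload phases
 * inside a managed (Java) runtime. */
#include <stdio.h>
#include <stdlib.h>

enum tier { TIER_DRAM, TIER_NVM };

/* Placeholder allocators standing in for heap regions backed by
 * DRAM and Optane; here both simply fall back to malloc(). */
static void *alloc_dram(size_t n) { return malloc(n); }
static void *alloc_nvm(size_t n)  { return malloc(n); }

/* Decide where the next phase's allocations should go: stay in NVM
 * (to conserve DRAM) unless the sampled slowdown on NVM exceeds a
 * threshold, e.g. 1.10 for a 10% slowdown budget. */
static enum tier choose_tier(double sampled_nvm_time,
                             double sampled_dram_time,
                             double max_slowdown)
{
    if (sampled_dram_time <= 0.0)
        return TIER_NVM;
    double slowdown = sampled_nvm_time / sampled_dram_time;
    return (slowdown > max_slowdown) ? TIER_DRAM : TIER_NVM;
}

int main(void)
{
    /* Hypothetical sampled phase timings (seconds). */
    double t_nvm = 1.32, t_dram = 1.00;
    enum tier t = choose_tier(t_nvm, t_dram, 1.10);
    void *p = (t == TIER_DRAM) ? alloc_dram(64) : alloc_nvm(64);
    printf("phase placed in %s\n", t == TIER_DRAM ? "DRAM" : "NVM");
    free(p);
    return 0;
}
```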
3

Meyer, Meghan L., and Eleanor Collier. "Theory of minds: managing mental state inferences in working memory is associated with the dorsomedial subsystem of the default network and social integration." Social Cognitive and Affective Neuroscience 15, no. 1 (January 2020): 63–73. http://dx.doi.org/10.1093/scan/nsaa022.

Abstract:
We often interact with multiple people at a time and consider their various points of view to facilitate smooth social interaction. Yet, how our brains track multiple mental states at once, and whether skill in this domain links to social integration, remains underspecified. To fill this gap, we developed a novel social working memory paradigm in which participants manage two- or four-people’s mental states in working memory, as well as control trials in which they alphabetize two- or four-people’s names in working memory. In Study 1, we found that the dorsomedial subsystem of the default network shows relative increases in activity with more mental states managed in working memory. In contrast, this subsystem shows relative decreases in activity with more non-mental state information (the number of names alphabetized) managed in working memory. In Study 2, only individual differences in managing mental states in working memory, specifically on trials that posed the greatest mental state load to working memory, correlated with social integration. Collectively, these findings add further support to the hypothesis that social working memory relies on partially distinct brain systems and may be a key ingredient to success in a social world.
4

Jantz, Michael R., Forrest J. Robinson, Prasad A. Kulkarni, and Kshitij A. Doshi. "Cross-layer memory management for managed language applications." ACM SIGPLAN Notices 50, no. 10 (December 18, 2015): 488–504. http://dx.doi.org/10.1145/2858965.2814322.

5

O’Neill, Eoghan, John McGlone, Peter Kilpatrick, and Dimitrios Nikolopoulos. "Managed acceleration for In-Memory database analytic workloads." International Journal of Parallel, Emergent and Distributed Systems 32, no. 4 (May 6, 2016): 406–27. http://dx.doi.org/10.1080/17445760.2016.1170832.

6

Bok, Kyoungsoo, Seunghun Yoo, Dojin Choi, Jongtae Lim, and Jaesoo Yoo. "In-Memory Caching for Enhancing Subgraph Accessibility." Applied Sciences 10, no. 16 (August 9, 2020): 5507. http://dx.doi.org/10.3390/app10165507.

Abstract:
Graphs have been utilized in various fields because of the development of social media and mobile devices. Various studies have also been conducted on caching techniques to reduce input and output costs when processing a large amount of graph data. In this paper, we propose a two-level caching scheme that considers the past usage pattern of subgraphs and graph connectivity, which are features of graph topology. The proposed caching is divided into a used cache and a prefetched cache to manage previously used subgraphs and subgraphs that will be used in the future. When the memory is full, a strategy that replaces a subgraph inside the memory with a new subgraph is needed. Subgraphs in the used cache are managed by a time-to-live (TTL) value, and subgraphs with a low TTL value are targeted for replacement. Subgraphs in the prefetched cache are managed by the queue structure. Thus, first-in subgraphs are targeted for replacement as a priority. When a cache hit occurs in the prefetched cache, the subgraphs are migrated and managed in the used cache. As a result of the performance evaluation, the proposed scheme takes into account subgraph usage patterns and graph connectivity, thus improving cache hit rates and data access speeds compared to conventional techniques. The proposed scheme can quickly process and analyze large graph queries in a computing environment with small memory. The proposed scheme can be used to speed up in-memory-based processing in applications where relationships between objects are complex, such as the Internet of Things and social networks.
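The two-level scheme described in this abstract can be illustrated with a small model: a used cache whose entries carry a TTL and are evicted lowest-TTL-first, and a prefetched cache managed as a FIFO queue, with a hit in the prefetched cache migrating the subgraph into the used cache. The C code below is an editor's toy sketch under those assumptions; subgraph IDs stand in for real subgraph data, the capacities and TTL values are arbitrary, and TTL aging over time is not modeled.

```c
/* Toy two-level cache: a "used" cache with TTL-based replacement and
 * a "prefetched" cache managed as a FIFO queue. */
#include <stdio.h>

#define USED_CAP 4
#define PREF_CAP 4

static int used_id[USED_CAP], used_ttl[USED_CAP], used_n = 0;
static int pref_id[PREF_CAP], pref_n = 0;

/* Evict the used-cache entry with the lowest TTL. */
static void used_evict(void) {
    int victim = 0;
    for (int i = 1; i < used_n; i++)
        if (used_ttl[i] < used_ttl[victim]) victim = i;
    used_id[victim]  = used_id[--used_n];   /* overwrite victim with last entry */
    used_ttl[victim] = used_ttl[used_n];
}

static void used_insert(int id, int ttl) {
    if (used_n == USED_CAP) used_evict();
    used_id[used_n] = id; used_ttl[used_n] = ttl; used_n++;
}

/* Prefetched cache is FIFO: drop the oldest entry when full. */
static void pref_insert(int id) {
    if (pref_n == PREF_CAP) {
        for (int i = 1; i < pref_n; i++) pref_id[i - 1] = pref_id[i];
        pref_n--;
    }
    pref_id[pref_n++] = id;
}

/* Access a subgraph: a hit in the prefetched cache migrates it into
 * the used cache, as in the scheme described above. */
static void access_subgraph(int id) {
    for (int i = 0; i < used_n; i++)
        if (used_id[i] == id) { used_ttl[i] += 3; return; }   /* refresh TTL */
    for (int i = 0; i < pref_n; i++)
        if (pref_id[i] == id) {
            for (int j = i + 1; j < pref_n; j++) pref_id[j - 1] = pref_id[j];
            pref_n--;
            used_insert(id, 3);                               /* migrate */
            return;
        }
    used_insert(id, 3);   /* miss: would be loaded from storage (not shown) */
}

int main(void) {
    pref_insert(7); pref_insert(8);
    access_subgraph(7);               /* migrates 7 from prefetched to used */
    access_subgraph(1); access_subgraph(2);
    printf("used cache holds %d subgraphs, prefetched holds %d\n", used_n, pref_n);
    return 0;
}
```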
7

Wang, Li, JingLing Xue, and XueJun Yang. "Acyclic orientation graph coloring for software-managed memory allocation." Science China Information Sciences 57, no. 9 (June 11, 2014): 1–18. http://dx.doi.org/10.1007/s11432-014-5131-7.

8

Krauter, Nicolas, Patrick Raaf, Peter Braam, Reza Salkhordeh, Sebastian Erdweg, and André Brinkmann. "Persistent software transactional memory in Haskell." Proceedings of the ACM on Programming Languages 5, ICFP (August 22, 2021): 1–29. http://dx.doi.org/10.1145/3473568.

Abstract:
Emerging persistent memory in commodity hardware allows byte-granular accesses to persistent state at memory speeds. However, to prevent inconsistent state in persistent memory due to unexpected system failures, different write-semantics are required compared to volatile memory. Transaction-based library solutions for persistent memory facilitate the atomic modification of persistent data in languages where memory is explicitly managed by the programmer, such as C/C++. For languages that provide extended capabilities like automatic memory management, a more native integration into the language is needed to maintain the high level of memory abstraction. It is shown in this paper how persistent software transactional memory (PSTM) can be tightly integrated into the runtime system of Haskell to atomically manage values of persistent transactional data types. PSTM has a clear interface and semantics extending that of software transactional memory (STM). Its integration with the language’s memory management retains features like garbage collection and allocation strategies, and is fully compatible with Haskell's lazy execution model. Our PSTM implementation demonstrates competitive performance with low level libraries and trivial portability of existing STM libraries to PSTM. The implementation allows further interesting use cases, such as persistent memoization and persistent Haskell expressions.
9

Jost, Tiago Trevisan, Gabriel Luca Nazar, and Luigi Carro. "SoMMA: A software-managed memory architecture for multi-issue processors." Microprocessors and Microsystems 77 (September 2020): 103139. http://dx.doi.org/10.1016/j.micpro.2020.103139.

10

Nguyen, Khanh, Kai Wang, Yingyi Bu, Lu Fang, and Guoqing Xu. "Understanding and Combating Memory Bloat in Managed Data-Intensive Systems." ACM Transactions on Software Engineering and Methodology 26, no. 4 (February 23, 2018): 1–41. http://dx.doi.org/10.1145/3162626.


Dissertations / Theses on the topic "Managed memory"

1

Jost, Tiago Trevisan. "SoMMA: a software managed memory architecture for multi-issue processors." Biblioteca Digital de Teses e Dissertações da UFRGS, 2017. http://hdl.handle.net/10183/170975.

Abstract:
Embedded processors rely on the efficient use of instruction-level parallelism to answer the performance and energy needs of modern applications. Though improving performance is the primary goal for processors in general, it might lead to a negative impact on energy consumption, a particularly critical constraint for current systems. In this dissertation, we present SoMMA, a software-managed memory architecture for embedded multi-issue processors that can reduce energy consumption and energy-delay product (EDP), while still providing an increase in memory bandwidth. We combine the use of software-managed memories (SMM) with the data cache, and leverage the lower energy access cost of SMMs to provide a processor with reduced energy consumption and EDP. SoMMA also provides a better overall performance, as memory accesses can be performed in parallel, with no cost in extra memory ports. Compiler-automated code transformations minimize the programmer’s effort to benefit from the proposed architecture. Our experimental results show that SoMMA is more energy- and performance-efficient not only for the processing cores, but also at full-system level. Comparisons were done using the VEX processor, a VLIW reconfigurable processor. The approach shows average speedups of 1.118x and 1.121x, while consuming up to 11% and 12.8% less energy when comparing two modified processors and their baselines. SoMMA also shows reduction of up to 41.5% on full-system EDP, maintaining the same processor area as baseline processors. Lastly, even with SoMMA halving the data cache size, we still reduce the number of data cache misses in comparison to baselines.
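The software-managed-memory pattern behind SoMMA-style designs can be summarized as staging tiles of data into a small, explicitly managed buffer, computing on them, and writing them back, rather than letting every access go through the data cache. The C sketch below is an editor's illustration of that generic pattern, not SoMMA's architecture or its compiler transformations; the "scratchpad" is just a static array and the tile size is arbitrary.

```c
/* Explicit staging through a small software-managed buffer. */
#include <stdio.h>
#include <string.h>

#define N    1024
#define TILE 64                       /* capacity of the "scratchpad" */

static int scratchpad[TILE];          /* stand-in for an on-chip SMM */

static void process_tile(int *tile, int n) {
    for (int i = 0; i < n; i++)
        tile[i] *= 2;                 /* the actual computation */
}

int main(void) {
    static int data[N];
    for (int i = 0; i < N; i++) data[i] = i;

    /* Copy in, compute, copy out, one tile at a time.  On a real SMM
     * these copies would be DMA transfers that can overlap with
     * computation on the previous tile. */
    for (int base = 0; base < N; base += TILE) {
        memcpy(scratchpad, &data[base], TILE * sizeof(int));
        process_tile(scratchpad, TILE);
        memcpy(&data[base], scratchpad, TILE * sizeof(int));
    }

    printf("data[10] = %d, data[1000] = %d\n", data[10], data[1000]);  /* 20, 2000 */
    return 0;
}
```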
2

Barua, Rajeev K. (Rajeev Kumar). "Maps : a compiler-managed memory system for software-exposed architectures." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/37194.

Abstract:
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000.
Includes bibliographical references (p. 155-161).
Microprocessors must exploit both instruction-level parallelism (ILP) and memory parallelism for high performance. Sophisticated techniques for ILP have boosted the ability of modern-day microprocessors to exploit ILP when available. Unfortunately, improvements in memory parallelism in microprocessors have lagged behind. This thesis explains why memory parallelism is hard to exploit in microprocessors and advocate bank-exposed architectures as an effective way to exploit more memory parallelism. Bank exposed architectures are a kind of software-exposed architecture: one in which the low level details of the hardware are visible to the software. In a bank-exposed architecture, the memory banks are visible to the software, enabling the compiler to exploit a high degree of memory parallelism in addition to ILP. Bank-exposed architectures can be employed by general-purpose processors, and by embedded chips, such as those used for digital-signal processing. This thesis presents Maps, an enabling compiler technology for bank-exposed architectures. Maps solves the problem of bank-disambiguation, i.e., how to distribute data in sequential programs among several banks to best exploit memory parallelism, while retaining the ability to disambiguate each data reference to a particular bank. Two methods for bank disambiguation are presented: equivalence-class unification and modulo unrolling. Taking a sequential program as input, a bank-disambiguation method produces two outputs: first, a distribution of each program object among the memory banks; and second, a bank number for every reference that can be proven to access a single, known bank for that data distribution. Finally, the thesis shows why non-disambiguated accesses are sometimes desirable. Dependences between disambiguated and non-disambiguated accesses are enforced through explicit synchronization and software serial ordering. The MIT Raw machine is an example of a software-exposed architecture. Raw exposes its ILP, memory and communication mechanisms. The Maps system has been implemented in the Raw compiler. Results on Raw using sequential codes demonstrate that using bank disambiguation in addition to ILP improves performance by a factor of 3 to 5 over using ILP alone.
by Rajeev Barua.
Ph.D.
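Modulo unrolling, one of the two bank-disambiguation methods named in the abstract above, can be illustrated in miniature: interleave an array across the banks by index modulo the bank count, then unroll the loop by that count so each access in the unrolled body targets a statically known bank. The C sketch below is an editor's illustration under those assumptions; the "banks" are ordinary arrays and no bank-exposed hardware is involved.

```c
/* Low-order interleaving plus modulo unrolling: element i lives in
 * bank i % NUM_BANKS at offset i / NUM_BANKS. */
#include <stdio.h>

#define NUM_BANKS 4
#define N 16                          /* a multiple of NUM_BANKS here */

static int bank[NUM_BANKS][N / NUM_BANKS];

static void store(int i, int v) { bank[i % NUM_BANKS][i / NUM_BANKS] = v; }

int main(void) {
    for (int i = 0; i < N; i++) store(i, i * i);

    long sum = 0;
    /* Unrolled by NUM_BANKS: iteration i+k accesses bank k, so a
     * compiler for a bank-exposed machine could issue the four loads
     * to four different banks in parallel. */
    for (int i = 0; i < N; i += NUM_BANKS) {
        sum += bank[0][i / NUM_BANKS];   /* element i     */
        sum += bank[1][i / NUM_BANKS];   /* element i + 1 */
        sum += bank[2][i / NUM_BANKS];   /* element i + 2 */
        sum += bank[3][i / NUM_BANKS];   /* element i + 3 */
    }
    printf("sum = %ld\n", sum);          /* 1240 = sum of squares 0..15 */
    return 0;
}
```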
3

Nagel, Fabian Oliver. "Efficient query processing in managed runtimes." Thesis, University of Edinburgh, 2015. http://hdl.handle.net/1842/15869.

Abstract:
This thesis presents strategies to improve the query evaluation performance over huge volumes of relational-like data that is stored in the memory space of managed applications. Storing and processing application data in the memory space of managed applications is motivated by the convergence of two recent trends in data management. First, dropping DRAM prices have led to memory capacities that allow the entire working set of an application to fit into main memory and to the emergence of in-memory database systems (IMDBs). Second, language-integrated query transparently integrates query processing syntax into programming languages and, therefore, allows complex queries to be composed in the application. IMDBs typically serve as data stores to applications written in an object-oriented language running on a managed runtime. In this thesis, we propose a deeper integration of the two by storing all application data in the memory space of the application and using language-integrated query, combined with query compilation techniques, to provide fast query processing. As a starting point, we look into storing data as runtime-managed objects in collection types provided by the programming language. Queries are formulated using language-integrated query and dynamically compiled to specialized functions that produce the result of the query in a more efficient way by leveraging query compilation techniques similar to those used in modern database systems. We show that the generated query functions significantly improve query processing performance compared to the default execution model for language-integrated query. However, we also identify additional inefficiencies that can only be addressed by processing queries using low-level techniques which cannot be applied to runtime-managed objects. To address this, we introduce a staging phase in the generated code that makes query-relevant managed data accessible to low-level query code. Our experiments in .NET show an improvement in query evaluation performance of up to an order of magnitude over the default language-integrated query implementation. Motivated by additional inefficiencies caused by automatic garbage collection, we introduce a new collection type, the black-box collection. Black-box collections integrate the in-memory storage layer of a relational database system to store data and hide the internal storage layout from the application by employing existing object-relational mapping techniques (hence, the name black-box). Our experiments show that black-box collections provide better query performance than runtime-managed collections by allowing the generated query code to directly access the underlying relational in-memory data store using low-level techniques. Black-box collections also outperform a modern commercial database system. By removing huge volumes of collection data from the managed heap, black-box collections further improve the overall performance and response time of the application and improve the application’s scalability when facing huge volumes of collection data. To enable a deeper integration of the data store with the application, we introduce self-managed collections. Self-managed collections are a new type of collection for managed applications that, in contrast to black-box collections, store objects. As the data elements stored in the collection are objects, they are directly accessible from the application using references which allows for better integration of the data store with the application. 
Self-managed collections manually manage the memory of objects stored within them in a private heap that is excluded from garbage collection. We introduce a special collection syntax and a novel type-safe manual memory management system for this purpose. As was the case for black-box collections, self-managed collections improve query performance by utilizing a database-inspired data layout and allowing the use of low-level techniques. By also supporting references between collection objects, they outperform black-box collections.
4

Rose, Benjamin Aaron. "Intra- and Inter-chip Communication Support for Asymmetric Multicore Processors with Explicitly Managed Memory Hierarchies." Thesis, Virginia Tech, 2009. http://hdl.handle.net/10919/32824.

Abstract:
The use of asymmetric multi-core processors with on-chip computational accelerators is becoming common in a variety of environments ranging from scientific computing to enterprise applications. The focus of current research has been on making efficient use of individual systems, and porting applications to asymmetric processors. The use of these asymmetric processors, like the Cell processor, in a cluster setting is the inspiration for the Cell Connector framework presented in this thesis. Cell Connector adopts a streaming approach for providing data to compute nodes with high computing potential but limited memory resources. Instead of dividing very large data sets once among computation resources, Cell Connector slices, distributes, and collects work units off of a master data held by a single large memory machine. Using this methodology, Cell Connector is able to maximize the use of limited resources and produces results that are up to 63.3% better compared to standard non-streaming approaches.
Master of Science
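The streaming idea summarized in this abstract is that the master does not partition the data set once up front, but slices work units off a large master copy, hands them to memory-limited compute nodes, and collects the partial results. The C sketch below is an editor's toy model of that flow; the nodes are simulated by a plain function call, and the unit size and node count are made up.

```c
/* Streaming fixed-size work units off a large master array. */
#include <stdio.h>

#define MASTER_N  1000
#define UNIT_SIZE 64
#define NUM_NODES 4

/* A "compute node" with limited memory processes one work unit. */
static long node_process(int node, const int *unit, int n) {
    long partial = 0;
    for (int i = 0; i < n; i++)
        partial += (long)unit[i] * unit[i];
    (void)node;                       /* a real node id would select a device */
    return partial;
}

int main(void) {
    static int master[MASTER_N];
    for (int i = 0; i < MASTER_N; i++) master[i] = i;

    long total = 0;
    int next_node = 0;
    /* Stream work units off the master data, round-robin over nodes. */
    for (int base = 0; base < MASTER_N; base += UNIT_SIZE) {
        int n = (MASTER_N - base < UNIT_SIZE) ? MASTER_N - base : UNIT_SIZE;
        total += node_process(next_node, &master[base], n);
        next_node = (next_node + 1) % NUM_NODES;
    }
    printf("sum of squares = %ld\n", total);
    return 0;
}
```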
5

Sandhu, Tahir S. "Beyond American Memory: technologies of library and office automation and their impact on multimedia computing for public education in the United States, 1963-present." Normal, Ill.: Illinois State University, 2001. http://wwwlib.umi.com/cr/ilstu/fullcit?p3006627.

Abstract:
Thesis (D.A.)--Illinois State University, 2001.
Title from title page screen, viewed April 18, 2006. Dissertation Committee: Frederick D. Drake (chair), Lawrence McBride, John B. Freed. Includes bibliographical references (leaves 351-398) and abstract. Also available in print.
6

Nejadfard, Kian. "Context-aware automated refactoring for unified memory allocation in NVIDIA CUDA programs." Cleveland State University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=csu1624622944458295.

7

Ayers, Andrew Edward. "M̲, a memory manager for Ḻ." Thesis, Massachusetts Institute of Technology, 1988. http://hdl.handle.net/1721.1/45691.

Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1988.
Characters with an underscore appear as italic on the t.p.
Bibliography: leaves 94-95.
by Andrew Edward Ayers.
M.S.
8

Harrison, Pascale. "How individuals with Fibromyalgia manage their memory problems." Thesis, University of the West of England, Bristol, 2012. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.572883.

Abstract:
Background: Previous studies have reported that there is both objective and subjective evidence that individuals with Fibromyalgia (FM) suffer from memory problems (Landro et al 1997; Katz et al 2004; Munoz et al 2005). The current literature has not examined how individuals cope with these difficulties. Aims of the study: The three aims were to: 1) measure the perceptions of managing memory problems for a FM population; 2) examine their coping response/strategies to establish if the responses can be explained by Models of Stress, Coping and Adjustment (The SRM and Transactional Model of Stress and Coping). 3) To explore the relationships between memory problems, mood, employment, health and pain. Methods: A mixed methodology design was adopted, within an internet survey. 100 participants (91 female, 4 males and 5 preferred to not answer) with FM were recruited from two FM Charity Websites. The modal age of the sample was 51-55 years old. 61% of participants were from 'other' countries and 38% from the UK. Eight subjective questions (with comments) and psychometric tests were employed (Hospital Anxiety Depression Scale, Pain Catastrophizing Scale, Memory Aids Checklist and the Memory Complaint Questionnaire; and, demographic questions including employment). The questions were analysed using thematic analysis and inductive/deductive application of two theoretical models (Self Regulatory Model (Leventhal et al 2003) and Transactional Model of Stress and Coping (Lazarus et al 1984b)). The data from the psychometric tests were analysed using SPSS (multiple regression and ANOVA). Results: Six themes emerged from the qualitative data, these were: "external memory aids", "impact upon daily life", "barriers that impact upon using strategies", "emotional well being", "other strategies" and "denial of memory problems". The evidence from the themes implies that memory problems had a global impact upon a person's daily life: well being, home life, employment and coping behaviour. The SRM provided a framework for explaining the coping process. The Regression data proposed that depression significantly influenced perceptions of memory problems but not pain or anxiety. Conclusion: The outcome of this study suggests that both FM and memory problems impact upon well being by causing difficulties in the home and employment. The six themes suggest that memory problems can be managed by using external aids, family members/professionals and hobbies and interests. Recommendations from this research are that psycho-educational interventions would help to reduce memory problems and improve well being in this population.
9

Rezaei, Mehran. "Intelligent Memory Manager: Towards improving the locality behavior of allocation-intensive applications." Thesis, University of North Texas, 2004. https://digital.library.unt.edu/ark:/67531/metadc4491/.

Abstract:
Dynamic memory management required by allocation-intensive (i.e., Object Oriented and linked data structured) applications has led to a large number of research trends. Memory performance due to the cache misses in these applications continues to lag in terms of execution cycles as ever increasing CPU-Memory speed gap continues to grow. Sophisticated prefetcing techniques, data relocations, and multithreaded architectures have tried to address memory latency. These techniques are not completely successful since they require either extra hardware/software in the system or special properties in the applications. Software needed for prefetching and data relocation strategies, aimed to improve cache performance, pollutes the cache so that the technique itself becomes counter-productive. On the other hand, extra hardware complexity needed in multithreaded architectures decelerates CPU's clock, since "Simpler is Faster." This dissertation, directed to seek the cause of poor locality behavior of allocation--intensive applications, studies allocators and their impact on the cache performance of these applications. Our study concludes that service functions, in general, and memory management functions, in particular, entangle with application's code and become the major cause of cache pollution. In this dissertation, we present a novel technique that transfers the allocation and de-allocation functions entirely to a separate processor residing in chip with DRAM (Intelligent Memory Manager). Our empirical results show that, on average, 60% of the cache misses caused by allocation and de-allocation service functions are eliminated using our technique.
10

Muthukumaraswamy, Sivakumar Vijay. "An Evaluation of the Linux Virtual Memory Manager to Determine Suitability for Runtime Variation of Memory." Thesis, Virginia Tech, 2007. http://hdl.handle.net/10919/31608.

Abstract:
Systems that support virtual memory virtualize the available physical memory such that the applications running on them operate under the assumption that these systems have a larger amount of memory available than is actually present. The memory managers of these systems manage the virtual and the physical address spaces and are responsible for converting the virtual addresses used by the applications to the physical addresses used by the hardware. The memory managers assume that the amount of physical memory is constant and does not change during their period of operation. Some operating scenarios however, such as the power conservation mechanisms and virtual machine monitors, require the ability to vary the physical memory available at runtime, thereby making invalid the assumptions made by these memory managers. In this work we evaluate the suitability of the Linux Memory Manager, which assumes that the available physical memory is constant, for the purposes of varying the memory at run time. We have implemented an infrastructure over the Linux 2.6.11 kernel that enables the user to vary the physical memory available to the system. The available physical memory is logically divided into banks and each bank can be turned on or off independent of the others, using the new system calls we have added to the kernel. Apart from adding support for the new system calls, other changes had to be made to the Linux memory manager to support the runtime variation of memory. To evaluate the suitability for varying memory we have performed experiments with varying memory sizes on both the modified and the unmodified kernels. We have observed that the design of the existing memory manager is not well suited to support the runtime variation of memory; we provide suggestions to make it better suited for such purposes. Even though applications running on systems that support virtual memory do not use the physical memory directly and are not aware of the physical addresses they use, the amount of physical memory available for use affects the performance of the applications. The results of our experiments have helped us study the influence the amount of physical memory available for use has on the performance of various types of applications. These results can be used in scenarios requiring the ability to vary the memory at runtime to do so with least degradation in the application performance.
Master of Science

Books on the topic "Managed memory"

1

Robinson, Phillip R. The one minute memory manager: Upgrading and using your PC's memory for faster, more efficient computing. Hemel Hempstead: Prentice Hall, 1991.

2

Robinson, Phillip R. The one minute memory manager: Upgrading and using your PC's memory for faster, more efficient computing. Redwood City, CA: M&T Books, 1991.

3

Gagliardi, Isabella, ed. Le vestigia dei gesuati. Florence: Firenze University Press, 2020. http://dx.doi.org/10.36253/978-88-5518-228-7.

Abstract:
The book analyses the history of the Jesuat congregation, highlighting the elements of connection and comparison with the social contexts, then describing the origin and the most ancient events of the female branch of the congregation, and the memory of the meeting between the "founder" of the Jesuats and the "foundress" of the Gesuate. The iconographic memory of the initiator of the congregation, Giovani Colombini, the collection of the lauds of the Jesuat Bianco da Siena, and the fortune of the 15th-century Life of Giovanni Colombini, written by Feo Belcari, are also investigated. Then the research reconstructs the constellation of groups, religious experiments and bearers of ideas and devotions that were linked to the Jesuats and, in particular, to the convents of Milan, Siena, Lucca, Venice and Rome and the sanctuaries managed by the congregation. The congregational sociability is analysed along its lines: the practice of work as pharmacists and the cultivation of spiritual friendships with prominent people such as the Countess of Guastalla, Lodovica Torelli. Finally, the erudite use of Colombini's Epistolario as a language text is studied. The volume closes with a documentary appendix on the Jesuat convent of Chiusi.
4

Drury, Harry. A banker's account: Memoirs of a bank manager. Leighton Buzzard: Rushmere Wynne, 1994.

5

Schnee, Dave, Peter Mullin, and International Business Machines Corporation International Technical Support Organization, eds. Automatic partition resource manager for System i and iSeries. [Poughkeepsie, NY]: IBM, International Technical Support Organization, 2007.

6

A view from the dressing room: The memoirs of a cricket manager. Knysna: Amorique, 1989.

7

Then people will traffick: Memoirs of an Indian professional manager, 1950s to 1990s. New Delhi: Minerva Press, 2000.

8

Stakel, Charles J. Memoirs of Charles J. Stakel: Cleveland-Cliffs mining engineer, mine superintendent, and mining manager. Marquette, Mich: John M. Longyear Research Library, Marquette County Historical Society, 1994.

9

Maretzek, Max. Further revelations of an opera manager in 19th century America: The third book of memoirs. Sterling Heights, Mich: Harmonie Park Press, 2006.

10

In search of happiness: Memoirs of an agriculturist, research manager, adventurer, community worker, and corporate leader. Cheras, Kuala Lumpur: Utusan Publications & Distributors, 2009.


Book chapters on the topic "Managed memory"

1

Kinney, Dale. "Managed Memory in S. Maria in Trastevere." In Monuments & Memory: Christian Cult Buildings and Constructions of the Past, 337–47. Turnhout: Brepols Publishers, 2016. http://dx.doi.org/10.1484/m.acsha-eb.4.2018029.

2

Wang, Chenxi, Ting Cao, John Zigman, Fang Lv, Yunquan Zhang, and Xiaobing Feng. "Efficient Management for Hybrid Memory in Managed Language Runtime." In Lecture Notes in Computer Science, 29–42. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-47099-3_3.

3

Neth, Brandon, Thomas R. W. Scogland, Alejandro Duran, and Bronis R. de Supinski. "Beyond Explicit Transfers: Shared and Managed Memory in OpenMP." In OpenMP: Enabling Massive Node-Level Parallelism, 183–94. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-85262-7_13.

4

Metzlaff, Stefan, Irakli Guliashvili, Sascha Uhrig, and Theo Ungerer. "A Dynamic Instruction Scratchpad Memory for Embedded Processors Managed by Hardware." In Architecture of Computing Systems - ARCS 2011, 122–34. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-19137-4_11.

5

Grounds, Nicolas G., John K. Antonio, and Jeff Muehring. "Cost-Minimizing Scheduling of Workflows on a Cloud of Memory Managed Multicore Machines." In Lecture Notes in Computer Science, 435–50. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-10665-1_40.

6

Pit-Claudel, Clément, Peng Wang, Benjamin Delaware, Jason Gross, and Adam Chlipala. "Extensible Extraction of Efficient Imperative Programs with Foreign Functions, Manually Managed Memory, and Proofs." In Automated Reasoning, 119–37. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-51054-1_7.

7

Nanri, Takeshi, Hiroyuki Sato, and Masaaki Shimasaki. "Cost estimation of coherence protocols of software managed cache on distributed shared memory system." In Lecture Notes in Computer Science, 335–42. Berlin, Heidelberg: Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/bfb0024228.

8

Alam, Irina, Lara Dolecek, and Puneet Gupta. "Lightweight Software-Defined Error Correction for Memories." In Dependable Embedded Systems, 207–32. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-52017-5_9.

Abstract:
Reliability of the memory subsystem is a growing concern in computer architecture and system design. From on-chip embedded memories in Internet-of-Things (IoT) devices and on-chip caches to off-chip main memories, the memory subsystems have become the limiting factor in the overall reliability of computing systems. This is because they are primarily designed to maximize bit storage density; this makes memories particularly sensitive to manufacturing process variation, environmental operating conditions, and aging-induced wearout. This chapter of the book focuses on software managed techniques and novel error correction codes to opportunistically cope with memory errors whenever they occur for improved reliability at minimal cost.
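As a point of reference for the kind of correction such a software-defined layer can apply, the C sketch below implements a textbook Hamming(7,4) code that corrects any single flipped bit in a 7-bit codeword. It is an editor's illustration only, not the codes or the software-management scheme proposed in the chapter.

```c
/* Textbook Hamming(7,4): 4 data bits protected by 3 parity bits. */
#include <stdio.h>
#include <stdint.h>

/* Return the bit at 1-based codeword position pos. */
static unsigned cw_bit(uint8_t cw, int pos) { return (cw >> (pos - 1)) & 1u; }

/* Encode 4 data bits into a 7-bit codeword laid out as
 * positions 1..7 = p1 p2 d0 p4 d1 d2 d3. */
static uint8_t hamming74_encode(uint8_t data)
{
    unsigned d0 = data & 1u, d1 = (data >> 1) & 1u,
             d2 = (data >> 2) & 1u, d3 = (data >> 3) & 1u;
    unsigned p1 = d0 ^ d1 ^ d3;      /* covers positions 1,3,5,7 */
    unsigned p2 = d0 ^ d2 ^ d3;      /* covers positions 2,3,6,7 */
    unsigned p4 = d1 ^ d2 ^ d3;      /* covers positions 4,5,6,7 */
    return (uint8_t)(p1 | (p2 << 1) | (d0 << 2) | (p4 << 3) |
                     (d1 << 4) | (d2 << 5) | (d3 << 6));
}

/* Recompute the parity checks, flip the faulty bit if the syndrome is
 * non-zero, and return the 4 recovered data bits. */
static uint8_t hamming74_decode(uint8_t cw)
{
    unsigned s1 = cw_bit(cw, 1) ^ cw_bit(cw, 3) ^ cw_bit(cw, 5) ^ cw_bit(cw, 7);
    unsigned s2 = cw_bit(cw, 2) ^ cw_bit(cw, 3) ^ cw_bit(cw, 6) ^ cw_bit(cw, 7);
    unsigned s4 = cw_bit(cw, 4) ^ cw_bit(cw, 5) ^ cw_bit(cw, 6) ^ cw_bit(cw, 7);
    unsigned syndrome = s1 | (s2 << 1) | (s4 << 2);  /* faulty position, or 0 */
    if (syndrome)
        cw ^= (uint8_t)(1u << (syndrome - 1));
    return (uint8_t)(cw_bit(cw, 3) | (cw_bit(cw, 5) << 1) |
                     (cw_bit(cw, 6) << 2) | (cw_bit(cw, 7) << 3));
}

int main(void)
{
    uint8_t cw = hamming74_encode(0xB);      /* data = 1011 */
    cw ^= (uint8_t)(1u << 4);                /* inject a fault at position 5 */
    printf("recovered data = 0x%X\n", hamming74_decode(cw));   /* prints 0xB */
    return 0;
}
```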
9

Elmlinger, Stefan. "SAP HANA - Einsatzmöglichkeiten des SAP Solution Managers." In In-Memory-Datenbank SAP HANA, 107–29. Wiesbaden: Springer Fachmedien Wiesbaden, 2017. http://dx.doi.org/10.1007/978-3-658-18603-6_4.

10

Vaynberg, Alexander, and Zhong Shao. "Compositional Verification of a Baby Virtual Memory Manager." In Certified Programs and Proofs, 143–59. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-35308-6_13.


Conference papers on the topic "Managed memory"

1

Knight, Timothy J., Ji Young Park, Manman Ren, Mike Houston, Mattan Erez, Kayvon Fatahalian, Alex Aiken, William J. Dally, and Pat Hanrahan. "Compilation for explicitly managed memory hierarchies." In the 12th ACM SIGPLAN symposium. New York, New York, USA: ACM Press, 2007. http://dx.doi.org/10.1145/1229428.1229477.

2

Gueraich, Sonia, and Bouchra Bouzobra. "Corporate memory managed by semantic web service." In 2015 First International Conference on New Technologies of Information and Communication (NTIC). IEEE, 2015. http://dx.doi.org/10.1109/ntic.2015.7368760.

3

Moazeni, Maryam, Alex Bui, and Majid Sarrafzadeh. "A memory optimization technique for software-managed scratchpad memory in GPUs." In 2009 IEEE 7th Symposium on Application Specific Processors (SASP). IEEE, 2009. http://dx.doi.org/10.1109/sasp.2009.5226334.

4

Ren, Manman, Ji Young Park, Mike Houston, Alex Aiken, and William J. Dally. "A tuning framework for software-managed memory hierarchies." In the 17th international conference. New York, New York, USA: ACM Press, 2008. http://dx.doi.org/10.1145/1454115.1454155.

5

Park, Jungho, Choonki Jang, and Jaejin Lee. "A Software-Managed Coherent Memory Architecture for Manycores." In 2011 International Conference on Parallel Architectures and Compilation Techniques (PACT). IEEE, 2011. http://dx.doi.org/10.1109/pact.2011.46.

6

Jantz, Michael R., Forrest J. Robinson, Prasad A. Kulkarni, and Kshitij A. Doshi. "Cross-layer memory management for managed language applications." In SPLASH '15: Conference on Systems, Programming, Languages, and Applications: Software for Humanity. New York, NY, USA: ACM, 2015. http://dx.doi.org/10.1145/2814270.2814322.

7

Rutgers, Jochem H., Marco J. G. Bekooij, and Gerard J. M. Smit. "Portable Memory Consistency for Software Managed Distributed Memory in Many-Core SoC." In 2013 IEEE International Symposium on Parallel & Distributed Processing, Workshops and Phd Forum (IPDPSW). IEEE, 2013. http://dx.doi.org/10.1109/ipdpsw.2013.14.

8

Ware, Fred, Javier Bueno, Liji Gopalakrishnan, Brent Haukness, Chris Haywood, Toni Juan, Eric Linstadt, et al. "Architecting a hardware-managed hybrid DIMM optimized for cost/performance." In MEMSYS '18: The International Symposium on Memory Systems. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3240302.3240303.

9

Bock, Santiago, Bruce R. Childers, Rami Melhem, and Daniel Mosse. "Characterizing the Overhead of Software-Managed Hybrid Main Memory." In 2015 IEEE 23rd International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems (MASCOTS). IEEE, 2015. http://dx.doi.org/10.1109/mascots.2015.20.

10

Hassan, Ahmad, Hans Vandierendonck, and Dimitrios S. Nikolopoulos. "Software-managed energy-efficient hybrid DRAM/NVM main memory." In CF'15: Computing Frontiers Conference. New York, NY, USA: ACM, 2015. http://dx.doi.org/10.1145/2742854.2742886.


Reports on the topic "Managed memory"

1

Kommareddy, Vamsee, Clayton Hughes, Simon David Hammond, and Amro Awad. Opal: A Centralized Memory Manager for Investigating Disaggregated Memory Systems. Office of Scientific and Technical Information (OSTI), August 2018. http://dx.doi.org/10.2172/1467164.

2

Lever, C. Remote Direct Memory Access - Connection Manager (RDMA-CM) Private Data for RPC-over-RDMA Version 1. RFC Editor, June 2020. http://dx.doi.org/10.17487/rfc8797.

