Academic literature on the topic 'Memory management'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Memory management.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Memory management"

1

Raghuvanshi, Durgesh. "Memory Management in Operating System." International Journal of Trend in Scientific Research and Development 2, no. 5 (August 31, 2018): 2346–47. http://dx.doi.org/10.31142/ijtsrd18342.

2

Bharti, Bhavna, and Avinash Sharma. "Memory Management in BigData: A Perpective View." International Journal of Trend in Scientific Research and Development 2, no. 4 (June 30, 2018): 1993–98. http://dx.doi.org/10.31142/ijtsrd14436.

3

Karunarathna, Harshinie, Jennifer Brown, Riccardo Briganti, Dabravka Pockrajac, Ivan Haigh, Christian Beck, Pushpa Dissanayake, Diego Pedrozo-Acuna, Kathryn Sparrow, and Mathew Wadey. "MODELLING MEMORY OF COASTAL FLOOD SYSTEMS." Coastal Engineering Proceedings 1, no. 34 (October 28, 2014): 19. http://dx.doi.org/10.9753/icce.v34.management.19.

4

Nilakanta, Sree, L. L. Miller, and Dan Zhu. "Organizational Memory Management." Journal of Database Management 17, no. 1 (January 2006): 85–94. http://dx.doi.org/10.4018/jdm.2006010106.

5

Milligan, Peter. "Distributed memory management." Microprocessing and Microprogramming 38, no. 1-5 (September 1993): 97–98. http://dx.doi.org/10.1016/0165-6074(93)90131-4.

6

Hung, Ting-Wei, Yen-Hao Chen, and Yi-Yu Liu. "Memory management for dual-addressing memory architecture." IEICE Electronics Express 10, no. 15 (2013): 20130467. http://dx.doi.org/10.1587/elex.10.20130467.

7

JOHNSON, THEODORE, and TIMOTHY A. DAVIS. "PARALLEL BUDDY MEMORY MANAGEMENT." Parallel Processing Letters 02, no. 04 (December 1992): 391–98. http://dx.doi.org/10.1142/s0129626492000544.

Abstract:
Shared memory multiprocessor systems need efficient dynamic storage allocators, both for system purposes and to support parallel programs. Memory managers are often based on the buddy system, which provides fast allocation and release. Previous parallel buddy memory managers made no attempt to coordinate the allocation, splitting and release of blocks, and as a result needlessly fragment memory. We present a fast and simple parallel buddy memory manager that is also as space efficient as a serial buddy memory manager. We test our algorithms using memory allocation/deallocation traces collected from a parallel sparse matrix algorithm.
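The buddy scheme the abstract refers to can be illustrated with a minimal, serial sketch of the split-and-merge behavior that the paper's parallel variant coordinates. All names and the data layout below are illustrative assumptions, not code from the paper:

```python
class BuddyAllocator:
    """Toy serial buddy allocator: blocks are powers of two; a freed block
    merges with its 'buddy' (the block whose offset differs in one bit)."""

    def __init__(self, total_size):
        # total_size must be a power of two; start with one free top-level block
        assert total_size & (total_size - 1) == 0
        self.total = total_size
        self.free = {total_size: [0]}  # block size -> list of free offsets

    def alloc(self, size):
        # round the request up to the next power of two
        want = 1
        while want < size:
            want *= 2
        # find the smallest free block that is large enough
        cand = want
        while cand <= self.total and not self.free.get(cand):
            cand *= 2
        if cand > self.total:
            return None  # out of memory
        off = self.free[cand].pop()
        while cand > want:  # split down to the requested size
            cand //= 2
            self.free.setdefault(cand, []).append(off + cand)
        return off

    def freeblock(self, off, size):
        # merge with the buddy as long as the buddy is also free
        while size < self.total:
            buddy = off ^ size  # buddy offset differs in exactly one bit
            if buddy in self.free.get(size, []):
                self.free[size].remove(buddy)
                off = min(off, buddy)
                size *= 2
            else:
                break
        self.free.setdefault(size, []).append(off)
```

The fragmentation problem the paper addresses arises exactly in the split and merge steps: without coordination, concurrent allocators may split blocks needlessly instead of reusing each other's free lists.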
8

Perez, Gilberto, and Isabel Ramos. "Understanding Organizational Memory from the Integrated Management Systems (ERP)." Journal of Information Systems and Technology Management 10, no. 3 (December 30, 2013): 541–60. http://dx.doi.org/10.4301/s1807-17752013000300005.

9

Milenkovic, M. "Microprocessor memory management units." IEEE Micro 10, no. 2 (April 1990): 70–85. http://dx.doi.org/10.1109/40.52948.

10

Aiken, Alexander, Manuel Fähndrich, and Raph Levien. "Better static memory management." ACM SIGPLAN Notices 30, no. 6 (June 1995): 174–85. http://dx.doi.org/10.1145/223428.207137.


Dissertations / Theses on the topic "Memory management"

1

Panthulu, Pradeep. "Intelligent Memory Management Heuristics." Thesis, University of North Texas, 2003. https://digital.library.unt.edu/ark:/67531/metadc4399/.

Abstract:
Automatic memory management is crucial in the implementation of runtime systems even though it induces a significant computational overhead. In this thesis I explore the use of statistical properties of the directed graph describing the set of live data to decide between garbage collection and heap expansion in a memory management algorithm that combines dynamic-array-represented heaps with a mark-and-sweep garbage collector to enhance its performance. The sampling method predicting the density and the distribution of useful data is implemented as a partial marking algorithm. The algorithm randomly marks the nodes of the directed graph representing the live data at different depths with a variable probability factor p. Using the information gathered by the partial marking algorithm in the current step and the knowledge gathered in previous iterations, the proposed empirical formula predicts with reasonable accuracy the density of live nodes on the heap, in order to decide between garbage collection and heap expansion. The resulting heuristics are tested empirically and shown to improve overall execution performance significantly in the context of the Jinni Prolog compiler's runtime system.
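The partial-marking idea can be sketched as follows: randomly descend into the live-data graph with probability p, scale the sample up to estimate live density, and use a threshold to choose between collection and expansion. The graph encoding, the density estimator, and the threshold are all illustrative assumptions, not the thesis's actual code:

```python
import random

def partial_mark(roots, graph, p, rng=random.random):
    """Walk from the roots, descending into each child only with
    probability p; return the set of nodes sampled as live."""
    sampled, stack = set(), list(roots)
    while stack:
        node = stack.pop()
        if node in sampled:
            continue
        sampled.add(node)
        for child in graph.get(node, []):
            if rng() < p:
                stack.append(child)
    return sampled

def should_collect(roots, graph, heap_size, p=0.5, threshold=0.6):
    # Crude density estimate: scale the sample by 1/p. When the predicted
    # fraction of live cells is high, a collection would reclaim little,
    # so expanding the heap is the cheaper choice.
    est_live = min(len(partial_mark(roots, graph, p)) / p, heap_size)
    return est_live / heap_size < threshold  # True -> run the collector
```

The thesis additionally blends this per-step sample with knowledge from previous iterations; the single-sample estimator above only shows the core decision.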
2

Mårtensson, Henrik. "Memory Management of Manycore Systems." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-107879.

Abstract:
This thesis project is part of the MANY project hosted by ITEA2. The objective of MANY is to provide developers with tools for developing with multi- and manycore, as well as to provide a knowledge base about software on manycore. This thesis project has the following objectives: to investigate the complex subject of effectively managing system memory in a manycore environment, to propose a memory management technique for implementation in OSE, and to investigate the Tilera manycore processor TILEPro64 and Enea OSE in order to be able to continue the ongoing project of porting OSE to the TILEPro64. Several memory management techniques were investigated for managing memory access on all tiers of the system. Some of these techniques require modifications to hardware, while some are made directly in software. The porting of OSE to the TILEPro64 processor was continued and contributions were made to the Hardware Abstraction Layer of OSE.
3

Stojanovic, Marta. "Automatic memory management in Java." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp05/MQ65392.pdf.

4

Sánchez, Navarro Francisco Jesús. "Smart memory management through locality analysis." Doctoral thesis, Universitat Politècnica de Catalunya, 2001. http://hdl.handle.net/10803/5965.

Abstract:
Cache memories were incorporated in microprocessors early on and represent the most common solution to deal with the gap between processor and memory speeds. However, many studies point out that the cache storage capacity is often wasted, which has a direct impact on processor performance. Although a cache is designed to exploit different types of locality, all memory references are handled in the same way, ignoring particular locality behaviors. This restricted use of the locality information for each memory access can limit the effectiveness of the cache. In this thesis we show how a data locality analysis can help the researcher to understand where and why cache misses occur, and then propose different techniques that make use of this information in order to improve the performance of the cache memory. We propose techniques in which locality information obtained by the locality analyzer is passed from the compiler to the hardware through the ISA to guide the management of memory accesses.

We have developed a static data locality analysis. This analysis is based on reuse vectors and performs the three typical steps: reuse, volume, and interference analysis. Compared with previous work, both the volume and the interference analysis have been improved by using profile information as well as a more precise interference analysis. The proposed data locality analyzer has been inserted as another pass in a research compiler. Results show that for numerical applications the analysis is very accurate and the computing overhead is low. This analysis is the basis for all other parts of the thesis. In addition, for some proposals in the last part of the thesis we have used a data locality analysis based on cache miss equations. This analysis, although more time consuming, is more accurate and more appropriate for set-associative caches. The use of two different locality analyzers also shows that the architectural proposals of this thesis are independent of the particular locality analysis.

After showing the accuracy of the analysis, we have used it to study the locality behavior exhibited by the SPECfp95 programs. This kind of analysis is necessary before proposing any new technique, since it can help the researcher to understand why cache misses occur. We show that with the proposed analysis we can study very accurately the locality of a program and detect where the hot spots are, as well as the reason for these misses. This study of the locality behavior of different programs is the base and motivation for the different techniques proposed in this thesis to improve the memory performance.

Thus, using the data locality analysis and based on the results obtained after analyzing the locality behavior of a set of programs, we propose to use this analysis in order to guide three different techniques: (i) management of multi-module caches, (ii) software prefetching for modulo scheduled loops, and (iii) instruction scheduling for clustered VLIW architectures.

The first use of the proposed data locality analysis is to manage a novel cache organization. This cache supports bypassing and/or is composed of different modules, each one oriented to exploit a particular type of locality. The main difference of this cache with respect to previous proposals is that the decision of caching or not, or in which module a new fetched block is allocated is managed by some bits in memory instructions (locality hints). These hints are set at compile time using the proposed locality analysis. Thus, the management complexity of this cache is kept low since no additional hardware is required. Results show that smaller caches with a smart management can perform as well as (or better than) bigger conventional caches.

We have also used the locality analysis to study the interaction between software pipelining and software prefetching. Software pipelining has been shown to be a very effective scheduling technique for loops (mainly in numerical applications for VLIW processors). The most popular scheme for software pipelining is called modulo scheduling. Many works on modulo scheduling can be found in the literature, but almost all of them make a critical assumption: they consider an optimistic behavior of the cache (in other words, they use the hit latency when a memory instruction is scheduled). Thus, the results they present ignore the effect of stalls due to dependences with memory instructions. In this part of the thesis we show that this assumption can lead to schedules whose performance is rather low when a real memory is considered. Thus, we propose an algorithm to schedule memory instructions in modulo scheduled loops. We have studied different software prefetching strategies and finally propose an algorithm that performs prefetching based on the locality analysis and the shape of the loop dependence graph. The results obtained show that the proposed scheme outperforms other heuristic approaches, since it achieves a better trade-off between compute and stall time.

Finally, the last use of the locality analysis studied in this thesis is to guide an instruction scheduler for a clustered VLIW architecture. Clustered architectures are becoming a common trend in the design of embedded/DSP processors. Typically, the core of these processors is based on a VLIW design which partitions both the register file and the functional units. In this work we go a step further and also partition the cache memory. Then, both inter-register and inter-memory communications have to be taken into account. We propose an algorithm that performs both graph partitioning and instruction scheduling in a single step instead of doing it sequentially, which is shown to be more effective. This algorithm is improved by adding an analysis based on the cache miss equations in order to guide the scheduling of memory instructions in clusters, with the aim of reducing not only inter-register communications but also cache misses.
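The locality information at the heart of this abstract can be approximated with a toy reuse-distance pass over a memory trace, mapped to per-access hints of the kind the thesis encodes in memory instructions. The trace format, the hint names, and the cache-size cutoff below are illustrative assumptions, not the thesis's analyzer:

```python
def reuse_distances(trace):
    """For each access, count the distinct addresses touched since the
    previous access to the same address (None on first use)."""
    last_seen, out = {}, []
    for i, addr in enumerate(trace):
        if addr in last_seen:
            out.append(len(set(trace[last_seen[addr] + 1:i])))
        else:
            out.append(None)
        last_seen[addr] = i
    return out

def locality_hint(distance, cache_lines):
    # Hypothetical hint encoding: a short reuse distance means the block
    # will still be resident when reused (cache it); a long one means it
    # would only pollute the cache (bypass it).
    if distance is None:
        return "no-reuse"
    return "cache" if distance < cache_lines else "bypass"
```

Note that the thesis derives this information statically from reuse vectors at compile time; the dynamic trace here is only a way to make the concept concrete.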
5

Hanai, Ryo. "Memory management for real-time applications." 京都大学 (Kyoto University), 2007. http://hdl.handle.net/2433/135980.

6

Wilk, Daniel. "Hierarchical application-oriented physical memory management." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ29419.pdf.

7

Mohapatra, Dushmanta. "Coordinated memory management in virtualized environments." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/54454.

Abstract:
Two recent advances are the primary motivating factors for the research in my dissertation. First, virtualization is no longer confined to the powerful server class machines. It has already been introduced into smart-phones and will be a part of other high-end embedded systems like automobiles in the near future. Second, more and more resource intensive and latency sensitive applications are being used in devices which are rather resource constrained and introducing virtualization into the software stack just exacerbates the resource allocation issue. The focus of my research is on memory management in virtualized environments. Existing memory-management mechanisms were designed for server class machines and their implementations are geared towards the applications running primarily on data centers and cloud setups. In these setups, appropriate load balancing and achieving fair division of resources are the goals and over-provisioning may be the norm. Latency involved in resource management mechanisms may not be a big concern. But in case of smart phones and other hand held devices, applications like media streaming, social-networking are prevalent, which are both resource intensive and latency sensitive. Moreover, the bursty nature of their memory requirement results in spikes in memory needs of the virtual machines. As over provisioning is not an option in these domains, fast and effective (memory) resource management mechanisms are necessary. The overall thesis of my dissertation is: with appropriate design and implementation, it is possible to achieve inter-VM memory management with a latency comparable to the latency involved in intra-VM memory management mechanisms like ‘malloc’. Towards realizing and validating this goal, I have made the following research contributions through my dissertation: (1) I analyzed the memory requirement pattern of prevalent applications, which exhibit bursty behavior and showcased the need for fast memory management mechanisms. 
(2) I designed and implemented a coordinated memory management mechanism in a Xen-based virtualized setup, based on the split driver principle. (3) I analyzed this mechanism and did a comparative evaluation with parallel memory management mechanisms. (4) I analyzed the extent of interference from the schedulers in the operation of the mechanism and implemented constructs that help in reducing the interference and latency. (5) Based on my analysis, I revised the implementation of the mechanism to one in which the Xen hypervisor plays a more significant and active role in the coordination of the mechanism, and I did a detailed analysis to showcase the latency improvements due to this design change. (6) In order to validate my hypothesis, I did a comparative analysis of inter-VM and intra-VM memory management mechanisms as the final part of my dissertation.
8

Feeley, Michael Joseph. "Global memory management for workstation networks /." Thesis, Connect to this title online; UW restricted, 1996. http://hdl.handle.net/1773/6997.

9

Yang, Shufan. "Memory interconnect management on a chip multiprocessor." Thesis, University of Manchester, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.520682.

10

Österlund, Erik. "Automatic memory management system for automatic parallelization." Thesis, Linnéuniversitetet, Institutionen för datavetenskap, fysik och matematik, DFM, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-13693.

Abstract:
With Moore’s law coming to an end and the era of multiprocessor chips emerging, the need for ways of dealing with the essential problems with concurrency is becoming imminent. Automatic parallelization for imperative languages and pure functions in functional programming languages all try to prove independence statically. This thesis argues that independence is dynamic in nature. Static analysis for automatic parallelization has failed to do anything but trivial optimizations. This thesis shows a new approach where dynamic analysis about the system is provided for very low costs using a garbage collector that has to go through all live cells anyway. Immutable sub-graphs of objects that cannot change state are found. Their methods become pure functions that can be parallelized. The garbage collector implemented is a kind of replicating collector. It is about three times faster than Boehm’s collector in garbage collection, fully concurrent and provides the dynamic analysis almost for free.
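The parallelization idea in this abstract, methods over immutable object sub-graphs behave as pure functions and can be run in parallel, can be sketched as a hedged toy model. The graph encoding, the "frozen" set standing in for the collector's immutability findings, and all names are my assumptions, not the thesis's implementation:

```python
from concurrent.futures import ThreadPoolExecutor

def is_immutable_subgraph(obj, frozen, children):
    """True when obj and everything reachable from it is in the frozen set,
    i.e. the information a tracing collector discovers while walking live
    cells anyway."""
    seen, stack = set(), [obj]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        if node not in frozen:
            return False
        stack.extend(children.get(node, []))
    return True

def parallel_map_if_pure(fn, objs, frozen, children):
    # Only parallelize when every argument's reachable sub-graph is
    # immutable (so fn cannot observe concurrent state changes);
    # otherwise fall back to sequential application.
    if all(is_immutable_subgraph(o, frozen, children) for o in objs):
        with ThreadPoolExecutor() as pool:
            return list(pool.map(fn, objs))
    return [fn(o) for o in objs]
```

The key point mirrored here is that the immutability check is nearly free in the thesis's design, because the replicating collector already traverses the live graph.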

Books on the topic "Memory management"

1

Baker, Henry G., ed. Memory Management. Berlin, Heidelberg: Springer Berlin Heidelberg, 1995. http://dx.doi.org/10.1007/3-540-60368-9.

2

Bekkers, Yves, and Jacques Cohen, eds. Memory Management. Berlin, Heidelberg: Springer Berlin Heidelberg, 1992. http://dx.doi.org/10.1007/bfb0017181.

3

American Institute of Certified Public Accountants. Information Technology Division., ed. Memory management. New York: American Institute of Certified Public Accountants, 1993.

4

Neuberger, Marc J., ed. C++ Memory Management. New York: Windcrest/McGraw-Hill, 1994.

5

Dorfman, Len. C++ Memory Management. New York: Windcrest/McGraw-Hill, 1994.

6

Dorfman, Len. C memory management techniques. Blue Ridge Summit, PA: Windcrest/McGraw Hill, 1993.

8

Plattner, Hasso, and Alexander Zeier. In-Memory Data Management. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-29575-1.

9

Plattner, Hasso, and Alexander Zeier. In-Memory Data Management. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-19363-7.

10

Kokosa, Konrad. Pro .NET Memory Management. Berkeley, CA: Apress, 2018. http://dx.doi.org/10.1007/978-1-4842-4027-4.


Book chapters on the topic "Memory management"

1

Lister, A. M., and R. D. Eager. "Memory Management." In Fundamentals of Operating Systems, 51–75. New York, NY: Springer New York, 1993. http://dx.doi.org/10.1007/978-1-4757-2252-9_5.

2

Kävrestad, Joakim. "Memory Management." In Fundamentals of Digital Forensics, 175–78. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-96319-8_15.

3

Aycock, John. "Memory Management." In Retrogame Archeology, 17–35. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-30004-7_2.

4

Villela, Roger. "Memory Management." In Introducing Mechanisms and APIs for Memory Management, 1–12. Berkeley, CA: Apress, 2019. http://dx.doi.org/10.1007/978-1-4842-5416-5_1.

5

Lee, Keith. "Memory Management." In Pro Objective-C, 53–74. Berkeley, CA: Apress, 2013. http://dx.doi.org/10.1007/978-1-4302-5051-7_4.

6

Lazzarini, Victor. "Memory Management." In Computer Music Instruments II, 95–103. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-13712-0_8.

7

Gabbrielli, Maurizio, and Simone Martini. "Memory Management." In Undergraduate Topics in Computer Science, 91–118. London: Springer London, 2010. http://dx.doi.org/10.1007/978-1-84882-914-5_5.

8

Halvorsen, Ole Henry, and Douglas Clarke. "Memory Management." In OS X and iOS Kernel Programming, 99–117. Berkeley, CA: Apress, 2011. http://dx.doi.org/10.1007/978-1-4302-3537-8_6.

9

Olsson, Mikael. "Memory Management." In C Quick Syntax Reference, 63–66. Berkeley, CA: Apress, 2015. http://dx.doi.org/10.1007/978-1-4302-6500-9_19.

10

Dalrymple, Mark, and Scott Knaster. "Memory Management." In Learn Objective-C on the Mac, 161–77. Berkeley, CA: Apress, 2009. http://dx.doi.org/10.1007/978-1-4302-1816-6_9.


Conference papers on the topic "Memory management"

1

Jones, Richard. "Dynamic memory management." In the 2007 International Lisp Conference. New York, New York, USA: ACM Press, 2009. http://dx.doi.org/10.1145/1622123.1622126.

2

Egger, Bernhard, Jaejin Lee, and Heonshik Shin. "Scratchpad memory management for portable systems with a memory management unit." In the 6th ACM & IEEE International conference. New York, New York, USA: ACM Press, 2006. http://dx.doi.org/10.1145/1176887.1176933.

3

Berger, Emery. "Session details: Explicit memory management." In ISMM07: International Symposium on Memory Management. New York, NY, USA: ACM, 2007. http://dx.doi.org/10.1145/3259088.

4

Jain, Jignesh, Meet Satra, Pratik Kumar Jain, and Era Johri. "Memory management in android." In 2017 International Conference on Big Data Analytics and Computational Intelligence (ICBDAC). IEEE, 2017. http://dx.doi.org/10.1109/icbdaci.2017.8070844.

5

Aiken, Alexander, Manuel Fähndrich, and Raph Levien. "Better static memory management." In the ACM SIGPLAN 1995 conference. New York, New York, USA: ACM Press, 1995. http://dx.doi.org/10.1145/207110.207137.

6

Koca, Caglar, and Ozgur B. Akan. "Quantum Memory Management Systems." In NANOCOM' 15: ACM The Second Annual International Conference on Nanoscale Computing and Communication. New York, NY, USA: ACM, 2015. http://dx.doi.org/10.1145/2800795.2800810.

7

Gay, David, Rob Ennals, and Eric Brewer. "Safe manual memory management." In the 6th international symposium. New York, New York, USA: ACM Press, 2007. http://dx.doi.org/10.1145/1296907.1296911.

8

"Session 9: Memory Management." In 2015 IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS). IEEE, 2015. http://dx.doi.org/10.1109/rtas.2015.7108451.

9

Li, Zhijuan. "Session details: Memory management." In LCTES03: ACM SIGPLAN Symposium on Languages, Compilers, and Tools for Embedded Systems. New York, NY, USA: ACM, 2003. http://dx.doi.org/10.1145/3244029.

10

Shrivastava, Aviral. "Session details: Memory management." In ESWEEK'12: Eighth Embedded System Week. New York, NY, USA: ACM, 2012. http://dx.doi.org/10.1145/3250281.


Reports on the topic "Memory management"

1

Carr, R. Virtual Memory Management. Office of Scientific and Technical Information (OSTI), June 2018. http://dx.doi.org/10.2172/1453979.

2

Black, David L., Anoop Gupta, and Wolf-Dietrich Weber. Competitive Management of Distributed Shared Memory. Fort Belvoir, VA: Defense Technical Information Center, January 1988. http://dx.doi.org/10.21236/ada207322.

3

McCullough, Daryl, and Tanya Korelsky. EmmA, An Evolution Memory Management Assistant. Fort Belvoir, VA: Defense Technical Information Center, January 2000. http://dx.doi.org/10.21236/ada373403.

4

Ralston, Lynda M., Andrew M. Bardos, William R. Beaudet, and Lee M. Burberry. Database Management Using Optical Associative Memory. Fort Belvoir, VA: Defense Technical Information Center, February 1998. http://dx.doi.org/10.21236/ada342499.

5

Terauchi, Tachio, and Alex Aiken. Memory Management with Use-Counted Regions. Fort Belvoir, VA: Defense Technical Information Center, March 2004. http://dx.doi.org/10.21236/ada603317.

6

Bartlett, Roscoe Ainsworth. Teuchos C++ memory management classes, idioms, and related topics, the complete reference : a comprehensive strategy for safe and efficient memory management in C++ for high performance computing. Office of Scientific and Technical Information (OSTI), May 2010. http://dx.doi.org/10.2172/992335.

7

Awad, Amro, Simon David Hammond, Gwendolyn Renae Voskuilen, and Robert J. Hoekstra. Samba: A Detailed Memory Management Unit (MMU) for the SST Simulation Framework. Office of Scientific and Technical Information (OSTI), December 2016. http://dx.doi.org/10.2172/1528796.

8

Su, Chun-Yi. Energy-aware Thread and Data Management in Heterogeneous Multi-core, Multi-memory Systems. Office of Scientific and Technical Information (OSTI), December 2014. http://dx.doi.org/10.2172/1179104.

9

Vineyard, Craig Michael, and Stephen Joseph Verzi. A Case Study on Neural Inspired Dynamic Memory Management Strategies for High Performance Computing. Office of Scientific and Technical Information (OSTI), September 2017. http://dx.doi.org/10.2172/1396076.

10

Brown, J. C., and A. A. Mirin. MICA, a facility to achieve portability for message-passing and dynamic memory management in FORTRAN. Office of Scientific and Technical Information (OSTI), January 1994. http://dx.doi.org/10.2172/10123838.
